
PS5 Pro is getting PSSR 2.0 between January and March 2026

And let's be honest, FSR4 sets a high standard. It only loses to DLSS4, and even then not by a huge margin.

Yep, it's objectively better than DLSS3. That will be a massive jump in quality.

Yup, it's a deserved "best of all" title on every iteration so far. I just turn it on and that's it.
What I mean is the media rarely pointed out flaws besides "it may get a little blurry in perf settings". Then the next version is announced and they suddenly notice blur, trailing, artifacts. I'm not trashing it; I think it's kinda natural when you don't know anything better, and nothing has been as good as DLSS since 2.x

Since 4.0 it looks better than native resolution + TAA in many games. And nvidia is still improving things further.

And we can't know how much better it can get until a new version arrives (or competition shows up with something more impressive).
 
Yup, it's a deserved "best of all" title on every iteration so far. I just turn it on and that's it.
What I mean is the media rarely pointed out flaws besides "it may get a little blurry in perf settings". Then the next version is announced and they suddenly notice blur, trailing, artifacts. I'm not trashing it; I think it's kinda natural when you don't know anything better, and nothing has been as good as DLSS since 2.x
Good luck with him 😆 because
Joan Crawford Vintage Horror GIF by absurdnoise
 
No one ever said that PSSR is flawless, but for its 1st iteration it is tremendously better than DLSS 1 was.

DLSS1 isn't really relevant tho.
the name is extremely misleading, same with FSR1.
DLSS1 is not even remotely the same technology as DLSS2. the way both of them reconstruct the image is entirely different from each other.

DLSS1 was a spatial upscaler which tried to do what temporal reconstruction does: it tried to add new detail by comparing the image to images from the same game, which it had to be specifically trained on to work.
and in 1.9 it didn't even use the tensor cores 🙃 the result was awful image quality and insane ghosting.

same with FSR1, which also isn't a temporal reconstruction method, but simply a spatial scaler with some edge sharpening and "intelligent" edge smoothing. it essentially didn't even look like it added any detail to the image, but at least it wasn't as awful as DLSS1 in terms of image quality and ghosting, since it barely did anything and couldn't even introduce ghosting, having no temporal component.


FSR2 and DLSS2 are more similar to each other than either of them is to their respective predecessors.
both are a form of TAAU with intelligent error correction. FSR2 does error correction through a predetermined algorithm, while DLSS2 does it through ML trained on millions of images and videos.

DLSS1 and FSR1 should be seen as completely separate entities. they don't share any characteristics with their successors.
basically like Ridge Racer Unbounded was a Ridge Racer game by name only,
so are FSR1 and DLSS1 only related by name to FSR2 and DLSS2.
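
For anyone curious what "error correction through a predetermined algorithm" looks like in practice: the classic hand-tuned trick is neighborhood clamping, where the history sample is clamped to the local min/max of the current frame before blending. A toy 1D sketch of that idea (illustrative only, not FSR2's actual code):

```python
def clamp_history(history_px, neighborhood):
    # Clamp the accumulated sample into the current frame's local range,
    # so stale samples (ghosts) can't survive outside plausible values.
    lo, hi = min(neighborhood), max(neighborhood)
    return max(lo, min(hi, history_px))

def taa_step(history, current, alpha=0.1):
    """One TAAU-style accumulation step over a 1D scanline of pixels."""
    out = []
    for i, cur in enumerate(current):
        neigh = current[max(0, i - 1): i + 2]      # 3-tap neighborhood
        h = clamp_history(history[i], neigh)       # the hand-tuned correction
        out.append((1 - alpha) * h + alpha * cur)  # exponential blend
    return out
```

A ghost pixel in `history` that the current frame no longer supports gets clamped into the local range and vanishes immediately; ML-based correctors like DLSS2 learn a subtler version of the same reject-or-keep decision.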
 
Although that is true, there is a big caveat with that statement. DLSS1 was a spatial upscaler from Feb. 2019.
You are comparing abandoned tech from 7 years ago with PSSR.

It is a machine learning scaler from the very start, one which used Tensor Cores. That's the point.
 
DLSS1 isn't really relevant tho.
the name is extremely misleading, same with FSR1.
DLSS1 is not even remotely the same technology as DLSS2. the way both of them reconstruct the image is entirely different from each other.

DLSS1 was a spatial upscaler which tried to do what temporal reconstruction does: it tried to add new detail by comparing the image to images from the same game, which it had to be specifically trained on to work.
and in 1.9 it didn't even use the tensor cores 🙃 the result was awful image quality and insane ghosting.

same with FSR1, which also isn't a temporal reconstruction method, but simply a spatial scaler with some edge sharpening and "intelligent" edge smoothing. it essentially didn't even look like it added any detail to the image, but at least it wasn't as awful as DLSS1 in terms of image quality and ghosting, since it barely did anything and couldn't even introduce ghosting, having no temporal component.


FSR2 and DLSS2 are more similar to each other than either of them is to their respective predecessors.
both are a form of TAAU with intelligent error correction. FSR2 does error correction through a predetermined algorithm, while DLSS2 does it through ML trained on millions of images and videos.

DLSS1 and FSR1 should be seen as completely separate entities. they don't share any characteristics with their successors.
basically like Ridge Racer Unbounded was a Ridge Racer game by name only,
so are FSR1 and DLSS1 only related by name to FSR2 and DLSS2.

It is relevant. It was a machine learning upscaler from the very first version, even though it was a spatial upscaler.

@GrokGAF

Was DLSS1 a machine learning upscaler?

Yes, the first version of DLSS (Deep Learning Super Sampling) used the dedicated Tensor Cores found in Nvidia's RTX GPUs.
The original DLSS (sometimes called DLSS 1.0) was a key feature of the GeForce 20 series cards when they launched in September 2018. It was designed to run neural network operations to upscale lower-resolution images in real time, and these operations were accelerated by the dedicated Tensor Cores.
A common point of confusion comes from an initial implementation in the game Control, which used an image processing algorithm that did not use the Tensor Cores; however, the core DLSS technology as a whole from its inception was built around leveraging these specialized AI processors.
Nvidia later released the significantly improved DLSS 2.0 in April 2020, which also used Tensor Cores but did so more effectively, with a generalized model that did not require specific training for each game.
 
Although that is true, there is a big caveat with that statement. DLSS1 was a spatial upscaler from Feb. 2019.
You are comparing abandoned tech from 7 years ago with PSSR.
Nvidia didn't start working on it in 2018. It was supposed to release in 2018 alongside the RTX 2000 series; they delayed it and it still sucked.
 
It is relevant. It was a machine learning upscaler from the very first version, even though it was a spatial upscaler.

it's not relevant because it's not a form of TAAU.
FSR2/3/4, PSSR, TSR, DLSS2/3/4, and XeSS1/2 are all variations of TAAU.

the TAAU part is far more important to the end result than the machine learning part.

hell, UE4's old ass TAAU would be a more relevant reconstruction method than DLSS1 is in this discussion.
 
Very impressive. If we get that quality on PS5 Pro in a couple of months, then WOW!!!

The advantage on a console is that the target is still 60 fps so they can upscale from a high resolution to get to 4K PSSR2 while also using dynamic resolution as input

Basically:

PC = Fixed input resolution - Variable framerate (uncapped)

Console = Variable input resolution - Fixed framerate (usually targeting 60 fps)
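
The fixed-framerate loop can be sketched as a tiny feedback controller (purely illustrative; real DRS heuristics are engine-specific, and the function name, gain, and bounds here are made up):

```python
def drs_step(scale, frame_ms, budget_ms=16.67, gain=0.05, lo=0.5, hi=1.0):
    """Nudge the input-resolution scale so GPU frame time converges on the
    60 fps budget; the upscaler's output resolution stays fixed (e.g. 4K)."""
    error = (budget_ms - frame_ms) / budget_ms  # positive = headroom
    return min(hi, max(lo, scale + gain * error))
```

Running over budget (20 ms) pushes the scale down; headroom (12 ms) lets it creep back up. A PC-style pipeline instead fixes `scale` and lets the framerate float.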
 
Yep, it's objectively better than DLSS3. That will be a massive jump in quality.

Since 4.0 it looks better than native resolution + TAA in many games. And nvidia is still improving things further.

And we can't know how much better it can get until new version arrives (or competition shows up with something more impressive).
I remember a DF article with this "better than native" claim before 4.0. IIRC they used Death Stranding and Control as examples. Maybe Control was from the 720p article.
ML will probably get a lot better in the next few years, since AI is everything now. The transformer model is already a product of the current AI craze. Can't wait for PSSR 2.0 and the next DLSS versions. Except frame gen. I'm not yet into it (3080 10GB owner) :P
 
Can't you see Bojji?
benicio del toro GIF by FilmStruck

When we're talking about PSSR he walks like this
the usual suspects films GIF by elCinema.com

Ah, as always with you: personal insults and zero facts on the subject.



I remember a DF article with this "better than native" claim before 4.0. IIRC they used Death Stranding and Control as examples. Maybe Control was from the 720p article.
ML will probably get a lot better in the next few years, since AI is everything now. The transformer model is already a product of the current AI craze. Can't wait for PSSR 2.0 and the next DLSS versions. Except frame gen. I'm not yet into it (3080 10GB owner) :P

In games with bad TAA implementations, older versions of DLSS (2, 3) could already show better results. With DLSS4 you can see that in most games:



I expect a big jump with the next version of FSR (I doubt we will see DLSS5 before the 6xxx cards).
 
No one gives a shit about DLSS1. Cerny and his team want to compete with what's out there here and now, not what was there 7 years ago. Imagine a new console coming into the market with the performance of a PS2 for $400 and people going "Well, it's their first console. The PS1 wasn't as impressive." No one cares. It's an arms race and being good enough because you beat those guys' efforts from 7 years ago doesn't matter.

DLSS1 is completely irrelevant. PSSR2 needs to keep up with FSR4 and DLSS4/4.5.
 
Nvidia didn't start working on it in 2018. It was supposed to release in 2018 alongside the RTX 2000 series; they delayed it and it still sucked.

And I never said it was good in any form. In fact, it was worse than a bicubic upscaler.
 
I remember a DF article with this "better than native" claim before 4.0. IIRC they used Death Stranding and Control as examples. Maybe Control was for the 720p article.

well yes, DLSS has been better than native TAA since 2.0 in many games.

DLSS is TAA, but with intelligent error correction, so it's not really surprising that it's better.


you basically can have 2 forms of, let's call it, "dumb" TAA.
light forms that use fewer frames to accumulate information, and heavy forms that use more frames.

the outcome is usually that lighter forms have less smearing and fizzle but also worse edge smoothing. while heavier forms have great edge smoothing but also a lot of ghosting.

then there are the complete disaster implementations that fail at all of these elements at once.

DLSS2/3/4 tries to have the good qualities of both, with as little of the bad qualities as possible.
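
The light-vs-heavy trade-off falls straight out of the blend factor. With exponential accumulation, a stale sample's weight decays by (1 - alpha) each frame; a quick sketch of how long a ghost lingers (toy numbers, not any shipping TAA):

```python
def frames_to_fade(alpha, threshold=0.1):
    """Count frames until a stale sample's weight drops below `threshold`
    under the blend: out = (1 - alpha) * history + alpha * current."""
    weight, frames = 1.0, 0
    while weight > threshold:
        weight *= (1 - alpha)  # the ghost's contribution shrinks each frame
        frames += 1
    return frames
```

A light TAA at alpha = 0.5 fades a ghost in 4 frames but averages few samples (rougher edges); a heavy one at alpha = 0.05 averages roughly 20 frames of information but drags ghosts along for 45 frames.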

Death Stranding looks almost broken when using TAA. it has awful edge smoothing with tons of stairstepping still visible, and is also smeary in motion.
while Control is smooth looking, but has tons of temporal smearing and fizzling.

in Death Stranding, basically any form of reconstruction looks better than native TAA, so it's not just DLSS here.
 
It is a machine learning upscaler, so it is relevant.

that's exactly why it's not relevant.
none of the others are upscalers (aside from FSR1)

spatial upscaling =/= temporal reprojection


It is OK if DLSS1 sucked. DLSS 2 was vastly better. Don't worry. Nvidia fixed the tools.

again, entirely unrelated technologies.
they didn't fix anything, they threw one piece of tech out, and released a completely different one and slapped the same name on it... because it would probably be confusing if they used a new name for it.

although the name DLSS (Deep Learning Super Sampling) lost its original meaning entirely anyway... so maybe a new name would have been good... maybe DLAAU? (Deep Learning Antialiasing Upsampling)
 
No one gives a shit about DLSS1. Cerny and his team want to compete with what's out there here and now, not what was there 7 years ago. Imagine a new console coming into the market with the performance of a PS2 for $400 and people going "Well, it's their first console. The PS1 wasn't as impressive." No one cares. It's an arms race and being good enough because you beat those guys' efforts from 7 years ago doesn't matter.

DLSS1 is completely irrelevant. PSSR2 needs to keep up with FSR4 and DLSS4/4.5.
Cat Confuse GIF by Rizal Althur
 
Ah, as always with you: personal insults and zero facts on the subject.





In games with bad TAA implementations, older versions of DLSS (2, 3) could already show better results. With DLSS4 you can see that in most games:



I expect a big jump with the next version of FSR (I doubt we will see DLSS5 before the 6xxx cards).

You really can't expect an infantile shill to rely on facts too much before resorting to insults.
 
We can see the total frametime spent in the developer image I linked. No speculation, no calculations, no working back from relative framerates. TAAU is about 1.5ms and PSSR is slightly higher at 2.1ms. That's for reconstructing a 4K image at 60fps. So PSSR is about 0.6ms heavier than TAAU for a total rendering budget for the entire solution of 2.1ms. The comparison is the same in a wide spectrum of games I linked. Call of Duty, Alan Wake 2, AC Shadows, GoW: Ragnarok, and one other I didn't show a screenshot of, Control. PSSR is about 0.6-1ms heavier than the regular temporal AA/upscalers, and is around 2ms in total. Which isn't odd, DLSS4 and FSR4 are also both heavier than TAAU/FSR2.

So you are right that the computational cost for PSSR alone is under 1 ms, but it's also correct to say the end-to-end solution cost is 2 ms.


Just out of interest, where is the accompanying info for this readout? // I have it in my head it was from a Codemasters game readout

because when that table is converted to CSV files by OCR and given a closer look in a spreadsheet, it suggests something completely different, especially when you stop and ask: what does anti-aliasing have to do with FSR's upscaling pass, or with PSSR's model inferencing? Nothing would be the logical answer. Because the top frame time in each table looks like the sum of the sub-list values below it, we all just assume the upscaler is baked into the AA line. But what if I told you that on the PS5 table the difference between the frame time and the sum of the sub-values is 1.15ms, a reasonable scaling time for FSR, and on the PS5 Pro table the difference is just 0.65ms, which could easily be PSSR inferencing?

Please see the tables in text CSV format below :). Either way, I can't say why the AA value is so much bigger on PS5 Pro without knowing how many jitter accumulation samples each algorithm used, but it does stand to reason that ML AI with a longer jitter sample history will use more ROPs/compute time for the final AA output, and obviously bigger upscales will increase that by a factor too.


PS5,,,,,,
Component,total,avg,max,min,bdgt,ocr_confidence
Frame,12.57,13.4,16.19,11.26,13.33,98
Sun Shadows,0.23,0.31,0.81,0.21,0.4,98
Spot Shadows,0.08,0.05,0.16,0,0.5,98
DepthHack|Viewmodel,0.62,0.47,0.99,0.24,1,98
Opaque,5.68,5.93,7.64,4.79,5.25,98
Trans,0.35,0.88,2.42,0.22,0.75,98
Effect,0.25,0.28,0.72,0.24,1.6,98
Lighting,0,0,0,0,0.76,98
Volumetrics,0.39,0.36,0.43,0.31,0.9,98
DXR,0,0,0,0,2,98
Post Fx,1.63,1.74,2.26,1.52,1.8,98
Anti-Alias,1.46,1.51,1.92,1.2,2,98
UI,0,0,0,0,0.5,98
Compute,0.65,0.63,0.71,0.56,,98
Resource Pipeline,0.03,0.04,0.22,0.03,,98
System Overhead,0.05,0.05,0.09,0.05,,98
,,,,,,
Sum of listed costs,11.42,,,,,
FSR before AA,1.15,,,,,
,,,,,,
PS5 Pro,,,,,,
Component,total,avg,max,min,bdgt,ocr_confidence
Frame,8.56,8.49,9.68,7.79,13.33,98
Sun Shadows,0.27,0.32,0.89,0.25,0.4,98
Spot Shadows,0.05,0.08,0.27,0,0.5,98
DepthHack|Viewmodel,0.34,0.27,0.38,0.17,1,98
Opaque,2.92,2.85,3.38,2.34,5.25,98
Trans,0.24,0.3,0.83,0.15,0.75,98
Effect,0.16,0.17,0.2,0.15,1.6,98
Lighting,0,0,0,0,0.75,98
Volumetrics,0.08,0.11,0.19,0.07,0.9,98
DXR,0,0,0,0,2,98
Post Fx,1.04,1.06,1.19,1,1.8,98
Anti-Alias,2.19,2.15,2.23,2.07,2,98
UI,0,0,0,0,0.5,98
Compute,0.55,0.53,0.73,0.49,,98
Resource Pipeline,0.03,0.04,0.25,0.03,,98
System Overhead,0.04,0.04,0.04,0.04,0.03,98
,,,,,,
Sum of listed costs,7.91,,,,,
PSSR cost before AA?,0.65,,,,,
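
The subtraction above is easy to sanity-check. A quick script using the "total" column from both OCR'd tables (values copied as-is; attributing the remainder to FSR or PSSR is the poster's hypothesis, not confirmed):

```python
# "total" column sub-costs from the two tables, in listed order
# (Sun Shadows ... System Overhead), excluding the Frame row itself.
ps5_frame, ps5_costs = 12.57, [0.23, 0.08, 0.62, 5.68, 0.35, 0.25, 0.00,
                               0.39, 0.00, 1.63, 1.46, 0.00, 0.65, 0.03, 0.05]
pro_frame, pro_costs = 8.56, [0.27, 0.05, 0.34, 2.92, 0.24, 0.16, 0.00,
                              0.08, 0.00, 1.04, 2.19, 0.00, 0.55, 0.03, 0.04]

for name, frame, costs in (("PS5", ps5_frame, ps5_costs),
                           ("PS5 Pro", pro_frame, pro_costs)):
    # Whatever the frame total includes that the sub-list doesn't account for:
    unaccounted = round(frame - sum(costs), 2)
    print(name, unaccounted, "ms unaccounted")
```

This reproduces the 1.15 ms (PS5) and 0.65 ms (PS5 Pro) remainders claimed above.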
 
that's exactly why it's not relevant

They are. Both PSSR and DLSS 1 are machine learning upscalers. Ask Google, X, or ChatGPT; the results are basically the same.

@GrokGaf

Is DLSS1 a machine learning upscaler?

Yes, DLSS 1 (Deep Learning Super Sampling) is a machine learning upscaler, but with a significant limitation compared to later versions: it required specific training for each game.
NVIDIA's DLSS technology, from its first iteration, has always been fundamentally based on artificial intelligence and deep learning.
Key aspects of DLSS 1:
  • AI-Powered: It used a convolutional auto-encoder neural network to upscale images.
  • Per-Game Training: The main drawback was that the neural network had to be pre-trained on a supercomputer using high-resolution (64x supersampled) reference images for each specific game title, which limited its compatibility.
  • Spatial Upscaling: It primarily used spatial data (information within a single frame) for reconstruction, which sometimes led to a softer image appearance or visual artifacts.
 
No one gives a shit about DLSS1. Cerny and his team want to compete with what's out there here and now, not what was there 7 years ago. Imagine a new console coming into the market with the performance of a PS2 for $400 and people going "Well, it's their first console. The PS1 wasn't as impressive." No one cares. It's an arms race and being good enough because you beat those guys' efforts from 7 years ago doesn't matter.

DLSS1 is completely irrelevant. PSSR2 needs to keep up with FSR4 and DLSS4/4.5.
What you said makes no sense at all. We are talking about machine learning upscalers: you need access to the same data models, built from years of experience, to reach the same results, which is almost impossible if you start from zero years later than them.
 
What you said makes no sense at all. We are talking about machine learning upscalers: you would have to "steal" the other companies' data models to keep the same pace of quality if you start from zero years later than them.
What I said makes perfect sense. It's utterly irrelevant what DLSS1 was 7 years ago. Sony is competing with what's out there. Beating tech from 2018 doesn't mean jack to anyone.
 
What I said makes perfect sense. It's utterly irrelevant what DLSS1 was 7 years ago. Sony is competing with what's out there. Beating tech from 2018 doesn't mean jack to anyone.
That's not how machine learning works, and in any case PSSR is quite comparable to DLSS3 when used properly.
 
They are. Both PSSR and DLSS 1 are machine learning upscalers. Ask Google, X, or ChatGPT; the results are basically the same.

@GrokGaf

Is DLSS1 a machine learning upscaler?

PSSR is not an upscaler.
asking AI is retarded.

Upscaling is taking an image and blowing it up onto a larger pixel grid, and then (but not necessarily) treating the upscaled image in one way or another.

temporal upsampling is taking information from multiple frames (usually checkerboarded, or similar patterns) and engine inputs to create additional detail, and then constructing a final image from those different parts of multiple frames.
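
Those two definitions can be sketched in a few lines of toy Python (1D "frames" and made-up helper names, purely to illustrate the distinction):

```python
def spatial_upscale(frame, factor=2):
    """Spatial upscaling: blow one low-res frame up onto a larger grid
    (nearest neighbor here); no new information is created."""
    return [px for px in frame for _ in range(factor)]

def temporal_upsample(frames_with_offsets, out_len):
    """Temporal upsampling: scatter samples from several jittered low-res
    frames onto the high-res grid, so real detail accumulates over time."""
    out = [None] * out_len
    for offset, frame in frames_with_offsets:
        for i, px in enumerate(frame):
            out[i * 2 + offset] = px  # each frame fills different slots
    return out
```

The spatial path can only duplicate or interpolate what is in the single frame, while the temporal path ends up with four genuinely distinct samples on the 4-pixel grid after two jittered 2-pixel frames.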
 
Very impressive. If we get that quality on PS5 Pro in a couple of months, then WOW!!!
Even though I have a PS5 Pro, I'm not impressed at all. Minor details at 300% zoom don't give me the feeling of owning a superior machine. I just bought it because my old PS5 was constantly overheating. Unfortunately, the additional 200€ is still not justified.
 
PSSR is not an upscaler.
asking AI is retarded.

Upscaling is taking an image and blowing it up onto a larger pixel grid, and then (but not necessarily) treating the upscaled image in one way or another.

temporal upsampling is taking information from multiple frames (usually checkerboarded, or similar patterns) and engine inputs to create additional detail, and then constructing a final image from those different parts of multiple frames.

Most modern temporal upscalers, such as DLSS2-4, FSR2-4, TAAU, TSR, and XeSS, don't use checkerboarding. They use jittered samples, often following the Halton sequence pattern.
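
For the curious, the Halton sequence is only a few lines. A sketch of how an upscaler might derive per-frame sub-pixel jitter from bases 2 and 3 (the 8-frame cycle length here is an arbitrary choice for illustration):

```python
def halton(index, base):
    """Halton low-discrepancy sequence, commonly used (bases 2 and 3)
    to generate sub-pixel jitter offsets for temporal upscalers."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)  # reversed digit of `index` in `base`
        index //= base
    return result

# Sub-pixel (x, y) jitter offsets in [-0.5, 0.5) for an 8-frame cycle:
jitter = [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, 9)]
```

Each frame's camera is offset by one of these sub-pixel amounts, so successive frames sample different positions inside every output pixel.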
 
I've seen worse. Dragons dogma 2 had massive flickering in foliage on PC in specific areas early in the game. It went away elsewhere though. I think a lot of this just comes from the fact that console games are analyzed much more heavily than games on PC.

I think some of it is also that when you have really good PC hardware you don't realise the magic that DLSS or FSR4 is doing.
Because I've been playing so much PC, the games look so clean it must stand out. I never played Dragons Dogma 2 so I've not seen those issues.
 
Most modern temporal upscalers, such as DLSS2-4, FSR2-4, TAAU, TSR, and XeSS, don't use checkerboarding. They use jittered samples, often following the Halton sequence pattern.

You're right, but jittering is in essence the same idea: you render different parts of the scene in different frames.

I guess it's not the same, but it can result in similar looking artifacts. it's not rare to see checkerboard-shaped artifacts, especially in UE5, which might not use a checkerboarded pattern for reconstruction of the entire image, but does employ checkerboard-like dithering for a shitload of its rendering by default.

so some of the image is pretty much being checkerboarded in many modern engines. I think Northlight does that too now, as do many other engines.
 
You're right, but jittering is in essence the same idea: you render different parts of the scene in different frames.

I guess it's not the same, but it can result in similar looking artifacts. it's not rare to see checkerboard-shaped artifacts, especially in UE5, which might not use a checkerboarded pattern for reconstruction of the entire image, but does employ checkerboard-like dithering for a shitload of its rendering by default.

so some of the image is pretty much being checkerboarded in many modern engines. I think Northlight does that too now, as do many other engines.

UE4 doesn't use checkerboarding. In some effects it uses dithering, and then tries to clean it up with TAAU or TSR.
Or better, for those who can, it can use DLSS, FSR or XeSS.
 
UE4 doesn't use checkerboarding. In some effects it uses dithering, and then tries to clean it up with TAAU or TSR.
Or better, for those who can, it can use DLSS, FSR or XeSS.

I get it, it's not actually the same pattern, but dithering and checkerboarding are in essence the same concept: you don't render everything, but instead have holes that get filled over time.

my point is that UE4/5 especially rely heavily on that for almost anything with transparencies (hair, foliage, shadows), and other engines are following suit
 
I get it, it's not actually the same pattern, but dithering and checkerboarding are in essence the same concept: you don't render everything, but instead have holes that get filled over time.

my point is that UE4/5 especially rely heavily on that for almost anything with transparencies (hair, foliage, shadows).

A jittering pattern can sample more neighboring pixels than checkerboarding.
And because the Halton sequence is more random, it can accumulate more accurately around each new pixel.
 
A jittering pattern can sample more neighboring pixels than checkerboarding.
And because the Halton sequence is more random, it can accumulate more accurately around each new pixel.

I know, just saying that a ton of elements in modern engines are rendered with holes in them. it's actually been getting worse over time sadly, because no matter how sophisticated your jittering pattern and your TAA are... you can so easily spot that dithering in games 🤢

it's kinda crazy what a nosedive hair rendering, for example, has taken in the last couple of years.
 
The advantage on a console is that the target is still 60 fps so they can upscale from a high resolution to get to 4K PSSR2 while also using dynamic resolution as input

Basically:

PC = Fixed input resolution - Variable framerate (uncapped)

Console = Variable input resolution - Fixed framerate (usually targeting 60 fps)

I didn't know this. Thanks!
 
No one gives a shit about DLSS1. Cerny and his team want to compete with what's out there here and now, not what was there 7 years ago. Imagine a new console coming into the market with the performance of a PS2 for $400 and people going "Well, it's their first console. The PS1 wasn't as impressive." No one cares. It's an arms race and being good enough because you beat those guys' efforts from 7 years ago doesn't matter.

DLSS1 is completely irrelevant. PSSR2 needs to keep up with FSR4 and DLSS4/4.5.

You don't get it. That's not why people are bringing up DLSS1 or DLSS 2.
 
What I said makes perfect sense. It's utterly irrelevant what DLSS1 was 7 years ago. Sony is competing with what's out there. Beating tech from 2018 doesn't mean jack to anyone.

Sony isn't competing with DLSS at all though. Beating DLSS today isn't going to mean anything to console gamers in any case. If a gamer wants the latest and greatest tech then they buy a PC. Consoles have always lagged behind PC in this regard. Same story, new generation.
 
Sony isn't competing with DLSS at all though. Beating DLSS today isn't going to mean anything to console gamers in any case. If a gamer wants the latest and greatest tech then they buy a PC. Consoles have always lagged behind PC in this regard. Same story, new generation.

The insecurities of PC fanatics are on full display here....

They spend all this time talking about a product they shouldn't even care about

With PC hardware prices going nuts this year, their obsession with the PS5 Pro can only get worse...

It will be fun here though once this firmware update comes out
 
Broken in half the games? Where is this nonsense statistic coming from? And "broken" compared to FSR3 on base hardware, for fuck's sake? Why this stupid bullshit every time? Most of the games with "broken" PSSR come from minor studios using UE5, in my personal experience. I understand the disappointment and I don't presume to change anyone else's feelings, but there's no need to be overdramatic and irrational. Most of the time PSSR works decently. Not incredibly, but it does its job. It's not even true that the PS5 Pro offers just a resolution boost; I've found more higher settings in PS5 Pro enhanced games than I did on PS4 Pro, but eh.
Half of the games I have played. It is/was broken in Star Wars Outlaws, Star Wars Jedi: Survivor, Avatar, Silent Hill, Alan Wake 2, and I'm sure I'm forgetting some. PSSR introduces a lot of shimmering in those games.

What does the Pro offer other than a resolution boost, then? Like I said, the base PS5 fidelity mode outperforms the Pro mode 9 times out of 10, fidelity-wise.
 
Even when running at high resolution, PS2 games weren't nearly as detailed as the UE5 Silent Hill F. In fact, even games with prerendered backgrounds on sixth-generation consoles, such as the RE1 remake on GC, never looked as good.

SH2 with mods

RE1 Remake

Silent Hill F

I wouldn't be surprised if the Hinako model in Silent Hill F had a higher polygon budget than an entire Silent Hill 2 scene on the PlayStation 2.

My screenshots show the standard maxed-out settings, but this game has hidden settings that can further improve the lighting quality, making Silent Hill F one of the best-looking PC games currently available.




Not agreeing
 
Sony isn't competing with DLSS at all though. Beating DLSS today isn't going to mean anything to console gamers in any case. If a gamer wants the latest and greatest tech then they buy a PC. Consoles have always lagged behind PC in this regard. Same story, new generation.
By competing I mean what they aim for. Cerny sure as hell isn't looking at DLSS1. He's looking at 4 and 4.5 and previously looked at 2 and 3. Nobody on the team went, "Well, at least we're better than DLSS1."
 
DLSS 2/3 has better stability, no "film grain"-like noise (common to PSSR), and doesn't add visual artifacts to games with RT or UE5 games (a common problem for the PSSR+UE5 combo).

You can try to prove that this is not the case, obviously, but so far you have always just talked about it.
Wukong is using UE 5.0 and doesn't have those problems even in areas with a lot of foliage. Yotei and KCD2 look fantastic too. Maybe the problem is not the model; maybe the main problem is that developers are not testing and tweaking out the imperfections like they do with FSR and DLSS.
 
By competing I mean what they aim for. Cerny sure as hell isn't looking at DLSS1. He's looking at 4 and 4.5 and previously looked at 2 and 3. Nobody on the team went, "Well, at least we're better than DLSS1."

Yeah, I agree there. The target is obviously FSR 4 at this point. I believe they have said that.
 