[Digital Foundry] Silent Hill f Review - PlayStation 5/PS5 Pro - Impressive On Base, Problems On Pro

Ampere's SM improved on Turing's by giving the integer CUDA cores floating point (FP32) capability, without increasing the TMU (texture mapping unit) count. AMD executed a similar Ampere-like doubling of FP rate per CU with the RDNA 3 hardware generation. PC's RDNA 4 and RDNA 3.5 CUs add doubled texture sampling rates.

okay... not sure why you're saying all that... especially when we are talking about RDNA2, which has no dual issue FP32.

but just to reiterate my comment, I am simply going by real world performance... and by real world performance my RTX3060ti for example, which is a 16.2 TFLOPS card, is about as fast in raster performance as the PS5's 10.2 TFLOPS RDNA2 GPU... maybe a tiny bit faster in some games.
so that gives me a decent comparison point for Ampere and AMD, and specifically RDNA2.
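For anyone who wants to sanity-check the paper numbers, they're just ALU count x 2 ops/clock (FMA) x clock. A quick sketch; the unit counts and clocks are the usual published specs, so treat them as assumptions:

# paper FP32 TFLOPS = ALUs * 2 (FMA) * clock in GHz / 1000
def tflops(alus, clock_ghz):
    return alus * 2 * clock_ghz / 1000

print(tflops(38 * 128, 1.665))  # RTX 3060 Ti: 38 SMs x 128 FP32 lanes -> ~16.2
print(tflops(36 * 64, 2.23))    # PS5: 36 CUs x 64 lanes -> ~10.3

Half of each Ampere SM's FP32 lanes share their issue port with INT32, which is part of why the paper number overstates real-world raster throughput versus RDNA 2.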
 
Last edited:
okay... not sure why you're saying all that... especially when we are talking about RDNA2, which has no dual issue FP32.

but just to reiterate my comment, I am simply going by real world performance... and by real world performance my RTX3060ti for example, which is a 16.2 TFLOPS card, is about as fast in raster performance as the PS5's 10.2 TFLOPS RDNA2 GPU... maybe a tiny bit faster in some games.
so that gives me a decent comparison point for Ampere and AMD, and specifically RDNA2.

3060ti is 24% faster than 11.3TF Radeon 6700 ("The PS5 GPU"). Of course when not VRAM limited.
 
3060ti is 24% faster than 11.3TF Radeon 6700 ("The PS5 GPU"). Of course when not VRAM limited.

I don't really see that in most games. I often run basically the same settings as PS5, and get very similar performance. Slightly better in some games, maybe.
If ray tracing is involved it absolutely shits on the PS5, of course (like, path-traced Cyberpunk is actually somewhat playable on the 3060 Ti, while the PS5 would explode if you tried).
 
Yeah so far I've only seen it with foliage in this one. The rest looks stable, but I booted up just to test this out. Will report back if my opinion changes as I play through starting tomorrow

And yeah, a simple toggle would resolve this and give gamers the choice. Given nothing like that happened with SH2R, I don't have much hope with NeoBards. Maybe they will surprise me. We shall see. In either case, I don't see it being bad enough that one should hold off from playing.
I've only been noticing it on grass. All other vegetation such as trees and bushes don't seem to exhibit it, for whatever reason.
And like you say, it seems to get better when you are moving.
The rest of the image is pretty good, definitely much better than Silent Hill 2.
Silent Hill 2 was unplayable for me.

But a choice should be standard in my opinion
 
I've only been noticing it on grass. All other vegetation such as trees and bushes don't seem to exhibit it, for whatever reason.
And like you say, it seems to get better when you are moving.
The rest of the image is pretty good, definitely much better than Silent Hill 2.
Silent Hill 2 was unplayable for me.

But a choice should be standard in my opinion
Good to know I'm not alone. But I could tolerate SH2, so I was already in the minority.
 
okay... not sure why you're saying all that... especially when we are talking about RDNA2, which has no dual issue FP32.

but just to reiterate my comment, I am simply going by real world performance... and by real world performance my RTX3060ti for example, which is a 16.2 TFLOPS card, is about as fast in raster performance as the PS5's 10.2 TFLOPS RDNA2 GPU... maybe a tiny bit faster in some games.
so that gives me a decent comparison point for Ampere and AMD, and specifically RDNA2.
FYI, the RX 9060 XT's 32 RDNA 4 CUs demonstrate their power against the PS5 Pro's older 60 CU RDNA 2/RDNA 3 design. RDNA 3 was an evolutionary step from RDNA 2 towards RDNA 4.

TFLOPS is meaningless when the shader/stream processors aren't backed by the graphics-related load/store units, i.e. TMUs and ROPs.

The extra FLOPS in Ampere's SM are useful for geometry.

Ray tracing against a BVH is mostly geometry processing with triangles and bounding boxes. Ampere holds the superiority over RDNA 2 in ray tracing.

The PS5 GPU and RTX 3060 Ti have similar TMU strength, i.e. 144 vs 152 respectively. Both GPUs have 4 MB of L2 cache and 448.0 GB/s of GDDR6-14000 memory bandwidth.
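Those TMU counts just fall out of the layout (4 texture units per CU on RDNA 2 and per SM on Ampere). A quick sketch, with peak bilinear texel rate added for comparison; same assumed clocks as the TFLOPS figures above, and on paper the PS5 actually leads on texel rate because of its higher clock:

ps5_tmus = 36 * 4        # 144 TMUs
rtx3060ti_tmus = 38 * 4  # 152 TMUs

# peak bilinear texel rate = TMUs * clock (GHz) -> GTexels/s
print(ps5_tmus * 2.23)         # ~321 GT/s
print(rtx3060ti_tmus * 1.665)  # ~253 GT/s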
 
Last edited:
3060ti is 24% faster than 11.3TF Radeon 6700 ("The PS5 GPU"). Of course when not VRAM limited.
The RX 6700 has 320 GB/s of memory bandwidth via a 160-bit GDDR6-16000 bus, while the 3060 Ti has 448.0 GB/s via a 256-bit GDDR6-14000 bus. The 3060 Ti is in another tier on GDDR6 chip count.

The PS5's CPU L2/L3 and GPU L2 caches help hide the pressure on the shared memory bandwidth.
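The bandwidth and chip-count gap is easy to derive, since GDDR6 chips have 32-bit interfaces. A quick sketch:

# GB/s = bus width in bits / 8 * per-pin data rate in Gbps
def bandwidth(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(bandwidth(160, 16))  # RX 6700:     320 GB/s
print(bandwidth(256, 14))  # RTX 3060 Ti: 448 GB/s (same figure as the PS5)

print(160 // 32, 256 // 32)  # 5 chips vs 8 chips on the board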
 
Last edited:
I've only been noticing it on grass. All other vegetation such as trees and bushes don't seem to exhibit it, for whatever reason.
And like you say, it seems to get better when you are moving.
The rest of the image is pretty good, definitely much better than Silent Hill 2.
Silent Hill 2 was unplayable for me.

But a choice should be standard in my opinion
I've shelved Silent Hill f because it's too distracting. I highly doubt they are going to patch this. So another game I wasted money on, on a console I paid a lot of money for. Really starting to regret my PS5 Pro purchase.
 
Last edited:
I've shelved Silent Hill f because it's too distracting. I highly doubt they are going to patch this. So another game I wasted money on, on a console I paid a lot of money for. Really starting to regret my PS5 Pro purchase.
That's a shame it's too distracting for you.
The game is very good.
Hopefully they patch different options in.
Maybe you could sell your PS5 version and pick it up on a different platform?
 
As someone who finished the game yesterday, I have absolutely zero clue what those YTers were talking about with "woke" or "fail" Silent Hill. The game is legit, it's terrifying; actually it's harder than the previous SH games. The cutscenes are brutal, especially those 3 rites you get to perform on your body. If there's any negative I could name, it's that this imo has nothing to do with the old SH titles. So much so that they could have slapped a different name on it and the game would still do fine. Even the "other world" is handled differently, so I'm not sure why they decided to call it SH at all. But the woke comments I really don't get; the story is about a love triangle, bullying and domestic violence... no lgbt/dei nonsense anywhere.
 
I've shelved Silent Hill F because it's too distracting. I highly doubt they are going to patch this. So another game I wasted money on, on a console I paid a lot of money for. Really starting to regret my PS5 Pro purchase

it had the warning sign tho:
Unreal Engine 5.

if you see UE5, just think of this symbol
[image]
 
Man, can they port the HW Lumen version of the game to PS6? Might actually tempt me to do a second run. Looks so fucking good! :/

Using some very basic (not ultra) config for hardware Lumen:

[SystemSettings]
; use the hardware ray tracing path for Lumen GI where the build supports it
r.Lumen.HardwareRayTracing=1
r.Lumen.Reflections.HardwareRayTracing=1
; 1 = Lumen reflections
r.ReflectionMethod=1
; bias roughness towards smoother, mirror-like reflections
r.Lumen.Reflections.SmoothBias=1
; bilateral denoising filter for reflections
r.Lumen.Reflections.BilateralFilter=1
; Lumen reflections on the front layer of translucent surfaces
r.Lumen.TranslucencyReflections.FrontLayer.Enable=1
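(If anyone wants to try this: cvars like these normally go in Engine.ini under a [SystemSettings] section; for most UE5 games on PC that file sits at %LOCALAPPDATA%\<GameFolder>\Saved\Config\Windows\Engine.ini, though the exact folder name varies per game, so take that path as a guess.)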

[screenshots]


A 3-4 FPS drop... Why no option like this, Konami?

The PS5 Pro has significantly more bandwidth. The CPU would also be rather irrelevant in GPU-bound scenarios.

For the same settings (as in the PS5 version) the memory BW may be enough, but they usually aim for higher resolutions and/or PSSR ML. Adding RT to a game that doesn't have it on the standard PS5 also has CPU and memory BW costs.
 
Lots of stuttering in open areas near the end of the game, playing on PS5 Pro. It plays like a slideshow at times when the big fat enemy is pooping out other enemies. Did the reviewers actually play the game to the end? Awful camera angles in closed spaces as well. You can't dodge when you can't even see your own character.
 
The PS5 Pro has significantly more bandwidth. The CPU would also be rather irrelevant in GPU-bound scenarios.
No it doesn't. It has marginally more bandwidth and significantly worse bandwidth per CU than the base PS5. The CUs are bandwidth starved, and this is the second pro console in a row where Cerny has made that mistake. Since they didn't want to change the bus width, they should have used GDDR6X or maybe even GDDR7. It's one of the main reasons the Pro underperforms. Microsoft at least never makes these types of stupid mistakes. It's why the Xbox One X gapped the PS4 Pro.
 
Last edited:
No it doesn't. It has marginally more bandwidth and significantly worse bandwidth per CU than the base PS5. The CUs are bandwidth starved, and this is the second pro console in a row where Cerny has made that mistake. Since they didn't want to change the bus width, they should have used GDDR6X or maybe even GDDR7. It's one of the main reasons the Pro underperforms. Microsoft at least never makes these types of stupid mistakes. It's why the Xbox One X gapped the PS4 Pro.
Game consoles with RDNA 2 IGPs don't have the large Infinity Cache of PC's RDNA 2 dGPUs. Strix Halo's IGP is the first AMD APU with a large 32 MB last-level cache.

The Xbox One X GPU has a 2 MB render cache (for the ROPs) + a 2 MB L2 cache (for the TMUs). Baseline Polaris doesn't have 4 MB of total last-level cache: the RX 480 has a 2 MB L2 cache for the TMUs, but it's not connected to the ROPs.

Vega 56/64 was AMD's first design with a unified 4 MB L2 cache serving both TMUs and ROPs.

The PS5 Pro has 2 GB of DDR5 for the OS, hence the 256-bit 16 GB of GDDR6-18000 is fully dedicated to the game.

Memory compression also improved with RDNA 4 vs RDNA 3: https://chipsandcheese.com/p/amds-rdna4-gpu-architecture-at-hot

Beyond tweaking the cache hierarchy, RDNA4 brings improvements to transparent compression. AMD emphasized that they're using compression throughout the SoC, including at points like the display engine and media engine. Compressed data can be stored in caches, and decompressed before being written back to memory. Compression cuts down on data transfer, which reduces bandwidth requirements and improves power efficiency.


The 32 CU RX 9060 XT 16 GB being on par with the PS5 Pro's older 60 CU design shows the full extent of PC's RDNA 4 improvements.
 
Last edited:
Looking at it more and more I think Thief1987 is right, the game is SW Lumen only. The .ini tweaks expand the roughness cutoff, clean up reflections and max out SW Lumen.

Reflections in SHf look like SW Lumen in SH2:

[screenshot]
When will someone explain to me why devs still do this, even on PC? It can't still be "lack of awareness". There must be a justification we aren't privy to?
 
When will someone explain to me why devs still do this, even on PC? It can't still be "lack of awareness". There must be a justification we aren't privy to?
Where were you when MGS Delta came out and was very expensive to run, causing grifters on YouTube and Twitter and Unreal Engine haters to latch onto it for having poor performance? Turns out they had hardware Lumen implemented and disabled, because it would incur another 20% performance hit and they didn't want even more bad press.

Yes, we are at a point where devs are purposefully nerfing their games on PC to avoid the wrath of retarded PC warriors who want to run games on their shitty sub-10 TFLOPS 3060s or cards with 8 GB of VRAM.
 
Where were you when MGS Delta came out and was very expensive to run, causing grifters on YouTube and Twitter and Unreal Engine haters to latch onto it for having poor performance? Turns out they had hardware Lumen implemented and disabled, because it would incur another 20% performance hit and they didn't want even more bad press.

Yes, we are at a point where devs are purposefully nerfing their games on PC to avoid the wrath of retarded PC warriors who want to run games on their shitty sub-10 TFLOPS 3060s or cards with 8 GB of VRAM.
But why not have it as an "ultra" Lumen toggle option or slider instead? Is it not possible to have both SW and HW Lumen available in the same build as a selectable user preference? That's a win-win if technically feasible. If it's one or the other, then I guess I understand better. If they were able to do that for the SH2 remake, where the PS5 was using SW Lumen and the Pro was using HW Lumen, then it should be possible on PC, right?
 
Last edited:
But why not have it as an "ultra" Lumen toggle option or slider instead? Is it not possible to have both SW and HW Lumen available in the same build as a selectable user preference? That's a win-win if technically feasible. If it's one or the other, then I guess I understand better. If they were able to do that for the SH2 remake, where the PS5 was using SW Lumen and the Pro was using HW Lumen, then it should be possible on PC, right?
Because the first thing these retards do is max out the game and then bitch about how their $1000 card can't max it out.
 
Because the first thing these retards do is max out the game and then bitch about how their $1000 card can't max it out.
This is a fabrication btw. The users complaining are mostly those on lower-end to mid-tier hardware. They don't complain about maxing out the game but about not being able to get decent visuals and a playable frame rate even at lower settings.

Black Myth Wukong runs at 29fps maxed out on a 5090, yet it didn't cause a shitstorm because you can also run it at 60fps on a 3060 by using conservative settings.

The reason ultra is hidden is more than likely because the setting is experimental and hasn't been properly QA'd. Ubisoft also hid the Unobtanium settings in Frontiers of Pandora, and it was most definitely not because they were afraid of anyone complaining about not being able to max out their game. It's simply experimental. Games dismantling GPUs at max settings aren't uncommon, and they don't get eviscerated provided they can also run well at lower settings. Cyberpunk 2077, RDR2, Frontiers of Pandora, Alan Wake 2, Black Myth Wukong, AC Shadows, and more games are as demanding as Borderlands 4 at max settings, yet didn't attract the ire of gamers. Hell, TW2 had that infamous ubersampling option that just killed GPUs, and TW3 crushed every mainstream GPU, but neither of those games got treated like BL4.
 
Last edited:
No it doesn't. It has marginally more bandwidth and significantly worse bandwidth per CU than the base PS5. The CUs are bandwidth starved, and this is the second pro console in a row where Cerny has made that mistake. Since they didn't want to change the bus width, they should have used GDDR6X or maybe even GDDR7. It's one of the main reasons the Pro underperforms. Microsoft at least never makes these types of stupid mistakes. It's why the Xbox One X gapped the PS4 Pro.
Per CU it has less bandwidth, but the 9060 XT has far less bandwidth in total. The difference in bandwidth per CU is not that much either: it's 10 GB/s per CU vs 9.6 GB/s per CU. The 9060 XT does have Infinity Cache, but 32 MB of that is of limited use with a 4K frame buffer. Even if you increase the bandwidth of the PS5 Pro it will still likely tie the 9060 XT, or only slightly beat it, which clearly shows off the superiority of RDNA 4 over RDNA 2.
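The per-CU figures check out. A quick sketch; the PS5 Pro number assumes the 256-bit GDDR6-18000 setup mentioned earlier in the thread, and the 9060 XT its stock 128-bit GDDR6-20000:

# bandwidth per CU in GB/s
configs = {
    "PS5":        (448.0, 36),
    "PS5 Pro":    (576.0, 60),
    "RX 9060 XT": (320.0, 32),
}
for name, (bw, cus) in configs.items():
    print(name, round(bw / cus, 1))
# PS5 12.4, PS5 Pro 9.6, RX 9060 XT 10.0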
 
Where were you when MGS Delta came out and was very expensive to run, causing grifters on YouTube and Twitter and Unreal Engine haters to latch onto it for having poor performance? Turns out they had hardware Lumen implemented and disabled, because it would incur another 20% performance hit and they didn't want even more bad press.

Yes, we are at a point where devs are purposefully nerfing their games on PC to avoid the wrath of retarded PC warriors who want to run games on their shitty sub-10 TFLOPS 3060s or cards with 8 GB of VRAM.
I don't know man, were people so stupid that they tried path tracing in CP2077 with a 3060? Just give us the damn options like PC games have always done.

It would make this game great for r/patientgamers.
 
Game consoles with RDNA 2 IGPs don't have the large Infinity Cache of PC's RDNA 2 dGPUs. Strix Halo's IGP is the first AMD APU with a large 32 MB last-level cache.

The Xbox One X GPU has a 2 MB render cache (for the ROPs) + a 2 MB L2 cache (for the TMUs). Baseline Polaris doesn't have 4 MB of total last-level cache: the RX 480 has a 2 MB L2 cache for the TMUs, but it's not connected to the ROPs.

Vega 56/64 was AMD's first design with a unified 4 MB L2 cache serving both TMUs and ROPs.

The PS5 Pro has 2 GB of DDR5 for the OS, hence the 256-bit 16 GB of GDDR6-18000 is fully dedicated to the game.

Memory compression also improved with RDNA 4 vs RDNA 3: https://chipsandcheese.com/p/amds-rdna4-gpu-architecture-at-hot

Beyond tweaking the cache hierarchy, RDNA4 brings improvements to transparent compression. AMD emphasized that they're using compression throughout the SoC, including at points like the display engine and media engine. Compressed data can be stored in caches, and decompressed before being written back to memory. Compression cuts down on data transfer, which reduces bandwidth requirements and improves power efficiency.


The 32 CU RX 9060 XT 16 GB being on par with the PS5 Pro's older 60 CU design shows the full extent of PC's RDNA 4 improvements.
All of this is mostly irrelevant as the PS5 Pro is not RDNA 4. It's a hodgepodge of features that's realistically more in line with RDNA 2 & 3 than 4. Excluding ray tracing, we can't see much evidence that other features were ported over. That's why it needs significantly more bandwidth.
Per CU it has less bandwidth, but the 9060 XT has far less bandwidth in total. The difference in bandwidth per CU is not that much either: it's 10 GB/s per CU vs 9.6 GB/s per CU. The 9060 XT does have Infinity Cache, but 32 MB of that is of limited use with a 4K frame buffer. Even if you increase the bandwidth of the PS5 Pro it will still likely tie the 9060 XT, or only slightly beat it, which clearly shows off the superiority of RDNA 4 over RDNA 2.
RDNA 4 is superior to 2 for sure. However, the PS5 Pro GPU is not strictly RDNA 2. If the bandwidth were increased, I'd expect it to easily gap the 9060 XT at higher resolutions. At lower resolutions it should be closer due to Infinity Cache.

Regardless, once again Cerny has made certain tradeoffs in the GPU feature set that fail to deliver bang per area from the silicon. It's extremely disappointing.
 
No it doesn't. It has marginally more bandwidth and significantly worse bandwidth per CU than the base PS5. The CUs are bandwidth starved, and this is the second pro console in a row where Cerny has made that mistake. Since they didn't want to change the bus width, they should have used GDDR6X or maybe even GDDR7. It's one of the main reasons the Pro underperforms. Microsoft at least never makes these types of stupid mistakes. It's why the Xbox One X gapped the PS4 Pro.
From what I read it's not exactly the bandwidth but the RAM that's the real bottleneck for the PS5 Pro (reported by some developers). Bandwidth is more or less proportionate to the CUs and to the GPU upgrade, but the RAM is not enough to support the ray-tracing performance boost in the hardware.
 
Last edited:
When will someone explain to me why devs still do this, even on PC? It can't still be "lack of awareness". There must be a justification we aren't privy to?

DF has talked about it a few times; it's some stupid UE5 limitation. You have to switch the "HW Lumen" toggle on at the beginning of development, and then you get options for both SW and HW Lumen and Lumen reflections. If you don't do that, you are stuck with SW Lumen only.

Sweeney likes to talk shit about developers, but this is on him as well (as are the traversal stutters), and devs start projects like: "We will use SW Lumen on consoles anyway, so why bother".
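(For context, as I understand it the project-level switch is the "Support Hardware Ray Tracing" setting, i.e. r.RayTracing=1 in DefaultEngine.ini: it has to be on when the project is cooked so the RT shaders get built, and if it's off, runtime cvars like r.Lumen.HardwareRayTracing silently do nothing - which would explain why .ini forcing fails in SW-Lumen-locked games.)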
 
DF has talked about it a few times; it's some stupid UE5 limitation. You have to switch the "HW Lumen" toggle on at the beginning of development, and then you get options for both SW and HW Lumen and Lumen reflections. If you don't do that, you are stuck with SW Lumen only.

Sweeney likes to talk shit about developers, but this is on him as well (as are the traversal stutters), and devs start projects like: "We will use SW Lumen on consoles anyway, so why bother".
I doubt it's all about switching a toggle setting. More realistically, developers can't maintain steady performance and prefer to use software Lumen from the beginning to have 60 fps assured on console.
 
Last edited:
I doubt it's all about switching a toggle setting. More realistically, developers can't maintain steady performance and prefer to use software Lumen from the beginning to have 60 fps assured on console.

But this doesn't explain the lack of a HW Lumen option on PC; it can't even be forced via .ini files in many games - they are SW Lumen locked.
 
But this doesn't explain the lack of a HW Lumen option on PC; it can't even be forced via .ini files in many games - they are SW Lumen locked.
Probably because they were just focused on the console version; Japanese studios are no strangers to such carelessness.
 
Last edited:
So... I guess I'll wait for the Pro update for both this game and SH2. If they ever get one.

If the issues are similar to whatever happens in MGS3 Delta (meaning... I won't notice them) I'll probably buy them anyway.
 
So... I guess I'll wait for the Pro update for both this game and SH2. If they ever get one.

If the issues are similar to whatever happens in MGS3 Delta (meaning... I won't notice them) I'll probably buy them anyway.
Nah, the infamous probospheric effect especially visible with UE5 is way more subtle in MGS; not completely gone, but definitely less noticeable and minuscule in my personal experience. Also, the last patch improved the fps performance a lot. Surely not a locked experience, but I checked the first area with the crocodiles, which was a stutter fest, and it's definitely smoother now.
 
Last edited: