
[MLiD] PS6 & PSSR2 Dev Update

I've got no issue with using AIs like Copilot and ChatGPT in general. But you proxying my point through one - while clearly not understanding the significance of the advent of hardware-accelerated z/stencil buffering for 3D graphics, how the last fragment to be shaded in a game frame can't complete - without tearing - unless it passes that test, and how that test becomes the main delay on the critical path at low resolution once shader units and ROPs are abundant enough to absorb the fragment shading - is hardly the same thing as me conducting that discussion with ChatGPT myself and correcting it when it misreads the context, forgets that OpenGL and Vulkan are both client/server models, and forgets that client command submission is single-threaded on the main CPU core.

It didn't even seem from your response like ChatGPT accepted that the primary CPU core and the GPU are in lockstep for interactive game logic/simulation - the GPU can't predict the future before the gamer has interacted with the presently rendered frame. So if you think that wall of info covered up the lack of real game rendering knowledge in your previous comment, then whatever.

But it doesn't change the reality that a v-synced frame in a game can't correctly finish rendering - and let the CPU logic advance - until the very last fragment from the projected geometry has passed or failed its z-buffer/stencil test. So at low resolution - as the pixel count tends towards zero - the z-buffer/stencil work becomes the most important constraint on the critical path, even when combined with BVH hidden surface removal bringing overdraw down to an optimal one and a half per pixel.
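
To spell out what that per-fragment test is, here's a minimal software sketch of a depth/stencil compare - purely illustrative, and the buffer size, less-equal depth compare, and equals-reference stencil rule are my own assumptions, not any particular GPU's fixed-function behaviour:

[CODE]
#include <cstdint>
#include <vector>

// Minimal software model of the per-fragment depth/stencil test that the
// GPU's fixed-function units perform. Purely illustrative; the compare rules
// (stencil equal to reference, depth less-equal) are assumptions.
struct DepthStencilBuffer {
    int width, height;
    std::vector<float>   depth;    // 0 = near plane, 1 = far plane
    std::vector<uint8_t> stencil;

    DepthStencilBuffer(int w, int h)
        : width(w), height(h),
          depth(static_cast<size_t>(w) * h, 1.0f),   // cleared to far
          stencil(static_cast<size_t>(w) * h, 0) {}

    // Returns true if the fragment survives and may be shaded/written.
    bool TestAndWrite(int x, int y, float fragDepth, uint8_t stencilRef) {
        const size_t i = static_cast<size_t>(y) * width + x;
        if (stencil[i] != stencilRef) return false;   // stencil reject
        if (fragDepth > depth[i])     return false;   // depth reject (occluded)
        depth[i] = fragDepth;                         // depth write on pass
        return true;
    }
};

int main() {
    DepthStencilBuffer dsb(640, 360);                   // hypothetical low-res target
    bool front = dsb.TestAndWrite(100, 50, 0.25f, 0);   // passes, writes depth
    bool back  = dsb.TestAndWrite(100, 50, 0.90f, 0);   // rejected: now occluded
    return (front && !back) ? 0 : 1;
}
[/CODE]
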
I mean, ChatGPT 5.2 Codex Max, Claude Opus 4.5, and Gemini 3 Pro all agree you're factually wrong on quite a number of points. The advantage of work providing a paid GitHub Copilot license is that using these advanced coding models is easy.

And, well, every YouTube video or piece of technical testing I can find shows the complete opposite as well. I've provided quite a few; curiously, you have not.

You should be a game developer; apparently Larian, CDPR, Ubisoft, Asobo, and a few others don't realize it's a simple matter of just lowering resolution to alleviate CPU bottlenecks. The Switch 2 still faces CPU bottlenecks despite having games upscaled from 360p, and that's from some of the most technically competent game studios in the world.
 
I mean, ChatGPT 5.2 Codex Max, Claude Opus 4.5, and Gemini 3 Pro all agree you're factually wrong on quite a number of points. The advantage of work providing a paid GitHub Copilot license is that using these advanced coding models is easy.

And, well, every YouTube video or piece of technical testing I can find shows the complete opposite as well. I've provided quite a few; curiously, you have not.

You should be a game developer; apparently Larian, CDPR, Ubisoft, Asobo, and a few others don't realize it's a simple matter of just lowering resolution to alleviate CPU bottlenecks. The Switch 2 still faces CPU bottlenecks despite having games upscaled from 360p, and that's from some of the most technically competent game studios in the world.
Yet again, you're not following the context. When a game is designed for the hardware, the limiting factor on the critical path at low resolution will be the z-buffer/stencil test clearing the last pixel to be shaded on any given untorn frame - and even more so with cascaded shadow map updates, because the PS6 Portable probably won't use RT shadows.

All the other reasons you were putting forward come down to software not being tailored to the PS6 Portable's 4 cores: leaving work on the 3 other cores that could be moved to an underutilised GPU, or subdivided, so it remains a bottleneck that blocks the primary core's flow control, or keeping work on the GPU that exceeds what the lower resolution actually needs.

You can use other studios as a strawman, but PlayStation is going to be the lead platform even more so once the PS6 Portable hits. And if the PlayStation 6 uses a low-power mode as the baseline for the main console (for non-cross-gen games), those games will scale with resolution on the PS6 Portable, as was my original point, because all the other ChatGPT reasons will already have been moved off the critical path that blocks going from 30fps to a higher frame rate (60).

But feel free to contradict yourself and continue a back and forth on this pretty simple issue.
 
I mean, ChatGPT 5.2 Codex Max, Claude Opus 4.5, and Gemini 3 Pro all agree you're factually wrong on quite a number of points. The advantage of work providing a paid GitHub Copilot license is that using these advanced coding models is easy.

And, well, every YouTube video or piece of technical testing I can find shows the complete opposite as well. I've provided quite a few; curiously, you have not.

You should be a game developer; apparently Larian, CDPR, Ubisoft, Asobo, and a few others don't realize it's a simple matter of just lowering resolution to alleviate CPU bottlenecks. The Switch 2 still faces CPU bottlenecks despite having games upscaled from 360p, and that's from some of the most technically competent game studios in the world.
One could say the Switch 2 doesn't exactly have a perfectly designed CPU either, eh.
 
Yet again, you're not following the context. When a game is designed for the hardware, the limiting factor on the critical path at low resolution will be the z-buffer/stencil test clearing the last pixel to be shaded on any given untorn frame - and even more so with cascaded shadow map updates, because the PS6 Portable probably won't use RT shadows.

All the other reasons you were putting forward come down to software not being tailored to the PS6 Portable's 4 cores: leaving work on the 3 other cores that could be moved to an underutilised GPU, or subdivided, so it remains a bottleneck that blocks the primary core's flow control, or keeping work on the GPU that exceeds what the lower resolution actually needs.

You can use other studios as a strawman, but PlayStation is going to be the lead platform even more so once the PS6 Portable hits. And if the PlayStation 6 uses a low-power mode as the baseline for the main console (for non-cross-gen games), those games will scale with resolution on the PS6 Portable, as was my original point, because all the other ChatGPT reasons will already have been moved off the critical path that blocks going from 30fps to a higher frame rate (60).

But feel free to contradict yourself and continue a back and forth on this pretty simple issue.
Mate, you started it with your "lack of real first hand know how of graphics programming", which was hilarious next to your claim that "so zbuffering and stencil test has never been done on the main CPU core in 3D games or the CPU's other cores for that matter" when Quake existed. Hence, you know, my entire original point: lowering the resolution used to increase FPS by lowering the load on the CPU, and, as I pointed out, that hasn't been the case for ages. It's like you didn't even read my original post.

It's frankly simple: lowering resolution frees up GPU resources, but it doesn't free up CPU resources. If you are CPU-capped at 1080p, be it via game logic, physics, animation, etc., then lowering the resolution or upscaling from 360p ain't really going to alleviate that. In short, lowering resolution does not increase frame rate if you are CPU-limited; it can only increase frame rate if you are GPU-limited on pixel/fragment work. That is factually true. And I can bet your next rebuttal will offer zero evidence to imply otherwise.
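
To put numbers on that, a toy frame-time model - the millisecond figures are made up purely for illustration, and real frames are obviously messier than a single max():

[CODE]
#include <algorithm>
#include <cstdio>
#include <initializer_list>

// Toy model: each frame has a CPU part (game logic, physics, animation,
// draw submission) that does not depend on resolution, and a GPU part that
// scales roughly with the number of pixels shaded. The frame time is set by
// whichever side is slower. All millisecond values are hypothetical.
double FrameTimeMs(double cpuMs, double gpuMsAtNative, double pixelScale) {
    const double gpuMs = gpuMsAtNative * pixelScale;   // fewer pixels, less GPU work
    return std::max(cpuMs, gpuMs);                     // CPU cost is unchanged
}

int main() {
    const double cpuMs = 22.0;       // CPU-bound at ~45 fps (made up)
    const double gpuMs1080 = 16.0;   // GPU cost at native 1080p (made up)

    for (double scale : {1.0, 0.25, 0.11}) {           // 1080p, ~540p, ~360p
        const double ft = FrameTimeMs(cpuMs, gpuMs1080, scale);
        std::printf("pixel scale %.2f -> %.1f ms (%.0f fps)\n",
                    scale, ft, 1000.0 / ft);
    }
    // Output stays pinned at 22 ms (~45 fps): once the CPU is the slower side,
    // dropping the render resolution further buys nothing.
    return 0;
}
[/CODE]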

It's been that way, and that obvious, for ages now; I'm surprised anyone is arguing against it.
 
waiting-hurry.gif


It already is 2026 in Spain, Mark Cerny!! Your most important market!
 
One could say the Switch 2 doesn't exactly have a perfectly designed CPU either, eh.
I never claimed otherwise; the Switch 2 CPU is very limited. But it proves my point perfectly. CPU power between docked and handheld is basically identical, while GPU power is not - it's almost cut in half. Now, a game running in docked mode can face drops due to the CPU; Cyberpunk 2077 comes to mind, dropping all the way to 18fps. In docked mode the game runs at 1080p, upscaled from a DRS range of 720p-1080p. In the portable performance mode it upscales to 720p from a low of 360p via DLSS, and yet, what do you know, it still has the exact same CPU drops. Dropping the rendering resolution to almost a quarter - basically the minimum any game should render at - did nothing to alleviate the CPU bottleneck.

For the PS6 Portable, devs are just going to need to make cutbacks in the portable versions of games, as it will have half the number of cores and a much reduced frequency versus the main PS6 console. Lowering resolution is not going to help them at all on the CPU side (on the GPU, yes). Game logic and simulations would need to take a cut, and that's exactly what we see happening with the Switch 2 as well. Beyond the obvious graphical cutbacks in SW Outlaws and AC Shadows, we also see reductions in water and cloth simulation, as well as things like reduced animation effects compared to the PS5.
 
I think it's more likely for devs to take the 60 FPS settings for the PS6, cut internal res by 4x and frame rate by 2x and call it a day.
That's the most likely scenario, but I'm assuming the worst case: a very CPU-heavy, demanding game. Those should be rare, though. I think it depends a lot on what the clock speed difference between the two is.
 
Mate, you started it with your "lack of real first hand know how of graphics programming", which was hilarious next to your claim that "so zbuffering and stencil test has never been done on the main CPU core in 3D games or the CPU's other cores for that matter" when Quake existed. Hence, you know, my entire original point: lowering the resolution used to increase FPS by lowering the load on the CPU, and, as I pointed out, that hasn't been the case for ages. It's like you didn't even read my original post.

It's frankly simple: lowering resolution frees up GPU resources, but it doesn't free up CPU resources. If you are CPU-capped at 1080p, be it via game logic, physics, animation, etc., then lowering the resolution or upscaling from 360p ain't really going to alleviate that. In short, lowering resolution does not increase frame rate if you are CPU-limited; it can only increase frame rate if you are GPU-limited on pixel/fragment work. That is factually true. And I can bet your next rebuttal will offer zero evidence to imply otherwise.

It's been that way, and that obvious, for ages now; I'm surprised anyone is arguing against it.
You are moving the goalposts. How is a PS6 Portable with a native 1080p LED/OLED screen, being fed a lower render resolution that PSSR2's ML upscaling brings up to 1080p, going to be CPU-capped at native 1080p rendering? That is a strawman on top of it.

You are also going completely by PC, and as Bojji's first set of screenshots - the ones I could see in the UK - showed, there was no real change in VRAM at an artificial 50% resolution. Meaning the cascaded shadow maps, which are all part of the lower resolution freeing up the critical path, were never reduced by 50% - which you would do on a portable, because at that minification size you won't see shadow mapping issues as badly when PSSR2 is scaling up from 360p or 480p using a cascade of sparse 2K shadow maps rather than the 8K or 16K worth on the main PS6 - or none at all if doing RT/PT shadows.

So, for any modern game you believe is fully CPU-limited at low resolution - presumably not using RT/PT - try turning shadow maps to their lowest setting or off as well, and then see whether the VRAM use drops as expected - and the frame rate increases.
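
For a rough expectation of how much should drop, here's a back-of-the-envelope for cascade VRAM - the four-cascade count and 32-bit depth format are assumptions on my part, not anything confirmed for a particular engine or for PS6:

[CODE]
#include <cstdio>
#include <initializer_list>

// Rough VRAM cost of a cascaded shadow map at a given per-cascade resolution.
// Assumes 4 cascades and a 32-bit depth format; real engines vary (16-bit
// depth, different cascade counts, sparse/virtual allocation, and so on).
double CascadeVramMB(int resolution, int cascades = 4, int bytesPerTexel = 4) {
    const double texels = static_cast<double>(resolution) * resolution * cascades;
    return texels * bytesPerTexel / (1024.0 * 1024.0);
}

int main() {
    for (int res : {2048, 4096, 8192, 16384}) {
        std::printf("%5d x %-5d cascade set: ~%8.1f MB\n", res, res, CascadeVramMB(res));
    }
    // Roughly 64 MB at 2K per cascade versus ~4 GB at 16K per cascade under
    // these assumptions, which is why shadow settings can move VRAM far more
    // than the screen resolution slider does.
    return 0;
}
[/CODE]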

For PlayStation 4 or 5 ports using 6 CPU cores, I suspect they are indeed more CPU-limited by porting decisions that push IO onto those cores, but for the rest - still mainly GPU-heavy and using 2-3 CPU cores - you should expect that massively lowering the (z-buffer) resolution results in frames finishing much quicker, which then lets the primary core generate more workload for more frames per second.
 
You are moving the goalposts. How is a PS6 Portable with a native 1080p LED/OLED screen, being fed a lower render resolution that PSSR2's ML upscaling brings up to 1080p, going to be CPU-capped at native 1080p rendering? That is a strawman on top of it.

You are also going completely by PC, and as Bojji's first set of screenshots - the ones I could see in the UK - showed, there was no real change in VRAM at an artificial 50% resolution. Meaning the cascaded shadow maps, which are all part of the lower resolution freeing up the critical path, were never reduced by 50% - which you would do on a portable, because at that minification size you won't see shadow mapping issues as badly when PSSR2 is scaling up from 360p or 480p using a cascade of sparse 2K shadow maps rather than the 8K or 16K worth on the main PS6 - or none at all if doing RT/PT shadows.

So, for any modern game you believe is fully CPU-limited at low resolution - presumably not using RT/PT - try turning shadow maps to their lowest setting or off as well, and then see whether the VRAM use drops as expected - and the frame rate increases.

For PlayStation 4 or 5 ports using 6 CPU cores, I suspect they are indeed more CPU-limited by porting decisions that push IO onto those cores, but for the rest - still mainly GPU-heavy and using 2-3 CPU cores - you should expect that massively lowering the (z-buffer) resolution results in frames finishing much quicker, which then lets the primary core generate more workload for more frames per second.

VRAM going up or down is purely game-dependent; some show big differences, some don't.

Screenshots with 720p output, 1080p output (both at 100% resolution) and 4K output (not imgur):


VRAM increases with (GPU-limited) 4K output (and native res).
 
VRAM going up or down is purely game-dependent; some show big differences, some don't.

Screenshots with 720p output, 1080p output (both at 100% resolution) and 4K output (not imgur):


VRAM increases with (GPU-limited) 4K output (and native res).
Shadow map caches occupy a large amount of that base 6GB of VRAM usage. Turning shadow maps off or to low - which is all part of the effective native resolution - makes a difference in any game not blocked by some slave CPU core workload or some heavy GPU workload like RT/PT. I mean, if you are rendering at 720p, does the shadow map cache need to be more than 2K?
 
You are moving the goalposts. How is a PS6 Portable with a native 1080p LED/OLED screen, being fed a lower render resolution that PSSR2's ML upscaling brings up to 1080p, going to be CPU-capped at native 1080p rendering? That is a strawman on top of it.

You are also going completely by PC, and as Bojji's first set of screenshots - the ones I could see in the UK - showed, there was no real change in VRAM at an artificial 50% resolution. Meaning the cascaded shadow maps, which are all part of the lower resolution freeing up the critical path, were never reduced by 50% - which you would do on a portable, because at that minification size you won't see shadow mapping issues as badly when PSSR2 is scaling up from 360p or 480p using a cascade of sparse 2K shadow maps rather than the 8K or 16K worth on the main PS6 - or none at all if doing RT/PT shadows.

So, for any modern game you believe is fully CPU-limited at low resolution - presumably not using RT/PT - try turning shadow maps to their lowest setting or off as well, and then see whether the VRAM use drops as expected - and the frame rate increases.

For PlayStation 4 or 5 ports using 6 CPU cores, I suspect they are indeed more CPU-limited by porting decisions that push IO onto those cores, but for the rest - still mainly GPU-heavy and using 2-3 CPU cores - you should expect that massively lowering the (z-buffer) resolution results in frames finishing much quicker, which then lets the primary core generate more workload for more frames per second.
I'm not moving any goalposts. My very first post simply stated that lowering resolution to a massively low amount won't give you any meaningful performance back if you are already heavily CPU-limited. Which is still true.

Disabling other graphical effects (which may very well carry CPU overhead, like view distance, shadow-casting lights, or some forms of object/world detail) or trying to reduce the VRAM footprint is something completely different from what I am claiming. Stuff like that can indeed claw CPU (and GPU) performance back. But not just dropping the resolution, which is what I have been arguing all along.
 
mark-cerny-ps5.gif


He walks on the stage and announces: "First game with PSSR2 support launches in December 2026".
i have no clue what this thread is about but I have to mention this



i did play death stranding 2 on ps5 pro just a week ago and indeed noticed that famous shimmering in foliage along with certain other graphical issues like lights flickering when moving the camera. interesting stuff as this game does not even use ray tracing

there's something seriously wrong with PSSR
 
i have no clue what this thread is about but I have to mention this



i did play death stranding 2 on ps5 pro just weeks ago and indeed noticed that famous shimmering in foliage along with certain other graphical issues like lights flickering when moving the camera. interesting stuff as this game does not even use ray tracing

there's something seriously wrong with PSSR


Interesting, IQ at launch was good overall, with some slight foliage problems and aliasing in some parts of the image. It wouldn't be the first game fucked up by a PSSR "update". Fucking devs, OPTIONS are needed for this stuff. Of course I don't know if this is true or not.

Launch reconstruction had issues like that for example:

 
Interesting, IQ at launch was good overall, with some slight foliage problems and aliasing in some parts of the image. It wouldn't be the first game fucked up by a PSSR "update". Fucking devs, OPTIONS are needed for this stuff. Of course I don't know if this is true or not.

Launch reconstruction had issues like that for example:


yeah, funnily enough I didn't notice that kind of problem in the game. there's this comparison shared by that poster



it looks similar to what I've experienced at certain times
 
i have no clue what this thread is about but I have to mention this



i did play death stranding 2 on ps5 pro just a week ago and indeed noticed that famous shimmering in foliage along with certain other graphical issues like lights flickering when moving the camera. interesting stuff as this game does not even use ray tracing

there's something seriously wrong with PSSR

The level of trolling is strong there. In any case it's really hard to judge this weird short clip at 1080p... the only thing I spotted is the brighter look of the grass after the supposed update. From what I knew, this game had PSSR from launch.
In any case, it's not only ray tracing that "causes" such broken artifacts, and it's not that something in PSSR is "broken"; if I had to bet, it has more to do with the resolution of the buffers for the other graphics settings. Indirect lighting buffers tend to be at quite a bit lower resolution than the full image on console; on PC I guess that only happens for ray tracing, and not to the same degree. The infamous wobbling-shadow artifact of the PSSR upscaler is, from what I've experienced, often visible with indirect lighting coverage or contact shadows (when it's not ray traced), and PSSR struggles more with fractional-resolution buffer effects, as do most accumulation-based TAA upscalers.
Continuing to repeat that PSSR is broken without even trying to find out why such issues appear at particular moments is more hyperbole and misinformation than anything, and a generally wrong take, if I may say so.
 
...

Disabling other graphical effects (which may very well carry CPU overhead, like view distance, shadow-casting lights, or some forms of object/world detail) or trying to reduce the VRAM footprint is something completely different from what I am claiming. Stuff like that can indeed claw CPU (and GPU) performance back. But not just dropping the resolution, which is what I have been arguing all along.
Shadow map generation is pure z-buffer writes from different camera angles, and it is a function of the effective game resolution - nothing to do with direct CPU overhead; it just returns a flipped-frame confirmation quicker.


Is it okay to render at 1280x720, then have a shadow map cascade cache render 16384x16384 pixels into the z-buffer every few frames, and still say that isn't resolution- and z-buffer-overhead dependent? Even when the cache doesn't get updated with z-buffer writes, the sampler lookups and the filtering of those lookups are a hidden resolution overhead - scaling with the size of the shadow cache, much like RT/PT - compared to, say, dropping to 2048x2048, where the samplers and filtering would finish in less time.
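
Purely as a texel-counting comparison - no claim about actual GPU timings, which depend on caching, culling, and how much of each cascade gets refreshed per frame:

[CODE]
#include <cstdint>
#include <cstdio>

// Count the depth texels touched by a shadow cascade tier versus the texels
// of the main view. Only a counting exercise, not a timing claim.
int main() {
    const std::uint64_t view     = 1280ull * 720;       // main render target
    const std::uint64_t hiShadow = 16384ull * 16384;    // 16K cascade tier
    const std::uint64_t loShadow = 2048ull * 2048;      // 2K cascade tier

    std::printf("main view:       %12llu texels\n",
                static_cast<unsigned long long>(view));
    std::printf("16K shadow tier: %12llu texels (~%.0fx the view)\n",
                static_cast<unsigned long long>(hiShadow), double(hiShadow) / view);
    std::printf("2K shadow tier:  %12llu texels (~%.1fx the view)\n",
                static_cast<unsigned long long>(loShadow), double(loShadow) / view);
    return 0;
}
[/CODE]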
 
Assuming all this is true, the annoyed part shows how stupid these people are.
What do devs stand to gain from this? Can't you see most of them don't even release games in a stable state, and instead put out patches for months after release?

They are so detached from reality, it's not even funny anymore.
Indeed... And there are very few games that take advantage of the PS5... Why the rush to release the PS6? It's like the foolish decision to release the PS5 during the Covid-19 pandemic, when there was no need.
 
Indeed... And there are very few games that take advantage of the PS5... Why the rush to release the PS6? It's like the foolish decision to release the PS5 during the Covid-19 pandemic, when there was no need.
That's not true at all. Otherwise we'd see everything run at a steady 60fps on PS5.
 
It will go below 720p for sure with good ML upscaling, but it won't help performance in places where the CPU is the limit.

And that 4-core Zen 6 should be fine for PS5 ports; it will have so much higher IPC than a Zen 2 core. But vs. the base PS6? I'm not sure...
PS6 cores will be clocked higher and we'll likely have 8 of them, with 1-1.5 or so cores dedicated to the OS, while the PS6 Portable has two extra Zen 6 LP cores dedicated to the OS, according to rumours. Hopefully they thought it through; I do not want a repeat of the Series S situation.
 
PS6 cores will be clocked higher and we'll likely have 8 of them, with 1-1.5 or so cores dedicated to the OS, while the PS6 Portable has two extra Zen 6 LP cores dedicated to the OS, according to rumours. Hopefully they thought it through; I do not want a repeat of the Series S situation.

If the portable has enough memory, it won't be a Series S situation in this respect. But the rumored GPU specs point to a MASSIVE 5-6x difference (XSX was 3x faster than XSS):

5b71tcOkhi6Kpgvh.jpg


As KeplerL2 has said a few times already, a 2x FPS drop (from 60 to 30) plus a 3x/4x resolution drop can potentially match that difference.
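
The arithmetic behind that, under the usual simplification that GPU cost scales roughly linearly with pixels per second (which it rarely does exactly):

[CODE]
#include <cstdio>
#include <initializer_list>

// If GPU cost scaled perfectly with pixels per second, the gap you can absorb
// is (resolution drop) x (frame-rate drop). Real scaling is worse than linear
// for some passes, so treat these as optimistic upper bounds.
int main() {
    const double fpsDrop = 60.0 / 30.0;   // 2x
    for (double resDrop : {3.0, 4.0}) {
        std::printf("%.0fx res drop + %.0fx fps drop -> ~%.0fx less GPU work per second\n",
                    resDrop, fpsDrop, resDrop * fpsDrop);
    }
    // Roughly 6x to 8x, in the ballpark of the rumored 5-6x GPU gap.
    return 0;
}
[/CODE]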
 
I've got no issue with using AIs like Copilot and ChatGPT in general. But you proxying my point through one - while clearly not understanding the significance of the advent of hardware-accelerated z/stencil buffering for 3D graphics, how the last fragment to be shaded in a game frame can't complete - without tearing - unless it passes that test, and how that test becomes the main delay on the critical path at low resolution once shader units and ROPs are abundant enough to absorb the fragment shading - is hardly the same thing as me conducting that discussion with ChatGPT myself and correcting it when it misreads the context, forgets that OpenGL and Vulkan are both client/server models, and forgets that client command submission is single-threaded on the main CPU core.

It didn't even seem from your response like ChatGPT accepted that the primary CPU core and the GPU are in lockstep for interactive game logic/simulation - the GPU can't predict the future before the gamer has interacted with the presently rendered frame. So if you think that wall of info covered up the lack of real game rendering knowledge in your previous comment, then whatever.

But it doesn't change the reality that a v-synced frame in a game can't correctly finish rendering - and let the CPU logic advance - until the very last fragment from the projected geometry has passed or failed its z-buffer/stencil test. So at low resolution - as the pixel count tends towards zero - the z-buffer/stencil work becomes the most important constraint on the critical path, even when combined with BVH hidden surface removal bringing overdraw down to an optimal one and a half per pixel.
A simulation, if completely decoupled from the GPU rendering, could skip ahead on the CPU side (although it would not help the frame rate). But you are right that if the game is processing some logic (think physics) on the GPU, then it needs to wait for it, or is at least limited in how far and how well it can race ahead. BVH updates, if CPU-driven, could also be queued, spending some memory to let the simulation race ahead.
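
For reference, the usual shape of a decoupled, fixed-timestep simulation loop is roughly this - a generic sketch with placeholder functions, not anyone's actual engine:

[CODE]
#include <chrono>

// Classic fixed-timestep loop: the simulation advances in fixed dt steps and
// can run several steps per rendered frame, but it still only consumes input
// that has actually arrived - it cannot simulate the player's future actions.
struct GameState { /* positions, velocities, and so on */ };

void PollInput() {}                             // placeholder
void Simulate(GameState&, double /*dt*/) {}     // placeholder physics/logic step
void Render(const GameState&) {}                // placeholder: submit frame to GPU

void RunLoop(GameState& state, bool& running) {
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 60.0;               // fixed simulation step (assumption)
    double accumulator = 0.0;
    auto previous = clock::now();

    while (running) {
        const auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        PollInput();
        while (accumulator >= dt) {             // catch up: several steps per frame
            Simulate(state, dt);
            accumulator -= dt;
        }
        Render(state);                          // render rate decoupled from sim rate
    }
}

int main() {
    GameState state;
    bool running = false;   // a real game would set this true and flip it on quit
    RunLoop(state, running);
    return 0;
}
[/CODE]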
 
If the portable has enough memory, it won't be a Series S situation in this respect. But the rumored GPU specs point to a MASSIVE 5-6x difference (XSX was 3x faster than XSS):

5b71tcOkhi6Kpgvh.jpg


As KeplerL2 has said a few times already, a 2x FPS drop (from 60 to 30) plus a 3x/4x resolution drop can potentially match that difference.
As you said, I think the memory setup hindered the XSS more than the raw performance numbers would suggest.

I think that between resolution drops and effects quality reductions there is a lot they can do to minimise the FPS hit. They can clock it higher when docked (people will dock it), and they can offer VRR (and please, OLED) in portable mode too, encouraging devs to support LFC.
FPS staying mostly in the 45-50 range, with some dips that LFC would catch, would be more than smooth enough for a PS6 experience on the go.

I do think that as more and more games start shipping lighting solutions like those in Indiana Jones and DOOM: The Dark Ages, or support RT/PT, there is a chance we get lower-quality, somewhat noisier RT on the portable (despite whatever RT reconstruction/denoising PSSR 2/3 will have for the console) and PT on the main console. The main console is likely to have a high-frame-rate mode (like today's 120Hz modes) with basic RT or rasterised lighting if the higher-quality mode has PT, for example.

My concern was more on the CPU side, but I think we can be optimistic. So far, Cerny's team has designed wonderful and pragmatic HW (the pricing of the PS5 Pro is not his team's decision; I doubt it costs them much, much more than the regular PS5 to manufacture).
 

I had to check it myself and yeah, shimmer in foliage and other fine detail is there. Hoping they will implement PSSR2 when it is available.
 
A simulation, if completely decoupled from the GPU rendering, could skip ahead on the CPU side (although it would not help the frame rate). But you are right that if the game is processing some logic (think physics) on the GPU, then it needs to wait for it, or is at least limited in how far and how well it can race ahead. BVH updates, if CPU-driven, could also be queued, spending some memory to let the simulation race ahead.
A simulation like collision detection also can't skip ahead too much, because you could have objects moving in opposite directions that would appear to randomly change direction if the sampling rate (the frame rate, in Nyquist terms) is too low to capture the collision, causing feedback coherence issues. That adds a further constraint on why simulations typically operate in lockstep between the CPU and GPU at a general level.
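
A tiny illustration of that under-sampling problem - toy numbers only; real engines use substepping or continuous collision detection precisely to avoid it:

[CODE]
#include <cmath>
#include <cstdio>

// Two points moving toward each other on a line. With a large timestep the
// discrete samples can jump straight past the moment of contact ("tunneling"),
// so the collision is never observed; a smaller step catches it.
bool CollisionObserved(double dt, double duration) {
    double a = 0.0, b = 9.0;              // start positions (toy values)
    const double va = 5.0, vb = -5.0;     // velocities toward each other
    for (double t = 0.0; t < duration; t += dt) {
        a += va * dt;
        b += vb * dt;
        if (std::fabs(a - b) < 0.5) return true;   // overlap threshold
    }
    return false;
}

int main() {
    std::printf("dt = 1/60: collision observed = %d\n", CollisionObserved(1.0 / 60.0, 2.0));
    std::printf("dt = 1/2:  collision observed = %d\n", CollisionObserved(0.5, 2.0));
    return 0;
}
[/CODE]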
 
I think it's more likely for devs to take the 60 FPS settings for the PS6, cut internal res by 4x and frame rate by 2x and call it a day.

And if PS6 games are targeting 1080p and 30fps?

With improved upscaling, developers don't need to target 4K, and next-gen features such as path tracing are brutal on the GPU; even an RTX 4080 can't hit 60fps at 1080p in path-traced games.
 
And if PS6 games are targeting 1080p and 30fps?

With improved upscaling, developers don't need to target 4K, and next-gen features such as path tracing are brutal on the GPU; even an RTX 4080 can't hit 60fps at 1080p in path-traced games.
Not one game will target 30fps, just like PS5 games right now. Resolution, like you said, is irrelevant with AI upscaling.
 
Not one game will target 30fps, just like PS5 games right now. Resolution, like you said, is irrelevant with AI upscaling.

There are diminishing returns for effective upscaling with lower output resolutions, whilst some effects have costs not tied to resolution. If PS6 is heavily using AI upscaling, it will be to the detriment of the portable.

An ambitious PS6 game targeting 30fps would be a non-starter on the Portable thanks to the gulf in CPU and GPU power. The Portable would mostly be carried by the cross-gen period.
 
A simulation, if completely decoupled from the GPU rendering, could skip ahead on the CPU side (although it would not help the frame rate). But you are right that if the game is processing some logic (think physics) on the GPU, then it needs to wait for it, or is at least limited in how far and how well it can race ahead. BVH updates, if CPU-driven, could also be queued, spending some memory to let the simulation race ahead.
Like you, I'm optimistic that the PS6P will be fine and that the missteps Microsoft took with the Series S will have been thoroughly addressed by Cerny and his team.

In reality, a portable upscaling to a 1080p screen should not face the same type of issues as one that upscales to 4K, particularly if the RAM size is the same as in the PS6.

This is not to say the PS6P won't face any issues; every machine has limitations, and nothing is perfect. However, just as PC developers have many hardware configurations to cater for with their games, developers in the PlayStation ecosystem initially only have 2 machines for the next generation, plus the 2 from the current generation.
 
PS6 cores will be clocked higher and we'll likely have 8 of them, with 1-1.5 or so cores dedicated to the OS, while the PS6 Portable has two extra Zen 6 LP cores dedicated to the OS, according to rumours. Hopefully they thought it through; I do not want a repeat of the Series S situation.
The Series S issue was mainly the RAM setup. RAM pools need to match, or you complicate things on consoles.
 
The Series S issue was mainly the RAM setup. RAM pools need to match, or you complicate things on consoles.

I feel like making up for a RAM deficit would be easier than making up for a CPU deficit? For the Series S, most developers lower texture settings and mesh quality, and remove memory-heavy ray tracing effects. For a CPU deficit, there is often no option but to change gameplay and simulation elements.

The GPU in the PS6 Handheld is rumoured to be 1/6 of the PS6 console's - that is a performance difference you can't absorb just by changing resolution, especially for games with path tracing.
 
I feel like making up for a RAM deficit would be easier than making up for a CPU deficit? For the Series S, most developers lower texture settings and mesh quality, and remove memory-heavy ray tracing effects. For a CPU deficit, there is often no option but to change gameplay and simulation elements.

The GPU in the PS6 Handheld is rumoured to be 1/6 of the PS6 console's - that is a performance difference you can't absorb just by changing resolution, especially for games with path tracing.
The issue was RAM allocation for game design.

This is why there are far fewer issues porting games to devices with a matching 16GB of RAM, even on more inferior CPUs/GPUs (handhelds).
 
The issue was RAM allocation for game design.

This is why there are far fewer issues porting games to devices with a matching 16GB of RAM, even on more inferior CPUs/GPUs (handhelds).

There are several games on the Series S that don't run well on handhelds, Space Marine 2 being chief among them. CPU/GPU power matters.
 
There are several games on the Series S that don't run well on handhelds, Space Marine 2 being chief among them. CPU/GPU power matters.
Devs have already come out and said RAM allocation is the issue.

It's easier to scale back on graphical effects and so on when the RAM is one matching pool than it is when you have to design your game around different-sized pools.

Especially on consoles, with their very low-level abstraction.

It's why they struggled with co-op in BG3 on the S (which caused a delay and required a special exception from MS), while it worked with no issues on the weaker Steam Deck on day 1.

It's not just about how games perform (frame rates and graphical effects); it's about getting games "feature complete" without extra development time, since a smaller memory pool creates hurdles (and major delays, or content cuts via game design).
 