This is comparing the CPU in that PC handheld with a Zen 2 CPU at a similar frequency to the PS5 SoC (a bit higher, but close enough).
Still, I would want to see benchmarks using resolutions that make sense for each device before taking your "faster than 90% of Steam PCs" comment to heart without many pinches of salt.
Geekbench is a terrible CPU benchmark, especially for comparing to gaming performance.
That thing pretty much only stresses the CPU's backend, but most programs and games stress the front-end and caches.
Although AMD has made improvements to the backend with newer Zen CPUs, most of the improvements have been in the front-end.
When the Portable has crossplay between my console and this device I'll consider buying one.
If I need to buy games twice and there's no cross save, they can shove it into the grave next to the PSP and Vita.
I presume by "my console" you mean a PS4/5/6 console. In which case why wouldn't it support crossplay and cross saves? There's zero reason to believe you would need to buy games twice, unless you're deciding to buy physical for the console, I suppose
That's why that portable idea would put Sony between a rock and a hard place. If it is a PS6 but less powerful than a PS5 (which it should be, because you don't negotiate much with costs, and certainly not with thermodynamics), then Sony would have no excuse not to just release ALL games on PS5 as well, and in that case, why would I buy a home PS6 which is just going to be a PS5 Ultra Pro?
And if it doesn't receive all of the PS6 games, then it is not a PS6 and why would I buy such a gimped console?
And if the hook is to play PS4-PS5 games, then it is in essence a retro console and that's a different market.
Some comments in this thread are funny. The answer isn't just sticking the biggest CPU possible into everything!
Especially not a console-type device where form-factor and power consumption/heat emission are crucial concerns, to say nothing of price point.
Above all else when designing a product, every decision needs to be justified in terms of cost to benefit. You could make the best-performing and most technologically advanced hardware in existence, but if its price point reduces its addressable market below what's acceptable... what's the point?
Just making tech-heads happy isn't a viable business plan!
If everything really is cross-generation (I am not against the idea), why even have generations? Unless by generations they mean shifting to a business model inspired by mobile phones, with soft transitions. But I am really, really not sure that the medium is ready for that, whether in terms of hardware, software, or acceptance by consumers. I am not in charge of Sony and they surely know things I don't, but I do believe that both Nintendo and Sony should remain in their respective lanes, because they have a good thing going and virtually no competition.
It's a transition to an ecosystem business model rather than the traditional generation-based model.
That is happening regardless of what Sony and Nintendo wish.
No one is currently in a position to tell publishers to simply drop support of current consoles to start making games only for the new box that starts from a zero installed base.
Development costs make that impossible.
Sony itself didn't do it with the PS5 initially, and it took a few years before all their first-party games were developed for PS5 only.
It will be worse next gen; the whole generation risks being fully cross-gen until the late years.
So platform holders will have to think in terms of an ecosystem, that is, people being active and buying games and services regardless of the particular hardware platform.
There is a clear downside to that strategy: adoption of the new box will be slower than in the past, because there won't be any urge to upgrade outside of fans and people wanting the latest tech. That will be further cemented by rising hardware prices; we're risking another price hike next year on 5-6 year old devices due to the DRAM situation, after an already not particularly strong 2025 holiday season for price deals.
It is what it is; there will definitely be challenges.
I presume by "my console" you mean a PS4/5/6 console. In which case why wouldn't it support crossplay and cross saves? There's zero reason to believe you would need to buy games twice, unless you're deciding to buy physical for the console, I suppose
With a CPU three times faster than the PS5's CPU, and costing as much as the PS5. On PS5 the decompression costs the CPU literally nothing; on PC it costs CPU cycles.
So why doesn't every game load like that, both on consoles and PC? It's all about game design and developer skill: Ghost of Tsushima has loading times of a few seconds on a fucking 1.6GHz Jaguar and an HDD, while older Battlefield games have long load times even on fast NVMe drives.
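The CPU cost of decompression on PC is easy to see for yourself. A minimal sketch, using Python's zlib as a stand-in for the Kraken/Oodle-style codecs games actually use (the 16 MB payload and compression level are arbitrary choices for illustration):

```python
import time
import zlib

# Simulate a game asset: 16 MB of repetitive, compressible data.
payload = (b"game asset data " * 64) * 16384  # 16 MiB
compressed = zlib.compress(payload, 6)

# On PC this decompression burns real CPU cycles; on PS5 an equivalent
# workload is handled by a dedicated hardware decompression block.
start = time.perf_counter()
restored = zlib.decompress(compressed)
elapsed_ms = (time.perf_counter() - start) * 1000

assert restored == payload
print(f"{len(payload) / 2**20:.0f} MB decompressed in {elapsed_ms:.1f} ms of CPU time")
```

Scale that to the gigabytes streamed during a loading screen or level transition and the difference between "free" hardware decompression and CPU-side decompression becomes very visible.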
The low power mode is there to give the handheld a way to be backwards compatible, because it doesn't have enough raw power to actually run PS5 games normally.
It's like how the Xbox Series S isn't powerful enough to run Xbox One X versions of games, so it runs base Xbox One versions.
It will have more modern hardware that outclasses the PS5 in some ways, like better RT hardware, ML acceleration, mesh shaders, etc.
So games actually optimised for it, true native apps rather than back-compat apps, will get more out of the hardware than those low power modes.
But it will indeed keep the lower-end target low enough for the PS5 to be pretty easy to port to for a very long time.
Which is why I predict that the next cross-gen period is gonna be essentially endless. The PS5 will be the Xbox Series S of the PS6, in a way.
With the distinction of not having the memory capacity issues that the Series S had. I think that will be key in making the portable work alongside the PS6.
But it will have CPU bottleneck issues in its place, courtesy of 4 underclocked cores, which will make it very interesting to see how it scales with games that don't hit high framerates on the main consoles.
And there aren't any "extreme" bottlenecks like the Jaguar CPUs from last gen. Despite those, we still had a very long cross-gen period. Next gen does not appear to have huge leaps like Zen 2 and NVMe this gen, unless they figure out how to run LLMs internally. Even then, the differences in hardware throughput will be the smallest ever, particularly when we have the PS5 Pro as well.
True, that's going to be quite a challenge. I think that's why Sony is pushing devs to start working on "low power mode" using 8 threads right now, creating a consistent workflow for 8-thread CPUs. We'll see how it impacts development, but you're right, it could be a bottleneck.
It is all relative to what the native resolution is on the portable. The PS Vita demonstrated that qHD at that screen size gave superior IQ to 720p on a TV from the PS3.
Low resolution alleviates CPU bottlenecks. Just keep in mind that some high frame-rate games of the past ran on PC CPUs as slow as 120MHz, and a fraction of that on consoles, so those "underclocked cores" will in fact be overclocked relative to their workload, with PSSR and the GPU doing the heavy lifting to target 1080p output.
Not true. Try running Quake 3 on a Raspberry Pi 400. As you drop the resolution you gain frame-rate, or you can overclock and gain a smaller frame-rate uplift via the GPU. IIRC I had it running the demo video at 45fps at 512x384 a year ago, up from 17fps at 1080p at a 1.8GHz clock (the power adapter was only 9W rather than 10+W for the full clock or overclocked power draw), and I'm pretty sure that if I went to 320x240 via custom ini edits it would hit 60fps.
Resolution in that game had to scale with something else. In the vast majority of modern games resolution only affects the GPU; you can be bottlenecked by the CPU to, say, 55fps at both 720p and 4K.
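That behaviour falls out of a simple frame-time model: with the CPU and GPU working in parallel, the frame takes as long as the slower of the two stages. A toy sketch, where the millisecond figures are invented for illustration:

```python
def fps(cpu_ms, gpu_ms_per_mpixel, width, height):
    """Frame rate when CPU and GPU stages overlap: the slower stage sets the pace."""
    mpixels = width * height / 1e6
    frame_ms = max(cpu_ms, gpu_ms_per_mpixel * mpixels)
    return 1000.0 / frame_ms

# Hypothetical game: 18 ms of CPU work per frame, 4 ms of GPU work per megapixel.
# Dropping from 4K to 1080p helps; dropping further to 720p changes nothing,
# because the ~56 fps CPU cap is already the limit.
for w, h in [(3840, 2160), (1920, 1080), (1280, 720)]:
    print(f"{w}x{h}: {fps(18.0, 4.0, w, h):.0f} fps")
```

Once the GPU stage is faster than the CPU stage, further resolution drops buy nothing, which is exactly the "55fps at both 720p and 4K" situation.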
You need to do better than drop people a 34-minute video of your proxy opinion - which I haven't watched - but let me simplify.
Firstly, the person I was responding to, SABRE220, was already saying the PS6 portable will be CPU limited with PS6 games, so you can't have it both ways - not CPU and not GPU limited - with regards to that opening comment.
Secondly, the limiting factor with framerate which "typically" ends up tying all games to resolution when CPU (single core) limited is the z-buffer and stencil buffer testing on the GPU.
At a low enough resolution the CPU starts getting a completion response from the GPU at such a quick rate - with z-buffer/stencil testing being the remaining main GPU wait/delay for the CPU - that the CPU's normal inefficiency at utilising its resources while waiting on the GPU frame-flip, and its normal lack of reactiveness after waiting (unlike, say, a 7800X3D), no longer becomes an issue. That allows the main CPU core to generate far more CPU->GPU workloads per second, until a higher v-sync, GPU rendering, northbridge bandwidth, RAM, or main CPU core performance becomes the limiting factor.
Now you might say that with all the shaders and geometry and everything else the GPU does, it can still bottleneck framerate below 60fps at low resolutions like 320x240. In theory it could, and any offscreen rendering at a resolution independent of the framebuffer, or RT, certainly would. But outside those situations, the graphics vector, geometry and fragment hardware pipelines have been intrinsically built with hierarchical early-outs for any redundant work they can escape doing at all, or more than once. And since a low resolution framebuffer generates so few fragments from the projected geometry, even just to pass the z-buffer or stencil test, most of the work for the GPU drops exponentially compared to, say, a 1080p native resolution.
So at low resolution the CPU wait time is being lowered, effectively giving the same main CPU core a multi-fold increase in utilisation/efficiency - assuming the game logic is fully unlocked and not using a CPU update cap or stalled by multi-core workload waiting conditions.
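The two competing mental models in this back-and-forth can be sketched numerically: one where the CPU serialises against the GPU each frame (so shrinking GPU work always raises fps), and one where the stages overlap (so shrinking GPU work stops helping once the CPU is the slower stage). The millisecond figures below are invented for illustration:

```python
def lockstep_fps(cpu_ms, gpu_ms):
    """Serial model: the CPU waits for the GPU every frame."""
    return 1000.0 / (cpu_ms + gpu_ms)

def pipelined_fps(cpu_ms, gpu_ms):
    """Overlapped model: CPU and GPU run concurrently; the slower stage wins."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# 12 ms of CPU work per frame; GPU work shrinking as resolution drops.
for gpu_ms in (16.0, 8.0, 2.0):
    print(f"gpu {gpu_ms:4.1f} ms -> "
          f"lockstep {lockstep_fps(12.0, gpu_ms):5.1f} fps, "
          f"pipelined {pipelined_fps(12.0, gpu_ms):5.1f} fps")
```

In the serial model every resolution drop improves fps; in the overlapped model the gains flatten at the CPU's own limit. Which model better describes a given engine is precisely what's being disputed in this thread.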
If the PS6 portable has the same power as the GPD Win 5/One Player Apex, with a very strong battery and a 1080p screen, it will be awesome. It will play most PS5 games at 1080p 60fps. For example, I have the GPD Win 5; it plays FF7 Rebirth at 1080p 60fps with mixed settings, consistently.
Back in the day that used to be the case, but with modern game engines it won't help much, especially not with UE5. These days the GPU and CPU are decoupled and pipelined; stuff like Z-testing used to be done by the CPU but is now GPU-exclusive, for example. And the CPU work now uses deep render queues, frame buffering, and asynchronous compute so it doesn't sit idle; it's always kept busy doing useful work. As long as the modern game engine uses multithreading efficiently and leverages async (for example), then if you are CPU limited, lowering the resolution even to sub-720p levels won't really give you any meaningful performance back. The PS6 handheld will only have 4c/8t dedicated to games, so maybe it could help a bit, but I wouldn't exactly expect miracles.
Plenty of evidence to support this on YouTube. Or you can test it yourself: disable some cores, lower the CPU clock, load up a UE5 title, and see if lowering the resolution past a certain point (to make sure you are fully CPU limited) does anything meaningful.
That comment exposes a lack of real first-hand knowledge of graphics programming.
Hidden surface removal was done on the CPU via simple testing, or precomputed for static objects via BVH structures, but since the N64, consoles have had z-buffer/stencil buffer capability in the GPU, so z-buffering and stencil testing have never been done on the main CPU core in 3D games, nor on the CPU's other cores for that matter.
So no, the z-buffer is still the wait-state limiting factor for a CPU at lower resolution that ties the CPU and GPU simulation together. For all the pre-calculation of frames, the simulation still runs under the control of the game logic, which runs on the primary CPU core. Just because games on PC aren't CPU limited at low resolution doesn't mean they would be on a console, where the multi-core jobs would always be designed to let the primary core run at maximum efficiency.
- Run simulation, animation, culling, physics, audio, streaming across many threads
- Still have a main thread, yes, but it is often command orchestration, not "all logic"

And critically:
- The main thread is not blocked by z-buffer operations
- It is limited by its own workload and synchronization, not GPU depth tests

The correct model (short & precise):
- Z-buffering is GPU-internal and asynchronous
- Lowering resolution removes GPU pressure
- If FPS doesn't increase → CPU or submission bottleneck

CPU bottlenecks come from:
- Simulation
- Draw call submission
- State changes
- Synchronization
- Engine architecture

Not from depth testing.
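That "does lowering resolution raise fps?" heuristic can be written down directly: measure fps at two resolutions and classify the limiting stage. A rough sketch, where the 10% tolerance and the sample fps numbers are arbitrary:

```python
def classify_bottleneck(fps_high_res, fps_low_res, tolerance=0.10):
    """Classify a bottleneck by how fps responds to a large resolution drop.

    If fps barely moves when resolution falls, the GPU wasn't the limiting
    stage, which points at the CPU or command submission instead.
    """
    gain = (fps_low_res - fps_high_res) / fps_high_res
    return "gpu-bound" if gain > tolerance else "cpu- or submission-bound"

print(classify_bottleneck(30.0, 58.0))  # big gain from dropping resolution
print(classify_bottleneck(55.0, 56.0))  # fps pinned regardless of resolution
```

It's the same test as disabling cores and lowering the resolution slider in a UE5 title: if the frame rate doesn't move, the GPU wasn't the thing holding it back.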
We could probably do a whole back and forth via ChatGPT, but we can make this rather easy: I can provide proof that lowering resolution on modern game engines provides minimal to no performance gain when CPU limited. Can you provide counter-proof?
[Link: "Starfield CPU Benchmarks & Bottlenecks: Intel vs. AMD Comparison" (September 4, 2023) - benchmarks of over a dozen CPUs in Starfield, exploring GPU and RAM bottlenecks]
[Link: "Depth In The Logical Rendering Pipeline" - article covering Early-Z, when it must be disabled (discard/alpha test, pixel shader depth export, UAVs), and forcing Early-Z]
In THEORY, in some games some things can scale with resolution and lower CPU usage (Crysis 2007 is one such example), but for most modern games this is not the case.
I see. If that's the case, I do pray all developers optimize their games for portable mode. I'm ok with 900p as long as it plays all games at 60fps (except for PS6 games).
On PC, that's because games are typically console ports - although Crysis 2007 certainly wasn't - but this whole discussion was driven by talk of a PS6 Portable, with 4 lower-clocked cores than the console PS6, having games that wouldn't play on the portable - even with a lower base resolution for PSSR - because the games needed the CPU of the console PS6.
That isn't going to happen, because the engines are tailored to the hardware, and the simulation won't artificially block a new set of draw calls on the CPU from going beyond 45fps just because 3 cores of the CPU haven't finished a 1/45th of a second workload that could have been split or moved to the GPU.
Even in your example you aren't getting a true 50% (960x540?) resolution where the GPU has actually switched modes and the framebuffer resources change completely; you are still just moving an in-engine slider that has resourced a 1080p viewport and set a request to render to 50% resolution back buffers, then rescale to 1080p. When you set 50% resolution, does the VRAM requirement drop by ~75%, or is it still using the same textures and same-sized back buffers, just rendering to 1/4 of the pixels in the final viewport before scaling back to 1080p?
I can't see any of those images in the UK, and just to be clear, the input resolutions feeding PSSR's tensors are already much lower than 720p, so the PS6 portable won't be rendering anything like that high to feed PSSR.
Expect sub-720p reconstructed to 1080p, judging from the performance of certain modern games on PS5. On the plus side, PSSR2/FSR4 should be more than capable of native or better-than-native results, so that won't matter for most games. Game engines that don't play nice with reconstruction will look ugly, but that's the case with any reasonable handheld (sub-700g with at least an hour of battery life).
I've got no issue with using AIs like Copilot and ChatGPT in general. But you're proxying my point without advocating for it, while obviously not understanding the significance of the advent of hardware-accelerated z/stencil buffering to 3D graphics, how the last fragment to be shaded in a game frame can't be completed - without tearing - unless it passes that test, and how that becomes the main delay in the critical path at low resolution when shader units and ROPs are in abundance to accelerate the fragment shaders. That is hardly the same thing as me conducting the same discussion with ChatGPT myself and correcting it when it misunderstands context, forgets that OpenGL/Vulkan are both client/server models, and forgets that client submission is single-threaded on the main CPU core.
It didn't even seem from your response like ChatGPT accepted that the CPU primary core and GPU are in lockstep for interactive game logic/simulation - the GPU can't predict the future before the gamer has interacted with the present rendered frame. So if you think that wall of info covered up the lack of real game rendering knowledge in your previous comment, then whatever.
But it doesn't change the reality that a game can't correctly finish rendering a v-synced frame, and advance the CPU logic, until the very last fragment from the projected geometry passes or fails its z-buffer/stencil test. So at low resolution - as we tend towards zero - the z-buffer/stencilling becomes the most important constraint on the critical path, even when combined with BVH hidden surface removal lowering the overdraw per pixel to an optimal one and a half.