
PS5 Pro Specs Leak Is Real, Releasing Holiday 2024 (Insider Gaming)

Mr.Phoenix

Member
The problem is that, compared to even comparable RDNA3 GPUs, there is a bottleneck preventing the compute power from translating well into real-world performance; the 6.7 additional TFLOPs, especially on RDNA 3.5+, should not result in only a 45% gain in standard workloads.
Yup. However, it's yet to be seen how, or if, devs ever use the dual-issue thing. That could change those estimates significantly. But the leaks do currently still leave a lot open to interpretation, hence why I am using ballparks. It's not going to be like a 4070, but it's going to be more powerful than a 4060. Its raster performance (for as much of it as we will see) will put it in the range of a 7700 XT and even a 6800 XT. But are we really going to see that when games will be locked to 60fps in most cases, after whatever the devs did to get there?
It's actually slightly behind the 7700 XT, which is 35.17 TFLOPs dual-issue (17.59 single-issue). This would make the 7700 XT 5% faster and the 7800 XT over 25% faster.
True, I used the 6800 XT in my example to show the clear disparity between AMD TFLOPs and Nvidia TFLOPs, especially when RT-based workloads come into play.
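To put rough numbers on the exchange above, here is a quick back-of-the-envelope check in Python. The TFLOPs figures are the leaked/quoted ones from this thread, so treat them as assumptions rather than confirmed specs.

```python
# Back-of-the-envelope check of the TFLOPs figures quoted in this thread
# (leaked/rumored numbers, treated here as assumptions).

def pct_faster(a, b):
    """How much faster a is than b, in percent."""
    return (a / b - 1) * 100

ps5_tf      = 10.28            # base PS5 FP32 TFLOPs
pro_tf_dual = 33.5             # rumored PS5 Pro dual-issue figure
pro_tf      = pro_tf_dual / 2  # "classic" single-issue equivalent, ~16.75
rx7700xt_tf = 35.17 / 2        # 7700 XT dual-issue figure halved, ~17.59

print(f"Pro vs base PS5: {pct_faster(pro_tf, ps5_tf):.0f}% more TFLOPs")  # ~63%
print(f"7700 XT vs Pro:  {pct_faster(rx7700xt_tf, pro_tf):.0f}% faster")  # ~5%
# The leaked ~45% real-world raster uplift sits well below that ~63% paper
# uplift, which is exactly the gap being discussed above.
```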
 

Bojji

Member
So comparison capability is limited. In my mind, I'm seeing two entirely different environments, which isn't ideal when trying to compare GPUs. So there are definitely assumptions being made that the theoretical will translate 1:1 to actual benchmarks. I'm not saying you are wrong. DF analysis may very well prove you right. I do think we should wait for those results, though.

That's true, we will see the real differences when DF is able to do some comparisons; at this point everything is speculation.

An API is a software layer; it's there to utilize the hardware. I think there is a good chance that the PS API is more efficient than DX12, but I doubt the difference is more than a few percent, and so far Digital Foundry's tests (PS5 vs. PC hardware) confirm that.

The biggest difference between DX11 and DX12 is in CPU-limited scenarios; when the GPU can run at full speed, performance is usually very close, or DX11 even outperforms DX12 for some reason (drivers play a bigger role here, and Nvidia is known to optimize games for developers in the drivers).
 
Isn’t Sony’s biggest concern profit? Isn’t the manufacture of this beast going to severely limit profit due to the price of components?
Most components are now cheaper than when the normal PS5 was released, so how is this an issue for Sony? The margin on a "high-end" product will be higher than what they get for a normal PS5.
 

Gaiff

SBI’s Resident Gaslighter
Yeah, that's the spirit, pump him for more information. My cover unfortunately was stupidly blown by my own overexuberance.
That reminds me of that time way back when, I think 2013. It was a few months before AMD released their R9 290/X. I got lucky enough to be shown a presentation of the card's performance behind closed doors at a little private gathering organized by an AMD engineer and exec (forgot their names). They were demoing benchmarks, and I recall we had BioShock Infinite and the Tomb Raider reboot. The only thing we had was the final fps results. Not the clocks, memory speed, power draw, nothing. Obviously, the prices were also still a secret. I tried getting them to indirectly spill the beans by asking how much the whole computer they set up cost. The engineer opened his mouth but at the last second, he went, "Wait a minute! Nice try." The components of the rig were easy enough to figure out, so we could have just subtracted the card from the total cost of the computer to know its price in advance.

Too bad he didn't fall for it lol.
 

geary

Member
I'm failing to parse what the question here is - can you elaborate on what you are actually asking?
Frame Gen and Temporal Image Reconstruction are both types of signal processing, and both involve reprojecting pixels based on motion inputs to generate final results - so strictly speaking, they share concepts, yes? 🤷‍♀️
I'm not a tech guy, so I'm gonna ask in layman's terms... Will the new upscaling solution be similar to DLSS2 or DLSS3 (which is basically DLSS2 + FG)?
 

Bojji

Member
I'm not a tech guy, so I'm gonna ask in layman's terms... Will the new upscaling solution be similar to DLSS2 or DLSS3 (which is basically DLSS2 + FG)?

It's similar to DLSS2 and XeSS: reconstruction with AI.

They can use FSR3's frame-gen software or develop their own to enhance that with frame generation.
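To make the DLSS2-vs-DLSS3 distinction above concrete, here is a purely conceptual Python sketch (not any vendor's actual API): reconstruction upscales every rendered frame, while frame generation optionally inserts a synthesized frame between reconstructed ones.

```python
# Conceptual sketch only (not any vendor's actual API): an AI upscaler in the
# DLSS2 / XeSS / PSSR mold reconstructs every rendered frame to a higher
# resolution; "DLSS3-style" output is the same thing plus frame generation,
# which inserts a synthesized frame between each pair of reconstructed frames.

def reconstruct(frame):
    """Stand-in for AI reconstruction: low-res frame in, high-res frame out."""
    return f"hi-res({frame})"

def interpolate(prev_frame, next_frame):
    """Stand-in for frame generation: synthesizes a frame between two real ones."""
    return f"generated({prev_frame}|{next_frame})"

def present(rendered_frames, use_frame_gen=False):
    upscaled = [reconstruct(f) for f in rendered_frames]
    output = []
    for i, frame in enumerate(upscaled):
        output.append(frame)
        if use_frame_gen and i + 1 < len(upscaled):
            output.append(interpolate(frame, upscaled[i + 1]))
    return output

print(present(["f0", "f1", "f2"]))                      # reconstruction only ("DLSS2-like")
print(present(["f0", "f1", "f2"], use_frame_gen=True))  # reconstruction + FG ("DLSS3-like")
```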
 

SlimySnake

Flashless at the Golden Globes
Can you entertain another hypothetical question for me, just to clear things up? Say a PS5 game has a Performance mode with unlocked FPS that currently averages 70fps. Further, it has an RT performance mode that doesn't hold a perfect 60, but averages 55fps. If there is no CPU bottleneck, what average frame rates would you expect for both modes in this scenario on the Pro?
Going up above 60 fps is going to run into CPU bottlenecks even with a 4.5 GHz CPU clock upgrade.

But I can just talk about two games on PS5 that drop to 720p to maintain 60 fps.

First is FF16, which has no ray tracing and needs to be locked to 720p to hit 60 fps, and it drops to 55 fps in some rare scenarios. Since the GPU bump is only 45%, that would let them push the resolution to 900p. Not the biggest of jumps. Now, I believe the game is bottlenecked by the CPU, so we might see a further boost to 1080p 60 fps in this game thanks to the higher CPU clocks. The game currently uses FSR1. If they manage to hit 1080p 60 fps, they can use PSSR to upscale straight to 4K.

Avatar also runs at 720p internally in the 60 fps mode. It also drops below 720p on some rare occasions. They use FSR2 to upscale to 1440p. It has RT in the 60 fps mode. But since the RT upgrade is anywhere between 2-4x, we should have enough headroom to do 1080p internal, and then PSSR will be able to upscale to 4K. That's your DLSS 4K Performance, which looks even better than FSR2 4K Quality with an internal resolution of 1440p.

If they get framegen working, they can take those 60 fps games and push them to 90 fps.
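For a rough sense of where resolution estimates like "720p to roughly 900p" come from, here is a back-of-the-envelope Python sketch. It assumes per-frame GPU cost scales roughly linearly with pixel count (a simplification) and takes the leaked ~45% raster and 2-4x RT uplifts at face value.

```python
import math

def scaled_resolution(base_width, base_height, gpu_uplift, aspect=16 / 9):
    """Estimate the resolution a fixed frame budget could reach after a GPU uplift,
    assuming per-frame cost scales roughly linearly with pixel count."""
    pixels = base_width * base_height * gpu_uplift
    height = math.sqrt(pixels / aspect)
    width = height * aspect
    return round(width), round(height)

# 720p internal with a ~45% raster uplift lands in the ~900p ballpark
print(scaled_resolution(1280, 720, 1.45))   # ≈ (1541, 867)

# A ~2.25x uplift (inside the rumored 2-4x RT range) is what ~1080p internal needs
print(scaled_resolution(1280, 720, 2.25))   # (1920, 1080)
```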
 
Everything is ALL AMD. The only thing proprietary is what combination of AMD technologies Sony chooses to use.

And that is not what RDNA 3.5 means. There technically is no such thing as RDNA 3.5. That moniker is used to describe hardware that is based on a specific GPU generation but uses some components or technologies from a more advanced one. So in this case, the core PS5 GPU architecture can be based on RDNA3 (dual-issue compute, AI accelerators, etc.) but have RT and maybe even better AI schedulers from RDNA4.

That's what I thought "RDNA 3.5" meant: some things from RDNA 3 and some things from RDNA 4
 

shamoomoo

Banned
33 tflops makes no sense. It's too much. Feels more like a next-gen console than a Pro model.
If 33 tflops is true, then we're targeting something like 100-150 tflops for next gen, which sounds ridiculous
But that number isn't real; that performance metric is a best-case scenario, and that scenario in and of itself is limited.

In terms of FLOPS, 16+ is a 60% increase over the base PS5, on top of an increase in pixel and texture fill rate.
 

Panajev2001a

GAF's Pleasant Genius
First is FF16, which has no ray tracing and needs to be locked to 720p to hit 60 fps, and it drops to 55 fps in some rare scenarios. Since the GPU bump is only 45%, that would let them push the resolution to 900p. Not the biggest of jumps. Now, I believe the game is bottlenecked by the CPU
You described a GPU-bottlenecked game, not a CPU-limited scenario though…

I do hope for a substantial (4 GHz clock at least) CPU frequency boost, but the plethora of 60 FPS game modes that sacrifice rendering detail do not paint the same picture we had in the Xbox One and PS4 days.
 

Gaiff

SBI’s Resident Gaslighter
Going up above 60 fps is going to run into CPU bottlenecks even with a 4.5 GHz CPU clock upgrade.

But I can just talk about two games on PS5 that drop to 720p to maintain 60 fps.

First is FF16, which has no ray tracing and needs to be locked to 720p to hit 60 fps, and it drops to 55 fps in some rare scenarios. Since the GPU bump is only 45%, that would let them push the resolution to 900p. Not the biggest of jumps. Now, I believe the game is bottlenecked by the CPU, so we might see a further boost to 1080p 60 fps in this game thanks to the higher CPU clocks. The game currently uses FSR1. If they manage to hit 1080p 60 fps, they can use PSSR to upscale straight to 4K.

Avatar also runs at 720p internally in the 60 fps mode. It also drops below 720p on some rare occasions. They use FSR2 to upscale to 1440p. It has RT in the 60 fps mode. But since the RT upgrade is anywhere between 2-4x, we should have enough headroom to do 1080p internal, and then PSSR will be able to upscale to 4K. That's your DLSS 4K Performance, which looks even better than FSR2 4K Quality with an internal resolution of 1440p.
I imagine they could offload more RT operations to the GPU to free up CPU resources.
 

SlimySnake

Flashless at the Golden Globes
It's common sense, really. First, have it at the back of your mind that consoles always punch above their weight; this can simply be attributed to better optimization. Then let's move forward.

At 16.75 TF, this puts the PS5 Pro's raw raster perf in the ballpark of the 4070 (14.5 TF) and 6800 XT (20 TF). The 6800 XT is similar to the 4070 in raw raster workloads but gets grossly outperformed when RT workloads come into play. E.g., AC:M non-RT at 1440p: 6800 XT 86.6fps, 4070 97fps. Or Avatar with RT at 1440p: 6800 XT 45fps and 4070 52fps.

So, the simple deduction is to find an AMD GPU that has similar raster performance, and if the PS5 Pro is to have RT performance that is 3-4x better than that found in the OG PS5, then you will end up with a GPU that performs like that similar Nvidia GPU in raster and RT performance too.

Just looking at the TF numbers alone will put the PS5 Pro, based on RDNA 3.5, in the ballpark of a 4070. And you would need a more powerful AMD GPU from a TF standpoint alone (in this case the 6800 XT) to match that kind of performance, assuming no RT is being used. Furthermore, you can also factor in things like the fact that on consoles, they will not be running everything at ultra settings, and will also use dynamic resolution scaling where applicable because, yes, it's targeting 60fps. And that is how I arrived at my 4070 levels of performance.
We have seen the PS5 overperform its tflops in some games, but only by 10%. At times it's equivalent to a 2080, which has 10% more tflops. At times, it performs exactly like a 2070 Super, which has around the same number of tflops. In some first-party games like Spider-Man and Uncharted, we have seen it perform like a 3070, or a 2080 Ti. But let's face it, two games don't make the rule, and they are just bad ports.

In 99% of the games we have seen, the PS5 performs somewhere between a 2070 Super and a 2080 in non-RT games. The PS5 simply can't bridge the 30% gap between the 3070 and the 6800 XT. Again, the PS5 Pro is NOT overperforming its tflops by Sony's own metrics. The tflops boost is 63%, which is only translating into a 45% real-life performance uplift. I have no idea how you can look at Sony's own numbers and say the PS5 will outperform its specs. If anything, it's underperforming.

So no, in non-RT games, I would not say it's a 6800. I would say it's around a 3070, and that's me being generous because the 6800 is 10% more powerful than the 3070 and 18% more powerful than the PS5 Pro based on Sony's own benchmarks.

Now in RT games, yes, sure, go nuts. 4070, 3080, 6900 XT. Whatever you want. They seem to have done a remarkable job improving their RT performance, so I have no issue with you saying it's a 4070 in RT games. It might come close, it might not, the CPU might hold it back, the VRAM might hold it back, but looking strictly at Sony's own benchmarks, that's where we expect it to be. But then you have to apply the same logic towards the non-RT benchmarks.

 

Skifi28

Member
The whole sub-discussion about building an equivalent PC for cheap is just pointless. Yes, you can do it, but you'd have to go really cheap on some parts, and that'll come back to bite you later when you're crashing or stuttering because of RAM or VRAM, or having pretty much any other weird issue that the console happily sails through despite having similar or even inferior hardware. Just because you can build a cheap PC doesn't mean you should. If you want to replace your console and play the latest games, do yourself a favor and buy something decent that'll last you for years. When you come here or to the Steam forums to troubleshoot why you're having issues, many of the people suggesting building a cheap PC will be the first to tell you "lol it's on your end, just upgrade your hardware".

Just my two cents.

Edit: This is not meant to target any specific users. It's just something I see very often these days and it's just bad advice based on past experiences with entry-level-console-equivalent rigs.
 

SlimySnake

Flashless at the Golden Globes
I imagine they could offload more RT operations to the GPU to free up CPU resources.
Wouldn't you need devs to do that?

Look at how poorly these RT games are ported on PC. Callisto in RT mode is very heavy on the CPU. Forspoken too. Hogwarts. Star Wars. Virtually every RT game is a shitshow on Zen 2 CPUs, even when you pair them with a 3080. Alex did a video on these games last year. I lucked out with my i7-11700K, but I doubt the PS5 Pro CPU will perform like it.

Point is that devs didn't use the extra RT capabilities of Nvidia GPUs to offload RT operations. They are lazy and couldn't be bothered to optimize like CD Projekt has. I doubt they will put in the effort for the Pro, which will likely top out at a 10-15% market split.
 

Panajev2001a

GAF's Pleasant Genius
Wouldn't you need devs to do that?

Look at how poorly these RT games are ported on PC. Callisto in RT mode is very heavy on the CPU. Forspoken too. Hogwarts. Star Wars. Virtually every RT game is a shitshow on Zen 2 CPUs, even when you pair them with a 3080. Alex did a video on these games last year. I lucked out with my i7-11700K, but I doubt the PS5 Pro CPU will perform like it.

Point is that devs didn't use the extra RT capabilities of Nvidia GPUs to offload RT operations. They are lazy and couldn't be bothered to optimize like CD Projekt has. I doubt they will put in the effort for the Pro, which will likely top out at a 10-15% market split.
I think you are comparing the PC platform, with both the level of abstraction titles deal with there and how messy it is to support a single GPU performance class with limited sales, to consoles.
 

Gaiff

SBI’s Resident Gaslighter
Wouldn't you need devs to do that?
Of course, that's what I meant lol.
Look at how poorly these RT games are ported on PC. Callisto in RT mode is very heavy on the CPU. Forspoken too. Hogwarts. Star Wars. Virtually every RT game is a shitshow on Zen 2 CPUs, even when you pair them with a 3080. Alex did a video on these games last year. I lucked out with my i7-11700K, but I doubt the PS5 Pro CPU will perform like it.

Point is that devs didn't use the extra RT capabilities of Nvidia GPUs to offload RT operations. They are lazy and couldn't be bothered to optimize like CD Projekt has. I doubt they will put in the effort for the Pro, which will likely top out at a 10-15% market split.
You're probably right, but I'm hoping that they will. The CPU will probably still be Zen 2, and unless their plan is to go from quarter resolution to full resolution while maintaining the same frame rate, they either need a decent CPU upgrade or a way to offload more tasks onto the beefed-up GPU. The PS5 couldn't do either of these. The GPU was too slow to take operations from the CPU and vice-versa. The guys at Massive did offload BVH operations onto the CPU in Frontiers of Pandora on consoles specifically. On PC, they're done on the GPU. Now imagine if they could do those operations on a much stronger GPU, freeing the comparatively anemic CPU. It would be the opposite of what they did.
 

Bojji

Member
I imagine they could offload more RT operations to the GPU to free up CPU resources.

Simply enabling RT automatically increases CPU load; it has to do BVH calculations. When The Callisto Protocol launched, no CPU was able to run it at a locked 60fps, and many GPUs (even AMD ones) were running well below 100% utilization, so GPU power wasn't the problem. Similar story with Hogwarts Legacy and Jedi: Survivor.

That's why this RT performance increase might disappear if the CPU upgrade is insufficient (as previous leaks suggested).
 

jroc74

Phone reception is more important to me than human rights
Are you using the Xbox Series X leak quote? lmao.

You guys can stick with this. I can't find any reason for Sony to sell one without a disc drive.
The PS4 Pro never did huge numbers, so splitting the PS5 Pro numbers would be a giant mistake on Sony's part.

But it is modern Sony we are talking about. They can keep making the same mistakes over and over again. I wish them luck if they try.
I really don't understand this...

If Sony can create an SKU that has a perceived value vs. the other one... why wouldn't they do that?

How is what they are literally doing right now with the Slim a mistake if they do it with the Pro?

Or are we now forgetting the base console is the exact same except for a disc drive...and has been this way since Nov 2020?
 

jroc74

Phone reception is more important to me than human rights
Isn’t Sony’s biggest concern profit? Isn’t the manufacture of this beast going to severely limit profit due to the price of components?
Whenever they reveal the price, we will find out how concerned Sony is about profits and margins.

I do keep seeing people posting that recent article by the current Sony CEO about margins, lol.
 
Nah theybo


Why worry about CPU performance degradation when they've already given you real world RT performance of 2x-3x? That number would be inclusive of any CPU bottleneck you're worried about.

It's always funny to see how the CPU matters or doesn't matter just as long as it fits the agenda that PlayStation sucks compared to PC...

LOL
 

Gaiff

SBI’s Resident Gaslighter
Nah theybo


Why worry about CPU performance degradation when they've already given you real world RT performance of 2x-3x? That number would be inclusive of any CPU bottleneck you're worried about.
Because these are most likely internal benchmarks and tools. They say to refer to the Ray Tracing Programming Guide and the PSR Library for more information. This is what the GPU is capable of in theory. That doesn't make it incompetence-proof, and some devs (a lot of them) could simply misuse it in a way that hammers the CPU without properly utilizing the GPU resources to alleviate the bottleneck. The good thing with brute strength is that it makes it much harder to fuck up. Take TLOU on the RTX 4090, for instance; mine ran damn near flawlessly whereas people with lower-tier rigs had major problems. I thought the port was good at first lol.

Simply enabling RT automatically increases CPU load; it has to do BVH calculations. When The Callisto Protocol launched, no CPU was able to run it at a locked 60fps, and many GPUs (even AMD ones) were running well below 100% utilization, so GPU power wasn't the problem. Similar story with Hogwarts Legacy and Jedi: Survivor.

That's why this RT performance increase might disappear if the CPU upgrade is insufficient (as previous leaks suggested).
Interestingly, Ubisoft Massive implies the opposite in the way they did it.

We have a custom solution for the BVH on consoles. Since we are not relying on their APIs, we pre-build the bottom-level BVH for meshes offline to get higher quality. Then we built our own custom solution for the BVH in a way that allows us to build the top-level BVH on the CPU - whereas with DXR and existing APIs, the way you do this is that you send all of your instances to the GPU, and the GPU creates an acceleration structure. We rely on caching a lot, and we only rebuild things that have changed. This allows us to actually efficiently build the top level on the CPU and saves us some GPU time on that.

https://www.eurogamer.net/digitalfo...and-snowdrop-the-big-developer-tech-interview

The way it sounds is that the top-level BVH on consoles is built on the CPU whereas on PC, it's on the GPU. Their objective was to free up GPU resources on consoles. Now that there is a much beefier GPU that is far stronger in RT, they could just do the opposite and build the top-level BVH on the GPU like on PC, freeing up CPU resources instead. I assume both are possible but on PCs, CPUs are often relatively much faster than GPUs so it's better to use them to build the BVH rather than bogging down the GPU even more.
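To illustrate the approach described above, here is a hypothetical toy sketch in Python (not Snowdrop's or any console SDK's actual code): bottom-level BVHs are prebuilt per mesh, and each frame only the top-level structure is rebuilt from the instances that changed, on whichever processor has the spare headroom.

```python
# Hypothetical illustration of the idea discussed above: bottom-level BVHs are
# prebuilt per mesh offline, and only the top-level structure is rebuilt each
# frame from instances that changed, on whichever processor has headroom.

from dataclasses import dataclass

@dataclass
class Instance:
    mesh_id: int         # refers to a prebuilt bottom-level BVH
    transform: tuple     # world transform for this instance
    dirty: bool = False  # set when the instance moved this frame

class TopLevelBVH:
    def __init__(self):
        self.nodes = {}  # instance index -> cached node

    def rebuild(self, instances, build_on="cpu"):
        # 'build_on' is just a stand-in for choosing CPU or GPU headroom;
        # only changed or new instances are re-inserted, everything else is reused.
        for i, inst in enumerate(instances):
            if inst.dirty or i not in self.nodes:
                self.nodes[i] = (inst.mesh_id, inst.transform)  # placeholder node
                inst.dirty = False
        return build_on, len(self.nodes)

instances = [Instance(mesh_id=m, transform=(0, 0, m)) for m in range(3)]
tlas = TopLevelBVH()
print(tlas.rebuild(instances, build_on="cpu"))  # console-style: CPU builds the TLAS
instances[1].dirty = True                       # one object moved this frame
print(tlas.rebuild(instances, build_on="gpu"))  # Pro/PC-style: shift the rebuild to the GPU
```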
 

Bojji

Member
Nah theybo


Why worry about CPU performance degradation when they've already given you real world RT performance of 2x-3x? That number would be inclusive of any CPU bottleneck you're worried about.

Not all games are super CPU-heavy even once RT is enabled. UE games are the biggest offenders here, but many games run fine with RT even on mid-range CPUs. The end result will depend on many different aspects; that 4x increase will be true for some games for sure.
 

Audiophile

Member
I’m really hoping for some real-time generative AI upscaling, like Magnific, with coherency. It would effectively be an automatic remaster machine.
I can see that by the PS6 Pro we'd have "PlayStation Filters".

You can just decide that today you want to play GTAVI Remastered in Anime mode or Realism mode.

Playing COD? Well today I want to play it in 8-Bit mode!

That said, I doubt devs would be too thrilled with people just completely altering the look of their games on a closed platform.
 

ChiefDada

Gold Member
Because these are most likely internal benchmarks and tools. They say to refer to the Ray Tracing Programming Guide and the PSR Library for more information. This is what the GPU is capable of in theory. That doesn't make it incompetence-proof, and some devs (a lot of them) could simply misuse it in a way that hammers the CPU without properly utilizing the GPU resources to alleviate the bottleneck. The good thing with brute strength is that it makes it much harder to fuck up. Take TLOU on the RTX 4090, for instance. Mine ran damn near flawlessly whereas people with lower-tier rigs had major problems. I thought the port was good at first lol.

Theoretical? They provided upper and lower bounds (2x-4x). If it were theoretical, they would just say "up to 4x RT performance". Same thing with the 45% raster figure. I tend to think that's more likely unoptimized/conservative rather than theoretical, considering closed-box optimization opportunities, but GAF is running around assuming it's a theoretical max and comparing the PS5 Pro to a damn 3070. Insanity.
 

Bojji

Member
It's always funny to see how the CPU matters or doesn't matter just as long as it fits the agenda that PlayStation sucks compared to PC...

LOL

PS doesn't suck compared to PC; it's a very competent piece of hardware.

Game performance can be CPU- or GPU-limited; it's up to developers to achieve good performance.

Because these are most likely internal benchmarks and tools. They say to refer to the Ray Tracing Programming Guide and the PSR Library for more information. This is what the GPU is capable of in theory. That doesn't make it incompetence-proof, and some devs (a lot of them) could simply misuse it in a way that hammers the CPU without properly utilizing the GPU resources to alleviate the bottleneck. The good thing with brute strength is that it makes it much harder to fuck up. Take TLOU on the RTX 4090, for instance; mine ran damn near flawlessly whereas people with lower-tier rigs had major problems. I thought the port was good at first lol.


Interestingly, Ubisoft Massive implies the opposite in the way they did it.

We have a custom solution for the BVH on consoles. Since we are not relying on their APIs, we pre-build the bottom-level BVH for meshes offline to get higher quality. Then we built our own custom solution for the BVH in a way that allows us to build the top-level BVH on the CPU - whereas with DXR and existing APIs, the way you do this is that you send all of your instances to the GPU, and the GPU creates an acceleration structure. We rely on caching a lot, and we only rebuild things that have changed. This allows us to actually efficiently build the top level on the CPU and saves us some GPU time on that.

https://www.eurogamer.net/digitalfo...and-snowdrop-the-big-developer-tech-interview

The way it sounds is that the top-level BVH on consoles is built on the CPU whereas on PC, it's on the GPU. Their objective was to free up GPU resources on consoles. Now that there is a much beefier GPU that is far stronger in RT, they could just do the opposite and build the top-level BVH on the GPU like on PC, freeing up CPU resources instead. I assume both are possible but on PCs, CPUs are often relatively much faster than GPUs so it's better to use them to build the BVH rather than bogging down the GPU even more.

Ubisoft has some really competent devs; sadly, their games from the last few years are so soulless...

It's good to see developers trying new ways to implement RT calculations; more performance is always welcome.
 

Gaiff

SBI’s Resident Gaslighter
Theoretical? They provided upper and lower bounds (2x-4x). If it were theoretical, they would just say "up to 4x RT performance". Same thing with the 45% raster figure. I tend to think that's more likely unoptimized/conservative rather than theoretical, considering closed-box optimization opportunities, but GAF is running around assuming it's a theoretical max and comparing the PS5 Pro to a damn 3070. Insanity.
We're getting into semantics territory. The leaked document says, "I've seen 4x speedup in some cases." When the range is that large, it is theoretical, so 2x is the floor and 4x is the ceiling. It changes nothing about the point I'm making, which is that it doesn't mean developers will use the hardware efficiently. Unless we assume they deliberately did something stupid to get 2x to prove a point, it should also be possible to go below that figure if someone screws up (which will happen).

I mean, it's not like you haven't seen it yourself. Spider-Man 2 maintains a 1080p resolution at 60fps WITH RT on top. FF XVI drops to 720p, without RT, and below 60fps. I'm assuming the environments they conducted these tests in were realistic, average scenarios, not ones where a dev fucked up, which is what we're alluding to.
 

onQ123

Member
The Pro will surely have a drive. DE sales are abysmal in comparison with the drive version, like 8 x 2.


What's the point of reducing the price of premium hardware? It doesn't even make sense.

You know this is not going to be cheap. If you want a Pro without a disc drive, just remove it when you get one.
The Digital Edition has actually been selling pretty well since the Slim was released.
 

Fake

Member
I really don't understand this...

If Sony can create an SKU that has a perceived value vs. the other one... why wouldn't they do that?

Because it goes against what the real purpose of the Pro is.

And the sales of the console with a disc drive vs. the sales of the console without one tell Sony everything they need to know.

Can they still do an SKU without a disc drive? Of course they can. It's just that the numbers don't quite justify this move.

I can't really understand why people here love to use sales numbers whenever it suits them.
 

onQ123

Member
Since PS4/Xbox One, the compute-to-normal-rendering-pipeline ratio has gone from 1:1 to 4:1 & that's without diving into ML.

Just imagine if you stopped paying attention back in the PS4 days & you saw that the PS5 Pro was going to be 67 TFLOPs?

You would think it was an unholy beast lol

Thankfully someone started warning people early about the changes that was coming 😎

Sidenote: PS5 Pro is tied down by PS5 & Series X|S development, but next generation will most definitely flip things around & go full ray tracing instead of adding it on to traditional rendering.
 