
PS5 Pro Specs Leak is Real, Releasing Holiday 2024 (Insider Gaming)

FireFly

Member
There is a pretty simple explanation for the 45% figure not aligning with the 65% TFLOPS increase, and we simply have to take Cerny's word for it: increasing CUs doesn't get us a linear performance increase. AMD had to add Infinity Cache to every RDNA 2 and RDNA 3 GPU to get around that bottleneck, and that's something Xbox and PlayStation simply can't have due to cost. That's why Xbox's 20% advantage didn't translate into 20% more performance in so many games.

The same thing is happening here.
The 7700 XT scales fine when compared with the 6600 XT, even at 1080p where the bandwidth advantage is diminished. PC GPU makers routinely scale performance by increasing unit counts, and the purpose of the Infinity Cache is simply to hit bandwidth targets without having to go for a wider bus, which isn't as power-efficient.

If the PS5 Pro had a 384-bit bus, I think it would be fine, but if they're only shipping with faster memory on the same 256-bit bus, then they may be running into bandwidth constraints.
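A quick back-of-the-envelope check of that worry (the 18 Gbps figure is the rumored Pro memory speed, and the 384-bit option is hypothetical):

```python
# GDDR6 bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gb_s(bus_bits: int, pin_rate_gbps: float) -> float:
    return bus_bits / 8 * pin_rate_gbps

ps5 = bandwidth_gb_s(256, 14.0)          # base PS5: 256-bit @ 14 Gbps -> 448 GB/s
pro_rumored = bandwidth_gb_s(256, 18.0)  # rumored Pro: same bus @ 18 Gbps -> 576 GB/s
hypothetical = bandwidth_gb_s(384, 18.0) # hypothetical 384-bit bus -> 864 GB/s

print(ps5, pro_rumored, hypothetical)
```

Faster memory on the same 256-bit bus is only a ~29% bandwidth bump against a ~45-65% compute uplift, which is why the constraint seems plausible.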
 

hussar16

Member
Very poor, cherry picked shot of AW2. The Order is artistically great, but it's still a PS4 game dude, poor textures and it's soft as all hell. Not to mention that horrendous aspect ratio.

The Order: 1886 uses effects to simulate CGI. There's no comparison, even with your high textures and high resolution on PC.
 
Last edited:
Cerny looked at what Nvidia was doing and aimed to emulate that and fix the PS5's shortcomings with the Pro. We have seen what developers are willing to do sometimes, dropping to 720p-like resolutions, and FSR2 completely breaks in that scenario. With the Pro they will be able to produce decent picture quality in the end.

Cerny is not some visionary but he is clearly smart.



Of course that full PC costs more; in 2019 a machine comparable to the PS5 looked like this:

CPU R5 3600 - 200$
MB - 100$
GPU - 2070S - 500$
PSU - 50$
Case - 50$

Windows is free (unactivated) and a few bucks for M&K = ~900$

The digital PS5 launched for 400$, but I was responding to the claim that you needed a 1000$ GPU. The 2070S still offers better image quality than the PS5 thanks to DLSS, and that GPU is 5 years old.



Who knows what PS5 Pro price will be.

You forgot a few things:

-CPU cooler (important)
-System cooling (optional if you already have GPU fan & CPU cooler, but might be worth having)
-Controller (for comparable console experience)
-SSD (kind of important)
-Power cable
-HDMI cable
-Activated Windows (Unactivated Windows I would not recommend long-term but activation keys are very cheap)

PS6 will be an even less impressive upgrade than PS5 was over PS4.

If they only rely on yet prettier graphics then yeah, it will be. If they lean into other things (e.g. VR/AR or mixed reality), it'll be a much bigger jump when it comes to total immersion & innovation.

But people will have to give up the idea of a 100 TF PS6. Which would be a dumb idea even if it were affordable to do.
 
Last edited:

Bojji

Member
You forgot a few things:

-CPU cooler (important)
-System cooling (optional if you already have GPU fan & CPU cooler, but might be worth having)
-Controller (for comparable console experience)
-SSD (kind of important)
-Power cable
-HDMI cable
-Activated Windows (Unactivated Windows I would not recommend long-term but activation keys are very cheap)

The 3600 came with a cooler if I remember correctly, and yeah, I forgot about RAM and an SSD. But a controller is entirely optional.

The power cable comes with the PSU, and HDMI cables are dirt cheap, so we are going into too much detail here...

Yeah, in 2019 a PC comparable to the (not yet released) PS5 would have been over 1000$.
 

Gaiff

SBI’s Resident Gaslighter
>Order uses effects to simulate CGI

What does that even mean lol? Know what makes CGI really stand out these days (and for the past 2 decades)? Ray tracing…that’s much better than copious film grain, shitty lens flares, and so much chromatic aberration that you can’t even tell what you’re looking at.
 

Bernoulli

M2 slut
I'm including that, and the GPU is a 4070 with a Zen 3 CPU.
4070 600€, Zen 3 CPU 250-300€, motherboard 200€, and you are already at 1000€.
Add to that a PSU, RAM, case, cooler, a Gen 4 or 5 1 TB SSD, fans, keyboard, mouse, and a controller.
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
4070 600€, Zen 3 CPU 250-300€, motherboard 200€, and you are already at 1000€.
Add to that a PSU, RAM, case, cooler, a Gen 4 or 5 1 TB SSD, fans, keyboard, mouse, and a controller.
So are you also including a mouse and keyboard with the PS5?
 

Bojji

Member
I'm including that, and the GPU is a 4070 with a Zen 3 CPU.

The 4070 should be better than the Pro in all aspects, but for sure the Pro won't cost over 1k$.

Sony probably won't want to sell it at a loss after a not-so-hot fiscal year, so 600, 700, 800$?

Or maybe they will drop the price of the regular PS5 and price the Pro model aggressively.
 

SlimySnake

Flashless at the Golden Globes
Which IMHO is today’s 3d niche. I’d rather have games run better than have reflective lighting from a trash can. I just really hate how all new tech focuses on some new niche instead of fixing the last gen’s issues first!

Sony should IMHO have doubled the CPU and GPU components and added 6-8 GB of additional RAM. Yes, that's going to cost more; make the Pro 699. Enthusiasts would have no issue paying the premium, and the base PS5 still exists for the peasants.
I think the problem is that they ARE fixing issues they made with the PS5 hardware. Not including any ML hardware and then releasing a half-baked RT solution is why they are struggling so mightily to run these next-gen games. Many games skip RTGI altogether, shipping with just RT reflections or RT shadows. Games that do ship with RTGI end up going all the way down to 720p in their 60 fps modes.

So they are tackling the problem areas instead of just doubling the CUs in the GPU. While that might not give us all hard-ons here, I think most people will be happy to get decent image quality in their 60 fps modes. Doubling the CUs wouldn't just cost more, it would turn it into a 300-watt system. Remember, the PS5 was already going up to 230 watts. Yes, they are likely on 4nm here, but as we can see from the RDNA 3 cards, the efficiency gains aren't really that great. So unless Sony wants to fight the EU over releasing a 300-watt machine, I think they probably knew they had to settle for a smaller console. Though like you, I wouldn't have minded paying $699 for a more powerful GPU and CPU. Maybe even a dedicated CPU and GPU.
 
Biggest disappointment is that it's very likely using 6nm. But I think overall it's going to be a bigger improvement than the PS4 Pro was. They have been very smart with their silicon, pun intended.
 

Gaiff

SBI’s Resident Gaslighter
4070 600€, Zen 3 CPU 250-300€, motherboard 200€, and you are already at 1000€.
Add to that a PSU, RAM, case, cooler, a Gen 4 or 5 1 TB SSD, fans, keyboard, mouse, and a controller.
Zen 3 CPU? For $250? A 4070?

Some of you guys can't help but embarrass yourselves.
 
Dunno why, but this bolded part's gonna trigger a small rant from me.

If Sony don't slow down or stop the PC ports for non-GAAS titles, they aren't getting near 12% this time around. For the core audience who'd buy a Pro, there's less reason to do so when they can get those 1P games on an as-good-or-better PC shortly after the PS5 versions release, and still play their multiplats with better settings in the meantime on the same PC.

The fact Sony's inadvertently ported almost all of their big non-GAAS titles since 2020 to PC by now, only halfway through the console gen, and only have a handful of actual 1P exclusives left, is insanely short-sighted of them. If the Nvidia leak's true, the only non-GAAS 1P games that could still be exclusive to PS5 by EOY are Astro's Playroom and Spider-Man 2. A whopping 1.5 games (Astro's more of a demo than a full game).

Every single port since 2020 was Sony giving less and less reason for PS5/PC core enthusiasts to consider a PS5 Pro, and it's a damn shame. Hopefully they are changing that strategy because, great tech aside, the Pro could face a big challenge hitting even 8% of PS5 lifetime sales when all's said and done if they don't have the 1P exclusives (actual exclusives, not timed 1-2 year exclusives before porting to PC) to push it. Because besides that software issue, PSVR2 isn't hitting the same way PSVR1 did (which benefited the PS4 Pro), and there is no 4K TV market rush/growth like when the PS4 Pro was a thing, either.

So in what world is a PS5 Pro whose big selling point to hardcore/core enthusiasts (the vast majority of Pro customers) is playing 1P timed exclusives, at settings still lower than an inevitable PC port 1-2 years later (or Day 1 in some cases; with non-GAAS you never know), going to do 12% of the install base numbers, let alone higher?

Well, sorry for the semi-rant. I just had to look at the sales part from the perspective of which drivers are or aren't present to push the PS5 Pro the same way the PS4 Pro was pushed, and most critically how the biggest driver potentially absent is 100% of Sony's own doing. I hope that's changed internally, because both the PS5 and the Pro, and also systems like the PS6, definitely need that big driver back.

Also, one other thing: why are people still obsessed with TFs? I thought that poison went away a couple of years ago, but people are still deriving all performance gains from TF paper specs. Did we learn nothing from the PS5 vs. Series X TF nothingburger?

I almost think they might have a better shot at selling this Pro compared to the last one, just because this has been a strange generation, where we are seeing some impressive new effects but they come at such a high cost to IQ that at times IQ can look worse than last-gen. In some instances the payoff is there (the Matrix demo, and I would say AW2 and things like that), but in others the effects don't hit hard enough to make the IQ situation seem OK. That's an opening that could be exploited, depending on the price.

If this ML implementation gets them from 1080p to 4K as well as DLSS does, that could be huge compared to FSR (especially at the quality levels that have been used on console; FSR Performance isn't exactly a looker).

The 7700 XT scales fine when compared with the 6600 XT, even at 1080p where the bandwidth advantage is diminished. PC GPU makers routinely scale performance by increasing unit counts, and the purpose of the Infinity Cache is simply to hit bandwidth targets without having to go for a wider bus, which isn't as power-efficient.

I'm not sure I agree here. Honestly, the 7700 XT vs. the 6600 XT seems like a perfect example of the RDNA 3 TF vs. RDNA 2 TF disconnect we've seen: a ~3x TF uplift in trade for a 50% performance gain. But the 45% gain isn't anything to sneeze at and seems completely believable given how the 6600 XT (roughly equivalent to the PS5, if not bandwidth-starved) and the 7700 XT compare. Maybe being in a console will unlock the potential here, as there was obviously some kind of performance misfire with RDNA 3; I doubt AMD was targeting such small gains.
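For what it's worth, that paper-spec disconnect can be sketched with rough board figures (the clocks below are approximate boost clocks, and the dual-issue doubling is the convention behind RDNA 3 TF numbers):

```python
def tflops(cus: int, clock_ghz: float, dual_issue: bool = False) -> float:
    # 64 shaders per CU * 2 ops/clock (FMA); RDNA 3 doubles the paper
    # figure again via dual-issue, which games rarely exploit
    return cus * 64 * 2 * clock_ghz * (2 if dual_issue else 1) / 1000

rx6600xt = tflops(32, 2.59)                   # RDNA 2, ~10.6 TF
rx7700xt = tflops(54, 2.54, dual_issue=True)  # RDNA 3 paper spec, ~35.1 TF

print(f"paper TF ratio: {rx7700xt / rx6600xt:.1f}x")
```

Reviews put the real-world gap nearer 1.5x, so the ~3.3x paper ratio overstates the gain by more than double, which is exactly the disconnect described above.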
 
Last edited:
I think the problem is that they ARE fixing issues they made with the PS5 hardware. Not including any ML hardware and then releasing a half-baked RT solution is why they are struggling so mightily to run these next-gen games. Many games skip RTGI altogether, shipping with just RT reflections or RT shadows. Games that do ship with RTGI end up going all the way down to 720p in their 60 fps modes.

So they are tackling the problem areas instead of just doubling the CUs in the GPU. While that might not give us all hard-ons here, I think most people will be happy to get decent image quality in their 60 fps modes. Doubling the CUs wouldn't just cost more, it would turn it into a 300-watt system. Remember, the PS5 was already going up to 230 watts. Yes, they are likely on 4nm here, but as we can see from the RDNA 3 cards, the efficiency gains aren't really that great. So unless Sony wants to fight the EU over releasing a 300-watt machine, I think they probably knew they had to settle for a smaller console. Though like you, I wouldn't have minded paying $699 for a more powerful GPU and CPU. Maybe even a dedicated CPU and GPU.

There wasn't an alternative option in 2020 though.

AMD was going to be the APU provider due to cost, and they were just rolling out RT and had no real ML hardware solution, so they were playing catch-up. Furthermore, the PS5 launched on 7nm, so there was a limit on die size and power for additional hardware like that. If you had added those extra things, compute would have suffered.

So compute was pretty decent in 2020, but some other features were lacking. The Pro is not enhancing compute as much (though still significantly; 70% is no slouch), instead focusing on other areas that may add a more tangible benefit than simply making compute 100% higher.
 

SoloCamo

Member
Very poor, cherry picked shot of AW2. The Order is artistically great, but it's still a PS4 game dude, poor textures and it's soft as all hell. Not to mention that horrendous aspect ratio.

People always mistake style for actual technical graphics. There are tons of games that are VERY appealing artistically but, on a technical level, are worse once you understand what you're looking for. That said, I'll always prefer a clean game with a pleasing presentation over technical graphical superiority, and that's honestly what these consoles need to push: get a clean, sharp image out with great textures and a locked 60 fps. I used to be somewhat of a graphics snob, and these days, as long as the image is crisp with good textures, I'm pretty happy.

We've hit a point graphically where we are not going to be astonished by generational leaps. When we went from NES to SNES to N64 it was mind-blowing, then the GC was another big jump. The 360/PS3 era was the last big "jump" for most people. Things like ray tracing will NEVER be as amazing as experiencing something like Mario 64 for the first time.
 
Last edited:

hussar16

Member
>Order uses effects to simulate CGI

What does that even mean lol? Know what makes CGI really stand out these days (and for the past 2 decades)? Ray tracing…that’s much better than copious film grain, shitty lens flares, and so much chromatic aberration that you can’t even tell what you’re looking at.
You can put ray tracing on top of 💩 and it will still mostly look like 💩
 

ChiefDada

Gold Member
There wasn't an alternative option in 2020 though.

AMD was going to be the APU provider due to cost, and they were just rolling out RT and had no real ML hardware solution, so they were playing catch-up. Furthermore, the PS5 launched on 7nm, so there was a limit on die size and power for additional hardware like that. If you had added those extra things, compute would have suffered.

So compute was pretty decent in 2020, but some other features were lacking. The Pro is not enhancing compute as much (though still significantly; 70% is no slouch), instead focusing on other areas that may add a more tangible benefit than simply making compute 100% higher.

I swear you are one of the very few who gets it. When the DF comparisons come, the light bulb will turn on.
 

LordOfChaos

Member
A sort of interesting part of this is Sony making their own neural-accelerator-based upscaling solution. I'm sure it's heavily based on AMD's FSR, but AMD's doesn't use dedicated neural hardware and still puts everything through its CUs. So I wonder if Sony wasn't satisfied with it as AMD has seemed to fall behind; this may further distinguish the PlayStation from the APUs any competitor can buy.
 

Fafalada

Fafracer forever
It's between the 4060 and 4060 Ti in AI TOPS (rumored 300 for the PS5 Pro).
That's Nvidia quoting 'sparse' TOPS, which is about as real a spec as dual-issue FLOPS are on RDNA 3 (it's literally just doubling the actual TOPS).
Now, sure, you can argue that maybe Sony's doing the same here*, but that would go against historical precedent (every dev doc, at least for Sony hardware, that I've seen in the past 25 years quoted real hardware figures, not PR use-case-inflated ones).

So until someone proves otherwise, I'd assume the PS5 Pro is quoting actual TOPS, in which case it should be compared to the same figure for Nvidia cards, and there 300 TOPS lands exactly between a 4070 Super and a regular 4080.

*I recall leakers mentioning sparsity optimisations are available to the PS5 Pro as well, meaning either that the 300 TOPS can be roughly doubled with the optimizations on, or it is indeed inflated/doubled already, but I already mentioned why I find that unlikely.
 
Last edited:

Loxus

Member
A sort of interesting part of this is Sony making their own neural-accelerator-based upscaling solution. I'm sure it's heavily based on AMD's FSR, but AMD's doesn't use dedicated neural hardware and still puts everything through its CUs. So I wonder if Sony wasn't satisfied with it as AMD has seemed to fall behind; this may further distinguish the PlayStation from the APUs any competitor can buy.
I'm pretty sure Sony is using AMD's tech still.

Examining AMD’s RDNA 4 Changes in LLVM

Better Tensors

AI hype is real these days. Machine learning involves a lot of matrix multiplies, and people have found that inference can be done with lower-precision data types while maintaining acceptable accuracy. GPUs have jumped on the hype train with specialized matrix multiplication instructions. RDNA 3's WMMA (Wave Matrix Multiply Accumulate) instructions use a matrix stored in registers across a wave, much like Nvidia's equivalent instructions.

RDNA 4 carries these instructions forward with improvements to efficiency, and adds instructions to support 8-bit floating point formats. AMD has also added an instruction where B is a 16×32 matrix with INT4 elements instead of 16×16 as in other instructions.

Machine learning has been trending towards lower precision data types to make more efficient use of memory capacity and bandwidth. RDNA 4’s support for FP8 and BF8 shows AMD doesn’t want to be left out as new data formats are introduced.
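As a rough illustration of the low-precision inference the article is describing (a minimal NumPy sketch, not AMD's actual WMMA path): quantize FP32 matrices to INT8, multiply with 32-bit accumulation, then dequantize.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((16, 16)).astype(np.float32)
b = rng.standard_normal((16, 16)).astype(np.float32)

def quantize_int8(x):
    # symmetric per-tensor quantization: map the max |value| to 127
    scale = float(np.abs(x).max()) / 127.0
    return np.round(x / scale).astype(np.int8), scale

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# integer matmul with int32 accumulation, as matrix units do in hardware
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc.astype(np.float32) * (sa * sb)

exact = a @ b
rel_err = float(np.abs(approx - exact).max() / np.abs(exact).max())
print(f"max relative error: {rel_err:.3%}")
```

The accuracy loss stays small, which is why FP8/INT8 and even INT4 formats keep showing up in inference hardware.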
 

LordOfChaos

Member
I'm pretty sure Sony is using AMD's tech still.

Examining AMD’s RDNA 4 Changes in LLVM

Better Tensors

AI hype is real these days. Machine learning involves a lot of matrix multiplies, and people have found that inference can be done with lower-precision data types while maintaining acceptable accuracy. GPUs have jumped on the hype train with specialized matrix multiplication instructions. RDNA 3's WMMA (Wave Matrix Multiply Accumulate) instructions use a matrix stored in registers across a wave, much like Nvidia's equivalent instructions.

RDNA 4 carries these instructions forward with improvements to efficiency, and adds instructions to support 8-bit floating point formats. AMD has also added an instruction where B is a 16×32 matrix with INT4 elements instead of 16×16 as in other instructions.

Machine learning has been trending towards lower precision data types to make more efficient use of memory capacity and bandwidth. RDNA 4’s support for FP8 and BF8 shows AMD doesn’t want to be left out as new data formats are introduced.

That doesn't say anything about a dedicated neural accelerator, that could be adding instructions to their CUs and continuing to put everything through them

AI Accelerator, supporting 300 TOPS of 8 bit computation / 67 TFLOPS of 16-bit floating point
 

SlimySnake

Flashless at the Golden Globes
Lol come on man. Don’t downplay it. The 3070/2080 Ti are at best 50% faster. More commonly they’re 20-30% faster. It seems to be a 6800 in terms of raw raster specs.
I owned a 2080 just a few years ago, and I know exactly how much weaker it was compared to the 2080 Ti, and then the 3070, which was virtually identical while being sold for $700 less: 35%. The PS5 is roughly a 2080 at times, a 2070 Super at other times. If we go by the 2070 Super, you get that 45% Sony is estimating. In some games, it will outdo the 3070. In others, it will be roughly the same. In most ray tracing games, it will be virtually identical thanks to the new RT tech.

On paper, I agree it's a 6800. But that 45% figure straight from Sony does not line up with the TFLOPS, so I won't say it's a 6800. It's a lot more powerful than a 6700 XT and not quite as powerful as a 6800. I don't know much about where these 7000-series cards land, but I'm very familiar with the 3000-series cards and their performance profiles. I just don't think it's a 3080; maybe a 3070 Ti best-case scenario.
 
Last edited:

FireFly

Member
I'm not sure if I agree here. Honestly, the 7700XT to 6600XT seems like a perfect example of the RDNA3 TF vs. RDNA2 TF disconnect we've seen. A 3x (300%) TF uplift in trade for a 50% performance gain. But, the 45% gain isn't anything to sneeze at and seems completely believable given how the 6600XT (roughly equivalent to PS5 if not bandwidth starved) and the 7700XT compare. Maybe being in a console will unlock the potential here, as there was obviously some kind of performance miss-fire with RDNA3 as I doubt that AMD was targeting such small gains.
I was ignoring RDNA 3's dual-issue capabilities when making the comparison, because they are barely used on PC.

Basically there is a ~5% gain overall in rasterisation for a doubling of "theoretical" FLOPS.

 

LordOfChaos

Member
Are you sleeping under a rock?
According to AMD, RDNA3 has dedicated AI Cores.

Or you're not reading well, or you don't understand what you're talking about. Your own images show that this is part of the CU: adding instructions and beefing it up to handle ML, whereas the PS5 Pro rumor seems to indicate a dedicated accelerator à la neural engine.
 
Last edited:

Loxus

Member
Or you're not reading well, or you don't understand what you're talking about. Your own images show that this is part of the CU: adding instructions and beefing it up to handle ML, whereas the PS5 Pro rumor seems to indicate a dedicated accelerator à la neural engine.
You didn't even read the article.
 

shamoomoo

Banned
For backwards compatibility.
The code name for PS5 Pro is Trinity.
Trinity means 3.

3 Shader Engine with 18 CU each for backwards compatibility.
18 CUs (1 SE) for PS4
36 CUs (2 SE) for PS4 Pro/PS5
54 CUs (3 SE) for PS5 Pro

Mark Cerny said he likes running the GPU at high clocks.
54 CUs @ 2.425 GHz = 33.5 TF
60 CUs @ 2.18 GHz = 33.5 TF
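Both CU/clock pairings above can be sanity-checked with the usual paper-spec formula; note it assumes the RDNA 3-style dual-issue doubling that the 33.5 TF figure implies (an assumption, since Sony hasn't published the spec):

```python
def paper_tflops(cus: int, clock_ghz: float) -> float:
    # 64 shaders/CU * 2 FMA ops/clock, doubled again by dual issue
    return cus * 64 * 2 * 2 * clock_ghz / 1000

print(round(paper_tflops(54, 2.425), 1))  # 33.5
print(round(paper_tflops(60, 2.18), 1))   # 33.5
```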


As for the CPU, you can't go with some things Kepler says and leave out the other stuff.



I mean, why on earth do you want this from the PS5 Pro? It doesn't make sense.

But the Pro could have over 36 CUs across 2 shader engines like the base PS5; why can't Sony shut off the necessary amount to emulate the PS5? Because otherwise that would imply a theoretical PS6 couldn't be under 80 CUs but over 60.

Why would the Pro be clocked at 2.18 GHz? An even 2.2 GHz will still get 67 FP16 TFLOPS.
 
Last edited:

winjer

Gold Member
Are you sleeping under a rock?
According to AMD, RDNA3 has dedicated AI Cores.

That slide shows the AI Matrix accelerator inside the Vector unit, which is part of the shader core.
It's not a fully dedicated unit; it's an extra set of instructions inside the shader unit.
And it uses the same resources, such as the instruction cache.
WMMA is much faster than DP4A at matrix calculations, but it's still an instruction set inside the shader core.
And the new LLVM changes for RDNA 4 add even more performance, but it seems that's just an improvement of WMMA.
This has advantages, such as using less die space. But it also means the same hardware has to split its time between shaders and ML.
 

SlimySnake

Flashless at the Golden Globes
That's Nvidia quoting 'sparse' TOPS, which is about as real a spec as dual-issue FLOPS are on RDNA 3 (it's literally just doubling the actual TOPS).
Now, sure, you can argue that maybe Sony's doing the same here*, but that would go against historical precedent (every dev doc, at least for Sony hardware, that I've seen in the past 25 years quoted real hardware figures, not PR use-case-inflated ones).

So until someone proves otherwise, I'd assume the PS5 Pro is quoting actual TOPS, in which case it should be compared to the same figure for Nvidia cards, and there 300 TOPS lands exactly between a 4070 Super and a regular 4080.

*I recall leakers mentioning sparsity optimisations are available to the PS5 Pro as well, meaning either that the 300 TOPS can be roughly doubled with the optimizations on, or it is indeed inflated/doubled already, but I already mentioned why I find that unlikely.
The only problem is that these TOPS are only going to benefit AI upscaling. They will not improve actual performance before the image is upscaled by PSSR. Maybe Cerny has figured out a way to use these extra TOPS to make his upscaling solution even better than DLSS, where PSSR 4K Performance looks better than DLSS 4K Performance, but I highly doubt Cerny alone is that much better than Nvidia's entire engineering team.
 

Loxus

Member
That slide shows the AI Matrix accelerator inside the Vector unit, which is part of the shader core.
It's not a fully dedicated unit; it's an extra set of instructions inside the shader unit.
And it uses the same resources, such as the instruction cache.
WMMA is much faster than DP4A at matrix calculations, but it's still an instruction set inside the shader core.
And the new LLVM changes for RDNA 4 add even more performance, but it seems that's just an improvement of WMMA.
This has advantages, such as using less die space. But it also means the same hardware has to split its time between shaders and ML.
Which is what the Pro is using.

 

Zathalus

Member
Are you sleeping under a rock?
According to AMD, RDNA3 has dedicated AI Cores.
It says right there in the image that it uses the Vector unit as a matrix accelerator. Basically, the regular compute is used to accelerate AI tasks; it doesn't have a separate Tensor-like core to accelerate AI while running regular compute at the same time. It's even explained here:
 

Loxus

Member
It says right there in the image that it uses the Vector unit as a matrix accelerator. Basically, the regular compute is used to accelerate AI tasks; it doesn't have a separate Tensor-like core to accelerate AI while running regular compute at the same time. It's even explained here:
Check my post above.
 

Loxus

Member
But the Pro could have over 36 CUs across 2 shader engines like the base PS5; why can't Sony shut off the necessary amount to emulate the PS5? Because otherwise that would imply a theoretical PS6 couldn't be under 80 CUs but over 60.

Why would the Pro be clocked at 2.18 GHz? An even 2.2 GHz will still get 67 FP16 TFLOPS.
In order to get 33.5 TF with 60 CUs it has to be clocked at 2.18 GHz.

Do the math.
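Doing the math: inverting the same paper-TF formula (CUs × 64 shaders × 2 FMA ops × 2 dual-issue × clock, an assumed convention since Sony hasn't published the spec) gives the required clock:

```python
def required_clock_ghz(target_tf: float, cus: int) -> float:
    # invert TF = CUs * 64 * 2 * 2 * clock_GHz (dual-issue convention)
    return target_tf * 1000 / (cus * 64 * 2 * 2)

print(round(required_clock_ghz(33.5, 60), 2))  # 2.18
```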
 