[MLiD] PS6 Full Specs Leak: RTX 5090 Ray Tracing & Next-Gen AI w/ AMD Orion!

Cerny has proven many times (PS5 Pro, PS4 Pro) that he's willing to go "dumb" on bandwidth ;d though really it's just cost cutting
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.
 
The 9070 XT already beats the 5070 in hybrid RT workloads.

[chart: relative RT performance, 2560×1440]


And in pure RT workloads too.

[chart: 3DMark Speed Way, RX 9070 XT]


I think 9070 XT performance in raster + 5070 Ti performance in RT is more likely. It should basically perform like a 5070 Ti most of the time even with RT.
Considering the PS6 would have more RAM (30 to 40 GB): if it comes equipped with 30 GB and 24 GB is usable for gaming, that would be 5080/5090-level memory. The 5080 only has 16 GB of GDDR7, and the PS6 GPU will definitely be more powerful than a 9070 XT with its 16 GB of GDDR6.

The PS6 will probably be able to compete with an Nvidia 4090.
 
The 9070 XT is not bandwidth limited
And the Navi 48 is a much bigger die with many, many more transistors so they really ain't that comparable.

Edit: ohh, sorry, you compared it to the PS6 APU. My comment was more about the somewhat weird 9070 XT vs 5070 comparisons made earlier in the thread.
 
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.
He will have even more limitations in PS6 development, as it must be a console for the masses, not enthusiasts only.
 
He will have even more limitations in PS6 development, as it must be a console for the masses, not enthusiasts only.
Nah, I think the PS5 Pro was a weird case of having to constrain the console in many aspects to ease development and not make it a headache for developers.

The PS5 is much better balanced than the PS5 Pro for instance.
 
Why is 640 GB/s not enough?

Isn't the data compressed?
Has to share bandwidth with the CPU and with more attention dedicated to RT, you need more bandwidth. 640 just doesn't seem like a big enough amount for a console that will have to last 7 years. That's just 43% over the base PS5.
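Quick napkin math on that figure (448 GB/s is the known base PS5 spec; 640 GB/s is the rumored number from the leak):

```python
# Generational bandwidth uplift: rumored PS6 vs. base PS5.
ps5_bw = 448  # GB/s, base PS5 (known spec)
ps6_bw = 640  # GB/s, rumored

print(f"Uplift: {ps6_bw / ps5_bw - 1:.0%}")  # -> 43%
```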
 
Based on what Kepler has been saying.
MLiD is more or less accurate in regards to RT, not overall performance.

If that turns out to be true, next-gen console owners on both sides will be a laughing stock among the PC community. I can't believe the PS6 will only be at 9070 XT power level; maybe it's some basic version and Sony launches a proper "Pro" version alongside it?
If the PS6 is at a similar power level to the 9070 XT, it means it's similar to or a bit weaker than an RTX 4080, which launched back in September 2022. So come holidays 2027, PlayStation fans would only have access to a similar level of tech as an over-5-year-old GPU? Fuck that's sad, and I bet the console won't be cheap either even at such gimped specs :messenger_astonished:
 
If that 160 W TDP is right, we can see Sony has given up on trying. With Xbox consoles dying, they've decided to hustle their consumers: selling low-end hardware at a markup. Wtf is a 160 W TDP? 640 GB/s? This thing sounds slightly better than a laptop 5070 Ti, which would be tremendously disappointing in 2027.

Nowhere close to the 5090 for sure lmao.
 
Why is 640 GB/s not enough?

Isn't the data compressed?
In theory they could have significantly improved the compression algorithm? Maybe?
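A minimal sketch of that idea; the compression ratios below are made up for illustration, not actual RDNA 5 figures:

```python
# Effective bandwidth = raw bandwidth * average compression ratio.
# The ratios are illustrative assumptions, not real DCC numbers.
raw_bw = 640  # GB/s, rumored PS6

for ratio in (1.0, 1.2, 1.5):
    print(f"{ratio:.1f}:1 compression -> {raw_bw * ratio:.0f} GB/s effective")
```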

If that turns out to be true, next-gen console owners on both sides will be a laughing stock among the PC community
Do you actually think grown adults care about these epeen measuring contests? We're here to play games. Console hasn't outperformed PC since the 90s so it's not a big shock.
 
Has to share bandwidth with the CPU and with more attention dedicated to RT, you need more bandwidth. 640 just doesn't seem like a big enough amount for a console that will have to last 7 years. That's just 43% over the base PS5.
I had it in mind most data would be smaller in size or compressed due to the use of AI upscaling, denoising, etc. thus reducing the bandwidth demands.
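Back-of-envelope version of that intuition, assuming a made-up 16 bytes per pixel of per-frame buffer traffic:

```python
# Per-frame buffer traffic at internal vs. output resolution.
# 16 bytes/pixel is an arbitrary illustrative figure.
bytes_per_pixel = 16
internal = 2560 * 1440  # rendered resolution
output   = 3840 * 2160  # AI-upscaled target

for name, px in (("1440p internal", internal), ("4K native", output)):
    print(f"{name}: {px * bytes_per_pixel / 1e6:.0f} MB/frame")
print(f"Traffic saved by upscaling: {1 - internal / output:.0%}")  # ~56%
```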
 
I can't believe the PS6 will only be at 9070 XT power level; maybe it's some basic version and Sony launches a proper "Pro" version alongside it?
Nah. That's raster performance. It's plenty good for raster as everything will be AI upscaled anyway. Ray tracing is where next gen is at. And if RDNA 5 is now matching Nvidia 50 series (say 5070 Ti or 5080) RT performance, then we are eating good. Console owners are a laughing stock amongst PC bros anyway. What's new?
 
The 9070 XT has the same amount, but it doesn't have to share it with a CPU and has 64 MB of L3 cache to help it (the PS6 won't have that).
The CPU is more latency sensitive than bandwidth hungry; it'll probably use around 30 GB/s.

PS5 worked very fine with only 8MB of CPU L3 cache and no Infinity Cache.
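Taking that 30 GB/s estimate at face value, the CPU's cut barely dents the rumored budget:

```python
# GPU-available bandwidth if the CPU really only pulls ~30 GB/s
# (both figures are this thread's rumored/estimated numbers).
total_bw = 640  # GB/s, rumored PS6
cpu_bw   = 30   # GB/s, estimate above

gpu_bw = total_bw - cpu_bw
print(f"GPU share: {gpu_bw} GB/s ({gpu_bw / total_bw:.0%} of total)")  # 610 GB/s, 95%
```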
 
I had it in mind most data would be smaller in size or compressed due to the use of AI upscaling, denoising, etc. thus reducing the bandwidth demands.
Sure, but upscaling is already in use and still requires a lot of bandwidth when RT is involved. Denoising with techniques such as ray reconstruction needs additional bandwidth. Couple that with perhaps full RTGI + shadows + reflections + AO and all the rest, and the bandwidth requirements increase by quite a lot.

I guess we'll see in the end, but 640 just doesn't seem to be much when we're looking at an RT-focused machine that needs to carry an entire generation. It's way less than the 3080's 760 GB/s.
 
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.
In Cerny's PS5 Pro technical seminar video he walks the audience through the Moore's law roadmap, the limitations, and the diminishing returns they're working against in advancing hardware. The main takeaway he wanted us all to have, IMHO, was that his biggest target going forward is running AI with minimal latency: keeping a fully fused PSSR U-net (a full tensor) in WGP register memory for a multi-fold ML performance gain.

The thing is, if the Pro's total 15 MB of CU register memory used with PSSR becomes the 140 MB he needed for a fully fused U-net, or even 70 MB with a 2:1 efficiency gain, that huge bump in CU memory will have a multi-factor benefit for RT on the PS6 too, because RT is GPU-cache bottlenecked in RDNA AFAIK. So I don't think the broader specs will tell us much without knowing what PS5 Pro-like customizations the PS6's CUs are getting.

If the PS6 has gone bigger on CUs by a factor of two and increased register memory per CU by a factor of four, that in theory gives 8x the register memory. It would allow the PS6 to run conservative CU clocks for a 160 W TDP and still get massive performance gains in GPU-cache-limited workloads, because the zero latency of keeping active data in the bigger register caches lets it use more of the PS6's TOPS (presumably 500 TOPS, based on CU count) every second.
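Running the numbers from these two posts (the 2x CU and 4x per-CU factors are the post's hypotheticals, not leaked specs):

```python
# Scaling the PS5 Pro's ~15 MB of CU register memory by the
# hypothetical factors above: 2x CUs, 4x registers per CU.
pro_reg_mb = 15  # PS5 Pro total (per the post above)
ps6_reg_mb = pro_reg_mb * 2 * 4  # -> 120 MB

print(f"PS6 register memory: {ps6_reg_mb} MB")
print(f"Fits 70 MB (2:1-optimized U-net)? {ps6_reg_mb >= 70}")
print(f"Fits 140 MB (fully fused U-net)?  {ps6_reg_mb >= 140}")
```

So the 8x scenario comfortably covers the 2:1-optimized case but falls just short of the full 140 MB target.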
 
Denoising with techniques such as ray reconstruction needs additional bandwidth
AI denoising is mostly matmul-bound
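Rough roofline intuition for that claim, using a made-up 3x3 conv layer at 1440p with 64 channels (all figures illustrative, not any real denoiser):

```python
# Arithmetic intensity of one 3x3 conv layer -- an illustrative
# stand-in for a denoiser layer, not any real network.
H, W, C_in, C_out, K = 1440, 2560, 64, 64, 3

flops = 2 * H * W * C_in * C_out * K * K      # multiply-adds
bytes_moved = (H * W * (C_in + C_out) * 2     # fp16 activations in/out
               + C_in * C_out * K * K * 2)    # fp16 weights

print(f"{flops / bytes_moved:.0f} FLOP/byte")  # ~290, well above the
# ridge point of typical GPUs, i.e. compute-bound, not bandwidth-bound
```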
I guess we'll see in the end, but 640 just doesn't seem to be much when we're looking at an RT-focused machine that needs to carry an entire generation. It has way less than the 3080.
640 GB/s is a little low, but RDNA 5 has better compression and way more cache than Ampere.
 
Sure, but upscaling is already in use and still requires a lot of bandwidth when RT is involved. Denoising with techniques such as ray reconstruction needs additional bandwidth.
That's not really how that works: post-processing pixels has a largely fixed cost. It has to run in something short like a 1-2 ms window, and it only uses bandwidth within that window.
I.e. the memory is either fast enough for it or it's not; there's no 'but add the other parts of the frame' to this conversation.
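To put numbers on the fixed-window point: a pass's bandwidth need is just (data moved) / (time slot). The figures below are illustrative, not measurements:

```python
# Bandwidth demanded by a post-process pass with a fixed data
# footprint and a fixed slot in the frame (illustrative values).
traffic_gb = 0.2     # GB read+written by the pass (made up)
window_s   = 1.5e-3  # 1.5 ms budget within the frame (made up)

print(f"Needs {traffic_gb / window_s:.0f} GB/s during its window")  # ~133 GB/s
```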

The flipside also works, btw: we have multiple fully path-traced pieces of software out there now, so it's pretty viable to work out where they do or don't get bandwidth constrained as a worst case (the raster-based alternatives you listed will generally be more conservative).
The caveat is always going to be that you can still increase material (and thus shader) complexity practically indefinitely if you have extra compute/memory to burn, but that has little to do with the feature lists you were discussing.

I guess we'll see in the end, but 640 just doesn't seem to be much when we're looking at an RT-focused machine that needs to carry an entire generation.
In the end I agree with you to a point, but more because this needs to be an ML-focused machine to carry a generation (and ML gobbles bandwidth even faster). If all they produce is slightly better-looking pixels, consoles are all headed the Xbox way in short order.
 
That's not applicable for an APU where both the CPU and the GPU will be made on the same process node regardless.

That's the problem. I have seen these claims many times before. People have heard that in some configurations chiplets can be more efficient, but that doesn't apply to all configurations, and certainly not to the configurations relevant to this topic.

Process node is irrelevant.

It's physics. The longer the wire an electrical signal has to travel through, the more resistance and capacitance it has to overcome, and so the more power you burn to drive the signal. Since 100 mm (horizontal distance across the chip) >>>>> 0.5 mm (die depth), it's pretty obvious what the benefit is.
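A toy version of that physics argument: switching energy per bit is roughly E = ½CV², with capacitance proportional to wire length. The ~0.2 fF/µm and 0.75 V below are generic ballpark assumptions, not any specific process node:

```python
# Wire switching energy: E = 0.5 * C * V^2, with C ~ length.
# 0.2 fF/um and 0.75 V are ballpark assumptions, not node specs.
cap_per_um = 0.2e-15  # farads per micron of wire (assumed)
v = 0.75              # signal swing in volts (assumed)

def energy_pj(length_um):
    return 0.5 * cap_per_um * length_um * v**2 * 1e12  # picojoules/bit

print(f"0.5 mm vertical (stacked die): {energy_pj(500):.3f} pJ/bit")
print(f"100 mm across the package:     {energy_pj(100_000):.2f} pJ/bit")
```

Under these assumptions the horizontal hop costs about 200x the energy per bit, which is the point being made.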

You're just wrong.
 
I'm not going to latch onto the 5090 comparison which is obviously effective rage bait.

The overall case made makes sense: save money on the diminishing-returns parts, focus on stuff people can actually notice. Sounds Cerny-esque. Remember MS ate $200 a unit simply to have the number "12" next to "TF". Sony took a bunch of slings and arrows for it, but they were the ones who made the right decision in the end.
 