[MLiD] PS6 Full Specs Leak: RTX 5090 Ray Tracing & Next-Gen AI w/ AMD Orion!

Cerny has proven many times (PS5 Pro, PS4 Pro) that he's willing to live with BW limitations ;d though it's really just cost cutting
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.
 
The 9070 XT already beats the 5070 in hybrid RT workloads.

[image: relative RT performance chart, 2560x1440]

And in pure RT workloads too.

[image: 3DMark Speed Way score, RX 9070 XT]

I think 9070 XT performance in raster + 5070 Ti performance in RT is more likely. It should basically perform like a 5070 Ti most of the time even with RT.
Considering the PS6 would have more RAM (30-40 GB), if it comes equipped with 30 GB and 24 GB is usable for games, that would put it at 5080/5090 level, since the 5080 only has 16 GB of GDDR7 and the PS6 GPU will definitely be more powerful than a 9070 XT with its 16 GB of GDDR6.

The PS6 will probably be able to compete with an Nvidia 4090.
 
The 9070 XT is not BW limited
And Navi 48 is a much bigger die with many, many more transistors, so they really ain't that comparable.

Edit: Ohh, sorry, you compared it to the PS6 APU. My comment was more about the somewhat weird 9070 XT vs 5070 comparisons made earlier in the thread.
 
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.
He will have even more limitations in PS6 development, as it must be a console for the masses, not enthusiasts only
 
He will have even more limitations in PS6 development, as it must be a console for the masses, not enthusiasts only
Nah, I think the PS5 Pro was a weird case of having to constrain the console in many aspects to ease development and not make it a headache for developers.

The PS5 is much better balanced than the PS5 Pro for instance.
 
Why is 640 GB/s not enough?

Isn't the data compressed?
It has to share bandwidth with the CPU, and with more attention dedicated to RT, you need more bandwidth. 640 just doesn't seem like a big enough number for a console that will have to last 7 years. That's just 43% over the base PS5.
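The percentage checks out against the commonly cited 448 GB/s spec for the base PS5; a quick sketch using the figures thrown around in this thread:

```python
# Sanity check on the uplift claim.
# Base PS5: 256-bit bus @ 14 Gbps GDDR6 = 448 GB/s (public spec).
# PS6: 640 GB/s (rumored figure from the leak, not confirmed).
ps5_bw_gbs = 448
ps6_bw_gbs = 640
uplift = ps6_bw_gbs / ps5_bw_gbs - 1
print(f"{uplift:.0%}")  # 43%
```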
 
Based on what Kepler has been saying.
MLiD is more or less accurate in regards to RT, not overall performance.

If that turns out to be true, both next-gen console owners will be a laughing stock among the PC community. I can't believe the PS6 will only be at 9070 XT power level; maybe it's some basic version and Sony launches a proper "Pro" version alongside it?
If the PS6 is at a similar power level to the 9070 XT, it means it's similar to or a bit weaker than an RTX 4080, which launched back in September of 2022. So come holidays 2027, PlayStation fans would only have access to a similar level of tech as an over-5-year-old GPU? Fuck, that's sad, and I bet the console won't be cheap either, even at such gimped specs :messenger_astonished:
 
If that 160W TDP is right, we can see Sony has given up on trying. With Xbox consoles dying, they've decided to hustle their consumers: selling low-end hardware at a markup. Wtf is a 160W TDP? 640 GB/s? This thing sounds slightly better than a laptop 5070 Ti, which would be tremendously disappointing in 2027.

Nowhere close to the 5090, for sure lmao.
 
Why is 640 GB/s not enough?

Isn't the data compressed?
In theory they could have significantly improved the compression algorithm? Maybe?

If that turns out to be true, both next-gen console owners will be a laughing stock among the PC community
Do you actually think grown adults care about these epeen measuring contests? We're here to play games. Console hasn't outperformed PC since the 90s so it's not a big shock.
 
It has to share bandwidth with the CPU, and with more attention dedicated to RT, you need more bandwidth. 640 just doesn't seem like a big enough number for a console that will have to last 7 years. That's just 43% over the base PS5.
I had it in mind most data would be smaller in size or compressed due to the use of AI upscaling, denoising, etc. thus reducing the bandwidth demands.
 
I can't believe the PS6 will only be at 9070 XT power level; maybe it's some basic version and Sony launches a proper "Pro" version alongside it?
Nah. That's raster performance. It's plenty good for raster as everything will be AI upscaled anyway. Ray tracing is where next gen is at. And if RDNA 5 is now matching Nvidia 50 series (say 5070 Ti or 5080) RT performance, then we are eating good. Console owners are a laughing stock amongst PC bros anyway. What's new?
 
The 9070 XT has the same amount, but it doesn't have to share it with a CPU, and it has 64MB of L3 cache to help it (which the PS6 won't have).
The CPU is more latency-sensitive than bandwidth-hungry; it probably needs around 30 GB/s.

The PS5 worked just fine with only 8MB of CPU L3 cache and no Infinity Cache.
 
I had it in mind most data would be smaller in size or compressed due to the use of AI upscaling, denoising, etc. thus reducing the bandwidth demands.
Sure, but upscaling is already in use and still requires a lot of bandwidth when RT is involved. Denoising with techniques such as ray reconstruction needs additional bandwidth. Couple that with perhaps full RTGI+Shadows+Reflections+AO and all the rest, the bandwidth requirements will increase by quite a lot.

I guess we'll see in the end, but 640 just doesn't seem to be much when we're looking at an RT-focused machine that needs to carry an entire generation. It has way less than the 3080.
 
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.
In Cerny's PS5 Pro technical seminar video, he walks the audience through the Moore's Law roadmap: the limitations and diminishing returns they're working with in advancing hardware. The main takeaway he wanted us all to have, IMHO, was that getting the ability to run AI with minimal latency, with a fully fused PSSR U-net (a full tensor) in WGP register memory for a multi-fold ML/AI performance gain, was his biggest target going forward.

The thing is, if the Pro's total of 15MB of CU register memory used by PSSR becomes the 140MB he needed for a fully fused U-net (or even 70MB with a 2:1 efficiency gain), that huge bump in CU memory will have a multi-factor benefit for RT on the PS6 too, because RT is GPU-cache-bottlenecked in RDNA AFAIK. So I don't think the broader specs will tell us much without knowing what PS5 Pro-like customizations the PS6's CUs are getting.

If the PS6 has gone bigger on CUs by a factor of two and increased register memory per CU by a factor of four, that in theory gives 8x the register memory, and would allow the PS6 to run conservative CU clocks for a 160W TDP and still get massive performance gains in GPU-cache-limited workloads, by being able to use more of the PS6's TOPS (presumably 500 TOPS, based on CU count) per second thanks to the zero latency of keeping the active data in the bigger register caches.
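The scaling argument above can be sketched out; note that every figure here is speculative (the ~15MB Pro register total is the poster's number, and the 2x/4x factors are hypothetical):

```python
# Sketch of the register-memory scaling argument (all numbers speculative).
pro_register_mb = 15        # PS5 Pro CU register memory used with PSSR (poster's figure)
cu_factor = 2               # hypothetical: 2x the CUs
per_cu_register_factor = 4  # hypothetical: 4x register memory per CU

total_factor = cu_factor * per_cu_register_factor
ps6_register_mb = pro_register_mb * total_factor
# 8x total, i.e. 120 MB: past the 70 MB (2:1 efficiency) target,
# and close to the 140 MB fully fused U-net ideal quoted above.
print(total_factor, ps6_register_mb)
```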
 
Denoising with techniques such as ray reconstruction needs additional bandwidth
AI denoising is mostly matmul-bound
I guess we'll see in the end, but 640 just doesn't seem to be much when we're looking at an RT-focused machine that needs to carry an entire generation. It has way less than the 3080.
640GB/s is a little low but RDNA5 has better compression and way more cache than Ampere.
 
Sure, but upscaling is already in use and still requires a lot of bandwidth when RT is involved. Denoising with techniques such as ray reconstruction needs additional bandwidth.
That's not really how that works: postprocessing pixels has largely fixed costs. It has to run in a short window, something like 1-2ms, and it only uses 'bandwidth' in that time window.
I.e. the memory is either fast enough for it or it's not; there's no 'but adding other parts of the frame' in this conversation.

The flipside also works, btw: we have multiple fully path-traced pieces of software out there now, so it's pretty viable to work out where they get bandwidth-constrained (or not) as a worst case (the raster-based alternatives you listed will generally be more conservative).
Though the caveat is always going to be that you can still increase material (and thus shader) complexity practically indefinitely if you have extra compute/memory to burn, but that has little to do with the feature lists you were discussing.

I guess we'll see in the end, but 640 just doesn't seem to be much when we're looking at an RT-focused machine that needs to carry an entire generation.
In the end I agree with you to a point - but more because this needs to be an ML focused machine to carry a generation (and that gobbles bandwidth even faster). If all they produce is slightly better looking pixels, consoles are all headed the XBox way in short order.
 
That's not applicable for an APU where both the CPU and the GPU will be made on the same process node regardless.

That's the problem. I have seen these claims many times before. People have heard that in some types of configurations chiplets can be more efficient, but that doesn't apply to all configurations, and certainly not to the configurations relevant to this topic.

Process node is irrelevant.

It's physics. The longer the wire an electrical signal has to travel through, the more electrical resistance it has to overcome and so the more power you burn to drive the current. So since 100mm (horizontal chip length) >>>>> 0.5 mm (die depth), it's pretty obvious what the benefit is.

You're just wrong.
 
I'm not going to latch onto the 5090 comparison which is obviously effective rage bait.

The overall case made makes sense. Save money on the diminishing-returns parts; focus on stuff people can actually notice. Sounds Cerny-esque. Remember, MS ate $200 a unit simply to have the number "12" next to "TF". Sony took a bunch of slings and arrows for it, but they were the ones who made the right decision in the end.
 
I think Cerny had his hands tied in many respects with the PS5 Pro. He should be able to deliver a more balanced console with fewer bottlenecks with the PS6.

Yeah, working within the internal cost caps is the most impressive part of the product. It's actually profitable!
 
Process node is irrelevant.

It's physics. The longer the wire an electrical signal has to travel through, the more electrical resistance it has to overcome and so the more power you burn to drive the current. So since 100mm (horizontal chip length) >>>>> 0.5 mm (die depth), it's pretty obvious what the benefit is.

You're just wrong.
Lol, I didn't realize it at first, but what the hell are you talking about? We are not talking about 3D stacking with TSVs here.

The CPU and GPU chiplets sit side by side on the interposer, not vertically stacked, so the signals still have to travel horizontally. Your argument is totally irrelevant.
 
These specs just seem so lopsided. Does Cerny want to make another incredibly unbalanced console like the PS5 Pro? A 160-bit bus and 640 GB/s of bandwidth is absolutely pathetic and a minuscule upgrade over even the PS5. Why would they neuter their console with a bus much smaller than even the PS5's? We saw with the Pro that even its middling increase in compute was still badly throttled by bandwidth, to the point it couldn't effectively utilize the main advantage of RDNA4, and the touted massive RT uplift was limited once again by bandwidth. Now, with a supposed next-gen increase, a ~100 GB/s bump will supposedly suffice when we bring in path-traced games running next-gen effects and alphas... mind-numbing stuff.

Second, why in God's name would they throttle their console with a 160W power limit? Slow node advancements mean that getting a meaningful generational uplift is already difficult, with TDP being one of the only constraints they can ease up on to gain performance, and now they're suddenly going to go significantly below even a base PS5. I truly hope the leaks aren't final, because otherwise Cerny has lost it.
 
Lol, what BS. The base PS5 was equivalent to an Nvidia 2070, so I expect the base PS6 to be equivalent to a 5070, but with more VRAM of course, likely 18GB, similar to the upcoming 5070 Super 18GB. That is the most realistic expectation.
 
These specs just seem so lopsided. Does Cerny want to make another incredibly unbalanced console like the PS5 Pro? A 160-bit bus and 640 GB/s of bandwidth is absolutely pathetic and a minuscule upgrade over even the PS5. Why would they neuter their console with a bus much smaller than even the PS5's? We saw with the Pro that even its middling increase in compute was still badly throttled by bandwidth, to the point it couldn't effectively utilize the main advantage of RDNA4, and the touted massive RT uplift was limited once again by bandwidth. Now, with a supposed next-gen increase, a ~100 GB/s bump will supposedly suffice when we bring in path-traced games running next-gen effects and alphas... mind-numbing stuff.
This. The PS6 should be at least 1 TB/s. That's less than a 2x increase over the already-5-year-old XSX, which by then will be 8 years old. It's not even an old-school generational increase. It's just the bare minimum.
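For the record, that "less than 2x" holds against the XSX's public spec (a quick check; the 560 GB/s figure is the XSX's GPU-optimal memory pool):

```python
# XSX GPU-optimal pool: 560 GB/s (public spec). Compare a 1 TB/s target:
xsx_bw_gbs = 560
proposed_bw_gbs = 1000
print(f"{proposed_bw_gbs / xsx_bw_gbs:.2f}x")  # 1.79x -> under 2x, as stated
```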
 
Lol, what BS. The base PS5 was equivalent to an Nvidia 2070, so I expect the base PS6 to be equivalent to a 5070, but with more VRAM of course, likely 18GB, similar to the upcoming 5070 Super 18GB. That is the most realistic expectation.
In terms of ballpark raster performance, yes. But I expect RT/PT performance to be better. Obviously not 5090-good though; that's just MLiD being dumb.

But until we know what TDP they are aiming for it's very risky to guess performance.

24 GB on a 192-bit bus (or 20 GB if the 160-bit bus is true, which it hopefully isn't) is just as, or even more, likely.
 
This. The PS6 should be at least 1 TB/s. That's less than a 2x increase over the already-5-year-old XSX, which by then will be 8 years old. It's not even an old-school generational increase. It's just the bare minimum.
Should? Why should Sony ignore economics? That would be just dumb.

Edit: OK, 256-bit at 32 Gbps is actually just over 1 TB/s. That's not completely unrealistic, but I am allergic to claims that only consider performance numbers. That's not how engineering mass-market products works.
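The math behind that edit is just the standard peak-bandwidth formula; a quick sketch:

```python
# Peak theoretical bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps) -> GB/s
def peak_bw_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(peak_bw_gbs(256, 32))  # 1024.0 -> just over 1 TB/s
print(peak_bw_gbs(160, 32))  # 640.0  -> the rumored PS6 figure
```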
 
Should? Why should Sony ignore economics? That would be just dumb.

Edit: OK, 256-bit at 32 Gbps is actually just over 1 TB/s. That's not completely unrealistic, but I am allergic to claims that only consider performance numbers. That's not how engineering mass-market products works.
They are earning a fuck ton, and I'm not asking for some crazy performance like Bojji, who is disappointed that the Pro isn't a 5070. I'm asking for a little decency in a product where they usually eat costs and still make a fuck ton of money. Or else: disappear. There will be better options if the hardware is not subsidised.

I was worried that the amount of RAM was going to be minimal next gen, but seeing all of these rumours about starving bandwidth, at this point I'd gladly take 16GB of whatever they can muster that is fast enough.
 
So always take his rants with a very big dose of skepticism.
I'd say in this case just disregard his interpretation and do your own based on the specs he has leaked. I suspect comparing to any current card is going to be misleading as the whole performance profile is going to balance completely differently. Especially if some of the less talked about optimisations around ML bear fruit. In which case a PS6 could well outperform some of the lower expectations, but at the same time newer PC cards in a few years might also outperform expectations. Uplift across the board.
 
What's happened to Microsoft and the Xbox brand this gen is bad for next gen. Sony doesn't need to shoot for the stars technically anymore. Their name will carry them even if it's only 20% better than the PS5.
 
Considering the PS6 would have more RAM (30-40 GB), if it comes equipped with 30 GB and 24 GB is usable for games, that would put it at 5080/5090 level, since the 5080 only has 16 GB of GDDR7 and the PS6 GPU will definitely be more powerful than a 9070 XT with its 16 GB of GDDR6.

The PS6 will probably be able to compete with an Nvidia 4090.

Extra VRAM won't make the chip any faster.
A 32GB 9070 XT will perform the same as a 16GB 9070 XT unless you run out of VRAM, and there aren't many games, near none, that can actually fully choke 16GB of VRAM.

If you think the PS6 is going to be 4090-class, or even close to 5090-class, at anything, you are in for disappointment.
 
It is, yes, but it's a sign of the times. Sony wants to keep costs low after the PS5 price struggle.
Problem is, I don't think the cost will be that low, because they will want profit from the get-go... anything below $599 will hugely surprise me ... and I'm still betting on $699, with the gimped docked handheld being the cheaper option.

Then you get an expensive mid-spec AMD console to get through an 8-year gen without exclusive games (meaning everything will come out for PS5/PC).

Just like the PS5 Pro looks right now at $699... it looks like a bad deal imho.
 
What's happened to Microsoft and the Xbox brand this gen is bad for next gen. Sony doesn't need to shoot for the stars technically anymore. Their name will carry them even if it's only 20% better than the PS5.
Price is the number one concern for next gen. They are aiming for hardware which is fully utilised all the time, rather than shooting for the stars with impressive specs that maybe only a few of the most demanding games will make full use of.

They already went that way with the PS5: fewer CUs compared to the Xbox, but higher clocks.
 
Right, so he thinks the PS6 will be half a 5090 at raster. Now, just looking at a game like Borderlands 4: you still need power to generate the base rasterised image before any DLSS/PSSR shenanigans. So raster will be the (large) limiting factor, assuming what else he says is correct. RT is basically 'solved' on the 5000 series anyway.
 
I've said it before, but this guy lives off the hopes and dreams of Sony and AMD fans.

He knows putting up titles like this increases his viewing stats, and thus his payouts from YouTube.

It's pure manipulation, because he knows he's doing it.

Not a fan at all.

Edit: This has happened with so many generations and we keep falling for it.
 
I've said it before, but this guy lives off the hopes and dreams of Sony and AMD fans.

He knows putting up titles like this increases his viewing stats, and thus his payouts from YouTube.

It's pure manipulation, because he knows he's doing it.

Not a fan at all.

Edit: This has happened with so many generations and we keep falling for it.
This time around, he has legit sources.

Right now, core for core, Nvidia's RT is about ~40% better than RDNA4's.

With not many RT improvements between the 4000 and 5000 series, AMD has a good chance of evening things out next gen.

I still believe his claims are with AI rendering enabled (upscaling/frame generation).
 
A ~40TF GPU will never be able to have the same RT capability as a 5090 (which is a ~100TF GPU).
MLID is clickbaiting and people are buying into it.
 
This time around, he has legit sources.

Right now, core for core, Nvidia's RT is about ~40% better than RDNA4's.

With not many RT improvements between the 4000 and 5000 series, AMD has a good chance of evening things out next gen.

I still believe his claims are with AI rendering enabled (upscaling/frame generation).

I mean, I hope so, as that would be great for all of us. But I'll wait to see what the reality is and not get hyped by these people.

All that stuff about the PS5 and PS4 and Series X and all that: it's all marketing.
 