[MLiD] PS6 Full Specs Leak: RTX 5090 Ray Tracing & Next-Gen AI w/ AMD Orion!

Didn't Kepler already leak that RT has been vastly improved, and provide patents as well?
Which doesn't matter, because unless AMD has hired sorcerers, a 9070 XT-tier GPU ain't clawing back the 87% deficit it has against the 5090 in hybrid workloads in a single generation.
 
160W sounds low, but it is rumoured to be using chiplets which could be the reason why.
MLID says monolith. See OP's second picture.
The 7900 XTX is chiplet-based as well and it draws a lot of power. The GPU seems reasonable with that number of cores, but a low TDP will throttle it...

He is talking about 9070 to 5080 level of power for raster and 5090 level of power for RT. This doesn't make any fucking sense...
Perhaps efficiency gains in the chiplet design on the GPU side have been made since the 7000 series?
PS5 was 200W on a monolithic design.
A decrease from that makes sense given the change to chiplets.
MLID says monolith. See OP's second picture.

Also, MLID is claiming 10MB of cache, but I don't think it's clear whether that's CPU or GPU side.
 
Which doesn't matter, because unless AMD has hired sorcerers, a 9070 XT-tier GPU ain't clawing back the 87% deficit it has against the 5090 in hybrid workloads in a single generation.
Kepler said 9070 XT, and I expect that in raster, AI and RT. Plus architecture improvements. A small jump from my Pro but I'll take it!
 
Which doesn't matter, because unless AMD has hired sorcerers, a 9070 XT-tier GPU ain't clawing back the 87% deficit it has against the 5090 in hybrid workloads in a single generation.
I don't think he was referring to RT when he said that.
 
I don't think he was referring to RT when he said that.
But he was. He said it will be around a 5090 in PT, which is nonsense. The way he came to that conclusion is also utterly moronic.

He took Alan Wake 2 running PT on a 7900 XTX at around 30fps, divided 30 by 3 to get the theoretical PS5 performance, and then multiplied the resulting number by 6-10x to get the PS6 number in Alan Wake 2 running PT, coming to the conclusion that it would theoretically run at 60-120fps what a 7900 XTX runs at 30.

Absolute moron.
 
MLID says monolith. See OP's second picture.

Perhaps efficiency gains in the chiplet design on the GPU side have been made since the 7000 series?

MLID says monolith. See OP's second picture.

Also, MLID is claiming 10MB of cache, but I don't think it's clear whether that's CPU or GPU side.
These guys only read the title and don't keep tabs on previous leaks.

They don't even know that we are getting leaks from various places besides just Kepler and MLiD on AMD stuff.

It's pointless trying to explain anything.
 
True about the 60/70 class being the best sellers.

But then what reason is there for them to suddenly want to make a die that big with RDNA5?

Ohh, I don't buy the whole PS6 vs 5090 thing.
I don't see them doing a range topper for a minute; they will probably always hunt down the xx80s and below until they can have their Ryzen moment with Radeon (software and hardware being top notch).

My point was that RDNA4 could have been 4090-class relatively easily... they just didn't bother going that direction, like they did with RDNA1, because every time (in recent history) that they aim for the top, basically no one buys that range topper and it becomes pretty much a waste.

The 7900 GRE/XT/XTX are all absolutely amazing value... I don't think they have ever charted on the hardware survey, while the 7600/7700/7800 class cards have.
(Don't quote me on that).


P.S. This is what I was replying to:

Funny thing is AMD can't get even remotely close to 4090 performance years later, and the 5090 has a 150W higher TDP.

They need to make a GPU that outperforms a 4080 first.


Why wouldn't they? If they really can compete at the top level, then they will win mind share, media coverage, and market share. This trickles down to the midrange. Many people buying xx60/xx70 over an AMD offering in the same range do so because of all the xx90 hype and coverage.

I very much doubt that.
AMD's problem isn't that they don't have a halo product to take on the xx90s.


We can have a pretty long discussion about why AMD GPUs don't sell as well as Nvidia's; NOT having a halo product is gonna be pretty low on the list of reasons.
 
But he was. He said it will be around a 5090 in PT, which is nonsense. The way he came to that conclusion is also utterly moronic.

He took Alan Wake 2 running PT on a 7900 XTX at around 30fps, divided 30 by 3 to get the theoretical PS5 performance, and then multiplied the resulting number by 6-10x to get the PS6 number in Alan Wake 2 running PT, coming to the conclusion that it would theoretically run at 60-120fps what a 7900 XTX runs at 30.

Absolute moron.
I meant Kepler, but didn't MLiD also say the PS6 would be using FSR5/PSSR2 to achieve that?
 
I meant Kepler, but didn't MLiD also say the PS6 would be using FSR5/PSSR2 to achieve that?
Both Kepler and MLID said RDNA5 will have much improved RT capabilities, but you don't need an inside source to know that.

MLID made the 5090-tier RT performance claim excluding upscaling. It was simply "hey, in an RT-bound workload like PT Alan Wake 2, the PS6 GPU will perform like a 5090".
 
Both Kepler and MLID said RDNA5 will have much improved RT capabilities, but you don't need an inside source to know that.

MLID made the 5090-tier RT performance claim excluding upscaling. It was simply "hey, in an RT-bound workload like PT Alan Wake 2, the PS6 GPU will perform like a 5090".
I'm still going with his claims being with FSR5/PSSR2 enabled, due to the fact that FSR Performance was enabled in the Daniel Owen video.


Either that, or along with all the RT improvements, AMD/Sony also double the Ray Accelerators in each CU.
 
I'm still going with his claims being with FSR5/PSSR2 enabled, due to the fact that FSR Performance was enabled in the Daniel Owen video.


Either that, or along with all the RT improvements, AMD/Sony also double the Ray Accelerators in each CU.

You can watch the video, there's no mention of FSR.
 
You can watch the video, there's no mention of FSR.
So you're going with the doubling of the Ray Accelerator?

You can't just discredit his claims like that, due to the fact that he leaked both the Zen6 and RDNA5 lineups, as well as Magnus, Canis and Orion.

And he was also spot on with his PS5 Pro specs and RT performance leak.
 
So you're going with the doubling of the Ray Accelerator?

You can't just discredit his claims like that, due to the fact that he leaked both the Zen6 and RDNA5 lineups, as well as Magnus, Canis and Orion.

And he was also spot on with his PS5 Pro specs and RT performance leak.
He's completely incorrect in his assessment. 6-10x RT performance means nothing in a vacuum. Cerny himself said the Pro has 2-4x the RT performance of the base PS5, yet we saw from AC Shadows that it doesn't come all that close to even 2x, the lower end of the spectrum.

I discredit his incorrect extrapolation. The math he used to arrive at his conclusion was hilariously wrong.
 
Oh, I see what has happened: he sees x amount of RT performance over the base PS5 and then just multiplies the FPS from a benchmark. He did the same with the PS5 Pro. That's not really how that works; you have an improved RT pipeline, not the entire FPS number being multiplied by a certain amount. That's how people vastly inflated the PS5 Pro RT performance as well. Assuming this information is even correct.
I got the sign I needed from this post. So I'm back to thinking performance in RT-heavy workloads is comparable to or better than a 9070 XT, mayyyy be closer to a 5080. If only Mr. White would say something…

Anticipation Popcorn GIF
Ummmm…. :messenger_expressionless:
 
That's not how it works. They haven't bothered releasing a chip that is in the same size class as the AD102 chip [used by the 4090]. What's funny about that?

If the leaks are correct (and if they decide to release them), graphics cards based on AT0 and AT1 will easily outperform the 4080 (even AT2-based cards will probably do that).

But as for consoles, they don't need to release them, since AT0 and AT1 aren't suitable for your typical console.
Magnus AT2 = Xbox X
Magnus AT3 = possibly Xbox S variant
Magnus AT1 = Xbox PC
Magnus AT0 = Xbox Cloud
True about the 60/70 class being the best sellers.

But then what reason is there for them to suddenly want to make a die that big with RDNA5?
The reason is MS money.
He's completely incorrect in his assessment. 6-10x RT performance means nothing in a vacuum. Cerny himself said the Pro has 2-4x the RT performance of the base PS5, yet we saw from AC Shadows that it doesn't come all that close to even 2x, the lower end of the spectrum.

I discredit his incorrect extrapolation. The math he used to arrive at his conclusion was hilariously wrong.
While the math is obviously wrong, the idea is that a fully path traced game not requiring rasterized lighting frees up GPU resources for other stuff. So AC Shadows isn't the best example. There is some slight truth to that, but his end result is highly exaggerated.
 
'Huge' is a weird term to use here in any context.
Just within the context of the data he's presenting.

Just talking presentation here. He could have used 300% increase over PS4 and Magnus is 25% faster on paper.

Or he could have used 4x increase over PS4 and Magnus is 1.25x faster.

Instead he's mixing units, using 4x in one instance and 25% in the other.

Maybe it's just a silly pet peeve but consistency helps to make sure when console warriors are fighting, they're at least arguing over the same thing.
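
For anyone who wants both comparisons in one unit, here's a trivial sketch (illustrative only; the 300% and 25% figures are just the examples from this post):

```python
# Convert between "percent increase" and "multiplier" so the two claims can be
# compared in the same unit (numbers are the examples from the post above).

def as_multiplier(percent_increase: float) -> float:
    """A 300% increase is a 4.0x multiplier; 25% faster is 1.25x."""
    return 1.0 + percent_increase / 100.0

def as_percent_increase(multiplier: float) -> float:
    """A 4.0x multiplier is a 300% increase; 1.25x is a 25% increase."""
    return (multiplier - 1.0) * 100.0

print(as_multiplier(300))         # 4.0
print(as_percent_increase(1.25))  # 25.0
```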
 
Magnus AT2 = Xbox X
Magnus AT3 = possibly Xbox S variant
Magnus AT1 = Xbox PC
Magnus AT0 = Xbox Cloud

The reason is MS money.

While the math is obviously wrong, the idea is that a fully path traced game not requiring rasterized lighting frees up GPU resources for other stuff. So AC Shadows isn't the best example. There is some slight truth to that, but his end result is highly exaggerated.
We saw how long the ray tracing alone takes on the PS5 vs the PS5 Pro.

[Image: PS5 vs PS5 Pro ray tracing frame-time comparison chart]


The best we have is the Pro taking 52% of the PS5's rendering time, which is barely a 2x speedup. We never approach the 25% that a 4x uplift would imply.

Again, I don't doubt the RT uplift will be, relatively speaking, higher than the raster one, but MLID doesn't know what he's talking about.
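
To spell out the arithmetic behind those percentages, here's a minimal sketch (a simplified model that assumes a pure throughput speedup and nothing else changing):

```python
# If a workload gets an N-x speedup, it takes 1/N of the original time.
def time_fraction(speedup: float) -> float:
    return 1.0 / speedup

print(time_fraction(2.0))  # 0.50 -> a 2x RT uplift leaves ~50% of the PS5's time
print(time_fraction(4.0))  # 0.25 -> a 4x uplift (Cerny's upper bound) would mean ~25%
# The chart's best case is ~52% of the PS5's time, i.e. just under 2x.
```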
 
We saw how long the ray tracing alone takes on the PS5 vs the PS5 Pro.

[Image: PS5 vs PS5 Pro ray tracing frame-time comparison chart]


The best we have is the Pro taking 52% of the PS5's rendering time, which is barely a 2x speedup. We never approach the 25% that a 4x uplift would imply.

Again, I don't doubt the RT uplift will be, relatively speaking, higher than the raster one, but MLID doesn't know what he's talking about.
So why are the comparisons running at a different resolution for the Series S? That chart looks to me like the final rendering modes used for capturing frame-time profiles. And if so, then the PS5 Pro and 4080 would include RT reflections running in parallel, right? I had brought this up earlier in another thread. Ubisoft's own architect had claimed they managed to achieve a 3x boost across RT tasks. That's still not the overall frame rate, so your point still holds, but those slides may be taken out of context.
 
So why are the comparisons running at different resolutions for series s and PC? That chart looks to me like the final rendering modes used for capturing frame time profiles. And if so, then Ps5 pro and 4080 would include RT reflections running in parallel right? Had brought this up earlier in another thread. Ubisoft's own architect had claimed they managed to achieve 3x boost across RT tasks. That's still not the overall frame rate, so your point still holds, but those slides may be taken out of context.
Where do you see a different resolution for PC? They're all 1440p except for the 900p XS.
 
Where do you see a different resolution for PC? They're all 1440p except for the 900p XS.
My bad. I misspoke. Different resolution for the Series S. Just edited it. When devs present these slides, they are presenting them to show how their game is running in reality and the frame times of each technique. They're not meant to compare platform performance with respect to each other.
 
I'll remind people we got a game like RDR2 running on the crusty old PS4, a piece of hardware that released in 2013 and was planned in the late 00s. The game still impresses even on that platform. (30fps, I know)

We're now in 2025 and honestly I have more to worry about than graphics. I'll probably take a while to upgrade from my PS5 Pro at this point.
Even a game released back in 2018 still blows people away today. Around 70% of the games that come out with similar characteristics just don't have the quality or depth of Red Dead 2. And honestly, there are practically no games like that nowadays. This generation has been widely disappointing.


Since there can't really be that kind of mind-blowing technical leap anymore, companies are trying to make their consoles more versatile instead of more powerful. Nintendo started down that road about 8 years ago, Valve doubled down on it, and now Microsoft wants a piece of the pie alongside Sony.


The only game on the horizon that really looks like a true qualitative and generational leap is GTA VI.
 
MLID doesn't know what he's talking about.
Though the specs are probably true.
After some thought, I'm going to have to go with you on this one.

Some details, like disabling 1 CPU core, are things Sony has never done before.

I also believe the CU numbers given already account for disabled CUs.

2CU/1WGP disabled per SE.
Canis: 20CUs total/2SE, 16CUs active.
Orion: 60CUs total/3SE, 54CUs active.
Magnus: 80CUs total/4SE, 72CUs active.
(or 10CUs disabled for 70CUs)

Yields being so good that only 2CU/1WGP is disabled per chip in a console seems weird to me.

And the die pictures are just estimated layouts.
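
A quick sketch of that harvesting scheme (just my reading of the breakdown above; the totals and the 2CU/1WGP-per-SE figure are the rumoured numbers, not confirmed specs):

```python
# Disable 2 CUs (1 WGP) per Shader Engine and see what active counts fall out.
def active_cus(total_cus: int, shader_engines: int, disabled_per_se: int = 2) -> int:
    return total_cus - shader_engines * disabled_per_se

dies = {"Canis": (20, 2), "Orion": (60, 3), "Magnus": (80, 4)}  # (total CUs, SEs)
for name, (total, se) in dies.items():
    cus = active_cus(total, se)
    print(f"{name}: {cus} active CUs ({cus // 2} WGPs)")
# -> Canis: 16, Orion: 54, Magnus: 72, matching the list above.
```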
 
MLID says monolith. See OP's second picture.

Perhaps efficiency gains in the chiplet design on the GPU side have been made since the 7000 series?

MLID says monolith. See OP's second picture.

Also, MLID is claiming 10MB of cache, but I don't think it's clear whether that's CPU or GPU side.
These guys only read the title and don't keep tabs on previous leaks.

They don't even know that we are getting leaks from various places besides just Kepler and MLiD on AMD stuff.

It's pointless trying to explain anything.

I should have taken a screenshot, but the video has been updated to say monolith; when I commented, it had said chiplet.
 
Though the specs are probably true.
After some thought, I'm going to have to go with you on this one.

Some details, like disabling 1 CPU core, are things Sony has never done before.

I also believe the CU numbers given already account for disabled CUs.

2CU/1WGP disabled per SE.
Canis: 20CUs total/2SE, 16CUs active.
Orion: 60CUs total/3SE, 54CUs active.
Magnus: 80CUs total/4SE, 72CUs active.
(or 10CUs disabled for 70CUs)

Yields being so good that only 2CU/1WGP is disabled per chip in a console seems weird to me.

And the die pictures are just estimated layouts.
I don't doubt that the specs are at least real. What I don't believe for one second is his extrapolation of them.
 
Though the specs are probably true.
After some thought, I'm going to have to go with you on this one.

Some details, like disabling 1 CPU core, are things Sony has never done before.

I also believe the CU numbers given already account for disabled CUs.

2CU/1WGP disabled per SE.
Canis: 20CUs total/2SE, 16CUs active.
Orion: 60CUs total/3SE, 54CUs active.
Magnus: 80CUs total/4SE, 72CUs active.
(or 10CUs disabled for 70CUs)

Yields being so good that only 2CU/1WGP is disabled per chip in a console seems weird to me.

And the die pictures are just estimated layouts.
Didn't MLID say before that Canis is 16 CUs without any disabled?

Also, Orion is going to have an uneven setup regardless of whether it's 9 or 10 WGPs. It's 3 SEs, with 3 SAs each.

And 80 CUs (40 WGPs) for Magnus was the other leak, but MLID has been saying 68 CUs for Magnus, which would put it closer to the 25% CU advantage over Orion according to MLID (it's 30%+ though). So 36 WGPs, 72 CUs total, 4 CUs disabled.

I personally think MS will use the AT3 die (Medusa Point Halo) for the cheaper S-tier SKU: 24 WGPs, 48 CUs total, enough to target 1440p/120 comfortably. And 36 WGPs, 72 CUs total for Magnus AT2. MS stated "first party consoles", as in plural.
MS also needs a cheaper SKU for use on xCloud.
 
Didn't MLID say before that Canis is 16 CUs without any disabled?

Also, Orion is going to have an uneven setup regardless of whether it's 9 or 10 WGPs. It's 3 SEs, with 3 SAs each.

And 80 CUs (40 WGPs) for Magnus was the other leak, but MLID has been saying 68 CUs for Magnus, which would put it closer to the 25% CU advantage over Orion according to MLID (it's 30%+ though). So 36 WGPs, 72 CUs total, 4 CUs disabled.

I personally think MS will use the AT3 die (Medusa Point Halo) for the cheaper S-tier SKU: 24 WGPs, 48 CUs total, enough to target 1440p/120 comfortably. And 36 WGPs, 72 CUs total for Magnus AT2. MS stated "first party consoles", as in plural.
MS also needs a cheaper SKU for use on xCloud.
I'm specifically talking about the layout on the die.
Look at the CUs on the 9070 and 7900; they are evenly laid out side by side.
[Images: 9070 and 7900 die shots showing the CU layout]


Now add an extra WGP to a Shader Engine and you'll see what I'm talking about.
 
Another one I don't get: why would Sony disable a CPU core? I can understand the PS4 / PS4 Pro, but they didn't disable any cores on the PS5 / PS5 Pro. And since it's using LP cores for the operating system, why disable the bigger cores?
 