[MLiD] AMD Magnus APU Full Leak: RDNA 5, Zen 6, 110 TOPS NPU = XBOX Next-Gen Console!

Just to be clear, this new console is just one SKU right? No Series S situation? I'm not completely caught up with the rumor mill
In June 2025, Microsoft and AMD announced a multi-year deal to co-engineer custom silicon for the next generation of Xbox gaming devices, including both consoles and handhelds

October 2025: "We are actively investing in our future first-party consoles and devices designed, engineered and built by Xbox. For more details, the community can revisit our agreement announcement with AMD."
 
Instead of having a standard and a budget model, it would be like having a budget and a pro model with nothing in between, cornering the PS6 on both ends: one console that's cheaper, and one console with superior performance.
This sounds familiar - why does it sound ... oh wait - that was literally the entire Series strategy, except launching at even less competitive prices.
Hell we had like... 12 months of people talking about the 'brilliance of the sandwich strategy' on this very forum in the lead up to launch.


But yes - as always when someone suggests a corporation should keep doing the same thing that failed over and over again
Friday Movie GIF

I made a comment in another thread on this - but the only time XBox SKUs were selling competitively with PlayStation was when they were 20-40% cheaper throughout their respective lifetimes.
There was never an XBox competitive at price parity, and the less said about more expensive SKUs - the better.

Btw - I'm one of the people that was cheering for an expensive XBox SKU that is basically a PC in a shiny case at Apple prices, because I would also be a target audience. But that was before MS went off the deep end, fired their only good hardware leader (Panay), and scuttled their only good hardware product line (Surface Book).
And I also acknowledge it would never be a mainstream product regardless.

I'm not well versed on the latest CPU tech. Can someone educate me on what's with the heterogeneous mixture of zen 6 and zen 6c cores? What are the strengths of each and why is there a mix?
Power scaling and silicon utilization: the 'c' cores run the same instruction set as the standard cores but are physically smaller and clock lower, so you trade peak per-core speed for more cores per mm². It's completely useless as a concept on consoles.

But an NPU is a dedicated unit, just for AI. While on a GPU, it's spread out across the shaders.
That's largely irrelevant when there's 3000 TOPs on the GPU and 100 TOP NPU next to it.
The only 'performance' rationale for it would be the same as with CPU - if NPU is more general purpose/flexible to the point where certain workloads become more power efficient.
But I fear the actual rationale (having a CoPilotCoProcessor in hardware) is a lot more likely. There's a lengthy history of all 3 companies doing this with their consoles (adding a small bit of extra silicon for a single function that 98% of their gaming audience couldn't give less of a crap about), and MS has done it the most of the 3 to date.
 
This sounds familiar - why does it sound ... oh wait - that was literally the entire Series strategy

no it wasn't. current gen strategy was 1 normal console + 1 budget console.
the idea I propose is 1 budget console + 1 pro console, with no middle ground.
 
That's largely irrelevant when there's 3000 TOPs on the GPU and 100 TOP NPU next to it.
The only 'performance' rationale for it would be the same as with CPU - if NPU is more general purpose/flexible to the point where certain workloads become more power efficient.
But I fear the actual rationale (having a CoPilotCoProcessor in hardware) is a lot more likely. There's a lengthy history of all 3 companies doing this with their consoles (adding a small bit of extra silicon for a single function that 98% of their gaming audience couldn't give less of a crap about), and MS has done it the most of the 3 to date.

The problem is that having the Tensor Units in the shaders means that it uses these resources.
So the shader is either doing TFLOPs or TOPs.
While with an NPU, it's dedicated to just TOPs, and having instructions using it won't use up the shader cores.
 
An NPU or Tensor Units in the shader cores, can be used for upscaling, texture compression, ray-reconstruction, etc.
Did a little more reading about NPUs. You're right, it can be used for all of those; however, it seems it takes up a decent amount of die space.
 
Did a little more reading about NPUs. You're right, it can be used for all of those; however, it seems it takes up a decent amount of die space.

Yes, but it makes up for it.
For example, instead of having a ton of shaders to render at 4K, we can have much fewer and render at 1080p, then upscale it, with great image quality.
DLSS4 and FSR4 have proved that it's possible to do so, with very good image quality and performance.
 
no it wasn't. current gen strategy was 1 normal console + 1 budget console.
Everyone online, and Microsoft themselves, thought the X was the premiere console that would comfortably lead the pack. Just because they miscalculated on the hardware front doesn't mean it wasn't built for it - and let's be honest - it 'is' the more performant console of the two, even if the delta is largely irrelevant.
But then again hw deltas were never particularly relevant inside of one gen.

the idea I propose is 1 budget console + 1 pro console, with no middle ground.
You're proposing a rehash of Series where 'Pro' is some unspecified % faster than what X turned out to be, but if that % failed (let's say - 50%) next iteration we'd be saying it should have been 200% and around we go. Again - MS needs their 'budget' console to perform at parity (or above) of the respective Playstation - what you're really looking for is:

price/performance %

Playstation 6 = 499/100%
XBox S2 = 399/120%
Xbox X2 = 599(or higher)/150% (or higher)

That sandwich might actually make a difference
 
The problem is that having the Tensor Units in the shaders means that it uses these resources.
That really isn't a problem at all.
Again - when you have 3000 TOP GPU - a 100TOP workload completes in 0.5ms, or 3% of the frame time. You still have 97% of your GPU left for shader exclusive work.

While with an NPU, it's dedicated to just TOPs, and having instructions using it won't use up the shader cores.
See above - unless NPU is dramatically more efficient at certain workloads (which is the CPU equivalence I suggested already) it's irrelevant and only matters for dedicated work (eg. system enabled CoPilot that is Always-On). Happy to take bets on which one is the more likely scenario.
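To put rough numbers on the claim above (a back-of-envelope sketch; the 3000 TOPS GPU and 100 TOPS NPU figures are the thread's rumored numbers, not official specs):

```python
# Frame-budget arithmetic: a workload sized for a 100 TOPS NPU running in
# real time at 60 fps, executed instead on a hypothetical 3000 TOPS GPU.

GPU_TOPS = 3000          # assumed GPU ML throughput (tera-ops/sec)
NPU_TOPS = 100           # assumed NPU throughput (tera-ops/sec)
FPS = 60
FRAME_MS = 1000 / FPS    # ~16.67 ms frame budget

# Per-frame work the NPU can sustain in real time (tera-ops per frame)
work_per_frame = NPU_TOPS / FPS

# Time for the GPU to chew through the same amount of work
gpu_ms = work_per_frame / GPU_TOPS * 1000
frame_share = gpu_ms / FRAME_MS

print(f"GPU time: {gpu_ms:.2f} ms ({frame_share:.1%} of a 60 fps frame)")
# → roughly 0.56 ms, about 3% of the frame, matching the figure quoted above
```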
 
Last edited:
The mistake is calling it "a console" in the first place

It's a mini-pc

If the rumor about Steam is true, then this will basically be running some version of Win 11 and letting you choose Steam or Xbox, etc. Maybe it is to get PC gamers into it, and that could be the reason it literally has PS6 Pro specs as well.
 
That really isn't a problem at all.
Again - when you have 3000 TOP GPU - a 100TOP workload completes in 0.5ms, or 3% of the frame time. You still have 97% of your GPU left for shader exclusive work.


See above - unless NPU is dramatically more efficient at certain workloads (which is the CPU equivalence I suggested already) it's irrelevant and only matters for dedicated work (eg. system enabled CoPilot that is Always-On). Happy to take bets on which one is the more likely scenario.

3000 TOPs? Where did you get such a huge number?
 
Everyone online, and Microsoft themselves, thought the X was the premiere console that would comfortably lead the pack. Just because they miscalculated on the hardware front doesn't mean it wasn't built for it - and let's be honest - it 'is' the more performant console of the two, even if the delta is largely irrelevant.
But then again hw deltas were never particularly relevant inside of one gen.


You're proposing a rehash of Series where 'Pro' is some unspecified % faster than what X turned out to be, but if that % failed (let's say - 50%) next iteration we'd be saying it should have been 200% and around we go. Again - MS needs their 'budget' console to perform at parity (or above) of the respective Playstation - what you're really looking for is:

price/performance %

Playstation 6 = 499/100%
XBox S2 = 399/120%
Xbox X2 = 599(or higher)/150% (or higher)

That sandwich might actually make a difference

I don't think performance parity for the budget version is a necessity, not the way things are progressing anyway.
you can get broadly similar experiences on 2 vastly differently specced GPUs nowadays, simply due to 1: ML based temporal reconstruction, and 2: every game employing dynamic resolution scaling on top of that.

if a budget Series S2 would be 25% or even 30% slower in GPU performance than the PS6, what you'd see is a slightly grainier image, and not much more.
just look how close the Switch 2 is able to come to the Series S, simply by having DLSS and good RT hardware. it's about half as powerful, but if you compare SW Outlaws, you'd never guess that at first glance.

imagine the X2 running a game at 1440p FSR4 Quality Mode, the PS6 running it at 1440p FSR4 Balanced Mode, and the S2 running it at 1440p FSR4 Performance Mode. would you instantly see a huge difference? and this difference would get smaller the higher the resolution. if we replace 1440p here with 2160p, the difference would be very, very small.
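For reference, AMD's published FSR per-axis scale factors (Quality 1.5x, Balanced 1.7x, Performance 2.0x) translate those modes into internal render resolutions; a quick sketch:

```python
# Internal render resolutions behind the scenario above, using AMD's
# published FSR per-axis scale factors. Output resolutions are the ones
# mentioned in the post (1440p and 2160p).

FSR_SCALE = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

def internal_res(out_w, out_h, mode):
    """Return the internal (pre-upscale) resolution for an FSR mode."""
    s = FSR_SCALE[mode]
    return round(out_w / s), round(out_h / s)

for out_w, out_h in [(2560, 1440), (3840, 2160)]:
    for mode in FSR_SCALE:
        w, h = internal_res(out_w, out_h, mode)
        print(f"{out_h}p {mode}: renders at {w}x{h}")
# e.g. 1440p Quality renders at 1707x960; 2160p Performance at 1920x1080
```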
 
Yes, but it makes up for it.
For example, instead of having a ton of shaders to render at 4K, we can have much fewer and render at 1080p, then upscale it, with great image quality.
DLSS4 and FSR4 have proved that it's possible to do so, with very good image quality and performance.
Could be very useful then. I would be surprised if Sony has it, to be honest; they seem to have a small silicon budget, around 250 mm².
 
The Ars Technica article on the AMD video said the Radiance Cores were from a ray tracing patent filed by PlayStation in 2022, but I don't know, that could be wrong.
 
3000 TOPs? Where did you get such a huge number?

My hunch is it was a misreading of Kepler's earlier statements (with all due respect to Fafalada), Magnus being 3000 teraflops of FP4 compute, not 3000 TOPS.

I have said it before, but 99% odds their "greatest leap ever" is going to be an AI perf comparison using FP16 vector on Series X (24 TFLOPS) vs sparse matrix FP4 on Magnus (~3000 TFLOPS)
 
3000 TOPs? Where did you get such a huge number?
Kepler posted it earlier, and anyway - the 5090 comparisons in RT/ML would put it there as well.
It's 'only' 2x the 9070 XT number, so not that outlandish to see it in a closed box 2-3 years from now.

you can get broadly similar experiences on 2 vastly differently specced GPUs nowadays, simply due to 1: ML based temporal reconstruction, and 2: every game employing dynamic resolution scaling on top of that.
If all these consoles (all of them) bring to the table is 'slightly better looking' pixels, we're not going to be discussing relative sales but whether anyone still owns a console at all. Either they find new differentiation through ML compute (and by that I mean not cosmetically enhancing pixels, where everyone already looks nearly identical as of FSR4), or none of this will matter.
And if that does happen - the hw delta will suddenly matter a lot more again than % of resolution.

if a budget Series S2 would be 25% or even 30% slower in GPU performance than the PS6, what you'd see is a slightly grainier image, and not much more.
Yes, and we've done this already and XBox bombed even with highly aggressive price promotions. The whole 900p vs 1080p delta was not exactly particularly noticeable (it was only because framerate usually wasn't at par that people took note).
The only time XBox didn't bomb was when the PS was the one with usually slightly grainier image and cost 30% more.
 
So the current gen strategy, but worse…

nah, better. a budget console with RDNA2 was bound to have massive issues. a budget console on RDNA5 has the same technological advantages that help the Switch 2 punch way above its weight.

you can't make a 720p game look presentable on a 55" TV on RDNA2. but you will be able to on RDNA5
 
Kepler posted it earlier, and anyway - the 5090 comparisons in RT/ML would put it there as well.
It's 'only' 2x the 9070 XT number, so not that outlandish to see it in a closed box 2-3 years from now.

Even a 9070 XT, with a 357 mm² die in N4, only for the GPU, only has 1557 TOPS in Int4 with sparsity.
But things like FSR4 don't use Int4. It uses FP8. So in reality, the value is 389 TOPS.
I really doubt a console APU with a die size of 250-300 mm² will have more ML units than a dedicated GPU, much less twice as many.
We'll be lucky if the PS6 has 300-400 TOPS in Int8.
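For what it's worth, those two 9070 XT figures are consistent with the usual spec-sheet multipliers (2x for structured sparsity, 2x per halving of precision); a quick sanity check, assuming those multipliers:

```python
# How the two 9070 XT figures quoted above relate. The 1557 and 389 numbers
# are from the post; the 2x-sparsity and 2x-per-precision-step multipliers
# are the usual marketing conventions, assumed here.

int4_sparse_tops = 1557

# Drop sparsity (÷2), then go from INT4 to FP8, one precision step up (÷2)
fp8_dense_tops = int4_sparse_tops / 2 / 2

print(f"FP8 dense: ~{fp8_dense_tops:.0f} TOPS")  # ~389 TOPS, matching the post
```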
 
I really doubt a console APU with a die size of 250-300 mm² will have more ML units than a dedicated GPU, much less twice as many.
We'll be lucky if the PS6 has 300-400 TOPS in Int8.
I'd be willing to entertain hat/modem eating bets that PS6 would have at least 4x the ML throughput of the Pro (which would - be slightly lower but still close-ish to that number). Actually since Kepler quoted 3POPs for Magnus (and that's where the NPU sits), 2.4 for PS6 might fit 🤷‍♀️
Anyway - the more apt question would be what TOPS figure the NPU spec is quoting, but AMD is being really cagey about it and never specifies it in their PR, so it likely isn't larger operations either. Maybe they don't manipulate the number with sparse ops, at least.
 
I'd be willing to entertain hat/modem eating bets that PS6 would have at least 4x the ML throughput of the Pro (which would - be slightly lower but still close-ish to that number). Actually since Kepler quoted 3POPs for Magnus (and that's where the NPU sits), 2.4 for PS6 might fit 🤷‍♀️
Anyway - the more apt question would be what TOPS figure the NPU spec is quoting, but AMD is being really cagey about it and never specifies it in their PR, so it likely isn't larger operations either. Maybe they don't manipulate the number with sparse ops, at least.

The question is why Sony would want to spend so much die space on ML units.
The 9070 XT already does very well with 389 TOPS.
Remember that die space is at a premium in an SoC, especially in N3, which costs $25-27k per wafer.
 
The question is why Sony would want to spend so much die space on ML units.
The 9070 XT already does very well with 389 TOPS.
Remember that die space is at a premium in an SoC, especially in N3, which costs $25-27k per wafer.
As Mark mentioned, ML compute expansion is pretty cheap - they could have gone wider in the Pro easily but the memory subsystem just wasn't there to keep up with it and they couldn't afford to redesign 'that'.
PS6 is a different story, so it'll be a lot more viable to go wider. As for 'why' - as I allude above, the differentiator for this coming gen will have to come from somewhere other than painting pixels, and we've hit a wall with standard compute as well - ML accelerators are the most plausible avenue I can think of.
 
As Mark mentioned, ML compute expansion is pretty cheap - they could have gone wider in the Pro easily but the memory subsystem just wasn't there to keep up with it and they couldn't afford to redesign 'that'.
PS6 is a different story, so it'll be a lot more viable to go wider. As for 'why' - as I allude above, the differentiator for this coming gen will have to come from somewhere other than painting pixels, and we've hit a wall with standard compute as well - ML accelerators are the most plausible avenue I can think of.

But it's not cheap. It will always use up die space, that is at a premium in an SoC.
Sony can't afford to use up die space for ML, just for the sake of it.
 
Why are we complicating this whole ML/NPU stuff?

Watch the Project Amethyst video; there is a reason Sony and AMD are talking about neural arrays and not just plopping an NPU in there.

An NPU is ideal for an APU that doesn't have a sizeable GPU. How many matrix ops can you really do with 16-36 cores? But when that core count goes up, or can go up, then having a fully separate block of die space dedicated to something that the shader cores are fully capable of doing starts becoming redundant or even ill-advised.

The reasons GPU shader cores have matrix bottlenecks are primarily scheduling, communication, and memory issues. And that's what the whole neural arrays thing is designed to fix. So rather than take up die space slapping on an NPU, if you have enough shader cores, you are better off improving them in a way that solves those issues. Which is what Sony and AMD are trying to do.

And I don't even see why people want this; the best NPUs on the market right now, in other APUs, SoCs, or CPUs, top out at 50 TOPS Int8. 50. An RX 9070 XT can do 790 TOPS.

The question is why would Sony want to spend so much die space with ML units.
The 9070XT already does very well with 389 TOPs.
Remember that die space is at a premium in an SoC, especially in N3, which costs 25-27k per wafer.
They don't. That was the whole point of the neural arrays thing in that video they just released.
 
nah, better. a budget console with RDNA2 was bound to have massive issues. a budget console on RDNA5 has the same technological advantages that help the Switch 2 punch way above its weight.

you can't make a 720p game look presentable on a 55" TV on RDNA2. but you will be able to on RDNA5
On paper, sure. But given the likely pricing, this sounds to me like a play for capturing the prebuilt gaming PC market, not the traditional console market. Confidence in MS is at a point where people think even one SKU won't get past the finish line, let alone two.
 
Specs for only 1 have been leaked and confirmed. If there are other models in the works, it's unlikely to be launch-aligned. There is a chance that KeplerL2 is hiding it from us. But he doesn't seem like a sneaky mofo…
If Magnus SoC can be paired with AT3 GMD then they could have a lower end SKU, but so far I have not seen any documentation suggesting that (and I don't think MLID mentioned that either).
 
But it's not cheap. It will always use up die space, that is at a premium in an SoC.
Sony can't afford to use up die space for ML, just for the sake of it.
Hold on... you do realize that NPUs are typically built into the same die as the CPU and GPU, right? Of everyone that makes NPUs, only Intel has it on a separate die within the same SoC package. So in Intel's method, you have a CPU, GPU, and NPU all connected by some sort of fabric. Everyone else has them on the same die with their CPU and GPU (AMD, Qualcomm, Apple, etc.).

But the main thing here, though, is that with better shader tooling or wiring, you should not even need a standalone NPU in a system that already has the GPU muscle.
 
Hold on... you do realize that NPUs are typically built into the same die as the CPU and GPU, right? Of everyone that makes NPUs, only Intel has it on a separate die within the same SoC package. So in Intel's method, you have a CPU, GPU, and NPU all connected by some sort of fabric. Everyone else has them on the same die with their CPU and GPU (AMD, Qualcomm, Apple, etc.).

But the main thing here, though, is that with better shader tooling or wiring, you should not even need a standalone NPU in a system that already has the GPU muscle.

AMD also has NPUs on an SoC, such as Strix Halo. This one is based on XDNA2.
The thing you have to remember is that if the Tensor Units are in the shader cores, then the shader cores will either be processing shaders or processing ML.
And of course, having ML instructions in a shader core also uses die space.
There is no free lunch.
 
AMD also has NPUs on an SoC, such as Strix Halo. This one is based on XDNA2.
The thing you have to remember is that if the Tensor Units are in the shader cores, then the shader cores will either be processing shaders or processing ML.
And of course, having ML instructions in a shader core also uses die space.
There is no free lunch.
Don't the current slew of shader cores have AI units as part of the CU? Even Nvidia GPUs have them too.
 
I don't see how the PS6 can match Magnus: it has fewer CPU cores, lower CPU frequency, fewer CUs, fewer ROPs, lower GPU frequency, less cache, and less memory bandwidth. It's not a huge difference, but Magnus should have better performance in 100% of games, unlike this gen where it's more of a 50/50.
PS6 does have some advantages: 1) It'll be significantly cheaper, and it'll almost assuredly be the lead platform for devs, so it'll get the most optimization attention; this has usually been the case because PlayStation has the biggest market share.

2) PS6 will potentially have a lower-level API. Xbox in comparison will be more PC-like, and depending on how PC-like it is, it could end up with a higher-level API hurting optimization potential. Xbox may also have higher overhead with its more PC-like OS.

Out of these the worse one for Xbox is the lead platform conundrum, Xbox will sell worse for sure this gen, possibly by an unprecedented margin, that'll be an easy excuse for devs to prioritize Playstation instead.
 
We will have to wait until we get official confirmation, but it seems the next Xbox will be roughly 30% more powerful than the PS6, and judging by the alleged specs it's also going to cost significantly more.
PS6 gets 46 fps in a game, 30% means the Xbox gets 60. That's the difference between a 5060ti 16gb and the 5070.
 
We had people that were apparently "In the know" saying Series S was more powerful than PS5.

Basically. No one fucking knows a thing right now.
So, we apparently had a few idiots in 2020, therefore no one knows anything today? Ok, got it.
 
PS6 does have some advantages: 1) It'll be significantly cheaper, and it'll almost assuredly be the lead platform for devs, so it'll get the most optimization attention; this has usually been the case because PlayStation has the biggest market share.

being the lead platform isn't the silver bullet anymore that it once was.
the PS5 is the lead platform for every single game that isn't made by Microsoft, yet even here we see the Series X slowly but surely show performance advantages in most games now.
and these are 2 consoles that are not only closer in terms of potential performance, but where the Series X doesn't even have an advantage across the board.


2) PS6 will potentially have a lower-level API. Xbox in comparison will be more PC-like, and depending on how PC-like it is, it could end up with a higher-level API hurting optimization potential. Xbox may also have higher overhead with its more PC-like OS.

Microsoft always had custom DirectX versions for their consoles with lower-level hardware access. I doubt it will be different here either.


Out of these the worse one for Xbox is the lead platform conundrum, Xbox will sell worse for sure this gen, possibly by an unprecedented margin, that'll be an easy excuse for devs to prioritize Playstation instead.

not really. one AMD APU won't magically have an advantage over another AMD APU that uses the exact same architecture, just because the devs work a bit harder on one of them than the other.

Especially when so many use Unreal Engine now, which is a highly generalised engine that developers rarely customise or adjust to fully squeeze out every bit of performance of any given console.
 
Should be ~1200 TOPs (not that INT8 matters when FP4/6/8 exists)
Hey now - I called dibs on 4x Pro first - pick your own number:messenger_horns:

You mean like the PS2 that was $299, which is equal to about $549 today?
The thing is though - the PS2 cost $99 in its 5th year (and it had added hardware on top of it), not $50 more than at launch like the PS5.

But it's not cheap. It will always use up die space, that is at a premium in an SoC.
Relatively - compared to other elements that use up space. Obviously elements are designed for a particular balance, but that isn't decided based on a 'fair dice roll' like the PS3 security keys were, or some fan's spec-dreams.
Your assertion of what is 'good enough' is based on past software information and almost entirely centered on a 5-year-old use case of pixel upscalers. But hw-design choices are almost never 'that' regressive.

Why are we complicating this whole ML/NPU stuff?
Because of the rumor claiming that XBox/PC/3DO-next will have a dedicated 100 TOPs NPU in addition to the GPU onboard.
The debate was centered around 'why' - I'm still placing bets on 'CoPilot that's why' - but there are other contenders.
 
If Magnus SoC can be paired with AT3 GMD then they could have a lower end SKU, but so far I have not seen any documentation suggesting that (and I don't think MLID mentioned that either).
You can't build a portfolio of devices if Magnus can't be paired up with any of the 5 GMDs.

Besides, if Magnus is the CPU SOC only with the media die included, it will have to be paired up with AT0 regardless, if AT0 is what they use for xCloud.

What they will likely do is 2-4 Magnus SOCs paired with one AT0 GMD for the custom xCloud server blades.

That would mean each AT0 could be split into two instances of AT1, or 4 instances of AT3.

96 x 2 for 192 total, or 48 x 4 for 192 total CUs.

So if a Magnus AT3 S-tier console is 40-48 CUs, AT0 could run 4 instances per setup. And each Magnus SOC would be able to run and hardware-encode each of its own streams using AV1 from the media die.

Only reason for AT0 to exist is for xCloud, and MS isn't going to run a 192 CU chip on the cloud to serve 1080-1440 users. They would like to chop up GPUs into instances, similar to GFN.

Currently xCloud runs Series S profiles on custom Series X servers with 8 APUs per server blade. MS showed off each APU being able to run 4 One S instances. But they couldn't run two instances of Series S per X APU due to games being hard-coded for Series CPUs.

So the Magnus AT0 setup would allow MS to run 4 instances of next-gen S profiles, where each instance would be more powerful than the current Series X. It also allows them to increase capacity from 8 to 12-16 instances per server blade, based on how many they can fit on there. Efficient and powerful cloud gaming while still saving MS energy and cooling costs.
 
I bet the 100 TOPS figure is from the standard NPU built into all Zen 6 IODs. Not sure it is worth the silicon cost, unlike the 2 CU graphics of Zen 4/5; that's a lifesaver for troubleshooting.

I have an 8700G with a 16 TOPS NPU, and a 5090 with many, many TOPS, but I don't feel a difference in Windows use..

I also wonder when Sarah will bundle Copilot subs with GP Ultimate. We need an ultimate MS sub for consumers! All our subs in one!

Next-gen consoles will have to come with generative AI assistance; Kinect is back! Gamers can talk to their console and get walkthroughs or create meme AI-slop videos. And MS is going to win this part 🫶🏻.

Sony is doomed. The next Xbox: faster hardware, smarter AI integration
 
The thing is though - the PS2 cost $99 in its 5th year (and it had added hardware on top of it), not $50 more than at launch like the PS5.
The PS2 was, by the time it reached $99, also ancient... Tech moved really fast back then. And that had its downsides too.

But yes, there is a reason why I asked for a Switch 2TV. Cheaper hw would be possible today too, under the right circumstances.
 
I asked my AI Copilot buddy about the PS6's 30 GB memory. AI TOPS FTW

Clamshell Mode Overview

Clamshell mode lets each 32-bit GDDR channel split into two 16-bit sub-channels, driving two chips per channel at half the width while keeping the effective bandwidth the same. That doubles the number of VRAM devices you can hang off a given bus width.

---

Packing 30 GB on Narrower Buses

160-bit Bus (5 Channels)

• Channels: 5 × 32 bit = 160 bit
• In clamshell: 5 channels × 2 chips/channel = 10 chips
• With 3 GB (24 Gb) GDDR7 modules you get 10 × 3 GB = 30 GB
• Bus stays 160 bit, bandwidth per pin unchanged


This is the simplest way to hit exactly 30 GB on a sub-192 bit interface.

192-bit Bus (6 Channels)

• Channels: 6 × 32 bit = 192 bit
• In clamshell: 6 channels × 2 chips = 12 chips
• No standard 2.5 GB chips exist, so to reach 30 GB you'd have to mix 6 × 2 GB + 6 × 3 GB across sub-channels
• Mixing densities per 32-bit group complicates address mapping and timing


You can mathematically reach 30 GB that way, but symmetric 3 GB-only on 160 bit is far cleaner.

---

How Clamshell Works in Practice

• Each 32-bit channel's command/address bus is shared between the two chips in a clamshell pair.
• Each chip drives half of the channel's data lines (16 of the 32 DQ bits), with both operating in parallel, so aggregate throughput equals a full-width channel.
• Capacity per channel doubles, while per-channel bandwidth stays the same as a single-chip configuration.


---

Next Steps

1. For a 160-bit clamshell design, verify trace-length matching for those five 32 bit groups and their doubled DQ pairs.
2. Update your PCB schematic: each channel gets two DRAM symbols wired to the same CA bus but separate DQ and DQS nets.
3. Size your VRM rails for the extra chip count (10 vs. 5 devices).
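Leaving aside the PCB advice, the capacity arithmetic in that answer checks out; a minimal sketch (chip densities as quoted above):

```python
# Sanity check of the clamshell capacities described above. "Clamshell"
# doubles the devices per 32-bit channel; densities are the GDDR7 module
# sizes from the post.

def clamshell_capacity(bus_bits, chip_gb, clamshell=True):
    """Return (chip count, total GB) for a given bus width and chip density."""
    channels = bus_bits // 32
    chips = channels * (2 if clamshell else 1)
    return chips, chips * chip_gb

# 160-bit bus with 3 GB (24 Gb) modules
chips, total = clamshell_capacity(160, 3)
print(f"160-bit clamshell: {chips} chips, {total} GB")   # 10 chips, 30 GB

# 192-bit bus: symmetric 3 GB modules give 36 GB, not 30
chips, total = clamshell_capacity(192, 3)
print(f"192-bit clamshell: {chips} chips, {total} GB")   # 12 chips, 36 GB
```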
 
Nextbox vs PS6 memory bandwidth, using the 40 Gbps GDDR7 coming in 2027

Bus Width | Channels | VRAM Total | Bandwidth | Power (W) | Thermal (W)
----------|----------|------------|------------|-----------|------------
160 bit | 5 | 30 GB | 800 GB/s | 32.0 | 32.0
192 bit | 6 | 36 GB | 960 GB/s | 38.4 | 38.4
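The bandwidth column follows directly from bus width times per-pin data rate (assuming the rumored 40 Gbps GDDR7):

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.

def bandwidth_gbs(bus_bits, gbps_per_pin=40):
    """Peak memory bandwidth in GB/s for a given bus width and pin speed."""
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(160))  # 800.0 GB/s, the 160-bit row above
print(bandwidth_gbs(192))  # 960.0 GB/s, the 192-bit row above
```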
 
Nextbox vs PS6 memory bandwidth, using the 40 Gbps GDDR7 coming in 2027

Bus Width | Channels | VRAM Total | Bandwidth | Power (W) | Thermal (W)
----------|----------|------------|------------|-----------|------------
160 bit | 5 | 30 GB | 800 GB/s | 32.0 | 32.0
192 bit | 6 | 36 GB | 960 GB/s | 38.4 | 38.4
First party Consoles with 36 GB VRAM, 1 TB storage.

Third party Consoles with 48 GB VRAM, 2 TB storage, water cooled, clocked higher.

I can see that scenario unfolding. The OEM consoles would simply be the "Pro" variants.
 
Should be ~1200 TOPs (not that INT8 matters when FP4/6/8 exists)
Interesting. I remember you said it was unlikely that Sony would limit itself to 160W like MLID leaked? Is that still the case? Please tell me they didn't nerf the console further with a paltry TDP limit below even past gens.
 
nah, better. a budget console with RDNA2 was bound to have massive issues. a budget console on RDNA5 has the same technological advantages that help the Switch 2 punch way above its weight.
you can't make a 720p game look presentable on a 55" TV on RDNA2. but you will be able to on RDNA5
Nothing will make Xbox competitive in the mass market at this point, and for sure not weak garbage again.
RDNA5 will be everywhere, and the moment it appears devs will start using heavier tech like RT to use up the extra headroom.
And if RDNA5 can make 720p presentable on a 55" TV, that will be the target spec for the PS6, not for the S2. The S2 will be 500p internal, and it'll be ugly, since ML upscaling depends heavily on internal resolution.
 