
[MLiD] XBOX Magnus RDNA 5 Finalized

That's another huge bonus of the approach MS is taking with this next Xbox. Developers are no longer developing a game just for Xbox. They are developing it for PC, which has a huge install base. There isn't the risk of only selling to a small installed Xbox base. Their title, when coded for the next Xbox, can sell to anyone who owns a PC. The next Xbox will have crazy 3rd party support for this reason, as anything made for PC is made for it.

Yeah, that's true, except that it implies it's not Xbox but PC, and there won't be Xbox versions of games etc.
Xbox as its own thing simply disappears.
 
Problem is you're outright assuming the Magnus console is a PC and would function with ONLY PC SKUs. The console is a console, with likely console SKUs for the Xbox ecosystem. So in that scenario, it would be 30 GB vs 36 GB. For the Steam/Epic SKUs, the games have to use the unified memory as system RAM and VRAM separately, so a 16/20 split or a 16/16/4 (OS) split is fine.

There won't be Xbox SKUs of games anymore; I thought everyone had assumed that by this point. You will buy the PC versions of games, and if you buy them in the MS Store it will have an Xbox PC banner, and that will be all.
 
Yeah, that's true, except that it implies it's not Xbox but PC, and there won't be Xbox versions of games etc.
Xbox as its own thing simply disappears.

Xbox games are already just PC games that use a semi-custom API.

you can run PC apps on Xbox, and you could run Xbox games on PC if you got around the DRM and if they weren't locked down to only work properly with one API targeting Xbox hardware specifically.

people are already working on an API translation layer for Xbox One games, similar to how Proton lets you run Windows .exe apps on Linux.

basically, all a developer would need to do to make both an Xbox-specific version and an open PC version is support 2 APIs (generic DirectX, and Xbox DirectX) plus specialised settings for the Xbox DX version.

and that's nothing new really. there are games that support both Vulkan and DirectX 12, or games that have Steam Deck/Windows handheld specific graphics modes (Doom has such a mode for example that reduces the RT quality below the lowest normal preset).


developers also do this on Apple devices already. if you play a game on an iPad, or an iPhone, it will automatically change its settings and performance targets depending on which model you have, which Apple silicon chip it has etc., while on a Mac you have more PC like settings.
and mobile devices are just mini PCs in principle.


so none of these concepts are new. developers already accommodate open platforms and locked down ones with the same releases.
 
If they wanted to "add something that they can't turn off", you're talking about "RAM for CPU" from that 12.5GB pool, which they can just use more of. For VRAM there is plenty they can add which uses more than that "9GB VRAM" on PS5 and which you can easily turn off. Higher quality textures for one.
What is this?

The question I was asked was whether developers will maximize the use of 30GB. My point was that the need to support PS5's frame buffer would constrain them (alongside multiple other platforms).
9GB VRAM is not the upper bound of what can be allocated to the GPU alone, but let's assume it absolutely was.
It is.
Why even mention PS5s upper VRAM usage when answering a question about the PS6?
Because it's relevant information. I have already explained in detail why I use this convention repeatedly.
You had posted an image of BG3 using 4.7GB "Top RAM" separated from "Top VRAM"
If you don't understand what I posted, don't comment on it. The same developer commented that the first image is system RAM and the second image is VRAM. 5.3 GB was their Series S's system use before "optimization".
how does that answer the question of PS6 "using its memory to its fullest extent"? You can easily scale settings to use more VRAM.
There are limits to this. Can 3GB VRAM play PC games? No, because not everything can be scaled down and the resources for supporting different configs are limited.

The point of my answer is that VRAM usage can always be trivially bloated up. But when it's built to meaningfully use a large quantity of memory, it's not always possible to shrink it down considerably without sacrificing a lot of performance. Imagine a game that uses LLMs for dynamic NPCs. PS5 would not be able to spare GBs for NPCs. So developers have to write replacement lines for PS5.

Nah, they will just build for PS5 and then uncompress audio / textures for PS6. Maybe PS6 gets PC's path tracing too.

This scenario is why I expect developers to build for PS5 / low end PC targets and then add modular easy to disable features like PT to PS6 and high end PC. Rather than build games that truly depend on an absurd memory buffer.

this. If you wanted to say PS6 memory will be underutilised due to the install base of PS5/PC, how does that prevent easily utilising the VRAM for higher settings on both higher end PCs and PS6?
They are underutilized. Duh. Why do you keep asking me things when the issue is your knowledge or understanding?

How many PC games do you think can't be absolutely maxed at 16GB?

Developers don't care about >16GB PCs currently. It's less than 5% of the PC user base. Some games might allocate over 16GB, but none would have issues with 16GB.
I've never heard people call these differences (like 15% lower clock) "Slower at frontend"
It's not just because it's running 15% slower clock that it's a slower front end. It's because it has the same size front end (geometry engines, ROPs, etc) as PS5 but running it at a 15% lower clock.

They added more CUs without adding another shader engine. They just stuffed the extra CUs into the existing shader engines.

I've never heard people call these differences (like 15% lower clock) "Slower at frontend" but hey what do I know, maybe in your circles that's a colloquial or even somehow formal term. You seem to be constantly spitting out pretty random information at me though when my question is not related to that info.
That's what it's called. Sorry to break it to you. You literally asked me specifically to explain it. Then when I explained it in detail you don't get it, complain you don't know the term and call it random info. Why are you commenting if you have nothing to say?

I am done here. My quota for addressing nuance trolling is filled. I have responded in as much detail and in as good faith as possible. Have a nice day.
 
There won't be Xbox SKUs of games anymore; I thought everyone had assumed that by this point. You will buy the PC versions of games, and if you buy them in the MS Store it will have an Xbox PC banner, and that will be all.
That's NOT how things work. MS isn't going to be selling Steam or Epic versions of games. There would be Xbox ecosystem versions regardless. We don't yet know if Magnus will have console SKUs or not; assuming anything other than the status quo would be wrong, as the Series consoles will still be cross-gen basically forever.
 
VR would be good, but I wanna know about Thunderbolt 5 ports and if eGPUs would work, since they work with an Xbox Ally X.
TB5 is slower in practice than Oculink for eGPUs.

Also, what are you gonna attach to it? RTX 5090? Not many cards are gonna perform better than an RX 10070XT/AT2 with over 20GB of VRAM, and then you subtract 20% performance off that for the Oculink connection (or -35% for TB5, or -50% for TB4/USB4).
 
If people want something, they'll find the money. It's not like console gaming was cheap in the 80's and overlooking your wind-up drivel.
People pay loads for phones, even ones that aren't trouncing the competition.

The PC gamer and console gamer are not the same.

The "market" you are alluding to is tiny, compared to the reach of a successful traditional console. But, yes, there will be people interested, certainly.
 
What is this?

The question I was asked was whether developers will maximize the use of 30GB. My point was that the need to support PS5's frame buffer would constrain them (alongside multiple other platforms).
Ok, now I get it, you don't actually have a clue. PS5's framebuffer? 😄 Do you know what a framebuffer is? Man, and here I thought you were actually going to explain some of the out there claims you were making.
I am done here. My quota for addressing nuance trolling is filled. I have responded in as much detail and in as good faith as possible. Have a nice day.
Yes, we are done. Have a nice day too, all the best.
 
I still believe the PS6 console will be $599. It will be well designed and will not rely on brute force but on proper game development techniques, tools, and AI. The next PlayStation portable, well, I don't know the price estimate.
PS6 €699 and thanks to PS5 Pro price it won't seem too bad. Tbh I'd rather a €700 machine Vs €500.
 
That's another huge bonus of the approach MS is taking with this next Xbox. Developers are no longer developing a game just for Xbox. They are developing it for PC, which has a huge install base. There isn't the risk of only selling to a small installed Xbox base. Their title, when coded for the next Xbox, can sell to anyone who owns a PC. The next Xbox will have crazy 3rd party support for this reason, as anything made for PC is made for it.
One negative is all MS first party games will be made with a weaker PS6 in mind which will hold back the potential.
 
My man, that's not his question.
 
Problem is you're outright assuming the Magnus console is a PC and would function with ONLY PC SKUs. The console is a console, with likely console SKUs for the Xbox ecosystem. So in that scenario, it would be 30 GB vs 36 GB. For the Steam/Epic SKUs, the games have to use the unified memory as system RAM and VRAM separately, so a 16/20 split or a 16/16/4 (OS) split is fine.
Devs aren't making console optimized versions for a console that costs $1500 and will probably have less than 7m units install base. (Xbox right now is selling sub 2m/year)
 
It has 12.5GB Accessible to developers, the OS/FW/etc reserves 3.5GB. (DF)

Of that 12.5GB, a few GB will be used by the CPU. ~3.5GB for the CPU is a reasonable estimate for the lower bound of PS5 CPU usage (BG3 console is 5.3GB, CP2077 PC is 4.4GB, SoTR PC is 2.9GB, Wukong PC is 5.2GB, etc.).

CPU / GPU work on different data. They get different allocations. Even on PC with UMA and on Xbox you see developers manage the usage of each separately. You can't game with just GPU memory.

So that leaves us with ~9GB VRAM for the PS5 GPU as an upper bound. And it explains why copy-paste PS5 ports struggle on PC, as most PC gamers can only afford 8GB VRAM GPUs. The small discrepancy butchers PC performance if not managed.
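The budget arithmetic in this post can be written out directly. A minimal Python sketch; the PS5 figures are from the post, while the PS6 CPU share is my own placeholder assumption:

```python
# Memory-budget arithmetic from the post above. PS5 numbers are the post's;
# the PS6 CPU share is a placeholder assumption for illustration.
def gpu_budget(total_gb, os_gb, cpu_gb):
    """Upper bound on the GPU's share once the OS and CPU take theirs."""
    return total_gb - os_gb - cpu_gb

print(gpu_budget(16.0, 3.5, 3.5))  # 9.0 -> the "~9GB VRAM" upper bound
print(gpu_budget(30.0, 0.0, 8.0))  # 22.0 -> if PS6's 30GB is already post-OS
                                   #         and the CPU takes ~8GB (a guess)
```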

But ofc, you can call it 12.5GB RAM instead. In the same way NS2 is 9GB, XSS is 8.8GB, and PS5 Pro is 13.7GB.

Why do I say it like this? Because it takes the false magic out. So much bullshit was said about PS5 when it came out, about how you would need 12GB VRAM to match it, when most of the time PC's 8GB VRAM is enough, and when it's short, it's short by MBs, not GBs.

This will become more important with PS6's 30GB. Since the GPU will probably get 20~22GB of it at most. PC's 18-20GB dGPU SKUs will be the closest equivalent.

And lastly, PC is more important than PS5 for gaming. So I use PC conventions as standards. Sony may have beaten Xbox, but they have not beaten Nvidia yet.
This is nonsense. From the 16GB, the OS reserves 3.5GB for itself. That leaves 12.5GB usable by developers; they can allocate whatever they want to their games.

3 pools are possible : CPU, shared CPU+GPU, GPU. They can allocate 12GB for GPU alone and 0.5 GB for CPU if they want.

One game, The Touryst, is supposedly running at a higher resolution on PS5 (8K vs 6K on XSX) because they can allocate more fast RAM to the GPU (say 12GB at 448GB/s) than on XSX (10GB max at 560GB/s).
 
The PS4 had a very powerful and advanced GPU for the time. Perfectly capable of pushing 60fps in almost all multiplatform titles of the time, as was tested with a 7870 back in the day. Yet most games were locked at 30fps, with quite a few struggling to reach a locked 30fps. Can you guess why? As if some part of the console was far weaker.

Would you say a PC with a 5080 and an old Intel 6700K CPU had no bottlenecks?
 
The PS4 had a very powerful and advanced GPU for the time. Perfectly capable of pushing 60fps in almost all multiplatform titles of the time, as was tested with a 7870 back in the day. Yet most games were locked at 30fps, with quite a few struggling to reach a locked 30fps. Can you guess why? As if some part of the console was far weaker.

Would you say a PC with a 5080 and an old Intel 6700K CPU had no bottlenecks?
I'm not talking about the PS4. The PS4 had a very obvious bottleneck, caused because the industry thought back then that consoles were dead and that mobile gaming was going to swallow it all. The consoles of that time reflect that belief, and they were cheapened in their design in order to not take risks in case they were right. Thank fuck they weren't, but that left us with a very stupid generation hardware-wise.
 
That's NOT how things work. MS isn't going to be selling Steam or Epic versions of games. There would be Xbox ecosystem versions regardless. We don't yet know if Magnus will have console SKUs or not; assuming anything other than the status quo would be wrong, as the Series consoles will still be cross-gen basically forever.

My God, full denial. Microsoft will let you buy Steam and EGS versions of games. Keys etc., I don't think so, as most are illegitimate. Then you will be able to buy PC versions of games, Games for Windows rebranded as Xbox PC; it will have the Xbox banner here and there, but they will be Windows versions of games executable on any PC. So no, no Xbox SKU at all.
 
3 pools are possible : CPU, shared CPU+GPU, GPU. They can allocate 12GB for GPU alone and 0.5 GB for CPU if they want.
It's theoretically possible yes. But the average game actually uses substantial amounts of memory for the CPU. You can try many games and see how much memory they use on UMA systems. Even 3GB is on the low end.

You lot are bringing up extreme examples to cope. (Running 8K on very low end RX 6650 XT class hardware, LMAO.)

If one accepts that PS5 only needs MBs for the CPU through "console magic", then PC would game perfectly with only 4GB RAM as long as VRAM is sufficient. But there are basically very few Gen 8/9 games that allow that.

One game, The Touryst, is supposedly running at a higher resolution on PS5 (8K vs 6K on XSX) because they can allocate more fast RAM to the GPU (say 12GB at 448GB/s) than on XSX (10GB max at 560GB/s).
This is a bullshit explanation. In this extreme low-CPU-RAM-usage game example, the Xbox Series X GPU can use 12GB VRAM and its effective bandwidth would drop to only 504 GB/s. Still more than PS5.

They probably didn't want to bother with NUMA or were bound by the frontend.
 
It's theoretically possible yes. But the average game actually uses substantial amounts of memory for the CPU. You can try many games and see how much memory they use on UMA systems. Even 3GB is on the low end.

You lot are bringing up extreme examples to cope. If one accepts that PS5 only needs MBs for the CPU through "console magic", then PC would game perfectly with only 4GB RAM. But there are basically very few Gen 8/9 games that allow that.


This is a bullshit explanation. In this extreme low-CPU-RAM-usage game example, the Xbox Series X GPU can use 12GB VRAM and its effective bandwidth would drop to only 504 GB/s. Still more than PS5.

They probably didn't want to bother with NUMA or were bound by the frontend.
It doesn't work like that at all. In this game they need one fast VRAM pool because of the extremely high resolutions. I doubt they would want to use 2 VRAM pools, if that's even possible in their game.

And if they did (which I doubt in such a small budget game), average bandwidth would be way lower than that because of contention due to this specific Xbox Series X memory architecture. You probably missed plenty of discussion about it at the start of the gen. You can only use one pool (slow or fast) at a time. Each time you use the slow memory pool, the fast memory chips can't be used at all, and the whole GDDR6 bandwidth is reduced to 336 GB/s for as long as it's used.

There is a reason no GPU mixes RAM chip sizes like this.
 
Devs aren't making console optimized versions for a console that costs $1500 and will probably have less than 7m units install base. (Xbox right now is selling sub 2m/year)
First party will do it. Third party devs too, if Microsoft provides some incentives (money, help with development of the game).
 
Man, next gen is going to be heavy on PC gamers. Thank Nvidia and AMD for skimping on VRAM. 16GB is going to be the minimum for next-gen-only games.
 
It doesn't work like that at all. In this game they need one fast VRAM pool because of the extremely high resolutions. I doubt they would want to use 2 VRAM pools, if that's even possible in their game.
You're using a game that runs at 8K on very, very low end hardware (RX 6650 XT class GPU) as your go-to example. I make a point that is broadly true, and then you pull up a ridiculous counterexample thinking it negates it. It's just an anecdotal fallacy.

average bandwidth would be way lower than that because of contention due to this specific Xbox Series X memory architecture
You don't understand how average bandwidth with 2 memory pools works.

No average bandwidth would be 504 GB/s.

> Eff BW = TOTAL MEM / ( ( POOL1 / POOL1_BW) + (POOL2 / POOL2_BW) ...)

12 / ( (10 / 560) + (2 / 336)) = 504

That's what effective bandwidth would work out to for the XSX GPU if it needed 12GB (in your ridiculous example game that runs 8K on very low end hardware). They likely didn't want to bother with mapping memory onto the second RAM partition. Or were bounded by XSX's inferior rasterization or geometry.

I don't care about preconceived notions. I care about what's true.

I know what I am talking about. I have even verified the relationship with a CUDA kernel.
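The same relationship in plain Python, for anyone who wants to check the arithmetic (a sketch; the function name is mine):

```python
# Effective bandwidth across memory pools, per the formula quoted above:
# Eff BW = total_mem / sum(pool_size / pool_bw)
def effective_bandwidth(pools):
    """pools: iterable of (size_GB, bandwidth_GBps) pairs."""
    total_size = sum(size for size, _ in pools)
    total_time = sum(size / bw for size, bw in pools)  # time to touch every byte once
    return total_size / total_time

# XSX GPU touching all 12GB: 10GB fast pool + 2GB slow pool
print(round(effective_bandwidth([(10, 560), (2, 336)])))  # 504
```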
 
If people want something, they'll find the money. It's not like console gaming was cheap in the 80's and overlooking your wind-up drivel.
People pay loads for phones, even ones that aren't trouncing the competition.

The PC gamer and console gamer are not the same.
The phone argument has been made for over a decade and has mostly been proven to not apply to video game consoles.
The way people pay for phones is entirely different than how consoles are paid for.
Almost everyone rolls their phone hardware costs into their monthly payment for the phone service.
Unless you want Sony/ MS / Nintendo to start doing lease to own services to get their hardware, it's not a good argument.
 
You're using a game that runs at 8K on very, very low end hardware (RX 6650 XT class GPU) as your go-to example. I make a point that is broadly true, and then you pull up a ridiculous counterexample thinking it negates it. It's just an anecdotal fallacy.


You don't understand how average bandwidth with 2 memory pools works.

No average bandwidth would be 504 GB/s.

> Eff BW = TOTAL MEM / ( ( POOL1 / POOL1_BW) + (POOL2 / POOL2_BW) ...)

12 / ( (10 / 560) + (2 / 336)) = 504

That's what effective bandwidth would work out to for the XSX GPU if it needed 12GB (in your ridiculous example game that runs 8K on very low end hardware). They likely didn't want to bother with mapping memory onto the second RAM partition. Or were bounded by XSX's inferior rasterization or geometry.

I don't care about preconceived notions. I care about what's true.

I know what I am talking about. I have even verified the relationship with a CUDA kernel.
Average bandwidth might be 504 GB/s if you aren't doing any processing and are just copying to the LLC, but that number drops once you start processing the 2GB with the same workload density as the 10GB, because the lower-bandwidth processing needs a timeslice that is 166% (x1.666) the duration of an equivalent 2GB in the 10GB pool, so it proportionally costs more in bandwidth. And the longer you need to stay on the slower 2GB for deeper processing, the further the average moves away from the faster bandwidth figure, and it proportionally reduces effective theoretical FLOPs too.

So the truth is that the CPU's ballpark ~40GB/s of bandwidth use for games (10% of the time), as Cerny mentions in the Road to PS5, is lowering effective bandwidth to the GPU on PS5. But the XSX already has that on its slower pool, and if you then try to use 12GB as VRAM as you suggested, where you need to take 2/3rds longer on the slower 2GB of processing, you are definitely dropping below PS5's bandwidth, and probably for processing too.
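A small Python sketch of the timeslice argument, under the assumption that the bus serves one pool at a time (the re-read multiplier is my own illustrative knob, not a measured figure):

```python
# Timeslice model: average bandwidth = total bytes moved / total bus time,
# assuming the bus serves only one pool at a time.
def avg_bandwidth(fast_gb, fast_bw, slow_gb, slow_bw, slow_passes=1):
    """slow_passes > 1 models 'deeper processing' re-reading the slow pool."""
    total = fast_gb + slow_gb * slow_passes
    time = fast_gb / fast_bw + (slow_gb * slow_passes) / slow_bw
    return total / time

print(round(avg_bandwidth(10, 560, 2, 336)))     # 504: one uniform pass over 12GB
print(round(avg_bandwidth(10, 560, 2, 336, 4)))  # 432: heavy slow-pool reuse, below PS5's 448
```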
 
Ok, now I get it, you don't actually have a clue. PS5's framebuffer? 😄 Do you know what a framebuffer is? Man, and here I thought you were actually going to explain some of the out there claims you were making.

Yes, we are done. Have a nice day too, all the best.
To summarize the babbling:

Xbox next gen has already beaten the PS6.
Cerny is a dumbo.
Switch 2 is going to get cross-gen games from PS5 and PS6.
 
PS6 Pro will be more powerful in any case.
More than a 2030-31 Xbox Magnus Pro??? 🤔

Based on the leaked Xbox Magnus design (chiplet CPU-GPU)... a first-party Xbox Next Pro version would be very easy to design without touching the base (Magnus), simply adding a superior/more modern GPU, higher clock speeds and/or more RAM.
However, we're also still waiting to find out what these supposed Xbox PC OEMs based on Magnus actually are...
 
I couldn't believe people bought the PS5 Pro at the price it was. I surely will be in for a treat seeing folks buying this Xbox if the price is as stupid as it's rumored to be, lmao.

Just never thought I'd be in a timeline like this, lol
 
Average bandwidth might be 504 GB/s if you aren't doing any processing and just copying to the LLC
XSX iGPU doesn't even have a fucking LLC. It's just L2 and the L2 is tiny. With all due respect, you're just arguing to argue.

the formula I gave you was the literature one for effective bandwidth across different memory pools. Let me explain it much more intuitively:

5/6 of the data sits in the fast pool at 1x speed (560 GB/s)
1/6 sits in the slow pool at 0.6x speed (336 GB/s)

1 / ((5/6)/1 + (1/6)/0.6) = 0.9x, and 0.9 * 560 = 504

This isn't like PC where PCIe is a turtle.

The issue with XSX has never been memory bandwidth. What they did with memory is annoying but they still have a 320-bit bus.

It's the fucking front end. Spot the odd one:

N23: 32CU | 4SA @ 2.38 GHz
PS5: 36CU | 4SA @ 2.15 GHz
XSX: 52CU | 4SA @ 1.82 GHz <= LOL
PRO: 60CU | 6SA @ 2.15 GHz
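The "spot the odd one" list works out numerically like this; a quick Python sketch using shader arrays x clock as a rough front-end throughput proxy (the proxy itself is my simplification):

```python
# Rough front-end rate proxy: shader arrays x clock (GHz), since the
# fixed-function geometry/raster hardware scales with SAs, not CUs.
parts = {"N23": (4, 2.38), "PS5": (4, 2.15), "XSX": (4, 1.82), "PRO": (6, 2.15)}
for name, (sa, ghz) in parts.items():
    print(f"{name}: {sa * ghz:.2f}")
# XSX comes out lowest (7.28 vs PS5's 8.60) despite having the most CUs
# of the three RDNA 2 parts listed.
```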

Xbox was betting that AI / Ray Tracing will be a massive part of next gen. And they wanted a big marketing number.

Instead RT was tiny and AI was nonexistent. So they just got owned by Sony in practice.
 
I couldn't believe people bought the PS5 Pro at the price it was. I surely will be in for a treat seeing folks buying this Xbox if the price is as stupid as it's rumored to be, lmao.

Just never thought I'd be in a timeline like this, lol
High price is one thing but going by rumors it's also a beast of a box feature-wise.

Imagine if it can run previous generations' Xbox games, plus games from Steam, the Epic Games Store, GOG, the Microsoft Store and Game Pass, allow emulators and mods, have no online paywall, have a nice design, run silently, and work well with a controller on the TV.
 
Last couple pages seem like Nvidia fanboy(s) thinking somehow Sony or MS could have made consoles with an RTX GPU and things would be better.

1. Impossible, since Nvidia doesn't have an x86 license. And they wouldn't have collabed with AMD for the CPU on a SoC. Intel was a bad choice since their CPU designs in 2019/2020 were fucking embarrassments.

2. An Nvidia GPU console would be too expensive, since their GPU would require too many transistors and too much die space, killing the target prices. RDNA 2 allowed RT without dedicated silicon for RT. Nvidia's 20 series required wasted silicon for tensor hardware. The RTX 20 series only became affordable enough for a console in 2025, on 8 fucking nm, severely downclocked. lolol
 
High price is one thing but going by rumors it's also a beast of a box feature-wise.

Imagine if it can run previous generations' Xbox games, plus games from Steam, the Epic Games Store, GOG, the Microsoft Store and Game Pass, allow emulators and mods, have no online paywall, have a nice design, run silently, and work well with a controller on the TV.
Imagine there's no games
No need for our TV's.
imagine all the people
bored out of their damned minds.

- John Lennon "Imagine"
 
So is Magnus basically the better Steam Machine? 1200 €/$ for RTX 5080 performance doesn't look bad at all if done right (i.e. an open system).
This could very well lead to a migration from existing PC players to Magnus…but what's in it for Microsoft? Or will it only be possible to buy games from XBox Store?
 
Last couple pages seem like Nvidia fanboy(s) thinking somehow Sony or MS could have made consoles with an RTX GPU and things would be better.
Things would be better, without question.
Impossible, since Nvidia doesn't have an x86 license. And they wouldn't have collabed with AMD for the CPU on a SoC. Intel was a bad choice since their CPU designs in 2019/2020 were fucking embarrassments
ARM leads x86 in client performance.

But it doesn't matter. People have benchmarked the PS5's CPU in that mining rig thing. With 8 cores (devs get 7) it performed like an i3-14100, except single-core was halved. It's not some hot-shit ultra fast CPU. Consoles just have low CPU requirements. How do you think Jaguar worked in the PS4 and the A78 in NS2?
An Nvidia GPU console would be too expensive, since their GPU would require too many transistors and too much die space, killing the target prices
They wouldn't be. The gap between Ampere and RDNA 2 architecturally is so big that it overcomes Nvidia's premium margins and the gap between Samsung's foundry and TSMC.

But Nvidia didn't really have an interest in consoles beyond offloading failed Tegra chips.

The gap in 2027 is not as vast as it used to be. It's a pretty marginal gap. RDNA 5 looks comparable to Blackwell+ (Blackwell on 3N). And we await what Nvidia will cook up in Rubin, and whether Cerny has unannounced tricks.
 
XSX iGPU doesn't even have a fucking LLC. It's just L2 and the L2 is tiny. With all due respect, you're just arguing to argue.
LLC is the Last Level Cache (the L2 in this situation).

I just couldn't be bothered to go read specs to be specific, in case someone (you) claimed there was an L3, and yet you've still twisted that comment to imply LLC isn't a synonym for L2 in a system where the L2 is its last level cache.
 
Things would be better, without question.

ARM leads x86 in client performance.

But it doesn't matter. People have benchmarked the PS5's CPU in that mining rig thing. With 8 cores (devs get 7) it performed like an i3-14100, except single-core was halved. It's not some hot-shit ultra fast CPU. Consoles just have low CPU requirements. How do you think Jaguar worked in the PS4 and the A78 in NS2?

They wouldn't be. The gap between Ampere and RDNA 2 architecturally is so big that it overcomes Nvidia's premium margins and the gap between Samsung's foundry and TSMC.

But Nvidia didn't really have an interest in consoles beyond offloading failed Tegra chips.

The gap in 2027 is not as vast as it used to be. It's a pretty marginal gap. RDNA 5 looks comparable to Blackwell+ (Blackwell on 3N). And we await what Nvidia will cook up in Rubin, and whether Cerny has unannounced tricks.
Are you one of those motherfuckers who don't even bother to engage with or dispute the arguments made to you previously?

Yeah, you're a waste of time.
 
The debate about available VRAM for the PS6 is interesting; most games on the PS6 will probably be able to dedicate 20-23GB of VRAM to the GPU. That's a rough estimate, and depends on the game of course. The only GPUs in the PC space that have that amount of VRAM are the 4090 and 5090. I suppose the 7900 XTX as well, but that lacks in feature set.

By comparison, before the PS5 launched the only GPU with the right amount of power, VRAM, and feature set was the 2080 Ti. However, cards like the 2080 and 2070 Super do just fine for the most part. So I'd expect 16GB cards like the 4080, 4080S, 5070 Ti, and 5080 to be fine, especially with cross-gen probably going on for ages.
 
First party will do it. Third party devs too, if Microsoft provides some incentives (money, help with development of the game).

I don't know much about tech, but I know for a fact that whatever differences there are between the PS6 and Magnus, the end result on screen will be more or less the same. 95% of people will have a hard time differentiating between the versions, despite what forum dwellers suggest. A delta of 25 to 50% is not that significant to begin with, especially since we have hit diminishing returns.
 