Next-Gen PS5 & XSX |OT| Console tEch threaD

Nothing will happen, and don't hope for anything. DON'T GO ON THE HYPE TRAIN. A lackluster-to-mediocre game at best that looks like current gen, but shinier.
 
What does GitHub have to do with it? The split memory speed didn't appear in the GitHub leak.

Your posts are full of false assumptions, like the XSX and PS5 having the same die size, which leads you on a wild goose chase around irrelevant things, solving things that don't need solving. The rumors are actually the other way around: that the XSX die is much bigger than the PS5's. Actually, the original rumor about the XSX and PS5 from the Chinese bulletin board was that the XSX is 360mm^2 and the PS5 is 300mm^2. There is no way in hell MS has the same die size; they have 40% more CUs and an extra 64 bits' worth of memory controllers in the APU.

I read your posts, but instead of commenting on irrelevant things, I tried to get you back to basics and make you use Occam's razor. Would the XSX work just like every other unified-memory console ever, or will it have the freak-of-nature setup you are suggesting, causing them big performance problems? Which one makes more sense? Why even have a big pool of GDDR6 if the CPU and GPU stay separate, instead of going with a much cheaper split pool? There is zero sense in the setup you are suggesting.

Yes, both the CPU and GPU will use a full 320-bit interface, both will have access to 100% of the memory pool. I'm willing to place any bet on that.

It's really impressive how the CPU will have 336GB/s of bandwidth, which is more than enough, and the GPU will have 560GB/s for tasks that require it and 336GB/s for tasks that don't. It's a very optimal design, and I can see how the SSD benefits from this kind of thinking with the SFS hardware in the GPU.
 
It's really impressive how the CPU will have 336GB/s of bandwidth, which is more than enough, and the GPU will have 560GB/s for tasks that require it and 336GB/s for tasks that don't. It's a very optimal design, and I can see how the SSD benefits from this kind of thinking with the SFS hardware in the GPU.
If their design is a straightforward one, both the CPU and the GPU will have 560GB/s access to 10GB and 336GB/s access to 6GB, and it is up to the developer to choose what to place where. It's more of a byproduct than a feature. If GDDR6 wasn't so damn expensive, the XSX would probably have had 20GB providing a single 560GB/s speed.
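For anyone wondering where 560 and 336 come from, here's the back-of-envelope math, assuming the widely expected 14Gbps GDDR6 modules (my assumption, not a confirmed spec):

Code:
# GDDR6 bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8 (bits -> bytes)
GDDR6_GBPS_PER_PIN = 14  # assumed 14Gbps modules

def bandwidth_gb_s(bus_width_bits: int) -> float:
    return bus_width_bits * GDDR6_GBPS_PER_PIN / 8

print(bandwidth_gb_s(320))  # 10 chips x 32-bit -> 560.0 GB/s (the 10GB pool)
print(bandwidth_gb_s(192))  #  6 chips x 32-bit -> 336.0 GB/s (the 6GB pool)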
 
If their design is a straightforward one, both the CPU and the GPU will have 560GB/s access to 10GB and 336GB/s access to 6GB, and it is up to the developer to choose what to place where. It's more of a byproduct than a feature. If GDDR6 wasn't so damn expensive, the XSX would probably have had 20GB providing a single 560GB/s speed.

Interesting. But according to Andrew Goossen, they classified the 10GB as GPU-optimal RAM specifically for the GPU, so developers can possibly do whatever they want, but as a guideline it would be better to ensure that assets requiring higher memory bandwidth, like textures, go into that 10GB.

My other question is how much of a difference an additional 4GB of RAM would make to the system, compared to, say, increasing the SSD speed to 3.7GB/s? I'd like to know what you think.

They are probably paying between $6-7 per GB for GDDR6. It would cost between $24-28 per console. That's nothing, and there's still time to increase it.
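To put numbers on that (using the rumored $6-7 per GB, which is not a confirmed BOM figure):

Code:
# RAM cost sketch based on rumored GDDR6 pricing, not confirmed figures
for price_per_gb in (6, 7):
    print(16 * price_per_gb)  # current 16GB: $96 / $112
    print(4 * price_per_gb)   # 4 extra GB today: $24 / $28
print(4 * 5)                  # 4 extra GB at a guessed $5/GB in ~3 years: $20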
 
Interesting. But according to Andrew Goossen, they classified the 10GB as GPU-optimal RAM specifically for the GPU, so developers can possibly do whatever they want, but as a guideline it would be better to ensure that assets requiring higher memory bandwidth, like textures, go into that 10GB.

My other question is how much of a difference an additional 4GB of RAM would make to the system, compared to, say, increasing the SSD speed to 3.7GB/s? I'd like to know what you think.

They are probably paying between $6-7 per GB for GDDR6. It would cost between $24-28 per console. That's nothing, and there's still time to increase it.
That's a massive increase in cost per console.
 
What does GitHub have to do with it? The split memory speed didn't appear in the GitHub leak.

Your posts are full of false assumptions, like the XSX and PS5 having the same die size, which leads you on a wild goose chase around irrelevant things, solving things that don't need solving. The rumors are actually the other way around: that the XSX die is much bigger than the PS5's. Actually, the original rumor about the XSX and PS5 from the Chinese bulletin board was that the XSX is 360mm^2 and the PS5 is 300mm^2. There is no way in hell MS has the same die size; they have 40% more CUs and an extra 64 bits' worth of memory controllers in the APU.

I read your posts, but instead of commenting on irrelevant things, I tried to get you back to basics and make you use Occam's razor. Would the XSX work just like every other unified-memory console ever, or will it have the freak-of-nature setup you are suggesting, causing them big performance problems? Which one makes more sense? Why even have a big pool of GDDR6 if the CPU and GPU stay separate, instead of going with a much cheaper split pool? There is zero sense in the setup you are suggesting.

Yes, both the CPU and GPU will use a full 320-bit interface, both will have access to 100% of the memory pool. I'm willing to place any bet on that.

PaintTinJr said:
"The XsX CU count is big for the chip size IIRC – pretty sure someone in this thread said both chips are similar areas"

I said 'similar', not 'the same', and was saying that in reference to something I read - because I openly admitted I didn't know the sizes - but I would still say 300mm^2 compared to 360mm^2 is similar in the context of my post and of the components in each chip, assuming those are the sizes. Are they the official sizes? Chip sizes aren't something I would claim to know the de facto standard for, so if you say that is a huge difference in sizes, I'm happy to concede that I don't know how to use relative sizing terms for chip dies.

Neither the PS5 nor the XsX has the same interconnection setup as previous consoles. Zen 2 uses a superset of HyperTransport - which I believe originated in AMD's Opteron server chips a long time ago - and knowing exactly how Infinity Fabric or Infinity Architecture is used in each console to connect the memory to the memory controllers is a big deal, and potentially a huge difference in performance/throughput.

As for asking 'why'? There are plenty of potential reasons - many you should already know, because eco-friendly engineering is no longer optional on devices that sell in the tens of millions, hence two pools mean more waste, more power draw, etc. - and other reasons are probably that Xbox has been quite focused in its messaging on texturing and shading efficiency as silver bullets to gain mindshare with the Ultra-PC-settings crowd. Split contention bandwidth with very fast access for the GPU isn't an issue in current-gen PC ultra-settings games, as the Gears demo showed.
 
Interesting. But according to Andrew Goossen, they classified the 10GB as GPU-optimal RAM specifically for the GPU, so developers can possibly do whatever they want, but as a guideline it would be better to ensure that assets requiring higher memory bandwidth, like textures, go into that 10GB.

My other question is how much of a difference an additional 4GB of RAM would make to the system, compared to, say, increasing the SSD speed to 3.7GB/s? I'd like to know what you think.

They are probably paying between $6-7 per GB for GDDR6. It would cost between $24-28 per console. That's nothing, and there's still time to increase it.
Yes, all I meant was that it's not mandated by the hardware.

IMO, considering both consoles seek to eliminate the streaming pool by catering to the next (or as close to the next) frame, having an extra 4GB would be more significant than having a faster SSD, considering 4GB is probably at least an order of magnitude larger than the PS5 and XSX streaming pools. Adding 4GB would have an even larger effect considering it would give games 17.5GB instead of 13.5GB (a 30% increase), it would unlock 100% of the memory to function at 560GB/s, and it would resolve the slower pool's memory bandwidth overhead, which could be as high as 67% (in case MS didn't do some customization to solve it).
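Rough math behind the 30% and 67% figures (the 2.5GB OS reservation is inferred from MS's 13.5GB-for-games number, not an official spec):

Code:
# 16GB total minus 13.5GB for games implies a ~2.5GB OS reservation (inference)
os_gb = 16.0 - 13.5
print((20.0 - os_gb) / 13.5 - 1)  # ~0.296 -> ~30% more game-usable RAM with 20GB
# "67% overhead": while the bus serves the slow pool at 336GB/s, the 560GB/s peak goes unused
print(560 / 336 - 1)              # ~0.667 -> ~67%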
 
You keep saying the XsX's solution was about cost, but that doesn't match the XsS and XsX pincer attack on the PS5 by price and performance. They can literally price an XsX as high as they like if it is genuinely 20% more performant than the PS5, and the niche gamer it is aimed at would still buy it. A couple of gigabytes of GDDR6 aren't a bank breaker (IMHO). The reality is that if they had a unified 560GB/s they would need 5 memory controller units in the Zen 2 and another 5 in the GPU, and there is no way they have the area (and probably not enough layers) in their chip to do that.

Why would a unified system (that is designed to be unified - not something pieced together) have duplicate memory controllers for the CPU and GPU?

The native Zen2 controller won't exist in a custom unified design, nor will the standard controller of the GPU. Both will utilize shared memory controllers for memory access.

PS4

[image: ps4-reverse-engineered-apu.jpg - reverse-engineered PS4 APU die shot]

Xbox One

[image: YsxT4fN.jpg - Xbox One APU die shot]


You only need ten 32-bit controllers for a unified 320-bit bus (which AMD of course pairs into 64-bit units).
 
Depends on what the BOM currently is, and how much the RAM will cost in, say, 3 years. They'll be paying less than $6 per GB in three years, barring extreme circumstances in the global DRAM market.

We don't know the full BOM rn, but honestly 16GB of GDDR6 is a ton, and must be a huge portion of the BOM already. 4 more gigs is just hard to justify, especially when the XSX has already clearly been optimized around its non-standard approach. These consoles will already be straining to get to market at $500 IMO. More RAM is always good, and more RAM > better storage as long as the current storage isn't total garbage (like a 5400rpm laptop drive), but a change at this point would be surprising.
 

Now let's see...something cool and fun...Could be something revolving around water...arcade-y...with a colourful palette...Astrobot fishing simulator! No ok. Astrobot as the new Playstation mascot in new adventures?
PlayStation All-Stars Battle Royale water sports!
Oh well...I would place my bets on a casual 2D/3D platformer or a local co-op fun multiplayer game.
 
Really, between HZD and GOW on the PS4, along with the Witcher 3....just amazing. The Last of Us remake has to be considered as well since it was the best game on PS3 IMHO, and a strong candidate for the PS4 as well. Can't wait for the next one, as long as the supposed "leaks" aren't true. But you're right about GOW...the pacing was such that I wasn't bored AT ALL and really was sad when it was over.

I love many games indeed, and TLOU made me cry:messenger_crying: It's one of my favorites as well; the storytelling was insane. It saddens me that that kind of storytelling used to be a Metal Gear thing; now look at how that legendary IP is buried. Back at MGS2 I laughed at the drones as too much wishful thinking; at that time I had never seen a drone. Look at the nanomachines in MGS4, pretty much possible now to some degree, and now Death Stranding and the current pandemic.

Kojima is still my favorite gaming figure, although I can feel his ego (well deserved, though) and I hate the cringey jokes in his games; they're his most notable weakness. Well, no one is perfect, I guess. I really miss the endless hours spent watching cutscenes. Let's hope Sony can buy the IP.

If it was mind games, it was pretty elaborate, since they briefed devs on the machine several months ago.

It was real at one point, and it's possible that with how the world is today it may be delayed or possibly canned altogether.

I still believe it's coming, maybe not at launch alongside the Series X, but at some point.

If it was mind games, then even calling the next Xbox 'Series X' was part of it, because the very name implies more than one, to me anyhow.

Lockhart is a stupid idea, enough said.



Geoff has seen something! Probably something that will change gaming more than it has been changed recently:messenger_tears_of_joy:.

No way, you mean to tell me the "experts" at IGN said something stupid?

The same guys who brought you this gem?



Same thing is happening now: 8K is the new future bar, 1080p is ancient, even cellphones are dropping it, and we're talking about 5-6" screens only! 16K is the future, probably near the ceiling. Haters gonna hate.:messenger_winking_tongue:
 
I said 'similar', not 'the same', and was saying that in reference to something I read - because I openly admitted I didn't know the sizes - but I would still say 300mm^2 compared to 360mm^2 is similar in the context of my post and of the components in each chip, assuming those are the sizes. Are they the official sizes? Chip sizes aren't something I would claim to know the de facto standard for, so if you say that is a huge difference in sizes, I'm happy to concede that I don't know how to use relative sizing terms for chip dies.

Neither the PS5 nor the XsX has the same interconnection setup as previous consoles. Zen 2 uses a superset of HyperTransport - which I believe originated in AMD's Opteron server chips a long time ago - and knowing exactly how Infinity Fabric or Infinity Architecture is used in each console to connect the memory to the memory controllers is a big deal, and potentially a huge difference in performance/throughput.

As for asking 'why'? There are plenty of potential reasons - many you should already know, because eco-friendly engineering is no longer optional on devices that sell in the tens of millions, hence two pools mean more waste, more power draw, etc. - and other reasons are probably that Xbox has been quite focused in its messaging on texturing and shading efficiency as silver bullets to gain mindshare with the Ultra-PC-settings crowd. Split contention bandwidth with very fast access for the GPU isn't an issue in current-gen PC ultra-settings games, as the Gears demo showed.
The XSX has an official size, 360.4mm^2. The PS5 doesn't have an official size yet AFAIK. Obviously, if you are talking about two PC GPUs in the same series then 60mm^2 isn't a big difference, but between two such similar console APUs it is. ~60mm^2 is plenty of room for memory controllers and an extra 8 WGPs (an RDNA1 64-bit controller is ~16mm^2 and 8 WGPs are ~39mm^2), especially if Sony's I/O block and sound block are larger than MS's.

Regarding splitting the memory, 10GB of GDDR6 + 8GB of DDR4 would actually cost a lot less and require less power, but it is still an inferior setup to the unified 16GB setup MS has right now. I think it's pretty far-fetched to assume that the GPU is connected to all 10 chips but the CPU isn't, when a fully shared bus is the most conventional setup there is. IMO it would make a lot more sense to just go with a classic 256-bit setup and be done with it. Right now we don't have anything suggesting that's the case, so I have no idea why you are assuming it (and I lurk on a few big internet forums; you're the only one I've seen championing this setup).

I'll be honest, I'll be totally shocked if that turns out to be the case. Having a single pool of 10 chips and not connecting both the GPU and CPU to all 10 is an insane design; it totally undermines the idea of a single GDDR pool.
 
We don't know the full BOM rn, but honestly 16GB of GDDR6 is a ton, and must be a huge portion of the BOM already. 4 more gigs is just hard to justify, especially when the XSX has already clearly been optimized around its non-standard approach. These consoles will already be straining to get to market at $500 IMO. More RAM is always good, and more RAM > better storage as long as the current storage isn't total garbage (like a 5400rpm laptop drive), but a change at this point would be surprising.

The biggest cost is either the APU or the SSD. The 16GB of RAM is between $96-$112. If you've listened to Phil Spencer, MSFT has no issues whatsoever at the moment with regard to the hardware in terms of price, performance and production. Any issues are with software.

Most likely they do not see the need to increase the RAM, but $24-$28 now, or $20 per console at, say, $5 per GB three years down the line, is minute.
 
Why would a unified system (that is designed to be unified - not something pieced together) have duplicate memory controllers for the CPU and GPU?

The native Zen2 controller won't exist in a custom unified design, nor will the standard controller of the GPU. Both will utilize shared memory controllers for memory access.
PS4

[image: ps4-reverse-engineered-apu.jpg - reverse-engineered PS4 APU die shot]

Xbox One

[image: YsxT4fN.jpg - Xbox One APU die shot]
You only need ten 32-bit controllers for a unified 320-bit bus (which AMD of course pairs into 64-bit units).
I get the point you are trying to make, but from what is being said in the PC space regarding AMD's future products using 36-40 CU GPUs, Epyc and Infinity Architecture (and Vega using Infinity Fabric/Architecture), and it supposedly replacing AMD Crossfire, I don't think pictures of old Jaguar-based APUs - one with tiny memory bandwidth and one with great bandwidth - are entirely relevant. You are assuming that 3x the PS4's memory bandwidth in a unified setup is a given solution that you can just cherry-pick for the XsX. If memory controllers and interconnects are such trivial things, then how did Xbox spend more on the Xbox One (excluding Kinect) and end up with such an inferior solution?
 
Geoff has seen something! Probably something that will change gaming more than it has been changed recently:messenger_tears_of_joy:.

lol, no doubt. Especially since he's essentially hosting all gaming-related content this entire summer. He's definitely seen more than we can imagine.
 
I get the point you are trying to make, but from what is being said in the PC space regarding AMD's future products using 36-40 CU GPUs, Epyc and Infinity Architecture (and Vega using Infinity Fabric/Architecture), and it supposedly replacing AMD Crossfire, I don't think pictures of old Jaguar-based APUs - one with tiny memory bandwidth and one with great bandwidth - are entirely relevant. You are assuming that 3x the PS4's memory bandwidth in a unified setup is a given solution that you can just cherry-pick for the XsX. If memory controllers and interconnects are such trivial things, then how did Xbox spend more on the Xbox One (excluding Kinect) and end up with such an inferior solution?

AMD's future plans have zero impact on the XSX; that's essentially irrelevant. The XSX is not using chiplets; there is no interconnect here like a commercial Zen 2 and its I/O die (or other chiplets). The on-die memory controllers will actually work faster than an off-chip I/O die (one of the reasons AMD increased the L3 cache on Zen 2). When MS iterates again in the future, they may go with a chiplet design (probably 3D-stacked), but that doesn't alter the current XSX chip.

Edit: Regarding the X1's inferior memory setup, that's a multi-faceted issue, the rumor being that MS wanted 8GB of capacity from the start, and during the design phase GDDR5 would have been prohibitively expensive. Sony started out targeting 4GB, which made increasing to 8 when pricing dropped quite easy. The lower latency of DDR3 was also a boost for Kinect. But MS needed more working memory bandwidth than DDR3 could offer, so the large (for the time) SRAM pool was a necessity (which unfortunately used a lot of die space and weakened the GPU).
 
This totally makes sense, since it's been a very long time since the other Insomniac studio released the last R&C. But I don't see Sony acknowledging that their rat/dog/whatever franchise is minor and doesn't deserve to be in the full reveal.

I would say many older/mature gamers like most of us here would still love to grab a Crash or R&C game. Actually, this gen has neglected the genre a lot, with Sony at least trying with some LittleBigPlanet and remakes. We need some new installments!

The problem is Sony has lost its most iconic IP, Crash Bandicoot; they're probably trying to forget that. They tried years back to acquire it, but Activision, the villain, is stubborn and won't let go or make a new game.

[image: crash-bandicoot-n-sane-trilogy-cover.cover_large.jpg - Crash Bandicoot N. Sane Trilogy cover]


Sony should blame themselves here; the IP bounced between different companies before it settled at Activision, which has zero interest in it.
 
The XSX has an official size, 360.4mm^2. The PS5 doesn't have an official size yet AFAIK. Obviously, if you are talking about two PC GPUs in the same series then 60mm^2 isn't a big difference, but between two such similar console APUs it is. ~60mm^2 is plenty of room for memory controllers and an extra 8 WGPs (an RDNA1 64-bit controller is ~16mm^2 and 8 WGPs are ~39mm^2), especially if Sony's I/O block and sound block are larger than MS's.

Regarding splitting the memory, 10GB of GDDR6 + 8GB of DDR4 would actually cost a lot less and require less power, but it is still an inferior setup to the unified 16GB setup MS has right now. I think it's pretty far-fetched to assume that the GPU is connected to all 10 chips but the CPU isn't, when a fully shared bus is the most conventional setup there is. IMO it would make a lot more sense to just go with a classic 256-bit setup and be done with it. Right now we don't have anything suggesting that's the case, so I have no idea why you are assuming it (and I lurk on a few big internet forums; you're the only one I've seen championing this setup).

I'll be honest, I'll be totally shocked if that turns out to be the case. Having a single pool of 10 chips and not connecting both the GPU and CPU to all 10 is an insane design; it totally undermines the idea of a single GDDR pool.

When I first read the DF article about the XsX memory, it felt like they avoided getting to the bottom of the issue because they knew there was a slug under the stone. I genuinely hope I am wrong on that. I'm suggesting all modules will still be connected, because all 5 memory controllers will act as one channel with switched modes and shift data as needed, resulting in the following accesses.

1. GPU-optimal memory accessed by the GPU. The first gigabyte in all modules is accessed and 5 controllers return the data with a full 320-bit transfer @ 560GB/s.
2. GPU-optimal memory accessed by the CPU. The first gigabyte in all modules is accessible. 5 controllers access the memory but consolidate it at the three shared memory controllers that are wired to both CPU and GPU and connect to the 2GB modules, resulting in a 192-bit transfer @ 336GB/s.
3. CPU-optimal memory accessed by the CPU. The 2nd gigabyte in the three 2GB modules is accessed by the three shared memory controllers, resulting in a 192-bit transfer @ 336GB/s.
4. CPU-optimal memory accessed by the GPU. The 2nd gigabyte in the three 2GB modules is accessed by the three shared memory controllers, and they shift or expand the data across the full width of the 5 memory controllers as needed, resulting in a 320-bit transfer but only 336GB/s worth of traffic.
 
Both the IGN and Austin Evans videos kinda pissed me off a little. "Graphics have already plateaued." No, you buffoon, we've just not seen next gen yet. The graphics ceiling is so far up and away it's not even funny.
It's what happens when there are journalists without imagination. They should stick to reporting numbers.
 
IGN: "You May Need to Lower Your Expectations For Next-Gen Graphics"

Say that again when Sony, Capcom, Rocksteady and the others show their games.

Also, I think Valhalla is gonna be impressive, maybe not the most impressive thing we'll get at console launch, but I think it's gonna be something that will stand out, to say the least.
 
IGN: "You May Need to Lower Your Expectations For Next-Gen Graphics"

Say that again when Sony, Capcom, Rocksteady and the others show their games.

Also, I think Valhalla is gonna be impressive, maybe not the most impressive thing we'll get at console launch, but I think it's gonna be something that will stand out, to say the least.
If they're keeping it at 30fps for next gen, it's probably because they're trying to upgrade the graphics significantly to set it apart from the current-gen versions.
 
Nothing will happen, and don't hope for anything. DON'T GO ON THE HYPE TRAIN. A lackluster-to-mediocre game at best that looks like current gen, but shinier.

You know, there are things that are more impressive than having the most beautiful dust, though.
For example, Dirt 5 offers:

- 120 FPS
- 4 Player splitscreen

This gen we had mostly "cinematic" games with a "cinematic" framerate.
And splitscreen? Sorry, only for indies!

So, if this gen we get more FPS and more splitscreen with the same graphics? Then this is still a HUGE advancement.
 
Now let's see...something cool and fun...Could be something revolving around water...arcade-y...with a colourful palette...Astrobot fishing simulator! No ok. Astrobot as the new Playstation mascot in new adventures?
PlayStation All-Stars Battle Royale water sports!
Oh well...I would place my bets on a casual 2D/3D platformer or a local co-op fun multiplayer game.
Knack 3
 
I sort of stopped playing the games after Assassin's Creed Black Flag. I tried playing Origins, but the game felt too big and too boring, and I was bogged down by so many side quests forcing me to rank up for the story mode. I guess it was Ubisoft's first attempt at a massive open-world RPG, so that might explain it. I have not yet played AC Odyssey, but since you mentioned it I may give it a try.

I tried Origins and had the EXACT same experience as you. I loved Black Flag but really didn't like any of the games after that one. I thought Origins had some good ideas but was mind-numbingly boring and dull. However, I tried Odyssey recently and I like it soooo much more!

Yeah, the dialogue isn't great all the time and is sometimes a bit dumb, but I think it's a huge step up from Origins and the game really is a lot of fun. Like Origins, just not boring! Also, Kassandra is a great protagonist and very interesting, unlike Bayek and whatever his story was about...

So you might want to give it a go!
 
The thing is, a huge chunk of gamers have never experienced 144Hz gaming, so no, the majority of people don't care about it.
There's a simple reason for this: it's a niche thing. I have a monitor for work and I have a TV for gaming. I can't imagine finishing work and then sitting in the same chair for another couple of hours playing games. It's not healthy, and my spine knows it well. 144Hz monitors are 32" at most.
Considering the 10GB still uses the whole 320-bit interface, I'm assuming it's not a problem. MS actually already said in the DF article that it's not a problem, and that they did it because this 16GB setup is cheaper.


First, you don't dump the whole game to the SSD (compress + copy) first and then dump the new game from the SSD to memory (decompress + copy); as the old game is removed from memory, the new game is written into the newly freed areas at the same time. So I'm assuming the whole process is bound by the writing + compressing speed, considering these consoles are probably optimized for reading and decompression.

Regarding the X, games have 9GB available, so XSX games should take 50% longer to swap (assuming all games take advantage of the whole 9GB on the X1X and 13.5GB on the XSX 100% of the time, which they don't).
I think the process won't involve compression/decompression other than the on-the-fly, hardware-based part of it. Even if we consider 10GB per game on average and, I think, they said you can swap between 6 games, that's around 60GB that needs to be reserved, so more or less 5% of the drive. Do you think saving a bit of space is worth an inevitably slower transition if you used compression? Those memory dumps won't compress well because they're not one kind of data (like textures) but a mixture of everything. How much can you achieve this way, 10-20%? So 6-12GB more SSD space, and probably double the time to get it. That would be 15-20 seconds to swap a game. I think they would like it to stay under 10 seconds.
 
I can't wait for all these people crying about "next gen will be the same games with PC ultra settings" to finally be shut up.

My prediction is that there will be next gen games more impressive looking than Hellblade II.

That's a given. We saw no gameplay of Hellblade 2, just cutscenes. Not a good example to compare to, IMO.
 



It's a wonderful interview overall, and it's wonderful what Unreal Engine is doing for us. I hope we skip the shitty 2D and 8-bit crap; I would rather play an indie game developed in Dreams (a PS4 game) than most of the indie games flooding the store.

Still, with many NDAs in place, the answers don't give everything away.

I can understand them. The last time a dev acknowledged having PS5 devkits, he got erased from the Internet by Sony's Ninjas.
 
This is what I consider next gen, and this is what we should expect, not that circus event:



I would like to see this in next-gen graphics. I mean, seriously, aren't we already tired of looking at tech demos like this and then ending up with sad-looking games like the ones in the Xbox Inside presentation? What's all this for?

Just "hey guys, look at what can be achieved, but you're not going to get it because [insert your excuse of choice here - could be anything from a one-man-army developer to developers that don't have enough money to do it, and so on]".

It's a bit messed up to be promised all these possibilities in this console generation (which, mind you, hasn't even launched yet), just to start thinking, "Oh no, this will still not be possible, let's wait for a console refresh 3-4 years down the line, or even better, let's wait for the NEXT generation of consoles, 6-7 years from now."

And the RT-implementation excuse should not even be used; we're not going to get it, but it will give lazy developers a reason to still go 30FPS this generation. Just like last generation we had a failed 4K promise, we're getting this one now. Always dodging, almost never delivering.

Yes, I would rather get fluid gameplay than eye-popping textures, thank you very much. I am so done with 30FPS.

Sorry for the rant.
 
Considering the 10GB still uses the whole 320-bit interface, I'm assuming it's not a problem. MS actually already said in the DF article that it's not a problem, and that they did it because this 16GB setup is cheaper.

Never said it was a problem, just more granular detail of why the memory is read in slices and you can't just shove in a memory controller and start "splitting slices". It also explains why Lady Gaia is likely correct: you read either the 10GB OR the 6GB; there is no AND possible unless the complex timing is solved, somehow.
 
It's really impressive how the CPU will have 336GB/s of bandwidth, which is more than enough, and the GPU will have 560GB/s for tasks that require it and 336GB/s for tasks that don't. It's a very optimal design, and I can see how the SSD benefits from this kind of thinking with the SFS hardware in the GPU.

No, it's not impressive, it's a compromise: when the bus is reading the 6GB, the 10GB is sitting idle waiting, as it's a common bus for the APU.

It reads better when you think of the CPU and GPU being on different buses, like a PC.

Hence the average GPU speed is much slower than 560GB/s, likely 450-500 if the CPU and audio are in the slower access pool.

Yes, it's still faster than the PS5's GPU access, which is likewise slower than its 448GB/s.
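A crude time-slicing model of that claim; the CPU/audio share of bus time is a made-up parameter, not a measured figure:

Code:
# One shared bus: while it serves CPU/audio traffic, the GPU gets nothing;
# the rest of the time the GPU reads its pool at the full 560GB/s.
def avg_gpu_bw(cpu_audio_bus_share: float) -> float:
    return (1.0 - cpu_audio_bus_share) * 560

for share in (0.05, 0.10, 0.20):     # assumed shares, purely illustrative
    print(share, avg_gpu_bw(share))  # 532, 504, 448 GB/s -> the "450-500" ballpark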
 
I think it's the other way around... pushing 9GB/s full of textures and game data you don't need, using a fixed HW budget, is less desirable than pushing 4.8GB/s if you can get exactly what you need onto the screen in time.

The truth is you want both: a high-throughput SSD that only accesses what you need on screen, as you need it.

Cerny alludes to that by saying their I/O solution can achieve that "if coded right". I agree with him.
Actually, the faster you can load in assets, the more RAM you can use for the current frame and the more detail you can put into the game. I'm sure the faster, lower-latency PS5 SSD will offer benefits in this area.

IMO, considering both consoles seek to eliminate the streaming pool by catering to the next (or as close to the next) frame, having an extra 4GB would be more significant than having a faster SSD, considering 4GB is probably at least an order of magnitude larger than the PS5 and XSX streaming pools. Adding 4GB would have an even larger effect considering it would give games 17.5GB instead of 13.5GB (a 30% increase), it would unlock 100% of the memory to function at 560GB/s, and it would resolve the slower pool's memory bandwidth overhead, which could be as high as 67% (in case MS didn't do some customization to solve it).
Actually, if you think about it, you don't want to eliminate the streaming pool, you want to maximise it.

I think broadly speaking you have two parts of RAM for the current and next frames:
- fixed pool: stuff that is used pretty much every frame (character, certain sounds, certain textures)
- streaming pool: stuff that is expected to be used for the current and next X frames (textures, models, etc for the environment)

So currently the fixed pool is rather large because it probably also contains stuff that's not often needed but is difficult to stream fast enough with current hard drives.

The streaming pool on the other hand is filled with stuff that is not yet needed because in current gen it also contains stuff for the next 30 seconds or so.

So you might think that having a fast SSD allows us to reduce the streaming pool, because we don't need 30s of assets in advance but only 1s of assets. However, you could actually reduce the fixed pool instead, because you can probably stream assets in so much quicker that they no longer have to be fixed in RAM.
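A sketch of that sizing logic; the asset turnover rate is invented for illustration:

Code:
# The streaming pool is sized by lookahead time x asset turnover, not by drive peak speed.
TURNOVER_MB_PER_S = 200  # assumed MB of new assets a scene consumes per second of play

def streaming_pool_mb(lookahead_s: float) -> float:
    return TURNOVER_MB_PER_S * lookahead_s

print(streaming_pool_mb(30))  # HDD era: ~6000MB buffered "just in case"
print(streaming_pool_mb(1))   # SSD era: ~200MB; the freed RAM can serve the current frame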

I believe the streaming pool should be maximised, combined with a high usage of that pool for the current frame. That allows a large part of the RAM to be used for the current frame, allowing for the removal of pop-in and an expansion of detail in the scene (probably limited by the GPU).

The PS5 has quite a few advantages here:
- Fully usable 16GB RAM
- Possibly smaller System RAM footprint
- Faster SSD sequential speeds
- Faster SSD random speeds
- Lower SSD latency

All in all the PS5 might have quite an effective RAM benefit.
 
The PS5 has quite a few advantages here:
- Fully usable 16GB RAM
- Possibly smaller System RAM footprint
- Faster SSD sequential speeds
- Faster SSD random speeds
- Lower SSD latency

All in all the PS5 might have quite an effective RAM benefit.
You don't expect the PS5 OS to consume RAM?
 
This is what I consider next gen, and this is what we should expect, not that circus event:



The biggest thing I would want for that next-gen feeling is destructible environments. GTA with this would be cool; level the whole city with a choppa.

I would be more than happy with 1080p next gen, 30fps or 60fps, and current-gen-level graphics or a bit above, if it gave us this.

Shiny photorealistic games are just boring if the mechanics stay the same; polish a turd more and more and in the end it is still a turd. Make an exploding turd, that would be next level. :messenger_grimmacing_

4K and 60+Hz are just a waste. 4K alone eats up too much of the benefit and is nothing special vs 1080p. Even VHS movies still look much better than 4K games (geometry, lighting and other reality stuff) and their res is lower than low. So cranking up the res is just not worth it when full HD + ultra details would make games look better, or full HD + destruction physics, which I would prefer.
 
Edit: Regarding the X1's inferior memory setup, that's a multi-faceted issue, the rumor being that MS wanted 8GB of capacity from the start, and during the design phase GDDR5 would have been prohibitively expensive. Sony started out targeting 4GB, which made increasing to 8 when pricing dropped quite easy. The lower latency of DDR3 was also a boost for Kinect. But MS needed more working memory bandwidth than DDR3 could offer, so the large (for the time) SRAM pool was a necessity (which unfortunately used a lot of die space and weakened the GPU).
Albert Penello actually commented on that a few months ago on Era and told the story. According to Penello, the Xbox One's goal was to be a multimedia device, so MS pushed hard for 8GB in order to support multiple apps running simultaneously. The only way they could achieve 8GB (keep in mind that the design was conceived before 2010) at console prices was using DDR3. Sony went a different route, a much faster but smaller 4GB GDDR5 solution. According to Penello, the ESRAM wasn't there to offset the slow DDR3; their engineers felt it was the next evolution of the 360's EDRAM, so the ESRAM was going to be there regardless of their RAM solution. MS knew back then that Sony was using 4GB of unified GDDR5 and was sure that DDR3's slow speed would be offset by its huge volume compared to the PS4's 4GB.

Then two things happened that MS wasn't ready for. The first was Sony upgrading the PS4 from 4GB to 8GB in late 2012, which caught MS by surprise and ruined their plan to compensate for their memory speed with higher volume. The second was the Hynix factory fire in 2013, which made DDR3 prices spike and caused GDDR5 prices to freefall. The fire created a ludicrous situation where, even after considering Sony's much superior and more expensive solution, both machines' BOMs became virtually the same.

The funny thing was that the memory setups wound up costing almost the same (ESRAM + DDR3 vs GDDR5), and the X1 APU was even larger than the PS4's because of the ESRAM, a size which would have allowed them to easily have a 2TF+ GPU. Building consoles is a gamble made years in advance, and MS lost big time with theirs.

When I first read the DF article about the XsX memory, it felt like they avoided getting to the bottom of the issue because they knew there was a slug under the stone. I genuinely hope I am wrong on that. I'm suggesting all modules will still be connected, because all 5 memory controllers will act as one channel with switched modes and shift data as needed, resulting in the following accesses.

1. GPU-optimal memory accessed by the GPU. The first gigabyte in all modules is accessed and 5 controllers return the data with a full 320-bit transfer @ 560GB/s.
2. GPU-optimal memory accessed by the CPU. The first gigabyte in all modules is accessible. 5 controllers access the memory but consolidate it at the three shared memory controllers that are wired to both CPU and GPU and connect to the 2GB modules, resulting in a 192-bit transfer @ 336GB/s.
3. CPU-optimal memory accessed by the CPU. The 2nd gigabyte in the three 2GB modules is accessed by the three shared memory controllers, resulting in a 192-bit transfer @ 336GB/s.
4. CPU-optimal memory accessed by the GPU. The 2nd gigabyte in the three 2GB modules is accessed by the three shared memory controllers, and they shift or expand the data across the full width of the 5 memory controllers as needed, resulting in a 320-bit transfer but only 336GB/s worth of traffic.
IMO it's much simpler, only two cases. There are 5 64-bit controllers used by both the GPU and the CPU:
1) Accessing the first 1GB of each chip (10GB) - served by all 5 controllers for 560GB/s, for both CPU and GPU.
2) Accessing the second 1GB of the chips that have a second GB (6GB) - served by 3 controllers for 336GB/s, for both CPU and GPU.

It works the same in the 360, the X1X, the PS4, the PS4 Pro, the PS5 and the Switch. I see no reason for it to work differently on the XSX (other than the side effect of having two chip types).
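A toy model of those two cases (chip mix per the DF article: six 2GB plus four 1GB modules; the interleaving logic is my guess at how such a layout typically behaves, not a confirmed MS design):

Code:
# Six 2GB + four 1GB GDDR6 chips on a 320-bit bus; each 32-bit channel moves 56GB/s at 14Gbps.
CHIP_GB = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]
GB_S_PER_CHANNEL = 56  # 32 bits * 14Gbps / 8

def pool_bandwidth(address_gb: float) -> int:
    upper = address_gb >= 10  # addresses past 10GB live in the second GB of the 2GB chips
    channels = sum(1 for c in CHIP_GB if c >= (2 if upper else 1))
    return channels * GB_S_PER_CHANNEL

print(pool_bandwidth(5))   # in the first 10GB: 10 channels -> 560 GB/s
print(pool_bandwidth(12))  # in the upper 6GB:   6 channels -> 336 GB/s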

I think the process won't involve compression/decompression other than the on-the-fly, hardware-based part of it. Even if we consider 10GB per game on average and, I think, they said you can swap between 6 games, that's around 60GB that needs to be reserved, so more or less 5% of the drive. Do you think saving a bit of space is worth an inevitably slower transition if you used compression? Those memory dumps won't compress well because they're not one kind of data (like textures) but a mixture of everything. How much can you achieve this way, 10-20%? So 6-12GB more SSD space, and probably double the time to get it. That would be 15-20 seconds to swap a game. I think they would like it to stay under 10 seconds.
It depends on the implementation and the capabilities of the machine. Using zlib, you should achieve ~30% compression on game data, which would allow another 1-2 games in the quick resume pool. When you use quick resume, the CPU is almost completely available, and I'm assuming that utilizing almost 100% of 16 threads for zlib compression can get pretty fast results. At the same time, the XSX decompression block can decompress the zlib-compressed memory dump from the SSD back into memory. So data is compressed by the CPU with zlib on the way out, while data is decompressed by the decompression block on the way in.

I don't really know if it will work like that, but with a full Zen 2 CPU available during the process and a decompressor available at the same time, it seems like a no-brainer. It would also make the data transfer faster, if the CPU compression can keep up. The only question IMO is: can the compression on the CPU keep up?
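A rough timing sketch of that pipeline; the compression ratio, CPU deflate rate and SSD write speed below are my guesses, not measured XSX figures:

Code:
# Quick-resume suspend: the CPU compresses the RAM dump while the SSD writes it out.
dump_gb       = 13.5   # game-visible RAM to save
zlib_ratio    = 0.70   # ~30% reduction, a common rough figure for mixed data
cpu_gb_s      = 2.0    # guessed aggregate zlib deflate rate across 16 threads
ssd_write_gbs = 1.0    # guessed sustained NVMe write speed

suspend_s = max(dump_gb / cpu_gb_s,                    # time to compress everything
                dump_gb * zlib_ratio / ssd_write_gbs)  # time to write the compressed dump
print(suspend_s)  # ~9.5s with these guesses; the SSD write, not the CPU, is the bottleneck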

Never said it was a problem, just more granular detail of why the memory is read in slices and you can't just shove in a memory controller and start "splitting slices". It also explains why Lady Gaia is likely correct: you read either the 10GB OR the 6GB; there is no AND possible unless the complex timing is solved, somehow.
If there is zero customization? Yes, you either read the 10GB or the 6GB; that's why I've talked about the slow pool creating a 67% overhead, because there are 4 channels sitting idle while it is read. But if this was MS's plan all along, they might have some form of customization to keep those 4 channels busy while the 6GB pool is read, considering GPUs constantly read data. I guess we will have to wait to know more, but as it stands, reads/writes from/to the 6GB pool will have a 67% overhead while 4 channels idle (which is what Lady Gaia was talking about).

Actually, if you think about it, you don't want to eliminate the streaming pool, you want to maximise it.

I think broadly speaking you have two parts of RAM for the current and next frames:
- fixed pool: stuff that is used pretty much every frame (character, certain sounds, certain textures)
- streaming pool: stuff that is expected to be used for the current and next X frames (textures, models, etc for the environment)

So currently the fixed pool is rather large because it probably also contains stuff that's not often needed but is difficult to stream fast enough with current hard drives.

The streaming pool on the other hand is filled with stuff that is not yet needed because in current gen it also contains stuff for the next 30 seconds or so.

So you might think that having a fast SSD allows us to reduce the streaming pool, because we don't need 30s of assets in advance but only 1s of assets. However, you could actually reduce the fixed pool instead, because you can probably stream assets in so much quicker that they no longer have to be fixed in RAM.

I believe the streaming pool should be maximised, combined with a high usage of that pool for the current frame. That allows a large part of the RAM to be used for the current frame, allowing for the removal of pop-in and an expansion of detail in the scene (probably limited by the GPU).

The PS5 has quite a few advantages here:
- Fully usable 16GB RAM
- Possibly smaller System RAM footprint
- Faster SSD sequential speeds
- Faster SSD random speeds
- Lower SSD latency

All in all the PS5 might have quite an effective RAM benefit.
TBH, I'm not 100% sure what you mean. The streaming pool will never be gone, even if you serve the next frame, because you have to buffer that data to be ready for the next frame anyway. But the streaming pool will be so small that you could say it's eliminated. The data for the current frame, what you called the fixed pool, will be huge. So the proportions compared to previous generations will change: from a big streaming pool and a big fixed pool on the PS4, you get a tiny streaming pool and a huge fixed pool on the PS5.

Regarding the advantages, where did you get things like fully usable 16GB on the PS5? Sony still hasn't revealed how much memory games have available; it might be more than the XSX's 13.5GB, it might be less, and it might be exactly the same. We don't know yet.
 