Next-Gen PS5 & XSX |OT| Console tEch threaD

There's the system beep when PS4 is powered on and then there's a small tune that plays when the player account is selected and goes to the homescreen.
 
This post is from May 2019, after the 1st wired article.
 
That Cerny revealed to be an early low-speed devkit.
So what is the argument here?

That a low-speed devkit of the PS5 can load Spider-Man in 0.8 sec, while the latest XSX devkit needs 5-6 sec for State of Decay, and the Spider-Man demo isn't optimised? So not only can the low-speed devkit load a game that is much heavier on the assets side than State of Decay, it does so not twice as fast but 5-6 times as fast. And you wonder why people think they might have been doing some optimisations... You're basically saying that the SSD of the XSX is 5-6 times slower than the PS5 SSD, and maybe even more, because this was a low-speed devkit.
 
This post is from May 2019, after the 1st wired article.
It all sounds pretty nuts... do normal drives just not ever really come close to their theoretical maximum?

Feel like Sony is either gonna get copied here (like Cerny suggested), or some of this is BS, or there's some sort of risk involved. Truly insane if Sony just decided to look at I/O for the PS5 and stomped all over what SSD manufacturers have been able to do lol
 
Man, my backlog is so big and it doesn't help to know they can be played in boost mode on PS5. I usually don't play games again after I finish them, so I have to decide between playing now on PS4 or playing better in next gen enhanced BC.
I honestly don't care much at all for next-gen games right now. Games look great on the Pro. I just want a damn upgraded console because I'm so tired of the jet engine noise and the slow loading times like in FF7R.
 
Yeah I'm the same, it's my PS4 backlog that I'm looking forward to getting to.

I mostly played PC games this gen, partly because I just can't stand the load times on 5400 RPM drives. Kept buying the big PS4 exclusives and never getting around to them.

Don't have that problem on Xbox since all the exclusives come to PC where I have SSD and a much faster CPU/etc.

My daughter plays Sims 4 constantly. She has a custom content problem: she has 119 GB of it and won't delete any of it.

The game took over an hour to load every single time because of it.

So three days ago I installed an NVMe M.2 drive. The game now loads in 6 minutes and plays flawlessly.
 
Bloody hell. I have to move to America.
I bought the house during my first marriage where I was briefly going through an early-life crisis and thought I wanted kids and a big back yard.

Woke up one day realizing I was nuts and left my ex and got saddled with a massive divorce settlement and stuck with the mortgage on a house worth less than when I bought it lol I'm back financially but there's a reason I have a house far bigger than what myself and my current wife need and it ain't all roses lol
 
What a sad, desperate attempt 🤣

XSX has a theoretical max output of 6GB/s. How in the blue hell do they achieve 7.2GB/s? That's like saying the PS5 can decompress 36GB/s because.....reasons

Wake up from this dream: XSX is 2.4GB/s raw and 4.8GB/s compressed, PERIOD
I have no idea what they're smoking or using. I think no doctor, not even Strange, could help them. As gamenyc78 said: they're a lost cause.
 
I bought the house during my first marriage where I was briefly going through an early-life crisis and thought I wanted kids and a big back yard.

Woke up one day realizing I was nuts and left my ex and got saddled with a massive divorce settlement and stuck with the mortgage on a house worth less than when I bought it lol I'm back financially but there's a reason I have a house far bigger than what myself and my current wife need and it ain't all roses lol
That's what I love about America. You are both nuts and rich. In Spain we are just nuts.
 
What a sad, desperate attempt 🤣

XSX has a theoretical max output of 6GB/s. How in the blue hell do they achieve 7.2GB/s? That's like saying the PS5 can decompress 36GB/s because.....reasons

Wake up from this dream: XSX is 2.4GB/s raw and 4.8GB/s compressed, PERIOD

Well, it's 2.4GB/s, and I'm pretty sure Xbox will reach 4.8GB/s at times, but will average quite a bit below that. 4GB/s maybe is my guess.
Decompression efficiency and ratio differ from data to data, so the compressed figure is just an average. That's also why Xbox has a 6GB/s decompression block and the PS5 has a 22GB/s one.
Both will get over their respective speeds, but not for a whole second, not even a tenth of a second. Periodically, though, it will happen that a certain kind of data is very easy to decompress.

In the end it's basically 2.4GB/s vs 5.5GB/s, because software will change over the generation, and whatever software solution happens to be the best will be used by both. Meaning that if you gain 40% more data out of the SSD because of your decompression solution, each party gets that 40%. Double the data, as Xbox claims? Well... I doubt they're going to achieve anything near that, tbh. 5.5GB/s to 8GB/s sounds a lot more reasonable in comparison.

When the PS4 came along nobody used Kraken; developers just decided to use it since it's efficient and easy to work with.
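The "each party gets that 40%" point is just multiplication over the raw figures. A quick sketch (the 1.4x ratio here is an assumed example, not a measured figure for either codec):

```python
raw_xsx = 2.4   # GB/s raw, Series X
raw_ps5 = 5.5   # GB/s raw, PS5

# Suppose a codec yields "40% more data" per byte read; both sides benefit equally.
ratio = 1.4
print(round(raw_xsx * ratio, 2))   # ~3.36 GB/s effective
print(round(raw_ps5 * ratio, 2))   # ~7.7 GB/s effective
```

Whatever the software ratio ends up being, the raw gap between the drives is preserved.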
 
I think the 22GB/s figure is total peak throughput in rare circumstances where data compresses extremely well;

Help me to understand being a layman and all: at what point will the "data compress well"? Is this during gameplay where there may be (rare) instances of that 22GB/s occurring? Please and thanks 🍻. I may do some research into how and why decompression became a reality for graphics in gaming 🤔...fascinating as hell.
 
Your point? I never said it wasn't.

Don't waste my time with smartass replies.

Maybe I failed to understand you correctly.

Could you explain your point again please?

"Their first showing of the newer tech, is how to load your whole game state in just a few seconds, switching from game to game."

This is the part that I'm struggling with. With State of Decay they loaded the game from zero. But that switching function sounds like they loaded the games from a sleep state, which I believe functions similarly to putting your console to sleep and starting up into the game again. On my console, when I do that, it lets me enter the game without having to load the entire game.

So essentially you really can't compare the two.
 
Help me to understand being a layman and all: at what point will the "data compress well"? Is this during gameplay where there may be (rare) instances of that 22GB/s occurring? Please and thanks 🍻.
Not all files compress equally. That's basically it; take two files: one might compress by 25%, the other by 50%. It depends on the file type, the data within, and the compression algorithm used.
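You can see this with any general-purpose compressor. A quick sketch using Python's zlib as a stand-in (the consoles use Kraken/BCPack, which differ in detail but follow the same principle):

```python
import os
import zlib

redundant = b"\x00" * 1_000_000   # highly repetitive data (think: a flat blue image)
noisy = os.urandom(1_000_000)     # random bytes: essentially incompressible

print(len(zlib.compress(redundant)))  # around 1 KB: compresses roughly 1000:1
print(len(zlib.compress(noisy)))      # around 1 MB: barely shrinks at all
```

Same compressor, same input size, wildly different output sizes, which is why any single "compressed GB/s" number is an average over whatever data the game happens to stream.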
 
Help me to understand being a layman and all: at what point will the "data compress well"? Is this during gameplay where there may be (rare) instances of that 22GB/s occurring? Please and thanks 🍻. I may do some research into how and why decompression became a reality for graphics in gaming 🤔...fascinating as hell.
22GB/s is not the typical decompression case, it's a best case scenario. In other words, 22GB/s is never going to happen.
 
This post is from May 2019, after the 1st wired article.


So just taking what he gives as an example, he goes from HDD to SATA, to NVMe, to PCIe, to PS5. He starts with 42 seconds, then HDD to SATA takes it to 25. SATA to NVMe is 7% faster; I'll round a bit and call it 22.5 seconds now. So with 0-3% more from NVMe to PCIe we should end up with about 22 seconds, just over 50% of the time it took the HDD. They then state the PS5 is 32% faster, which should be a 7-second (ish) lead, so this title loads in only 15 seconds on PS5, down from 42 seconds, a mere 35% as long.

The Xbox Series X has been shown loading State of Decay in 10 seconds, whereas the One X takes a full 51 seconds. That's about 20% as long.
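Back-of-envelope version of that chain, reading each "X% faster" as "X% less time" (the input figures come from the quoted image, so treat them as rough):

```python
hdd = 42.0                 # seconds on HDD
sata = 25.0                # seconds on SATA SSD
nvme = sata * (1 - 0.07)   # "7% faster"   -> ~23.3 s
pcie = nvme * (1 - 0.03)   # "0-3% more"   -> ~22.6 s
ps5 = pcie * (1 - 0.32)    # "32% faster"  -> ~15.3 s

print(round(ps5, 1), round(ps5 / hdd, 2))  # ~15.3 s, ~37% of the HDD time
xsx, one_x = 10.0, 51.0
print(round(xsx / one_x, 2))               # ~0.2 of the One X time
```

The exact percentages shift a bit depending on how you read "faster", but the conclusion holds either way: the claimed chain lands in the mid-30s percent range, while the State of Decay demo already showed about 20%.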


Basically, while this guy says a ton of stuff, it's all useless. I'm sure IRL the XSX and PS5 will have very similar BC improvements, but to try and say the PS5 has some magic tech, and then give figures that lead to WORSE performance than what Xbox has already shown in actual demos, is silly.
 
Help me to understand being a layman and all: at what point will the "data compress well"? Is this during gameplay where there may be (rare) instances of that 22GB/s occurring? Please and thanks 🍻. I may do some research into how and why decompression became a reality for graphics in gaming 🤔...fascinating as hell.
It's not during any particular moment. Say you've got a 4K image, 3840x2160 pixels, and the whole thing is blue and blue only. The compressed version can just say "picture 3840x2160, all pixels blue". That's very little data, while the raw file would be "picture 3840x2160, top-left pixel blue, the pixel right of that also blue... and so on."

This is NOT how compression literally works, but certain kinds of information can be stored without losing any data while the "description" stays small. If you add a lot of detail, like a tree with bugs and birds on it, then you need far more extra information to get to the same perfect outcome, while the end result stays the same size, because the end result is raw: every pixel has a certain color assigned to it.

Very rough explanation, hope that helps.
tl;dr: Good compression is not something that occurs while doing a specific task; it's about how easily you can describe the decompressed result without losing meaningful detail.
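The "all pixels blue" description above is essentially run-length encoding. A toy version to make the intuition concrete (real game codecs like Kraken are far more sophisticated than this):

```python
def rle_encode(values):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return runs

flat_row = ["blue"] * 3840                    # one all-blue 4K scanline
busy_row = ["blue", "green", "brown"] * 1280  # a detailed scanline, no repeats

print(len(rle_encode(flat_row)))  # 1 run describes all 3840 pixels
print(len(rle_encode(busy_row)))  # 3840 runs: no redundancy to exploit
```

Same pixel count, same "decompressed" size, but one compresses to almost nothing and the other doesn't compress at all, which is exactly why throughput figures like 22GB/s are best cases rather than typical ones.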
 
oh man back to SSDs..

I just wanna say that yes, the Sony PS5 has the advantage in higher compressed bandwidth compared to the Xbox Series X from its SSD and other custom parts (I/O etc),
but the Xbox Series X has

*112 GB/s more memory bandwidth* in its 10GB of GDDR6 RAM dedicated to graphics (yes, I know it has the other 3GB at a lower 336GB/s of bandwidth for graphics)

If the PS4 Pro is any indication of what is possible with slightly more RAM than the PS4, a slightly better CPU, slightly more compute units, and 42GB/s of higher bandwidth, then imagine what is possible with the Xbox Series X with higher bandwidth on top of:

MORE COMPUTE UNITS and a faster CPU clock speed, as this contributes to the *PROCESSING* of hi-fidelity graphics.

I am praying to the Video gamez Gods that this will finally end cardboard-cutout trees, leaves, and background audiences in racing games, sports games, etc.

And no, I am not a computer engineer, programmer, artist etc. But I can understand this much at least.
 
oh man back to SSDs..

I just wanna say that yes Sony PS5 has the advantage on higher compressed bandwidth compared to Xbox Series X from its SSD and other custom parts (I/O etc),
but Xbox Series X has

*112 GB/SEC of more memory bandwidth* in its 10GB GDDR6 RAM dedicated to graphics (yes I know it has the other 3GB of lower 336GB/sec of bandwidth for graphics)

if PS4Pro is any indication of what is possible with slightly more RAM than PS4, slightly better CPU, slightly more compute units, 42GB/sec of higher bandwidth then imagine what can be possible with Xbox Series X with higher bandwidth on top of:

MORE COMPUTE UNITS, Faster CPU clock speed, as this contributes to the *PROCESSING* of Hi-Fidelity graphics.

I am praying to the Video gamez Gods that this will finally end card board cut out trees, leaves, and background audience in racing games, sports games, etc.

And no, I am not a computer engineer, programmer, artist etc. But I can understand this much at least.

With the PS5's weaker GPU, does it need more bandwidth than it currently has?

I'm just trying to figure out why Sony chose that bandwidth when they could have gone with something higher.
 
Not sure what your point is. What does VRAM speed have to do with SSD speed? Literally nothing.
The PS5 does not have higher decompressed bandwidth; the PS5 has higher SSD bandwidth, overall, in general, under pretty much every circumstance.

Does that mean much? Well, hard to say. It's difficult to give an accurate answer, since up until now games were designed around slower drives. It will be a huge game changer, that's an easy guess, but does 2.4GB/s actually bottleneck anything?
I don't know. Cerny talked a bit about what faster drives can do, but it's still an unknown. Overall texture detail on PS5 could be higher or some such, but well...

The differences between these consoles are pretty minor. Early in the XBO/PS4 lifecycle I enjoyed the term "HD twins" for them, but the Xbox One and PS4 had a bigger power difference than the PS5/XSX do.
 
No; just the 2 of us... but we have a big house and have friends/family who come over to game. My wife plays on console far more than I do and depending on what she's doing she plays in different rooms in the house.

Gotta up my game and have a 5th PS4 in the bathroom.:messenger_winking_tongue:

We have more people at home though :lollipop_tears_of_joy: Now you are in the "untouchable zone" because you bought enough PS4s to make you a Sony diplomat.
 
Point was about no visible componentry, so we basically don't know if there were any type of optimisations.

This doesn't make sense. OF COURSE there were other optimizations. The point is, this was a very early dev kit and the likelihood is that things were improved over that. Not sure what point there is to be made here. In the end, what matters is what the retail unit performs like. Is that where you're going with this? That the retail unit may perform worse?
 
but does 2.4gb/s actually bottleneck anything?
That's my big question too.

Everyone is talking about how the PS5 will be able to have way more detail per scene, but nobody is talking about how there may be other bottlenecks, like game size.

The PS5 can stream textures at insane rates, but what is actually feasible for a given scene, in a large game? At what point does it become unrealistic to expect the level of detail the insane SSD can spit out?
 
With the PS5s weaker GPU does it need more bandwidth than it currently has?

The PS5's GPU contains heavily customized, supercharged RDNA 2 compute units running at a Sonic the Hedgehog-speed 2.23 GHz clock frequency. How these compute units are utilized will most likely be fully demonstrated with first-party software (which will most likely explain why higher bandwidth is not needed; perhaps this is where the SSD bandwidth kicks in?). With third-party software, Digital Foundry will examine every pixel and frame between XSX and PS5 in the months after both consoles launch.
 
With the PS5s weaker GPU does it need more bandwidth than it currently has?

I'm just trying to figure out why Sony chose that bandwidth when they could have gone with something higher.

They were testing 536 last year, but stuck with 448. Probably diminishing returns and cost effectiveness. 536 would have been better, but why pay, let's say, 15% more when your real world advantage is 5%?
 
Gotta up my game and have a 5th PS4 in the bathroom.:messenger_winking_tongue:

We have more people at home though :lollipop_tears_of_joy: Now you are in the "untouchable zone" because you bought enough PS4's that makes you a Sony diplomat.
Heh well we also have 3 Xbox One's. Was planning on getting a 4th (a One X) but just... don't really use Xbox very much because of the dual PC releases.
 
oh man back to SSDs..

I just wanna say that yes Sony PS5 has the advantage on higher compressed bandwidth compared to Xbox Series X from its SSD and other custom parts (I/O etc),
but Xbox Series X has

*112 GB/SEC of more memory bandwidth* in its 10GB GDDR6 RAM dedicated to graphics (yes I know it has the other 3GB of lower 336GB/sec of bandwidth for graphics)

if PS4Pro is any indication of what is possible with slightly more RAM than PS4, slightly better CPU, slightly more compute units, 42GB/sec of higher bandwidth then imagine what can be possible with Xbox Series X with higher bandwidth on top of:

MORE COMPUTE UNITS, Faster CPU clock speed, as this contributes to the *PROCESSING* of Hi-Fidelity graphics.

I am praying to the Video gamez Gods that this will finally end card board cut out trees, leaves, and background audience in racing games, sports games, etc.

And no, I am not a computer engineer, programmer, artist etc. But I can understand this much at least.

The Series X also needs to feed 16 more CUs with RAM bandwidth. RAM bandwidth per CU is very comparable between the PS5 and XSX.
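Rough numbers behind that claim, from the public specs (this ignores the CPU and audio sharing the bus, and the XSX figure only covers the fast 10GB pool):

```python
xsx_bw, xsx_cus = 560.0, 52   # GB/s on the 10GB pool, active compute units
ps5_bw, ps5_cus = 448.0, 36

print(round(xsx_bw / xsx_cus, 1))  # ~10.8 GB/s per CU
print(round(ps5_bw / ps5_cus, 1))  # ~12.4 GB/s per CU
```

By this crude per-CU measure the PS5 actually comes out slightly ahead, though real contention depends heavily on the workload.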
 
With the PS5s weaker GPU does it need more bandwidth than it currently has?

I'm just trying to figure out why Sony chose that bandwidth when they could have gone with something higher.
It's cheaper.

It also has a smaller thermal footprint.

Same reason MS did it. Everyone is trying to save dollars where they can.
 
With the PS5s weaker GPU does it need more bandwidth than it currently has?

I'm just trying to figure out why Sony chose that bandwidth when they could have gone with something higher.

It depends on the workload, but it seems that pretty much any GPU can be bandwidth-limited at times, especially at 4K 60. Really fast GPUs hit their cap, but less powerful GPUs have issues too. Often smaller GPUs receive variants with as much memory as their bigger brothers, which do increase performance, as do memory overclocks. The GTX 1060 line, for example, ranged from 3GB to 6GB and shipped with both GDDR5 and GDDR5X. The issue is one of price to performance: super-high-end GDDR memory is expensive, and you really want to spend as much on your actual GPU die as possible.
 
They were testing 536 last year, but stuck with 448. Probably diminishing returns and cost effectiveness. 536 would have been better, but why pay, let's say, 15% more when your real world advantage is 5%?

This seems like the most likely reason to me. I'm pretty sure the current bandwidth amount hits their sweet spot where price and performance is concerned.

I'm assuming that Microsoft chose a higher amount because their GPU needs it.
 
It depends on the workload, but it seems that pretty much any GPU can be bandwidth limited at times, especially at 4K 60. Really fast GPUs hit their cap, but less powerful GPUs have issues too. Often smaller GPUs receive variants with as much memory as their bigger brothers, which do increase performance, as do memory OCs. The gtx 2060 line ranged from gddr5 at 3gb to 6gb of gddr6. The issues is one of price to performance. Super high end GDRR memory is expensive, and you really want to spend as much on your actual GPU die as possible.

I keep having to post this every week it seems, oh well


 
Heh well we also have 3 Xbox One's. Was planning on getting a 4th (a One X) but just... don't really use Xbox very much because of the dual PC releases.

Yes, I might try some of the xbox exclusives "with no ray tracing" on Radeon VII, other parts seem to be much better than XSX on paper so far. If I find them interesting enough might throw a big navi in when it releases, mainly for the insane 2TB/s bandwidth and 24GB HBM2E VRAM for future proofing for video editing (4K now, probably 8K later) especially if I start using raw video files and high frame rates. Plus for HDMI 2.1, as Radeon VII supports only HDMI 2.0 like all current cards (I don't want DP 1.4 as I prefer using the TV).

Still, I doubt I'll ever start any game on my PC, just like my previous PC that was good for gaming in 2010.
 
oh man back to SSDs..

I just wanna say that yes Sony PS5 has the advantage on higher compressed bandwidth compared to Xbox Series X from its SSD and other custom parts (I/O etc),
but Xbox Series X has

*112 GB/SEC of more memory bandwidth* in its 10GB GDDR6 RAM dedicated to graphics (yes I know it has the other 3GB of lower 336GB/sec of bandwidth for graphics)

if PS4Pro is any indication of what is possible with slightly more RAM than PS4, slightly better CPU, slightly more compute units, 42GB/sec of higher bandwidth then imagine what can be possible with Xbox Series X with higher bandwidth on top of:

MORE COMPUTE UNITS, Faster CPU clock speed, as this contributes to the *PROCESSING* of Hi-Fidelity graphics.

I am praying to the Video gamez Gods that this will finally end card board cut out trees, leaves, and background audience in racing games, sports games, etc.

And no, I am not a computer engineer, programmer, artist etc. But I can understand this much at least.

Right, but you are forgetting that:

- more CUs does not necessarily mean you're getting more done, as you need more memory bandwidth to keep more CUs fed
- 300MHz is negligible
- mixed 1GB and 2GB RAM chips effectively decrease bandwidth (more time to fill) when you're hitting the 2GB ones. Admittedly, the PS5 may also have mixed-size chips.

With the current data, we can assume an 18% graphics processing advantage for the Xbox, including ray tracing (it scales with CUs and clock speed), up to 7?% in whatever the CPU is running, and a 15?% advantage in memory bandwidth.

On the opposite side:

- the Tempest Engine, with around double the processing power of the Xbox's audio chip
- possibly a different VRS implementation (Sony has several patents on this; VRS is Microsoft's name for it) that may or may not have its own custom I/O and not dip into the CPU (making MS's advantage just 100MHz)
- double the throughput via SSD, which means pretty much double the speed in RAM fill.

All in all, you are likely to see slightly better FX on the Xbox with a potential 18% advantage in resolution, but better audio and potentially higher-quality assets on the PS5. These differences will be apparent in first-party games and most likely not third-party ones.

Third party games will likely be pretty similar overall, and it will take a DF video to nitpick slight differences.

My 2 cents.
 
Let's say you want to time travel between two distinct periods in the same scene and you need to switch out 8GB of data to do it. With the PS5 you can likely do it in under a second whereas on XSX you're likely looking at close to two seconds. The former equates to near-instant whereas the latter equates to a short loading screen or elongated transition.
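Those "under a second" vs "close to two seconds" figures are just transfer time at each console's typical compressed throughput (8-9 GB/s claimed for the PS5 with Kraken, 4.8 GB/s for the XSX with BCPack), ignoring everything else in the pipeline:

```python
data_gb = 8.0                    # data to swap for the time-travel scene
ps5_rate, xsx_rate = 9.0, 4.8    # GB/s, typical compressed figures from the reveals

print(round(data_gb / ps5_rate, 2))  # ~0.89 s: effectively instant
print(round(data_gb / xsx_rate, 2))  # ~1.67 s: a visible transition
```

The absolute difference is under a second, but it straddles the perceptual line between "instant" and "short loading screen", which is the whole argument.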

You may think: well, just work around it and elongate the transition. Whichever way you look at it, you're having to make a compromise on one and not on the other. Take the Titanfall 2 time-switching mission, for example (but on a grander scale, with far greater visual density/complexity and more complex simulation). That mission relies on a near-instant switch between environments to perform various time-sensitive gameplay tasks such as wall running, jumping/landing, avoiding/attacking enemies, and avoiding landing in certain areas.

The XSX is a vast improvement over last gen already, but the speed and I/O stack of the PS5, from what we've heard, appear to reach a threshold where you don't just speed up loading or increase per-scene complexity, but open up further gameplay implications and minimise the compromises between new core functionality and scene complexity.

I don't think the "~100x" Cerny stated is just a nice round number in regards to real-world results; it represents a threshold between less compromise and total freedom (relative to the rest of the system's capabilities).
 
Right, but you are forgetting that:

- more CUs does not necessarily mean you're getting more done, as you need more memory bandwidth to fill more CUs
-300mhz is negligible

- I don't get this. Why do we expect GPUs in general to gain more CUs over time, since that contributes to more TFLOPS, assuming sufficient bandwidth as well?
- Does CPU frequency have less of an impact on graphics than GPU frequency? By that logic, you could say the PS5's GPU frequency advantage over the XSX (400MHz) is negligible too.
 
Couple of things:
1. If streaming is faster, it means less RAM usage, not more.
2. Rendering high-quality assets costs pretty much the same as rendering low-quality ones. You can check the performance of any fan-made texture pack; the differences are negligible, even when texture size is increased 2x across the board (google the recent Witcher 3 re-tex).
Interesting. Similarly, shouldn't we expect faster CUs to use less RAM than slower CUs? Because they occupy the RAM for less time (being clocked higher)?
 
but does 2.4gb/s actually bottleneck anything?
Yes and no. The bottleneck is not the SSD speed. The SX architecture is quite fantastic. The bottleneck in both consoles is the RAM. With better assets you need bigger RAM pools. If this generation had wanted to keep a graphical jump on the scale of 360-to-PS4 while staying on mechanical drives, it would have needed 128GB of RAM at 1TB/s. See? Those PCIe SSDs seem expensive, but they are actually way cheaper than the alternative. Since RAM technology hasn't kept up with the growth in processing power, these SSDs are here to help with that.

Both systems have dedicated silicon to load assets faster from their drives, and that's not because people want 2-second load times so badly that it's worth the investment. It's because without fast asset streaming, 16GB of RAM at 448-560 GB/s is ridiculous compared with previous generational jumps. So everyone and their mother is currently implementing ways to move assets on the fly in their engines according to these new specifications.

So, is 4.8GB/s a bottleneck for the system? No. But 16GB of RAM is, and it's less of a bottleneck on the PS5 than on the SX, especially factoring in that the SX is 18% more powerful but has neither more RAM nor a faster drive to move assets in and out of that pool.
 
With the PS5 you can likely do it in under a second whereas on XSX you're likely looking at close to two seconds.
The more I look into it, the more I doubt it will be just two seconds.

There is a reason why Sony can load Spider-Man in less than a second while State of Decay takes 11 seconds. Both are probably loading 5GB of data max. If what you are saying is true, then State of Decay should've taken 2 seconds to load. But we know it took 11 seconds.

Cerny said this himself: if you replace the HDD in a PS4 Pro with a 10x faster SSD, it only amounts to a 2x decrease in loading times. You need to do more than just replace the HDD; you have to redesign the I/O to make sure a 100x faster SSD can actually offer 100x faster loading and streaming.

The Xbox Series X SSD is 40x faster and yet offers a 4x decrease in loading times. Sounds to me like the case Cerny described above. A simple boost in raw SSD power simply isn't good enough unless you are willing to go in and really fuck around with the I/O. MS tried doing that with the Velocity Architecture, but the best demo they could show off still took 11 seconds to load.

The more I think about this, the more I begin to understand just how crazy this console really is. Cerny said the goal was to get a 100x boost in the I/O, and he seems to have achieved that goal. We know how this affects SSD-related performance. Just imagine what it does to CPU and GPU performance.
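Cerny's "10x faster drive, only 2x faster load" example is Amdahl's law in disguise: the drive was only part of the load time, so the rest dominates once the drive speeds up. A sketch with an arbitrary 40-second load (my number, purely illustrative):

```python
total = 40.0   # hypothetical HDD load time (seconds)

# total = drive_time + other_work; a 10x faster drive halves the load:
#   total / 2 = drive_time / 10 + other_work
# Solving for drive_time:
drive_time = (total / 2) / (1 - 1 / 10)
other_work = total - drive_time

print(round(drive_time, 1))  # ~22.2 s originally spent waiting on the drive
print(round(other_work, 1))  # ~17.8 s of decompression/setup that didn't speed up
```

With the drive 10x faster, that leftover ~17.8 seconds of CPU-side work caps the total speedup near 2x, which is exactly why both consoles put decompression and check-in work into dedicated hardware instead of just buying faster flash.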
 