Next-Gen PS5 & XSX |OT| Console tEch threaD

SenjutsuSage was a special case even on XboxEra, someone who was seen as too deluded

but I think everyone here knows that

he needs help

hopefully the quarantine doesn't give him a case of cabin fever and make it worse

lmao shows how much you know. I was kicked outta there quite some time ago for being too positive about playstation 5 in my comments and I even got shit for it a few times from a few posters. So yea, think what you want. I apparently piss off both sides, which means I must be doing something right.
 
Imagine buying those trash hype beast brands in the first place. Buy Sennheiser or Sony for quality headsets.

Consider the GPU working on the 10GB pool at near full speed when the CPU has to access the slow pool. For the GPU to continue its work it needs access to all 10 chips, because the data it's working with is spread across all 10 chips, hence the 560GB/s bandwidth. That leaves two options:
  1. Stall GPU access for a few cycles and let the CPU pull its data as fast as possible (at 336GB/s), to quickly resume GPU work in the following cycles
  2. Split chip access using 16-bit addressing for the duration of the CPU work. This allows simultaneous GPU/CPU access but has the effect of halving their respective bandwidths: 280GB/s (GPU) & 168GB/s (CPU)
Both approaches lead to the same result mathematically: for an average of 48GB/s of CPU access, 32GB/s is wasted, leaving the GPU with an average of 480GB/s
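A quick back-of-envelope check of that claim (a hypothetical time-weighted model in Python; the 560/336/48GB/s figures and the option-2 split come from the post above): both options do leave the GPU the same 480GB/s average.

```python
# Both contention strategies, modeled as time-weighted bandwidth averages.
GPU_PEAK = 560.0   # GB/s, GPU striped across all 10 chips
CPU_PEAK = 336.0   # GB/s, CPU bursting on the slow pool (option 1)
CPU_AVG  = 48.0    # GB/s, assumed average CPU demand

# Option 1: stall the GPU entirely while the CPU bursts at full speed.
stall_frac = CPU_AVG / CPU_PEAK              # fraction of time the GPU is stalled
gpu_opt1 = (1 - stall_frac) * GPU_PEAK       # averages out to 480

# Option 2: split the bus 16-bit-wise while shared: GPU 280, CPU 168 GB/s.
share_frac = CPU_AVG / 168.0                 # fraction of time the bus is shared
gpu_opt2 = share_frac * 280.0 + (1 - share_frac) * GPU_PEAK  # also 480

print(round(gpu_opt1, 6), round(gpu_opt2, 6))  # 480.0 480.0
```

Either way, 560 minus 480 for the GPU minus 48 for the CPU leaves the 32GB/s "wasted" figure.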

I understand previous implementations of TSS weren't hardware-enabled; Nvidia introduced hardware-accelerated TSS with Turing months before the DX API supported it with Sampler Feedback


You can call it what you want; I'll reiterate the point I've made since our conversation began: the SF/TSS hardware capability is a standard Turing/RDNA2 feature not owned by MS, and nothing stops Sony from developing their own software to take advantage of the standard TSS/SF hardware capability found in RDNA2
Too bad for you, a 32-bit data type for a 32-bit floating-point data element (word) can't be flattened across the entire 320-bit bus, since the 32-bit word length is smaller than the total 320-bit bus width. This is why I stated AMD GPUs use "combined scatter" memory operations. There are reasons for "64-bit memory controllers" instead of a single uber "320-bit memory controller". A combined 320-bit memory bus can support ten 32-bit data elements (e.g. INT32, FP32) or twenty 16-bit data elements (e.g. FP16, INT16).
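The bus-packing arithmetic in that last sentence is just integer division; a minimal sketch:

```python
# Elements that fit in one combined 320-bit bus transaction.
BUS_WIDTH_BITS = 320

def elements_per_transaction(elem_bits: int) -> int:
    return BUS_WIDTH_BITS // elem_bits

print(elements_per_transaction(32))  # 10 x FP32/INT32
print(elements_per_transaction(16))  # 20 x FP16/INT16
```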

This is not factoring in any real-time memory compression tricks such as DCC, or memory controller scheduler/out-of-order/load-balancing behaviors.

GCN has out-of-order wave64 processing capability: when a wave64 is waiting for its data, GCN can execute other wave64s that have their data payloads. This is also true for RDNA's wavefront processing.
 
I was kicked outta there quite some time ago for being too positive about playstation 5
Sooo positive
Too bad for you, a 32-bit data type for a 32-bit floating-point data element (word) can't be flattened across the entire 320-bit bus, since the 32-bit word length is smaller than the total 320-bit bus width. This is why I stated AMD GPUs use "combined scatter" memory operations. There are reasons for "64-bit memory controllers" instead of a "320-bit memory controller".
If the XSX intends to get anywhere close to its 560GB/s peak it needs to spread its data across all 10 chips, even for as little as 10MB.
Using a 64-bit memory controller independently would limit GPU bandwidth to 112GB/s
A combined 320-bit memory bus can support ten 32-bit data elements (e.g. INT32, FP32) or twenty 16-bit data elements (e.g. FP16, INT16).

This is not factoring in any real-time memory compression tricks such as DCC, nor memory controller scheduler, out-of-order and load-balancing behaviors.

GCN has out-of-order wave64 processing capability: when a wave64 is waiting for its data, GCN can execute other wave64s that have their data payloads.
None of this is related
 
I'm confident because Richard used the same terminology before to refer to hardware-accelerated RT
But just for argument's sake, let's say the XSX has a unique hardware modification; it doesn't change the fact that the SFS/TSS hardware capability is a basic RDNA2/Turing feature.

edit: Found one, but I'm sure there are more examples, also in videos


More to Senju's point on the TSS vs SFS discussion, MS in their high-level DX12U outline says this:

"Sampler feedback also enables Texture-space shading (TSS), a rendering technique which de-couples the shading of an object in world space from the rasterization of the shape of that object to the final target."

So SFS is actually the technology that undergirds TSS's existence. Any hardware that exists to enable or accelerate TSS does so because it enables SFS functionality, and TSS is then a capability built on top of SFS.

 
Or perhaps many people don't care about audio, except Sony, since Tempest 3D Audio is suddenly such an important factor in gaming. So important that it got the second-largest amount of time during Cerny's show.

If sound quality were so important to the average person, you wouldn't get so many people sticking to the audio from their TV's integrated 10-watt speakers or a shitty $15 headset with one earpiece.

Believe it or not, there are also many gamers who put the game sound on mute and prefer listening to music.

I think when consoles got CD-quality sound in the 90s, it became good enough for many people. But graphics are always there, and there's always more to improve, since every game needs them. And it's not just eye candy but frame rates and smoothness. The only games that don't really need it are retro games, or a super-sim game like OOTP Baseball or Zork.
Companies sell the unique features that set them apart from the competition. It's why Phil has been shoving his teraflops crown down our throats for months; he knows it's the one thing that sets his console apart.

Sony has the SSD and the audio tech. The audio tech will work through TVs, headphones, stereo systems, and the integrated 10W speakers; that's the whole point of the Tempest Engine. You don't need headphones, though it will undoubtedly be better with headphones.

This isn't the same situation as last gen, when MS produced a terrible console with Kinect integration as its only unique and, ironically enough, its most hated feature. The no-loading, 3D-audio-through-speakers and other SSD-related features will compete with MS's higher-resolution games. Of course Sony will talk about how important those features are, and of course Sony fans will too.
lmao shows how much you know. I was kicked outta there quite some time ago for being too positive about playstation 5 in my comments and I even got shit for it a few times from a few posters. So yea, think what you want. I apparently piss off both sides, which means I must be doing something right.
Dude, you are a compulsive liar. You were banned because you kept saying the PS5 didn't have hardware ray tracing, even after the second Wired article and the CES Jim Ryan presentation confirmed it.

First you lie about what Matt said and now this. Get some help. You are living in your own reality. This isn't healthy.
 
Sooo positive

If the XSX intends to get anywhere close to its 560GB/s peak it needs to spread its data across all 10 chips, even for as little as 10MB.
Using a 64-bit memory controller independently would limit GPU bandwidth to 112GB/s

None of this is related
At the 32-bit word level, data can't be striped across the entire 320-bit bus width. Your argument is correct with a 320-bit word size but not with a 32-bit word size. LOL
 
90% of 825GB is 742.5GB
After 10GB OS that leaves 732.5GB

1. Your response wasn't relevant to what I said; you name-dropped a random technique that has nothing to do with interleaved memory. For simultaneous GPU/CPU access using 16-bit addressing you are effectively halving their respective bandwidths: 280GB/s (GPU) & 168GB/s (CPU). It's a physical limitation of each chip

2. I did; you need to work on your reading comprehension:
First component: SSD
Second component: Decompression block

One decompression unit handles both compression algorithms
When a particular piece of hardware supports two dissimilar data formats, it usually has two fixed hardware ASIC IP blocks. I don't see the XSX SSD decompression block being flexible enough to support other compression formats.
 
More to Senju's point on the TSS vs SFS discussion, MS in their high-level DX12U outline says this:

"Sampler feedback also enables Texture-space shading (TSS), a rendering technique which de-couples the shading of an object in world space from the rasterization of the shape of that object to the final target."

So SFS is actually the technology that undergirds TSS's existence. Any hardware that exists to enable or accelerate TSS does so because it enables SFS functionality, and TSS is then a capability built on top of SFS.

There's confusion here due to different nomenclatures
TSS is a technique that doesn't need to be hardware accelerated; Nvidia was the first to introduce hardware-based TSS, and they retained the name
Months later the DX API introduced SF/SFS, which improved on previous implementations and also added support for the hardware capability.

Anyway, what's your point regarding this? I'm not sure I follow
 
That distinction you pointed out is for sure important: PS5 could do thousands of simple PS4-VR-era sources and hundreds of more complex/advanced sources. Dolby didn't make the distinction, so you might be onto something. But regardless of whether Dolby is capable of handling hundreds of complex sources, there is an inherent physical limitation that makes 3D audio with hundreds of sources difficult (if not impossible) to implement on multi-speaker setups; instead they use virtual surround to approximate it, but I assume it won't be as advanced (fewer sources?). Both Cerny and Dolby mentioned this limitation

For sure. I'm not up to speed with their audio block's capabilities, but the same principle applies if their solution can be used with non-proprietary headphones or TV speakers.
I was talking about Dolby in general, not meaning to take a dig at the XSX.

DF is alluding to hardware acceleration; Nvidia calls this hardware capability TSS.
"Bespoke" just means dedicated hardware, in contrast with an entirely software solution; this "bespoke hardware" capability will be present in every RDNA2 card. There are different tiers of support: software, hybrid and hardware
What's unique about the XSX is the setup supporting it: SSD, I/O, CPU, GPU & APIs, or the Velocity Architecture as MS calls it.

Yep, the more I lean into this stuff the cooler I find it, and I just love that they laid out a whole glossary for it. It's pretty much perfect communication. To collect some of the things they've done:

BCPack - a new compression system specially tailored for GPU textures, which by the sounds of it is pretty impressive.

DirectStorage - DirectStorage is an all new I/O system designed specifically for gaming to unleash the full performance of the SSD and hardware decompression. It is one of the components that comprise the Xbox Velocity Architecture. Modern games perform asset streaming in the background to continuously load the next parts of the world while you play, and DirectStorage can reduce the CPU overhead for these I/O operations from multiple cores to taking just a small fraction of a single core; thereby freeing considerable CPU power for the game to spend on areas like better physics or more NPCs in a scene. This newest member of the DirectX family is being introduced with Xbox Series X and we plan to bring it to Windows as well. This one is a much bigger deal than I think people appreciate.

Hardware Decompression – Hardware decompression is a dedicated hardware component introduced with Xbox Series X to allow games to consume as little space as possible on the SSD while eliminating all CPU overhead typically associated with run-time decompression. It reduces the software overhead of decompression when operating at full SSD performance from more than three CPU cores to zero – thereby freeing considerable CPU power for the game to spend on areas like better gameplay and improved framerates. Hardware decompression is one of the components of the Xbox Velocity Architecture.

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.
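As a toy illustration of the SFS idea in that glossary entry (this is not Microsoft's actual API; the tile size and tile counts are made-up numbers), recording which tiles the GPU actually sampled and keeping only those resident is what produces the kind of multiplier MS quotes:

```python
# Toy sampler-feedback model: keep resident only the tiles the GPU sampled.
TILE_BYTES = 64 * 1024      # assumed tile size
TILES_PER_TEXTURE = 128     # assumed tiles in a full texture

def resident_bytes(sampled_tiles: set) -> int:
    return len(sampled_tiles) * TILE_BYTES

naive = TILES_PER_TEXTURE * TILE_BYTES         # load the whole texture
sampled = set(range(0, TILES_PER_TEXTURE, 3))  # feedback: ~1/3 of tiles visible
streamed = resident_bytes(sampled)

print(round(naive / streamed, 2))  # roughly the 2x-3x multiplier MS describes
```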
Sooo positive

If the XSX intends to get anywhere close to its 560GB/s peak it needs to spread its data across all 10 chips, even for as little as 10MB.
Using a 64-bit memory controller independently would limit GPU bandwidth to 112GB/s

None of this is related

That old dead horse again? Cerny properly put it to bed in his Road to PS5 presentation. Meanwhile, we still have people claiming the GitHub leak was inaccurate, and still claiming insiders who were wrong about nearly everything were somehow actually right. Hell, even after they were proven wrong, people are still citing some of them here as evidence to support their arguments. Nice try, though.
 
lmao shows how much you know. I was kicked outta there quite some time ago for being too positive about playstation 5 in my comments and I even got shit for it a few times from a few posters. So yea, think what you want. I apparently piss off both sides, which means I must be doing something right.
As you can see, he's also a compulsive liar
 
At the 32-bit word level, data can't be striped across the entire 320-bit bus width. Your argument is correct with a 320-bit word size but not with a 32-bit word size. LOL
Nice talking to you; looking forward to reading your proposed model/diagram that defies interleaved memory system mechanics
When a particular piece of hardware supports two dissimilar data formats, it usually has two fixed hardware ASIC IP blocks. I don't see the XSX SSD decompression block being flexible enough to support other compression formats.
I don't see why it wouldn't be able to handle both formats if it was designed with that function in mind; having two decompressors would be a waste of die space (bad design). Regardless, DF & MS were clear: only one decompressor.
Yep, the more I lean into the stuff the more cool I find it, and I just love that they laid out a whole glossary for the stuff. It's pretty much perfect communication. To collect some of them they've done quite a few things.
That's good; it shows you are willing to acknowledge when you are wrong and accept new information you were previously unaware of
As excited as you are over the XSX, I just assumed you knew what the Velocity Architecture consisted of
That old dead horse again? Cerny properly put it to bed in his Road to PS5 presentation. Meanwhile, we still have people claiming the GitHub leak was inaccurate, and still claiming insiders who were wrong about nearly everything were somehow actually right. Hell, even after they were proven wrong, people are still citing some of them here as evidence to support their arguments. Nice try, though.
Wrong quote? How is that relevant to what I said?
 
Sooo positive

If the XSX intends to get anywhere close to its 560GB/s peak it needs to spread its data across all 10 chips, even for as little as 10MB.
Using a 64-bit memory controller independently would limit GPU bandwidth to 112GB/s

Based on what Rnival and others have written, the sum of all accesses is always 320 bits. The 560GB/s figure is the sum of 20 16-bit pipes to memory. So no matter which processor is consuming, the sum of all accesses is 560GB/s.

Let's say the CPU needs 3GB. Based on the write-up about memory access restrictions, the CPU can see all 20 pipes but the GPU can only see 10. The CPU can access whatever it wants, but let's assume it only taps the top 1GB of three 2GB chips, using three 16-bit lanes. The total bandwidth consumed here is 3×28 = 84GB/s. In those same modules the GPU will consume the lower 1GB at 28GB/s each for 84GB/s, plus the full bandwidth (both 16-bit pipes) of the other three 2GB chips (56×3 = 168GB/s), plus the full bandwidth of the 1GB chips (56×4 = 224GB/s).

CPU = 84GB/s (3GB)
GPU = 84GB/s (3GB)
GPU = 168GB/s (3GB)
GPU = 224GB/s (4GB)
SUM = 560GB/s

If the CPU needed all 6GB it could just use the three 16-bit lanes attached to the other three chips as well, consuming half of the 2GB chip array's total bandwidth, dropping the GPU bandwidth to 392/560 and allowing the GPU to keep access to 10GB, just at a lower rate. Still, it would be 560GB/s being consumed across the entire bus.
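That accounting can be checked in a few lines (Python; the 28GB/s per 16-bit pipe and the six-2GB-plus-four-1GB chip layout are the assumptions made in the post):

```python
# Ten GDDR6 chips, each with two 16-bit pipes at 28 GB/s per pipe.
PIPE_BW = 28.0                       # GB/s per 16-bit pipe

cpu_bw = 3 * PIPE_BW                 # CPU taps one pipe on three 2GB chips: 84
gpu_2gb_shared = 3 * PIPE_BW         # remaining pipe on those same chips: 84
gpu_2gb_full = 3 * 2 * PIPE_BW       # other three 2GB chips, both pipes: 168
gpu_1gb_full = 4 * 2 * PIPE_BW       # four 1GB chips, both pipes: 224

total = cpu_bw + gpu_2gb_shared + gpu_2gb_full + gpu_1gb_full
print(total)  # 560.0
```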
 
Dude, you are a compulsive liar. You were banned because you kept saying the PS5 didn't have hardware ray tracing, even after the second Wired article and the CES Jim Ryan presentation confirmed it.

First you lie about what Matt said and now this. Get some help. You are living in your own reality. This isn't healthy.


He has been like this for a long time now.

I remember this generation's pre-launch: he was listening to, drinking in and spreading FUD, every shitty word from MisterXmedia.

He does the same here.

We must believe that he didn't learn the lesson of the past, believing such mentally disordered guys as MisterXmedia and Blueisviolet.

;)
 
Yep, the more I lean into this stuff the cooler I find it, and I just love that they laid out a whole glossary for it. It's pretty much perfect communication. To collect some of the things they've done:

BCPack - a new compression system specially tailored for GPU textures, which by the sounds of it is pretty impressive.

DirectStorage - DirectStorage is an all new I/O system designed specifically for gaming to unleash the full performance of the SSD and hardware decompression. It is one of the components that comprise the Xbox Velocity Architecture. Modern games perform asset streaming in the background to continuously load the next parts of the world while you play, and DirectStorage can reduce the CPU overhead for these I/O operations from multiple cores to taking just a small fraction of a single core; thereby freeing considerable CPU power for the game to spend on areas like better physics or more NPCs in a scene. This newest member of the DirectX family is being introduced with Xbox Series X and we plan to bring it to Windows as well. This one is a much bigger deal than I think people appreciate.

Hardware Decompression – Hardware decompression is a dedicated hardware component introduced with Xbox Series X to allow games to consume as little space as possible on the SSD while eliminating all CPU overhead typically associated with run-time decompression. It reduces the software overhead of decompression when operating at full SSD performance from more than three CPU cores to zero – thereby freeing considerable CPU power for the game to spend on areas like better gameplay and improved framerates. Hardware decompression is one of the components of the Xbox Velocity Architecture.

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.


That old dead horse again? Cerny properly put it to bed in his Road to PS5 presentation. Meanwhile, we still have people claiming the GitHub leak was inaccurate, and still claiming insiders who were wrong about nearly everything were somehow actually right. Hell, even after they were proven wrong, people are still citing some of them here as evidence to support their arguments. Nice try, though.

After you finished your points, I wanted to let you know that all of that is still lagging behind PS5's raw transfer rate by about 14.6% (4.8 compressed vs 5.5 raw), 87.5% behind PS5's average compressed (4.8 vs 9), and 266.6% slower in terms of max (6 vs 22).

It's still way, way inferior overall, as all those numbers have bottlenecks: an old game like State of Decay took 11sec to load, while it's 51sec on the HDD-based X1. That's only 4.6x faster. A very old PS5 dev kit was 0.8sec vs 8sec, and that's 10x faster than PS4; that was tested about a year ago if we take leaks as being posted shortly after, or probably way back.

EDIT: Checked the video again and it's 11sec vs 51sec, 4.6x faster, plus PS5 0.8sec vs PS4 8sec (the previous figures were taken from another source for another test not represented, which makes it 18x faster)

 
Let's say the CPU needs 3GB. Based on the write-up about memory access restrictions, the CPU can see all 20 pipes but the GPU can only see 10. The CPU can access whatever it wants, but let's assume it only taps the top 1GB of three 2GB chips, using three 16-bit lanes. The total bandwidth consumed here is 3×28 = 84GB/s. In those same modules the GPU will consume the lower 1GB at 28GB/s each for 84GB/s, plus the full bandwidth (both 16-bit pipes) of the other three 2GB chips (56×3 = 168GB/s), plus the full bandwidth of the 1GB chips (56×4 = 224GB/s).
The problem with that pipe-system example is that you have chips with different throughputs, 28GB/s & 56GB/s. All chips must have the same performance and size for the interleaved memory system to work
The GPU must either access all chips at 28GB/s or all chips at 56GB/s; there can't be different configurations
 
There's confusion here due to different nomenclatures
TSS is a technique that doesn't need to be hardware accelerated; Nvidia was the first to introduce hardware-based TSS, and they retained the name
Months later the DX API introduced SF/SFS, which improved on previous implementations and also added support for the hardware capability.

Anyway, what's your point regarding this? I'm not sure I follow

"From the definition - Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance. "

You were arguing as to whether this was an API feature or a HW feature, right?
 
The problem with that pipe-system example is that you have chips with different throughputs, 28GB/s & 56GB/s. All chips must have the same performance and size for the interleaved memory system to work
The GPU must either access all chips at 28GB/s or all chips at 56GB/s; there can't be different configurations

No. The 1GB chips can always be accessed at 56GB/s. If the CPU isn't consuming one of the pipes on a 2GB chip, the GPU can also use all 32 bits to consume that 1GB. The PS5's memory controller management should be similar, except that it is looking at eight 2GB chips, each with two 16-bit paths to the chip.
 
After you finished your points, I wanted to let you know that all of that is still lagging behind PS5's raw transfer rate by about 14.6% (4.8 compressed vs 5.5 raw), 87.5% behind PS5's average compressed (4.8 vs 9), and 266.6% slower in terms of max (6 vs 22).

It's still way, way inferior overall, as all those numbers have bottlenecks: an old game like State of Decay took 11sec to load, while it's 40sec on the HDD-based X1. That's only 3.6x faster. A very old PS5 dev kit was 0.8sec vs 15sec, and that's 18x faster; that was tested about a year ago if we take leaks as being posted shortly after, or probably way back.

That 6 vs 22 you are talking about is a theoretical maximum. If we are throwing those around, one can say that effective memory in the XSX will be 48GB (the theoretical max based on the multiplier provided by MS); we can even stretch a lot and say that XSX raw power is 12+13 TFLOPs. Can you see what I am doing? I am using the same logic as you did in your post: using theoretical values, stretching the conversation and making it fit a narrative. Please stop.
 
You were arguing as to whether this was an API feature or HW feature right?
No, I acknowledged from the beginning that SF/SFS can be hardware accelerated
My point was that the SF/SFS GPU hardware capability is a basic RDNA2/Turing feature, not unique to the XSX
No. The 1GB chips can always be accessed at 56GB/s. If the CPU isn't consuming one of the pipes on a 2GB chip, the GPU can also use all 32 bits to consume that 1GB. The PS5's memory controller management should be similar, except that it is looking at eight 2GB chips, each with two 16-bit paths to the chip.
You are not understanding me
When the GPU accesses memory, all chips must provide the same bandwidth; there can't be mixed-and-matched configurations as in your example (seven chips at 56GB/s and three chips at 28GB/s). They can't work together this way
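The "data must be spread across all chips" point earlier in the thread is just how interleaving works; a toy model (the 32-byte burst size and round-robin mapping are illustrative assumptions, not the XSX's actual scheme):

```python
# Consecutive bursts rotate across the chips, so even a small buffer
# is striped over every chip, and peak bandwidth needs all of them.
N_CHIPS = 10
BURST = 32  # bytes per burst, assumed

def chip_for(addr: int) -> int:
    return (addr // BURST) % N_CHIPS

# Even a buffer as small as 10MB touches all 10 chips.
touched = {chip_for(a) for a in range(0, 10 * 1024 * 1024, BURST)}
print(len(touched))  # 10
```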
 
After you finished your points, I wanted to let you know that all of that is still lagging behind PS5's RAW transfer rate by 15.6% (4.8 compressed vs 5.5 RAW) and 87.5% behind PS5 average compressed (4.8 vs 9) and 266.6% slower in terms of max (6 vs 22).

It's still way, way inferior overall as all those numbers have bottlenecks, as an old game like state of decay took 11sec to load, while it's 40sec on HDD X1. That's only 3.6x faster. Very old dev kit of PS5 was 0.8sec vs 15sec, and that's 18x faster, that's been tested about a year ago if we take leaks as being posted shortly after, or probably way back.

Ohhh, so numbers matter now, do they? I was confused by how much I was being told that none of the bigger numbers on the Xbox side mean anything, what with the entire "less is more balanced and elegant" argument I've been hearing, so this change is welcome. There's one potential problem with all the numbers you just tossed out, however.

If one console has a far superior, less wasteful way of loading GPU texture data, thus saving far more physical RAM and potentially needing to copy far less as well, there's a very good chance that real-world game situations outside of initial loading won't pan out the way those numbers suggest. PS5's SSD is fast, but it isn't faster than GDDR6 memory. Want to know the difference between the Series X's GDDR6 bandwidth and the PS5's SSD, if the 2-3x physical memory multiplier that Microsoft says happens thanks to Sampler Feedback Streaming is actually the real deal, leaving the Series X much more physical RAM to play with?

Let's use the 9GB/s speed of the PS5 SSD vs the Xbox Series X's full memory bandwidth (9 vs 560). Okay, in this scenario it seems the PS5 SSD is 6122.22% slower than the Series X's memory bandwidth.

Using your amazing 22GB/s SSD figure (22 vs 560) it becomes 2445.45% slower than Series X's memory bandwidth.

Now do you see how silly this exercise is? I'm mostly just having some fun, but at the same time letting you know that if the Series X's Sampler Feedback Streaming is ANYWHERE near as good as Microsoft says it is, the PS5 SSD wouldn't fare quite as well as those numbers suggest. If Microsoft has a sizeable portion of physical RAM that isn't being filled up with as many textures as one might have expected, that changes the dynamic a little bit (more than a little) :messenger_winking:
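For what it's worth, the percentages being thrown around in this exchange all come from the same ratio formula (Python; the 9, 22 and 560GB/s figures are the ones quoted in the post):

```python
# "A is N% slower than B" as used in the post: (B/A - 1) * 100.
def pct_gap(slow: float, fast: float) -> float:
    return (fast / slow - 1) * 100

print(round(pct_gap(9, 560), 2))   # 6122.22
print(round(pct_gap(22, 560), 2))  # 2445.45
```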
 
He has been like this for a long time now.

I remember this generation's pre-launch: he was listening to, drinking in and spreading FUD, every shitty word from MisterXmedia.

He does the same here.

We must believe that he didn't learn the lesson of the past, believing such mentally disordered guys as MisterXmedia and Blueisviolet.

;)

Shoot, it goes back further than that. He used to be a hardcore Xbox fan over on N4G years ago. I knew I recognized his name, constantly shitting on HipHopGamer posts and other pro-PS3 news articles. Never thought I'd see him again almost 10 years later. LOL
 
That 6 vs 22 you are talking about is a theoretical maximum. If we are throwing those around, one can say that effective memory in the XSX will be 48GB (the theoretical max based on the multiplier provided by MS); we can even stretch a lot and say that XSX raw power is 12+13 TFLOPs. Can you see what I am doing? I am using the same logic as you did in your post: using theoretical values, stretching the conversation and making it fit a narrative. Please stop.

Are you serious, man? Yes, those max numbers could happen occasionally; please don't muddy the water with things that have nothing to do with the matter.

If you have a problem with those numbers try contacting Mark Cerny or Phil Spencer:

A custom decompressor has been built into the PS5's I/O unit capable of handling over 5GB of Kraken input format per second. After decompression, that becomes around eight or nine gigabytes. The I/O unit itself, however, is capable of outputting as much as 22GB/s if the data compressed well.


The PS5's drive is far faster, though. It is capable of 5.5GB/s data transfer, which turns into 8-9GB/s when compression is used. This does not slow the console down either, as there is hardware dedicated to decompression of this data. Microsoft's Xbox Series X SSD is fast, and can use similar hardware-based compression, but its speeds are 2.4GB/s, or 6GB/s with compression.


Let's respect the readers around here and avoid taking things out of context.

Ohhh, so numbers matter now, do they? I was confused by how much I was being told that none of the bigger numbers on the Xbox side mean anything, what with the entire "less is more balanced and elegant" argument I've been hearing, so this change is welcome. There's one potential problem with all the numbers you just tossed out, however.

If one console has a far superior, less wasteful way of loading GPU texture data, thus saving far more physical RAM and potentially needing to copy far less as well, there's a very good chance that real-world game situations outside of initial loading won't pan out the way those numbers suggest. PS5's SSD is fast, but it isn't faster than GDDR6 memory. Want to know the difference between the Series X's GDDR6 bandwidth and the PS5's SSD, if the 2-3x physical memory multiplier that Microsoft says happens thanks to Sampler Feedback Streaming is actually the real deal, leaving the Series X much more physical RAM to play with?

Let's use the 9GB/s speed of the PS5 SSD vs the Xbox Series X's full memory bandwidth (9 vs 560). Okay, in this scenario it seems the PS5 SSD is 6122.22% slower than the Series X's memory bandwidth.

Using your amazing 22GB/s SSD figure (22 vs 560) it becomes 2445.45% slower than the Series X's memory bandwidth.

Now do you see how silly this exercise is? I'm mostly just having some fun, but at the same time letting you know that if the Series X's Sampler Feedback Streaming is ANYWHERE near as good as Microsoft says it is, the PS5 SSD wouldn't fare quite as well as those numbers suggest. If Microsoft has a sizeable portion of physical RAM that isn't being filled up with as many textures as one might have expected, that changes the dynamic a little bit (more than a little) :messenger_winking:

You seem to be mixing things up; can you provide links stating that the XSX SSD can do 560GB/s? Thanks.

=====

Out of curiosity, I searched for how State of Decay 2 does with just a SATA 3 SSD (only 0.5GB/s, compared to 2.4GB/s on the XSX):




Here's his description:

After seeing how the new Xbox loads this game, and also to see how a new PCIe or PCI mini SSD could factor into how fast the game loads: right now 19 seconds is how long it takes to load in, and that is not too bad, yet there is always room for improvements.

His build specs:

[attached image: 148332.jpg — PC build specs]



So 51sec on X1 with its 0.05-0.1GB/s HDD, 19sec on a PC with a 0.5GB/s SATA 3 SSD, and 11sec on XSX with its 2.4GB/s SSD. So the XSX is only about 1.7x as fast (roughly 73% faster) than a weak, outdated-spec PC with a SATA 3 SSD and DDR3 RAM, with all the bottlenecks known on PCs.

If anyone got a problem with that, try contacting the guy, don't kill the messenger.

EDIT: Checked the video again and it's 11sec vs 51sec on X1, 4.6x faster.
 
Last edited:
Let's use the 9GB/s speed of the PS5 SSD vs. the Xbox Series X's full memory bandwidth (9 vs 560). Okay, in this scenario it seems the PS5 SSD is 6122.22% slower than the Series X's memory bandwidth.

Using your amazing 22GB/s SSD figure (22 vs 560), it becomes 2445.45% slower than the Series X's memory bandwidth.
Let's use the 4.8GB/s speed of your precious Series X SSD instead. You know, the SSD that will actually be feeding that memory. It comes out 11566.67% slower than the memory bandwidth. I'm even left wondering if that amazing virtual memory claim has any truth to it at all. Maybe BCPack will bring it down, closer to that less embarrassing PS5 SSD ratio.

Can we stop the stupid comparisons now?
 
Last edited:
Was there a rumor of Xbox news this Monday morning? Saw something somewhere maybe?
Yes, IGN was planning on releasing a story or some information today.

Let's use the 4.8GB/s speed of your precious Series X SSD instead. You know, the SSD that will actually be feeding that memory. It comes out 11566.67% slower than the memory bandwidth. I'm even left wondering if that amazing virtual memory claim has any truth to it at all. Maybe BCPack will bring it down, closer to that less embarrassing PS5 SSD ratio.

Can we stop the stupid comparisons now?
Wasn't that his whole point?
 
Last edited by a moderator:
Let's use the 4.8GB/s speed of your precious Series X SSD instead. You know, the SSD that will actually be feeding that memory. It comes out 11566.67% slower than the memory bandwidth. I'm even left wondering if that amazing virtual memory claim has any truth to it at all. Maybe BCPack will bring it down, closer to that less embarrassing PS5 SSD ratio.

Can we stop the stupid comparisons now?

With real-world results so far, the old PS5 dev kit (silver tower) is 10x faster than the PS4 HDD (18x faster in an unrepresentative test reported by WIRED), while the latest XSX is 4.6x faster than the X1 HDD and about 1.7x as fast as a SATA 3 SSD.

 
Last edited:
The only thing I'm sure of is that lots of you are going to be deluded by the Tempest engine. It's not the second coming of Jesus; it's just a positional-audio technology. The fun fact is that most of you are going to hear it through some shitty TV speaker and then react like the believers of those American pastors: "I've seen the light... I've heard the call..." It's sad that you can't have a technical discussion here; it's just guys without the minimal technical background believing they hold the truth.
 
Ok, so now we're talking about how the XSX's memory breaks the laws of logic to always hit its peak performance :lollipop_neutral:

For games which use more than 10 GB, which will be almost all new AAA or AA titles, that's not going to happen.

Even in the moments where it does happen, the console's OS will still need to read memory.

The same applies to PS5 (minus the 10GB limit), but it has uniform bandwidth, so its peak should be easier to reach.

The XSX should still have better bandwidth, but it's not as far ahead as the paper specs suggest, and it needs more optimization than the PS5. Still, it's in a better position than the Xbox One was.
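On the bandwidth point, here is a back-of-envelope model (my own sketch of the stall-the-GPU scenario described earlier in the thread, not official figures): if the CPU needs an average of 48 GB/s and the bus only moves 336 GB/s while the CPU owns it, then the GPU sits idle for 48/336 of the time, and each unit of CPU traffic effectively costs 560/336 units of GPU bandwidth:

```python
BUS_FAST = 560.0   # GB/s, GPU hitting all ten chips (assumed peak)
BUS_SLOW = 336.0   # GB/s, transfer rate while the CPU owns the bus
CPU_AVG  = 48.0    # GB/s, assumed average CPU demand

gpu_lost      = BUS_FAST * CPU_AVG / BUS_SLOW  # GPU bandwidth spent stalling: 80 GB/s
gpu_effective = BUS_FAST - gpu_lost            # left for the GPU: 480 GB/s
wasted        = gpu_lost - CPU_AVG             # net loss vs a uniform pool: 32 GB/s

print(gpu_effective)  # 480.0
print(wasted)         # 32.0
```

This matches the "average of 480GB/s for the GPU with 32GB/s wasted" figure quoted earlier in the thread.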
 
Are you serious, man? Yes, those max numbers could happen occasionally; please don't muddy the water with things that have nothing to do with the matter.

If you have a problem with those numbers, try contacting Mark Cerny or Phil Spencer:

A custom decompressor has been built into the PS5's I/O unit capable of handling over 5GB of Kraken input format per second. After decompression, that becomes around eight or nine gigabytes. The I/O unit itself, however, is capable of outputting as much as 22GB/s if the data compressed well.


The PS5's drive is far faster, though. It is capable of 5.5GB/s data transfer, which turns into 8-9GB/s when compression is used. This does not slow the console down either, as there is hardware dedicated to decompression of this data. Microsoft's Xbox Series X SSD is fast, and can use similar hardware-based compression, but its speeds are 2.4GB/s, or 6GB/s with compression.


Let's respect the readers around here and avoid taking things out of context.
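For readers keeping score, the quoted figures imply the following effective compression ratios (straight division of the numbers in the excerpts above; the 22 GB/s is Cerny's best case, not a typical value, and the labels are mine):

```python
# Effective compression ratios implied by the quoted SSD figures (GB/s out / GB/s in)
ratios = {
    "PS5 typical (Kraken)":  9.0 / 5.5,   # ~1.64x
    "PS5 best case":        22.0 / 5.5,   # 4.0x
    "XSX (BCPack et al.)":   4.8 / 2.4,   # 2.0x
}
for label, r in ratios.items():
    print(f"{label}: {r:.2f}x")
```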



You seem to be mixing things up; can you provide links stating that the XSX SSD can do 560GB/s? Thanks.

=====

Out of curiosity, I searched for how State of Decay 2 does with just a SATA 3 SSD (0.5GB/s only, compared to 2.4GB/s on XSX):




Here's his description:

After seeing how the new Xbox loads this game, and also to see how a new PCIe or PCI mini SSD could factor into how fast the game loads: right now 19 seconds is how long it takes to load in, and that is not too bad, yet there is always room for improvements.

His build specs:

[attached image: 148332.jpg — PC build specs]



So 52sec on X1 with its 0.05-0.1GB/s HDD, 19sec on a PC with a 0.5GB/s SATA 3 SSD, and 11sec on XSX with its 2.4GB/s SSD. So the XSX is only about 1.7x as fast (roughly 73% faster) than a weak, outdated-spec PC with a SATA 3 SSD and DDR3 RAM, with all the bottlenecks known on PCs.

If anyone got a problem with that, try contacting the guy, don't kill the messenger.

EDIT: Checked the video again and it's 11sec vs 52sec on X1, 4.7x faster.

I thought it was supposed to be 50x faster than X1? Spencer lied? Shocking.

Yep, as expected the XSX has all the usual I/O bottlenecks PCs have. Everything must be done in main RAM (eating bandwidth, by the way) using decade-old APIs. But that was expected, as they even removed the SSD cache. Their 2.4GB/s number is only there for the PR; they sacrificed everything else for it, and they'll never even reach their stated 4.8GB/s compressed number because of how inefficient the whole system is with I/O.
 
I thought it was supposed to be 50x faster than X1? Spencer lied? Shocking.

Yep, as expected the XSX has all the usual I/O bottlenecks PCs have. Everything must be done in main RAM (eating bandwidth, by the way) using decade-old APIs. But that was expected, as they even removed the SSD cache. Their 2.4GB/s number is only there for the PR; they sacrificed everything else for it, and they'll never even reach their stated 4.8GB/s compressed number because of how inefficient the whole system is with I/O.

Yup, but from what we've seen so far, the old PS5 dev kit is 10x faster than the PS4, which makes it AT LEAST around a devastating 2.2x faster (about 120% faster) than the latest XSX in action, and that's being humble and generous to the XSX. How well do load times capture that? Not sure.
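Since "x faster" and "% faster" keep getting mixed up in this thread, here's a quick sketch of the conversion, using the 10x and 4.6x figures from the post above:

```python
def pct_faster(multiple: float) -> int:
    """Convert an 'N times as fast' multiple into a '% faster' figure."""
    return round((multiple - 1) * 100)

ps5_vs_xsx = 10 / 4.6          # ~2.17x, the "2.2x" in the post above
print(pct_faster(ps5_vs_xsx))  # 117, i.e. roughly the quoted "120% faster"
```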

Fun fact, PS4 is much faster than Xbox One:

 
Last edited:
On Microsoft's side it appears to be 3 things: their form factor and all that it's allowed them to achieve from a performance standpoint (have we seen how many people are now building an entire Series X from scratch?). Microsoft is that proud of the thing.
They have to be proud, because it's going to be very difficult to convince a lot of people to change their TV stands/cabinets to accommodate 30 cm of Xbox. I know it might sound trivial to a real fan of the brand, but millions of people out there will just look at the small fridge, shrug, and buy the smaller-form-factor console that doesn't force them to change their entire setup.

What I've also noticed from the start, and nobody has to take my word for it, is how much AMD has been hyping the hell out of the Series X on their official blog. It's been this way since it was called Project Scarlett, and it's unlike anything I've seen in recent memory. They talk like the Series X will match the full RDNA 2 PC feature set, or at the very least they seem so impressed by what went into it that they feel it's good PR for their company. From my perspective, if these two consoles were as similar from a GPU architectural standpoint as so many assume, you would think AMD would dedicate equal time to praising the PlayStation 5 the way they have the Xbox Series X. We know both consoles will be amazing and there's no "weak" or horrible system here; gamers will be well served all gen long, and nobody is arguing that. We've only been discussing how they will compare from a game-performance standpoint.
This is because Microsoft needs all the help they can get to get the word out. You have no idea how synonymous PlayStation is with the word "console" in most of the world. Xbox is that cheap box with Kinect from last gen you've probably seen in an electronics shop, but nobody you know owns one. I'm not exaggerating in any way. Sony will build their own narratives around the word "PlayStation" and gain millions of likes for a logo or a controller design. Microsoft has it the hard way, so they'll invent a lot of buzzwords and partner with companies like AMD, Dolby, etc. Still, nobody cares, because behind those buzzwords it's the same thing. But PR people don't get that, because admitting it would mean they're useless and can be fired to save money.

After Borderlands 2 came out, almost nothing good happened for the genre.
Division 2 is a looter shooter and one of the best games of this generation for me.
 
That's an exaggeration, I think. I'm pretty sure that 4.8 figure is a typical value.
No. It's the max compressed speed for all data, with 6GB/s being the max compressed speed for textures using lossy compression (so most multiplat devs won't use it).

They never stated any typical speeds; only Sony did. Just look at their own disappointing bench to get a sense of their typical speeds.
 
No. It's the max compressed speed for all data, with 6GB/s being the max compressed speed for textures using lossy compression (so most multiplat devs won't use it).

They never stated any typical speeds; only Sony did. Just look at their own disappointing bench to get a sense of their typical speeds.

We might face a problem with that comparison, as the PS5 might have no loading screens at all. It's probably safe to assume a nominal ~1sec for PS5 to make it comparable.
 
Quick question: what's the advantage of including programming-language code in a post?
For example:
Python:
def percentage_calc(n, n2):
    if n > n2:
        return str(round(100 - n2 * 100 / n, 2)) + '%'
    if n < n2:
        return str(round(100 - n * 100 / n2, 2)) + '%'
    else:
        return '0%'
        
print(percentage_calc(12, 10.28))

14.33%
 
Quick question: what's the advantage of including programming-language code in a post?
For example:
Python:
def percentage_calc(n, n2):
    if n > n2:
        return str(round(100 - n2 * 100 / n, 2)) + '%'
    if n < n2:
        return str(round(100 - n * 100 / n2, 2)) + '%'
    else:
        return '0%'
       
print(percentage_calc(12, 10.28))

14.33%
To show you follow PEP8?😜
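For what it's worth, the two symmetric branches of `percentage_calc` above collapse into one with `min`/`max` (same output, just tidier):

```python
def percentage_calc(n: float, n2: float) -> str:
    """Percentage difference of the smaller value relative to the larger."""
    if n == n2:
        return '0%'
    lo, hi = min(n, n2), max(n, n2)
    return f"{round(100 - lo * 100 / hi, 2)}%"

print(percentage_calc(12, 10.28))  # 14.33%
```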
 
Wait, isn't today Microsoft news day, according to some IGN tweet? What are we expecting?
MS has shown the console itself already so next-gen gameplay would be hella tasty ngl.
 