
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

BrentonB

Member
If that were true it wouldn't be variable.

Not all games or applications will require full power from the APU. Take backwards compatibility or Netflix as an example. The CPU will hardly be taxed by a PS4 game and watching a video will hardly require anything from the system. In these cases it will be more efficient to lower clock speeds. This is especially useful when a game engine isn't yet fully optimized for PS5 and running full tilt will actually break a game.
 

rntongo

Banned
Does it really matter though? If the games are dope, what's the difference? I'm not too fussed about the specifics, and judging by what we saw the other day the games will be just fine. I suppose the Smart Shift is there to make sure power isn't wasted if there are inefficiencies in code or optimization? It can just move the power from one side to the other so that games always perform in the best way.

You're right, the most important thing is the quality of games and that's going to be good no doubt.
 

Gavon West

Spread's Cheeks for Intrusive Ads
Not all games or applications will require full power from the APU. Take backwards compatibility or Netflix as an example. The CPU will hardly be taxed by a PS4 game and watching a video will hardly require anything from the system. In these cases it will be more efficient to lower clock speeds. This is especially useful when a game engine isn't yet fully optimized for PS5 and running full tilt will actually break a game.
I agree. However, it just doesn't make sense if the PS5 basically runs at its max peak all the time - even 98% or 97% of the time - why not just lock the 10.28TF or whatever it is, in? If it can keep those sustained clocks, that's not variable.
 

Kise Ryota

Unconfirmed Member
I think that SmartShift is more about workloads than clock speeds. Both CPU and GPU can reach max clocks if the workload is within the power budget. The question is: what would these workloads be that could 'break' the total power budget? (That would cause a downclock on the CPU, the GPU or both.) We can only wait and see what the truth is.
 

BrentonB

Member
I agree. However, it just doesn't make sense if the PS5 basically runs at its max peak all the time - even 98% or 97% of the time - why not just lock the 10.28TF or whatever it is, in? If it can keep those sustained clocks, that's not variable.

I really think the main reason is to accommodate engines built for PS4. Mark talked about how they'll need to test games to make sure they actually work on the PS5 due to how bad the Jaguar CPU was. If the PS5 is too fast then it'll break compatibility. I don't care about BC so I would be fine with fixed clocks, but there are enough people who want it that they needed a way to make it work.
 

Deto

Banned
I agree. However, it just doesn't make sense if the PS5 basically runs at its max peak all the time - even 98% or 97% of the time - why not just lock the 10.28TF or whatever it is, in? If it can keep those sustained clocks, that's not variable.

Not to have 50-page topics complaining that the PS4 becomes a jet engine when you open the Horizon Zero Dawn 2 map.

Variable clocks are an engineering solution; fixed clocks are a marketing solution, made just so you can be here posting exactly that.

Interesting that no one questions all the video cards, which vary their clocks with temperature - a much worse and more problematic solution for games, since a PC game's FPS then varies with room temperature. That's far worse than a clock varying with power consumption, which is totally predictable.

By that logic, you should be telling anyone who is about to buy a GeForce 2070 that it's garbage with fake performance.

Variable clocks will be the new "checkerboarding".

MS spent a year doing crappy "true 4K" marketing to call the PS4 Pro fake for using checkerboarding, along with its fans, only for that to end up being the solution of the future with DLSS 2, checkerboarding, temporal injection, etc.

Now we'll have the same thing again: Xbox fans will spend a few months posting "PS5 fake clocks" on the internet, only for this PS5 solution to prove superior and get adopted by MS in the "Xbox SX 2".

The next Xbox will have a variable clock like the PS5, varying with consumption, in the same way the Xbox One X uses checkerboarding in Gears 5 and the SX will use checkerboarding-like features as well.

Summary:

PS4 Pro: "fake 4K trash"
MS: "hurr durr true 4K".

MS games come out: wait, we're also going to use it (aka checkerboarding in Gears 5) because it's a smart and superior solution.

PS: "fake TF, fake clocks"
MS: "hurrr durr true clocks"

Xbox SX "2": wait, this is a superior solution too, we will use it too.
 
Last edited:
Ultimate is not a revision. It's all the features that were not part of the feature level but supported by some DX12 GPUs now part of the DX12 Ultimate which includes everything in the spec.

There may be things in the spec that are refined but again what are you looking for? Why tie that refinement to the 2x figure?

If the spec clearly explains what SFS is, if the spec clearly states what it does, and MS claim "SFS gives you a 2x multiplier because you only need to load part of the texture", why assume that this multiplier comes from some unknown method or something secret that other GPUs don't have?

Why not just take it at face value and realise SFS is the method already described in the spec, and that it offers this multiplier by loading only part of the texture, like PRT+, when compared to loading the whole texture? Why does this stupid idea that this is some super secret sauce that offers 2x or 3x performance just for the XSX need to exist? It's a stretch based on wishful thinking.

Hey look, I didn't bring up any 2x multiplier or any of that stuff. I just said I thought DX12 Ultimate was a revision of DX12, adding more to it. The way you describe it makes it seem more like DX12 had a roadmap and basic 12 was the first part of that roadmap. Ultimate is the next part, fulfilling more of the specification that Basic didn't, due to whatever combination of market and development features.

In terms of what performance benefits it brings, the truth is we don't know yet. However, I'm willing to take MS's 2x to 3x claim at face value because, again, I'm an optimist and generally will take what the engineers at MS and Sony are claiming unless real-world results on their hardware end up forming a pattern of performance that betrays their claims. But we're not at that point yet, for either, because the consoles aren't out yet.

There could be an implementation of these DX12U features on XSX's Velocity Architecture that provide the performance they claim, I don't see much of a reason to cast doubt on their claims this early on. If actual performance falls short, then it'll fall short, and can be acknowledged as such. But for the time being, we should at least provide them the benefit of the doubt that seems to be afforded to Sony.

The chip in the GitHub leak wasn't RDNA2 though, since it lacked hardware-based ray tracing.

It very likely was RDNA2; the RT hardware would not have needed to be enabled for the Ariel iGPU profile testing, because Ariel was an RDNA1 chip. References to Navi 10 were likely pointing to the Ariel iGPU as well.

At this point it's more odd to question whether Oberon is the PS5 chip than to accept that it is, because there is no proof of any other chip matching up to the PS5 specs as we know them. The same way Arden is very likely the XSX chip. The differences in active clocks and active CUs between PS5/XSX and the Oberon/Arden chips can be rationalized through historical precedent (the Morpheus APU had parts of its devkit chip disabled for running PS4 regression compatibility, and the Scorpio APU had all CUs active for the devkit and 4 disabled for the retail unit; both of these match up almost exactly with the trends of the Oberon and Arden chips, respectively).

Ask yourself: from the 6th gen onward, have we ever gotten within 5-6 months of new system launches with ZERO info on the actual chips of said upcoming systems? It hasn't happened, because we've always had some concrete info on next-gen system chips by then. There are no other options for PS5 and XSX; Oberon and Arden are their respective APUs.

Neither has to be downclocked for the other to reach maximum clock speeds. Developers do not need to choose between the two.

For the final/retail system, yes. But the devkits currently use "profiles", which hard-set one component at a lower power setting so the other can operate at a higher power setting, in both cases affecting the frequency/clocks on said components.

Again, have to stress this is only the case for the devkits, Cerny's said the retail system will effectively automate the power shifting on its own, in the background.

Was that quote about needing to throttle back CPU to sustain max GPU clock a misprint? I forgot what dev it was, I could search the thread. That quote seemed in line with the other “power profile” comments too, I thought?

It wasn't a misprint; the devkits use power profiles, as you mentioned. And the power profiles hard-set certain parameters in terms of power load settings for CPU and GPU (and maybe other components too like the audio processor).

In that context the comment about throttling CPU to sustain max GPU clock wasn't really a misprint, since at least some devs are probably doing that right now with devkits. But the final retail system should have implemented the variable frequency stuff fully, and the process automated, so devs won't need to set things to hard power profiles (tho they need to manage their code to ensure they stay within power budget ranges, of course).

I don't get the damage control.

You can't excuse one fan base saying their console is RDNA 2 and the other is RDNA 1.5 or RDNA 1. Fact is, both are labeled RDNA 2 and there's no excuse for them to say this other than to make the PS5 look weaker.

I'm not excusing anything; just saying not everyone who says the systems aren't full RDNA2 is trying to insinuate they are RDNA1. The fact of the matter is they are both custom GPUs that will use as much of the RDNA2 feature set as deemed required. And we're already hearing the systems may have some RDNA3 features; that doesn't mean they are RDNA3 (TBF I am a bit wary of the RDNA3 rumors but we'll see).

And that's not the point. They're saying it's not a 9.2TF console and it will reach 10.2TF "sometimes" in certain situations. Based on the numbers you just provided, the 6.9TF number would be smaller based on what they're saying and it's simply not true.

You're maybe taking the 6.9/8.1 numbers out of context. When Sony and MS give their TF numbers, they're speaking in theoretical terms. The higher the theoretical figure, the more headroom there is for actual real-world performance to reach. If the architectures are the same (as is the case here), then that ratio stays the same between the GPUs, which will generally be reflected in the numbers.

Again I suggest watching that NXGamer video (and the latest one on the SSD I/O while at it); they bring up the somewhat poor throughput utilization in real-world application terms with the PS4 and XBO GPUs. That number should be MUCH higher with the PS5 and XSX GPUs, but the point is that we'll likely never see a PS5 game actually "really" hit 10.275 TF even at max utilization, just like how we'll never actually "really" see an XSX game hit 12.147 TF at max utilization, in real-world game scenarios. But both systems should hit very close to those theoretical maximums, even higher than the numbers NXGamer gave IMHO.
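For reference, a quick back-of-envelope sketch of where those theoretical figures come from (the CU counts and clocks are the publicly stated ones; 64 shaders per CU and 2 FLOPs per clock are the usual RDNA assumptions, not anything secret):

```python
# Theoretical FP32 throughput = CUs * 64 shaders/CU * 2 FLOPs/clock (FMA) * clock (GHz)
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5: {tflops(36, 2.23):.2f} TF")    # ~10.28 TF at the max boost clock
print(f"XSX: {tflops(52, 1.825):.2f} TF")   # ~12.15 TF at the fixed clock
```

Real games never sustain 100% utilization of those peaks, which is exactly the point being made above.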

SSD was going to be talked about regardless of the TF count. It's a major factor next gen and it's leaps faster than what's on the XsX. It's something that really separates both consoles while the TF and CPU figures are very close.

Again though, they're paper specs, and MS gave sustained numbers whereas Sony did not clarify if theirs are sustained or peak. And there are two components to the SSD I/O: hardware and software. PS5's solution has the hardware advantage, but we could be in a situation where XSX's could end up with the software advantage and while that wouldn't close the delta on that front, it would shrink it to a notable amount.

This is just speculation though, because we don't have enough info on the SSD I/O in the systems yet. Yes, I know that sounds ridiculous given what we actually already know, but in the scope of ALL the tech that goes into even just the SSD I/O component, there's lots of crucial stuff we don't know yet. Officially, anyway.

There's also the fact that you can't look at deltas without considering the context of what the numbers actually reference. I'll put it like this; let's say the SSDs are like Subaru sedans and the CPUs/GPUs are like supercars. PS5 has a higher-end Subaru sedan and XSX has a lower-end one. CPU-wise let's say the PS5 has a Lamborghini Gallardo and the XSX has a Lamborghini Murciélago. And in terms of GPUs, the PS5 has a Ferrari F50 and the XSX has a Ferrari Enzo.

Those are very rough comparisons but work with me here. We've got a three-part F1 circuit race and PS5 has its three cars and XSX has its three cars. Now the PS5's Subaru is going to generally beat XSX's Subaru but we all know the Lambos and Ferraris are the stars of this F1 circuit race, they are simply performing on a completely different level. And they are both absolutely demolishing the Subarus. Now PS5 and XSX's Lambos and Ferraris may be relatively closer in performance than their Subarus, but the one with the higher-performing Lambos and Ferraris is still going to win.

And we're using an F1 circuit race example here because we're talking about overall performance of these things testing a multitude of their capabilities in combination within practical terms, not quick-burst performance of just select features. That's the scale these things really fit in when it comes to the overall architectures and where the components fall in place in terms of priority levels.
 

DForce

NaughtyDog Defense Force
I'm not excusing anything; just saying not everyone who says the systems aren't full RDNA2 is trying to insinuate they are RDNA1. The fact of the matter is they are both custom GPUs that will use as much of the RDNA2 feature set as deemed required. And we're already hearing the systems may have some RDNA3 features; that doesn't mean they are RDNA3 (TBF I am a bit wary of the RDNA3 rumors but we'll see).

You are. You're telling me things that have nothing to do with the conversation. It doesn't matter if it's not full RDNA 2; the fact is, Sony and MS both call it RDNA 2 and you can't say one is full RDNA 2 and the other is not.

You're maybe taking the 6.9/8.1 numbers out of context. When Sony and MS give their TF numbers, they're speaking in theoretical terms. The higher the theoretical figure, the more headroom there is for actual real-world performance to reach. If the architectures are the same (as is the case here), then that ratio stays the same between the GPUs, which will generally be reflected in the numbers.

Again I suggest watching that NXGamer video (and the latest one on the SSD I/O while at it); they bring up the somewhat poor throughput utilization in real-world application terms with the PS4 and XBO GPUs. That number should be MUCH higher with the PS5 and XSX GPUs, but the point is that we'll likely never see a PS5 game actually "really" hit 10.275 TF even at max utilization, just like how we'll never actually "really" see an XSX game hit 12.147 TF at max utilization, in real-world game scenarios. But both systems should hit very close to those theoretical maximums, even higher than the numbers NXGamer gave IMHO.

I watched it, and again, it's IRRELEVANT.

Let me put it like this since you're clearly missing the point.

They're saying it will be 4.9 and sometimes hit 6.9 because PS5's variable frequency won't be able to maintain that number when needed.

It's the fact that they're saying 10.2TF is not the real number and that it will only hit that number "sometimes".

I don't have to explain this any further.

Again though, they're paper specs, and MS gave sustained numbers whereas Sony did not clarify if theirs are sustained or peak. And there are two components to the SSD I/O: hardware and software. PS5's solution has the hardware advantage, but we could be in a situation where XSX's could end up with the software advantage and while that wouldn't close the delta on that front, it would shrink it to a notable amount.

This is just speculation though, because we don't have enough info the SSD I/O in the systems yet. Yes, I know that sounds ridiculous given what we actually already know, but in the scope of ALL the tech that goes into even just the SSD I/O component, there's lots of crucial stuff we don't know yet. Officially, anyway.

There's also the fact that you can't look at deltas without considering the context of what the numbers actually reference. I'll put it like this; let's say the SSDs are like Subaru sedans and the CPUs/GPUs are like supercars. PS5 has a higher-end Subaru sedan and XSX has a lower-end one. CPU-wise let's say the PS5 has a Lamborghini Gallardo and the XSX has a Lamborghini Murceilago. And in terms of GPUs, the PS5 has a Ferrari F50 and the XSX has a Ferrari Enzo.

Those are very rough comparisons but work with me here. We've got a three-part F1 circuit race and PS5 has its three cars and XSX has its three cars. Now the PS5's Subaru is going to generally beat XSX's Subaru but we all know the Lambos and Ferraris are the stars of this F1 circuit race, they are simply performing on a completely different level. And they are both absolutely demolishing the Subarus. Now PS5 and XSX's Lambos and Ferraris may be relatively closer in performance than their Subarus, but the one with the higher-performing Lambos and Ferraris is still going to win.

And we're using an F1 circuit race example here because we're talking about overall performance of these things testing a multitude of their capabilities in combination within practical terms, not quick-burst performance of just select features. That's the scale these things really fit in when it comes to the overall architectures and where the components fall in place in terms of priority levels.


Right, all MS numbers are "sustained" but PS5's SSD numbers are not. You're really playing word games here, and it reminds me of when we had this discussion a few months ago, when you were saying Cerny was assuming his numbers.

I don't have time for people who just want to keep moving the goalposts.
 

ToadMan

Member
Just trying to figure out what specs each of the 12 chips will have (MT/s) and what current NVMe drives have the same or similar spec chips. I'm bored and a nerd.....

Oh.... I don't think we can really know that unless Sony produce a detailed spec, or until a teardown is done post-launch.

I mean the SSD performance is the big ace Sony have - they’re going to try and protect that information as long as they can.

They’ve been working with Samsung so I’d look at Samsung’s fastest SSD and then consider ways that speed could be improved.

Samsung use V-NAND usually. Could be MLC but Samsung have also used TLC (while calling it MLC).

I imagine Sony took the Samsung SSD, ripped out the controller and put in their own custom controller and increased the cooling so they can run the NAND and associated buses at a higher rate.

The PS5 is all about thermal/power management and so far they haven’t shown what it looks like. Cerny just said he expected people to be “quite happy” with their cooling solution.

Was he being mischievous and understating it, or is the PS5 gonna be as ugly as the Xsex to achieve the necessary cooling performance?

Don't know yet because we haven't seen anything official. But given the cooling focus, I'd say Sony have taken high-end flash and overclocked it with their custom controller to get their throughput performance.
 
You're really confusing me. Microsoft stated that SFS will provide a 2-3x multiplier for RAM and SSD. When a Microsoft engineer working on SFS was asked, he declined to comment but gave a scenario of a 4x improvement by using only 25% of a texture. What more do you want? All the tweets have been posted here.

First understand the technology before using it as an argument like that. I posted this in another thread but it's useful here.

- SFS is the best thing to ever happen to memory
- Yes, it's good that they improved the existing solution, but you do know this is far from new, right?
- It's new, they told us
- Not exactly; a similar solution has been around since the Xbox 360/PS3 generation, when id Software used it for megatextures, and this past gen both consoles announced it as part of their APIs: 'Partially Resident Textures' on PS4 and 'Tiled Resources' on Xbox One
- But they announced this will cut memory use by almost 3x or more
- Yeah, that's not the whole truth: first, we already have a similar technology; second, you're assuming PlayStation didn't improve it in 7 years; and you actually can't use it all the time
- You only hate Xbox, of course they can use it
- They said it, not me; this is just an improvement, and NVIDIA GPUs are already compatible, so it's not so exclusive

b9QaZBN.jpg
dku5imq.jpg

rXsZShD.jpg



IraDa75.jpg

But hey, it's not like Xbox lied on the spec sheet; they just forgot what year it is, what technologies already exist, and their limitations.

You know something funny? The MixterMedia guy, or whatever he's called, said in 2014 that the Xbox One's solution was 100-to-1 compared to the PS4. Yeah, sure.
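For what it's worth, here is a rough sketch of where a 2x/3x/4x figure can come from with any partial-residency scheme, whether it's called PRT, Tiled Resources or SFS. The sampled fractions below are just illustrative assumptions; the 25% case is the scenario the MS engineer described above.

```python
# If only a fraction of a texture's tiles actually get sampled, only that fraction
# has to be resident in RAM / read from the SSD. The "multiplier" is just its inverse.
def io_multiplier(fraction_sampled: float) -> float:
    return 1.0 / fraction_sampled

for frac in (0.50, 0.33, 0.25):   # hypothetical fractions of a texture actually needed
    print(f"{frac:.0%} of the texture needed -> ~{io_multiplier(frac):.1f}x effective RAM/SSD")
```

The multiplier therefore depends entirely on how much of each texture a given scene really touches, which is why it's an average figure rather than a fixed hardware property.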
 

ToadMan

Member
If that were true it wouldn't be variable.

Yes it would.

The power is capped; the frequency may vary.

During development, code is optimised to meet the power cap. If the developers are successful in their optimisation, the clocks run at 100% all the time.

If the developers aren't successful or miss an optimisation scenario, then at run time, if the power demand is exceeded, SmartShift can allocate power from CPU to GPU or vice versa if one of them is running below its power budget. In this case the clocks of both remain at 100%.

If the code is unoptimised to the point that the power demand is greater than the total power budget (this would be due to poor development optimisation - i.e. poor development technique), the clock speed will be throttled to ensure the power budget is maintained.

It’s a simple system that has been used in various forms for decades now. It’s only new in the console space.
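To make that concrete, here's a toy sketch of the principle (the wattage numbers and the simple proportional throttle are invented for illustration; the real algorithm belongs to AMD/Sony and isn't public):

```python
# Toy model: CPU and GPU share one fixed power budget. Spare watts on one side
# can cover a spike on the other; only when the COMBINED demand exceeds the cap
# do the clocks get pulled back at all.
TOTAL_BUDGET_W = 200.0   # invented number

def resolve_frame(cpu_demand_w: float, gpu_demand_w: float) -> tuple[float, float]:
    total = cpu_demand_w + gpu_demand_w
    if total <= TOTAL_BUDGET_W:
        return 1.0, 1.0                      # both stay at 100% clock
    scale = TOTAL_BUDGET_W / total           # crude throttle; the real curve is gentler
    return scale, scale

print(resolve_frame(50, 140))   # well within budget                    -> (1.0, 1.0)
print(resolve_frame(40, 158))   # GPU spike covered by spare CPU watts  -> (1.0, 1.0)
print(resolve_frame(70, 150))   # combined demand over the cap          -> clocks eased back
```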
 
Last edited:

NXGamer

Member
Maybe this is a dumb question, but is there even a way to ever know for sure? Can the guys at DF or an expert analysis person like NXGamer actually measure a console's TFLOPs in use? Or will we always just kinda be guessing?
Guessing. The only way to know is image tests, frame-rate and other factors. Direct access to a code debugger would be excellent, but NDAs exist for a reason.
 

ToadMan

Member
I agree. However, it just doesn't make sense if the PS5 basically runs at its max peak all the time - even 98% or 97% of the time - why not just lock the 10.28TF or whatever it is, in? If it can keep those sustained clocks, that's not variable.

I think you haven’t grasped the relationship of power, processor activity and clock speed.

I’m sure you’ve noticed that the fans on your current console get louder sometimes yet the clock speed is fixed.... so why the change in fan speed?

It’s because the amount of work being done per clock tick - the processor activity - is increasing, consuming more power and generating more heat. The frequency didn’t change, the work load did.


For PS5, the above won’t happen - the power is capped, the fans won’t get louder than the max power level. As long as processor activity stays within the power budget the PS5 will run at 100% clock on both gpu and cpu all day.

It’s only when that (total system) power budget is breached that clock speeds will be throttled. The cause of such a power budget breach is poorly optimised code - that’s to say throttling of the clock speed is a development choice or optimisation failure.
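A toy illustration of that point, in relative units (all numbers invented): with a fixed clock, power - and therefore heat and fan noise - tracks how busy the chip is; with a capped power budget, the clock eases off a touch instead.

```python
# Dynamic power is roughly activity * frequency * voltage^2 (relative units here).
def power(activity: float, freq: float, volt: float = 1.0) -> float:
    return activity * freq * volt ** 2

POWER_CAP = 0.9   # invented cap
for activity in (0.6, 0.8, 1.0):                    # fraction of the chip actually switching
    fixed_clock_power = power(activity, freq=1.0)
    capped_clock = min(1.0, POWER_CAP / activity)   # simplification: voltage held constant
    print(f"activity {activity}: fixed clock -> power {fixed_clock_power:.2f} | "
          f"capped power -> clock {capped_clock:.2f}")
```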
 

jimbojim

Banned
The actual SSD speed is still 2.4GB/s; you're showing you don't understand what you're saying. Compression itself is a multiplier effect. Improving texture streaming is also a different kind of multiplier effect. The SSD speed will always remain 2.4GB/s!!

I just used 4.8 as an example; I know the raw figure is 2.4. I should have said it a little better. Anyway, do you understand everything you're saying? Dude, I'm not the person on ERA who was told by Matt, a well-known insider and dev, to stop spreading nonsense.

1qm99nl.jpg



You tried the same thing there; you didn't succeed.

jJz8R6U.jpg


I came here to describe what I'd learnt about the XSX and also to learn more from others. I've only gotten twisted information from you and lies at times

You came here to learn (here or on ERA) about how consoles work, how they're designed, and other tech stuff? Well, go to a good college in Silicon Valley then; there's a loooong way ahead of you. You know just as much as the rest of us (some a little more, some a little less), relying on tweets, posts and articles across the net.
 
Last edited:

dottme

Member
It’s a simple system that has been used in various forms for decades now. It’s only new in the console space.

Where has this been used before? All the other systems I know of vary the clock based on the temperature of the chip.
Where this system is superior is that it is independent of the chip temperature and always runs the same code the same way.
 

Panajev2001a

GAF's Pleasant Genius
The actual SSD speed is still 2.4GB/s; you're showing you don't understand what you're saying. Compression itself is a multiplier effect. Improving texture streaming is also a different kind of multiplier effect. The SSD speed will always remain 2.4GB/s!!

You think you are uncovering something magical here? He knows, you know, everyone knows the raw speed the XSX SSD can transfer data at.

MS is the one that said that, counting lzma and BCPack, the effective data rate (taking compression into account) is estimated to be around 4.8 GB/s, which is what he was saying.
Sony did the same calculation using Kraken as the compressor, and from 5.5 GB/s they computed an effective data rate of 8-9 GB/s, which shows the advantage of using BCPack for textures (Kraken is more efficient than lzma but not as much as BCPack for texture data).
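Spelling the arithmetic out (the raw rates are the official figures; the compression ratios are simply the ratios implied by the quoted effective numbers, not measured values):

```python
# Effective data rate = raw SSD rate * average compression ratio of the data being read.
def effective_rate(raw_gb_s: float, compression_ratio: float) -> float:
    return raw_gb_s * compression_ratio

print(f"XSX: {effective_rate(2.4, 2.0):.1f} GB/s")    # ~2:1 implied by MS's 4.8 GB/s figure
print(f"PS5: {effective_rate(5.5, 1.55):.1f} GB/s")   # ~1.5:1 implied by Sony's 8-9 GB/s figure
```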
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
That's not true. Cerny's argument was that a reduction in power in one processor only brings about a smaller percentage reduction in its clock speed. So a good example would be that if the CPU is at 3.5GHz, and the GPU needs to hit 2.23GHz, you could transfer say 10% of the power from the CPU in order to hit the 2.23GHz for the GPU. But according to Cerny the clock reduction would be a few percentage points, so say 3% for example. But here is the other main issue: if the percentage reduction in clock speed is so small, why do you need to draw power away from one processor to the other?? It would be negligible and you'd just let them run at their respective clocks all the time. So that's why his presentation of the APU wasn't adding up for me.

Because reducing clock speed (and potentially voltage as well) makes room for the more complex work the GPU is doing (power usage is not just clock speed, voltage, and capacitance; you also have the activity of the chip to take into account). In many cases both CPU and GPU can run at their top frequency, and one of the optimisation jobs is figuring out whether you want that, or whether you want to run a particular workload that would cause this system to kick in, or refactor your code to avoid it. Power is fixed, everything else is not.

More and well said indeed at (if you are interested in what this is about): https://www.neogaf.com/posts/258225582/
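A crude sketch of why a small clock reduction buys a disproportionate power reduction, assuming voltage scales roughly with frequency so dynamic power goes roughly with the cube of frequency (a simplification for illustration, not Cerny's actual model):

```python
# If V scales ~linearly with f, dynamic power ~ f * V^2 ~ f^3.
def relative_power(freq_fraction: float) -> float:
    return freq_fraction ** 3

for clock_drop in (0.02, 0.03, 0.05):
    f = 1.0 - clock_drop
    print(f"{clock_drop:.0%} lower clock -> ~{1 - relative_power(f):.0%} less power")
```

Under that cube-law assumption a 2-3% clock drop already saves on the order of 6-9% power, which is the kind of trade being described.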
 
Last edited:

rntongo

Banned
I just used 4.8 as an example; I know the raw figure is 2.4. I should have said it a little better. Anyway, do you understand everything you're saying? Dude, I'm not the person on ERA who was told by Matt, a well-known insider and dev, to stop spreading nonsense.

1qm99nl.jpg



You tried the same thing there; you didn't succeed.

jJz8R6U.jpg




You came here to learn (here or on ERA) about how consoles work, how they're designed, and other tech stuff? Well, go to a good college in Silicon Valley then; there's a loooong way ahead of you. You know just as much as the rest of us (some a little more, some a little less), relying on tweets, posts and articles across the net.
Wasn't going to reply to you but you've said nothing of substance except going through my posts on resetera. I learned to approach the topic differently and now I share what I know and read from others on these posts. I've learned a lot.
 

Panajev2001a

GAF's Pleasant Genius
I came here to describe what I'd learnt about the XSX and also to learn more from others. I've only gotten twisted information from you and lies at times. See how you're using PRT which is what the Xbox One and PS4 used in order to make a false equivalence to Sampler Feedback in DX12U and XSX. Reflect on that

You came here armed maybe with good intentions and spreading unsubstantiated FUD. It's not my fault you're not keeping your argument straight and are just muddying the waters by jumping from argument to argument.
Then again, this is assuming you're not being disingenuous and just trying to put down the box you like/promote least.
 
Question - my layman brain says what you wrote sounds awesome. I'm not following why people are arguing over the SSD though. Maybe I need to digest the information more, but isn't the point of this thread that the Series X also has at least part of the SSD available to use like Sony is doing? If so, what is the actual difference that people are debating?
Reports indicate that BCPack enables 50 percent or greater compression of game textures, which is substantially higher than what's possible with Kraken. It's important to remember, though, that the PlayStation 5 delivers over 2 times the raw I/O throughput of the Xbox Series X, and even a highly efficient texture compression algorithm like BCPack is unlikely to fully compensate for that. On-the-fly compression and decompression means that the Xbox Series X's storage has an effective throughput of around 4.8GB/s.

Sony's Kraken compression algorithm is a big part of the reason why the PS5 is able to hit an effective throughput of nearly 9 GB/s. The I/O unit itself, however, is capable of outputting as much as 22GB/s if the data compresses well.

Basically, at its best the Xbox hits 4.8GB/s. The PS5 at its worst hits 5.5GB/s. They can both access data from the SSD orders of magnitude quicker than before. The PS5 is just in a different league in eliminating a bottleneck that has existed since the PS360 days. There will be major gameplay advancements that even the most expensive PC wouldn't be able to run.

100GB being instantly accessible is nonsense though. The whole SSD is accessible at 4.8GB/s for the Xbox, 8-9 GB/s for the PS5 on average.

Sony just has to show their exclusives to see if they can walk the walk. If the UE5 demo is anything to go by: yup.
 

martino

Member
That's DX12. SFS is a DX12 Ultimate feature building on top of all that.



What about them? We don't know the entire setup of the SSD I/O system in the platform, so it's premature to assume how some of these work.



That's probably exactly what they are doing. Either that, or they have a chunk of SLC NAND cache on a block of NAND to store the paging table.



Wager based on what? Again, you're making assumptions based on incomplete data/information. All of the problems you are mentioning, I'm sure MS and Seagate have known of them and taken measures to mitigate their impact in the design. So I still say your overhead cost/performance hit is wildly excessive.



The 2.4 GB/s refers to the sustained speed; peaks could be a bit higher, and a lot of data operations will be lower because they simply won't need to demand 2.4 GB/s of bandwidth throughput. Thermal situations etc. are also why they gave the 2.4 GB/s sustained figure. These are also potential issues that affect PS5's SSD, and in fact we don't know what its sustained numbers are on the SSD, or whether the SSD under continuous heavy loads will affect the power load of the system (in turn affecting the variable frequency rate).

They're still questions that will have to be answered in due time.



Well regardless, without seeing the specific post I can't directly comment on what you're addressing in that regard.

However, whatever perception you might have of my posts, I can assure you that in reality that is not what I'm doing, or at least it's not my intention. When it comes to discussing console tech I tend to focus more on the system that either is the underdog in the situation or where there's (from my POV) more misinformation, intentional or accidental.

At current, IMO it feels like the XSX is the system with more misinformation on it, and fewer people who attempt speaking out on clearing up that misinformation, compared to PS5. I do speak out against PS5 misinformation too, just not as often, because there's usually more people who will do that anyway, and a tone is set to dissuade that type of misinformation around here at the very least.

Sometimes I bring up certain Youtubers if I see them speaking their own misinformation, like Moore's Law Is Dead or the Innocenceii (I focused on a graph they had which @Kazekage1981 screencapped to speak about a persuasive psychological tactic at play there from my POV). But I have no problem giving props to those who seem to keep things pretty fair and informative on top of that, like NXGamer and RedTech Gaming.

I am an optimist when it comes to the consoles and their technology, but the reason you don't see me making yet another SSD thread (for example) is because A) there's already a million of them and B) while I know the importance SSDs will play in the next-gen, they aren't miracle workers and don't replace other, arguably more critical aspects of system performance like CPU and GPU. You can look at some of my recent posts in this thread and see how there are other people speaking on unknowns regarding XSX but taking the worst-case scenario, and I ask why?

I don't take worst-case scenarios with PS5; if I speak on something PS5-related where it may seem that way, I'm probably just trying to look at a topic from a different POV that the mainstream perspective isn't considering. Take the PS5 SSD for example: if I ask about the random read on the first 4 KB block, it's not because I'm trying to take a worst-case scenario. It's because I know that will be very important in deciding actual SSD performance. The same can apply to the XSX SSD, but at least around here the pessimist outlook in terms of SSDs is geared more towards XSX's. Like with people saying the flying segment in the UE5 demo could not be done on XSX, despite none of us knowing what the actual SSD and I/O pipeline performance for that segment of the demo was.

If I said something like "why wouldn't XSX be able to do that segment", it's because I've already looked further ahead; if one of the next-gen systems can't do even a modest-level (compared to future versions of it which I feel will happen later in the gen) streaming segment similar to the UE5 demo, that is going to hurt next-gen 3rd-party development overall, even with UE5 engine scaling in effect. It would also lower the ceiling on what PS5 could accomplish with that later throughout the generation since you would already be talking about an UE5 demo with a streaming segment tapping almost 50% of the SSD's raw bandwidth.

So I hope this clarifies a few things; I'm not trying to do double standards, I just want to cut back on misinformation and I tend to gravitate to the underdogs in that regard. Though oft-times I will also do so for the popular pick if there's a need and I feel I can contribute something of a different perspective that hasn't been vocalized yet.

I only linked you. The quote wasn't from you.
 
Last edited:

RaySoft

Member
Already explained earlier. SeX offers better graphics/framerates/perf/$.
Imagine your TV screen. Now imagine that the PS5 only has to render what you see on that screen and nothing else. Nothing outside that TV screen is using GPU cycles. The Series X does the same. The quality of that screen is now more dependent on how fast you can stream in those assets: the faster you are, the more complex/rich scenery you can render (up to the GPU's max throughput). The Series X does the same thing, but at a slower pace, so it has to compensate a little by loading in assets a bit before the PS5 has to, and it also needs to hold them in memory a bit longer (the slower you are, the longer you have to hold data in RAM). The Series X uses a few more CPU cycles than the PS5 during these asset fetches, so the 100MHz advantage it has will be eaten up by this. The Series X has a stronger GPU so it can render more complex geometry on screen at once, so in slower scenes the Series X has its moment to shine. During faster-paced scenes (a la the last part of the Unreal Engine 5 demo) it has to start loading the assets a little earlier and needs more RAM to hold more of the data if it can't stream it in fast enough. It could also choose to drop some quality instead and maintain the speed that way. This is up to the devs (or engine), whichever path they take.

I know I've oversimplified stuff here, but it's within the ballpark. So saying that the XSX has better graphics/fps/perf/$ is not only a question of what defines that, but also of what kind of scene is being rendered. As the devs have said, they are closer than many think.
They both have their strengths and "weaknesses", but one thing's for certain: both consoles will be breathtaking as soon as devs start to use them the "right" way.
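As a rough illustration of the "load a bit earlier, hold it a bit longer" point (the chunk size is invented; the throughput figures are the commonly quoted effective ones):

```python
# How far ahead a streamer has to start loading a chunk of assets, given effective
# drive throughput. A slower drive means an earlier prefetch and more data sitting
# in RAM at any one time.
CHUNK_MB = 600.0   # hypothetical asset chunk for an upcoming area

for name, rate_gb_s in (("XSX ~4.8 GB/s", 4.8), ("PS5 ~9 GB/s", 9.0)):
    lead_s = CHUNK_MB / (rate_gb_s * 1000.0)
    print(f"{name}: start loading ~{lead_s * 1000:.0f} ms (~{lead_s * 30:.1f} frames @30fps) ahead")
```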
 

ToadMan

Member
Where has this been used before? All the other system I know vary the clock based on the temperature of the chip.
Where this system is superior is that it is independent of the chip temperature and always run the same code the same way.

Embedded systems, mobile devices - where there is a power shortage/heat issue.

Implementations vary - they might do as you say and use the actual temperature of the chip rather than a model. Sony have taken this modelled-SoC approach with the PS5 because, if actual temperature were used, gamers in warmer ambient conditions would get a different experience to those in cooler ones.

But the principle is commonplace.
 

Deleted member 775630

Unconfirmed Member
During more faster paced scenes (ala the last part of Unreal Engine 5 demo) It has to start to load the assets a little earlier and needs more ram to hold more of the data if it can't stream it in fast enough. It could also choose to drop some quality instead and maintain the speed that way. This is up to the devs (or engine) wich path it will take.
I agree with what you said, the only problem is we don't have a clue on the amount of data that was streamed in that scene, what the needed speeds were to get there. But obviously the PS5 can just load more assets, but can the GPU handle that much information? Maybe the XSX SSD is at 4.8GB/sec because they think that more assets would be overkill since the GPU wouldn't be able to render all of it?
 

Panajev2001a

GAF's Pleasant Genius
I agree with what you said, the only problem is we don't have a clue on the amount of data that was streamed in that scene, what the needed speeds were to get there. But obviously the PS5 can just load more assets, but can the GPU handle that much information? Maybe the XSX SSD is at 4.8GB/sec because they think that more assets would be overkill since the GPU wouldn't be able to render all of it?

Possibly, but maybe when they developed the HW they decided they wanted to invest the extra silicon in getting the TFLOPS and RT victory (possibly, by going to 2x and more the performance of the Xbox One X, they were expecting clear supremacy there) and in other factors such as the memory card.
It might also be the case that they did not want to clock the GPU that much higher, or radically alter the RDNA2 architecture layout to pump up the performance of the shared triangle setup, HW dispatcher, ROPs, rasteriser, geometry engine (another area where Sony actually invested money beyond the clock speed increase was customising the geometry engine with AMD), etc., as that was the cost of committing to the next level up from what they reached, and so they used their money in other areas.

Once they committed to the TFLOPS target and the approach to get there, they were in deep and had to see it through: now they had to balance it and make it possible (memory bandwidth became a problem they had to solve, for example... that requires a lot of smart engineering work to make a system they could sell at a non-exorbitant price too).
 

longdi

Banned
I agree with what you said, the only problem is we don't have a clue on the amount of data that was streamed in that scene, what the needed speeds were to get there. But obviously the PS5 can just load more assets, but can the GPU handle that much information? Maybe the XSX SSD is at 4.8GB/sec because they think that more assets would be overkill since the GPU wouldn't be able to render all of it?

Well put. That was what I wanted to say.
2.4/4.8GB/s is great - overkill, possibly.
5.5/9GB/s could just be too much headroom, and probably just helps to ease development even more. But in terms of level design and graphics performance, it probably amounts to nothing.
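For a sense of scale on whether the headroom "amounts to nothing", here's the per-frame streaming budget those effective rates translate to (pure arithmetic, not a claim about what any engine actually does):

```python
# Data a game could in principle turn over every frame at a given effective rate.
def mb_per_frame(rate_gb_s: float, fps: int) -> float:
    return rate_gb_s * 1000.0 / fps

for name, rate in (("4.8 GB/s", 4.8), ("9 GB/s", 9.0)):
    print(f"{name}: ~{mb_per_frame(rate, 30):.0f} MB/frame @30fps, "
          f"~{mb_per_frame(rate, 60):.0f} MB/frame @60fps")
```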
 
Last edited:

Dory16

Banned
100GB being instantly accessible is nonsense though. The whole SSD is accessible at 4.8GB/s for the Xbox, 8-9 GB/s for the PS5 on average.

Sony just has to show their exclusives to see if they can walk the walk. If the UE5 demo is anything to go by: yup.
I don't understand why one manufacturer's claims are necessarily nonsense while the other's claims are gospel.
Years of R&D have gone into developing those consoles and most likely neither of them will completely meet the specs they advertise. We need to stop acting as if we understood every detail of the innards of consoles that we have never seen or used. Games will tell the full story and DF will help spell it out. On paper there is no doubt which one is more powerful; a basic understanding of computer architecture tells us that much. I'll delay judgment on everything else.
 
Last edited:

longdi

Banned
Imagine your TV screen. Now imagine that the PS5 only has to render what you see on that screen and nothing else. Nothing outside that TV screen is using GPU cycles. The Series X does the same. The quality of that screen is now more dependent on how fast you can stream in those assets: the faster you are, the more complex/rich scenery you can render (up to the GPU's max throughput). The Series X does the same thing, but at a slower pace, so it has to compensate a little by loading in assets a bit before the PS5 has to, and it also needs to hold them in memory a bit longer (the slower you are, the longer you have to hold data in RAM). The Series X uses a few more CPU cycles than the PS5 during these asset fetches, so the 100MHz advantage it has will be eaten up by this. The Series X has a stronger GPU so it can render more complex geometry on screen at once, so in slower scenes the Series X has its moment to shine. During faster-paced scenes (a la the last part of the Unreal Engine 5 demo) it has to start loading the assets a little earlier and needs more RAM to hold more of the data if it can't stream it in fast enough. It could also choose to drop some quality instead and maintain the speed that way. This is up to the devs (or engine), whichever path they take.

I know I've oversimplified stuff here, but it's within the ballpark. So saying that the XSX has better graphics/fps/perf/$ is not only a question of what defines that, but also of what kind of scene is being rendered. As the devs have said, they are closer than many think.
They both have their strengths and "weaknesses", but one thing's for certain: both consoles will be breathtaking as soon as devs start to use them the "right" way.

The UE5 demo didn't need 9GB/s streaming though.
You would think Epic, designing a multiplatform engine, are wise enough to, at most, tap out average QLC NVMe speeds.
Anything above that is a bonus or overkill; anything below, yes, you get a little more loading.
 

Panajev2001a

GAF's Pleasant Genius
The UE5 demo didn't need 9GB/s streaming though.
You would think Epic, designing a multiplatform engine, are wise enough to, at most, tap out average QLC NVMe speeds.
Anything above that is a bonus or overkill; anything below, yes, you get a little more loading.

The engine, yes; the demo can shoot above that. Enabling the HW is the purpose of these engines; it is the developers that seek parity afterwards.
 

Panajev2001a

GAF's Pleasant Genius
Well put. That was what I wanted to say.
2.4/4.8GB/s is great - overkill, possibly.
5.5/9GB/s could just be too much headroom, and probably just helps to ease development even more. But in terms of level design and graphics performance, it probably amounts to nothing.

Mmh... back to the "640 KB ought to be enough for everyone" kind of argument? Yeah, maybe the XSX SSD is overkill, and quite likely it is not...
 

Deleted member 775630

Unconfirmed Member
Possibly, but maybe when they developed the HW they decided they wanted to invest the extra silicon in getting the TFLOPS and RT victory (possibly, by going to 2x and more the performance of the Xbox One X, they were expecting clear supremacy there) and in other factors such as the memory card.
It might also be the case that they did not want to clock the GPU that much higher, or radically alter the RDNA2 architecture layout to pump up the performance of the shared triangle setup, HW dispatcher, ROPs, rasteriser, geometry engine (another area where Sony actually invested money beyond the clock speed increase was customising the geometry engine with AMD), etc., as that was the cost of committing to the next level up from what they reached, and so they used their money in other areas.

Once they committed to the TFLOPS target and the approach to get there, they were in deep and had to see it through: now they had to balance it and make it possible (memory bandwidth became a problem they had to solve, for example... that requires a lot of smart engineering work to make a system they could sell at a non-exorbitant price too).
Could be. Your train of thought is as real as mine at the moment, we just don't really know :) Eventually we'll see the difference in games, June/July events can't come soon enough.
 

longdi

Banned
Mmh... back to the "640 KB ought to be enough for everyone" kind of argument? Yeah, maybe the XSX SSD is overkill, and quite likely it is not...

640KB was enough back then though. 🤷‍♀️
Like, the PS5 APU can process 448GB/s of data, and SeX a healthy bit more.
It's nice and all to feed the APU units with more data, but how much better can you really design a game with slightly bigger feeding tubes that are already super-enlarged compared to last-gen game design?

It's a case of seeing is believing.
Mark may have gone overkill with his SSD I/O thing simply because 12 lanes of 64-bit cells are the most cost-efficient, and they happened to be 12 lanes, which gives nice theoretical numbers.
 

longdi

Banned
geometry engine (another area where Sony actually invested money beyond the clock speed increase was customising the geometry engine with AMD),

Do you know something we don't?
Are PS5 primitive shaders stronger than SeX mesh shaders?

I doubt Mark/Sony paid money to deviate from RDNA2 when they could have chosen the bigger APU IP.
 

longdi

Banned
Imo the disconnect is because many of us take the Series X hardware as the gold standard for next gen.
Because of how One X was surprisingly powerful
Because of how clear and consistent MS information has been.
Because of what we see in PC space.

Thus there is faith that 2.4/4.8GB/s is the optimal feeding tube for next-gen hardware and game design.
Sony hyping their 5.5/9GB/s seems a bit overkill, hence the disbelief at their choices. Sometimes when you focus hard on one aspect, you unwillingly sacrifice other areas to make that part work. 🤷‍♀️
 
Last edited:
I don't understand why one manufacturer's claims are necessarily nonsense while the other's claims are gospel.
Years of R&D have gone into developing those consoles and most likely neither of them will completely meet the specs they advertise. We need to stop acting as if we understood every detail of the innards of consoles that we have never seen or used. Games will tell the full story and DF will help spell it out. On paper there is no doubt which one is more powerful; a basic understanding of computer architecture tells us that much. I'll delay judgment on everything else.
What are the claims? Those are the cold facts. Xbox: better GPU. PS5: better SSD. Power vs speed.

Edit:
100 GB is instantly accessible by the developer with an I/O throughput of 4.8 GB/s
(How can something be instant at 4.8GB/s?)
 
Last edited:

rntongo

Banned
What are the claims? Those are the cold facts. Xbox: better GPU. PS5: better SSD. Power vs speed.

Edit:
100 GB is instantly accessible by the developer with an I/O throughput of 4.8 GB/s
(How can something be instant at 4.8GB/s?)

Up to 100GB of a game install on the SSD can have its data directly accessed by the CPU. So not the whole 100GB at once.
 

93xfan

Banned
I agree. However, it just doesn't make sense if the PS5 basically runs at its max peak all the time - even 98% or 97% of the time - why not just lock the 10.28TF or whatever it is, in? If it can keep those sustained clocks, that's not variable.

Isn’t it a GPU vs CPU situation? They trade performance for one or the other?
 

Three

Member
Hey look, I didn't bring up any 2x multiplier or any of that stuff. I just said I thought DX12 Ultimate was a revision of DX12, adding more to it. The way you describe it makes it seem more like DX12 had a roadmap and basic 12 was the first part of that roadmap. Ultimate is the next part, fulfilling more of the specification that Basic didn't, due to whatever combination of market and development features.

In terms of what performance benefits it brings, the truth is we don't know yet. However, I'm willing to take MS's 2x to 3x claim at face value because, again, I'm an optimist and generally will take what the engineers at MS and Sony are claiming unless real-world results on their hardware end up forming a pattern of performance that betrays their claims. But we're not at that point yet, for either, because the consoles aren't out yet.

There could be an implementation of these DX12U features on XSX's Velocity Architecture that provide the performance they claim, I don't see much of a reason to cast doubt on their claims this early on. If actual performance falls short, then it'll fall short, and can be acknowledged as such. But for the time being, we should at least provide them the benefit of the doubt that seems to be afforded to Sony.



It very likely was RDNA2; the RT hardware would not have needed to be enabled for the Ariel iGPU profile testing, because Ariel was an RDNA1 chip. References to Navi 10 were likely pointing to the Ariel iGPU as well.

At this point it's more odd to question whether Oberon is the PS5 chip than to accept that it is, because there is no proof of any other chip matching up to the PS5 specs as we know them. The same way Arden is very likely the XSX chip. The differences in active clocks and active CUs between PS5/XSX and the Oberon/Arden chips can be rationalized through historical precedent (the Morpheus APU had parts of its devkit chip disabled for running PS4 regression compatibility, and the Scorpio APU had all CUs active for the devkit and 4 disabled for the retail unit; both of these match up almost exactly with the trends of the Oberon and Arden chips, respectively).

Ask yourself: from the 6th gen onward, have we ever gotten within 5-6 months of new system launches with ZERO info on the actual chips of said upcoming systems? It hasn't happened, because we've always had some concrete info on next-gen system chips by then. There are no other options for PS5 and XSX; Oberon and Arden are their respective APUs.



For the final/retail system, yes. But the devkits currently use "profiles", which hard-set one component at a lower power setting so the other can operate at a higher power setting, in both cases affecting the frequency/clocks on said components.

Again, have to stress this is only the case for the devkits, Cerny's said the retail system will effectively automate the power shifting on its own, in the background.



It wasn't a misprint; the devkits use power profiles, as you mentioned. And the power profiles hard-set certain parameters in terms of power load settings for CPU and GPU (and maybe other components too like the audio processor).

In that context the comment about throttling CPU to sustain max GPU clock wasn't really a misprint, since at least some devs are probably doing that right now with devkits. But the final retail system should have implemented the variable frequency stuff fully, and the process automated, so devs won't need to set things to hard power profiles (tho they need to manage their code to ensure they stay within power budget ranges, of course).



I'm not excusing anything; just saying not everyone who says the systems aren't full RDNA2 is trying to insinuate they are RDNA1. The fact of the matter is they are both custom GPUs that will use as much of the RDNA2 feature set as deemed required. And we're already hearing the systems may have some RDNA3 features; that doesn't mean they are RDNA3 (TBF I am a bit wary of the RDNA3 rumors but we'll see).



You're maybe taking the 6.9/8.1 numbers out of context. When Sony and MS give their TF numbers, they're speaking in theoretical terms. The higher the theoretical figure, the more headroom there is for actual real-world performance to reach. If the architectures are the same (as is the case here), then that ratio stays the same between the GPUs, which will generally be reflected in the numbers.

Again I suggest watching that NXGamer video (and the latest one on the SSD I/O while at it); they bring up the somewhat poor throughput utilization in real-world application terms with the PS4 and XBO GPUs. That number should be MUCH higher with the PS5 and XSX GPUs, but the point is that we'll likely never see a PS5 game actually "really" hit 10.275 TF even at max utilization, just like how we'll never actually "really" see an XSX game hit 12.147 TF at max utilization, in real-world game scenarios. But both systems should hit very close to those theoretical maximums, even higher than the numbers NXGamer gave IMHO.



Again though, they're paper specs, and MS gave sustained numbers whereas Sony did not clarify if theirs are sustained or peak. And there are two components to the SSD I/O: hardware and software. PS5's solution has the hardware advantage, but we could be in a situation where XSX's could end up with the software advantage and while that wouldn't close the delta on that front, it would shrink it to a notable amount.

This is just speculation though, because we don't have enough info on the SSD I/O in the systems yet. Yes, I know that sounds ridiculous given what we actually already know, but in the scope of ALL the tech that goes into even just the SSD I/O component, there's lots of crucial stuff we don't know yet. Officially, anyway.

There's also the fact that you can't look at deltas without considering the context of what the numbers actually reference. I'll put it like this; let's say the SSDs are like Subaru sedans and the CPUs/GPUs are like supercars. PS5 has a higher-end Subaru sedan and XSX has a lower-end one. CPU-wise let's say the PS5 has a Lamborghini Gallardo and the XSX has a Lamborghini Murciélago. And in terms of GPUs, the PS5 has a Ferrari F50 and the XSX has a Ferrari Enzo.

Those are very rough comparisons but work with me here. We've got a three-part F1 circuit race and PS5 has its three cars and XSX has its three cars. Now the PS5's Subaru is going to generally beat XSX's Subaru but we all know the Lambos and Ferraris are the stars of this F1 circuit race, they are simply performing on a completely different level. And they are both absolutely demolishing the Subarus. Now PS5 and XSX's Lambos and Ferraris may be relatively closer in performance than their Subarus, but the one with the higher-performing Lambos and Ferraris is still going to win.

And we're using an F1 circuit race example here because we're talking about overall performance of these things testing a multitude of their capabilities in combination within practical terms, not quick-burst performance of just select features. That's the scale these things really fit in when it comes to the overall architectures and where the components fall in place in terms of priority levels.
You've misunderstood what I meant. I am not claiming that some false information is being given by MS, or some betrayal. Taking it at face value would be accepting that the vast majority of that 2x-3x multiplier is the known method of SFS over transferring the whole texture, as is clearly stated by the spec and MS' PR material. I'm saying: why are some people in these threads, one in particular, then using that figure as some kind of XSX secret sauce? MS aren't the ones giving false info; it's just that some people here are trying hard to forcefully link that figure to some custom exclusive feature of the XSX. The reason is kind of clear too.
 
Last edited:

Redlight

Member
“full clock speed for the vast majority of the time”

Do you know where that quote came from exactly? It was mentioned as part of the Eurogamer article earlier, but the one I read said 'full clock speed most of the time'. I might've missed it being posted here.

For the record, I imagine that the PS5 does run at peak the vast majority of the time, but if Cerny actually said 'vast majority' it would be helpful if you could provide a link.
 

Eliciel

Member
What are the claims? Those are the cold facts. Xbox: better GPU. PS5: better SSD. Power vs speed.

Edit:
100 GB is instantly accessible by the developer with an I/O throughput of 4.8 GB/s
(How can something be instant at 4.8GB/s?)

It's like marketing; it's like defining that everything >4.8GB/s is considered "instant". Not the first time someone would define such "segments" for marketing reasons...
 

RaySoft

Member
I agree with what you said, the only problem is we don't have a clue on the amount of data that was streamed in that scene, what the needed speeds were to get there. But obviously the PS5 can just load more assets, but can the GPU handle that much information? Maybe the XSX SSD is at 4.8GB/sec because they think that more assets would be overkill since the GPU wouldn't be able to render all of it?
Sometimes you can achieve the same result by doing things in a slightly different manner. The PS5's GPU is closer to its SSD so it won't need as much caching and planning ahead as the XSX may do, but that doesn't mean the XSX couldn't achieve the same result.
 

Deleted member 775630

Unconfirmed Member
Sometimes you can achieve the same result by doing things in a slightly different manner. The PS5's GPU is closer to it's SSD so it won't need as much caching and planing ahead as the XSX may do, but that doesn't mean the XSX couldn't achieve the same result.
True, we'll see what the games tell us. 3rd party games will look better on the XSX, but we'll see how first party games manage.
 

Dory16

Banned
What are the claims? Those are the cold facts. Xbox: better GPU. PS5: better SSD. Power vs speed.

Edit:
100 GB is instantly accessible by the developer with an I/O throughput of 4.8 GB/s
(How can something be instant at 4.8GB/s?)
Let me get this right. You're saying GPU = power and SSD = speed? In computer graphics?
Why not leave technical analysis to qualified people? You're a fan, I understand, but such nonsensical statements will just irritate those who have a clue about rendering.
 
That's interesting! I had no idea. Do you have a source I can read about it?


Inside Xbox Series X: the full specs
We visit Microsoft for a briefing on the impressive tech of its next flagship console.

Article by Richard Leadbetter, Technology Editor, Digital Foundry
Updated on 16 March 2020

"...There are customisations to the CPU core - specifically for security, power and performance, and with 76MB of SRAM across the entire SoC, it's reasonable to assume that the gigantic L3 cache found in desktop Zen 2 chips has been somewhat reduced. The exact same Series X processor is used in the Project Scarlett cloud servers that'll replace the Xbox One S-based xCloud models currenly being used.

For this purpose, AMD built in EEC error correction for GDDR6 with no performance penalty (there is actually no such thing as EEC-compatible G6, so AMD and Microsoft are rolling their own solution),
while virtualisation features are also included. And this leads us on to our first mic-drop moment: the Series X processor is actually capable of running four Xbox One S game sessions simultaneously on the same chip, and contains an new internal video encoder that is six times as fast as the more latent, external encoder used on current xCloud servers.."

Asked and answered.
 
Last edited:

ToadMan

Member
What are the claims? Those are the cold facts. Xbox: better GPU. PS5: better SSD. Power vs speed.

Edit:
100 GB is instantly accessible by the developer with an I/O throughput of 4.8 GB/s
(How can something be instant at 4.8GB/s?)

So much confusion over this now...

This just means Xsex developers can access 100GB of the SSD storage directly, like accessing a slow piece of system RAM - no file system lookup required. Outside the 100GB, a file system operation is required.

And before we go there: yes, the PS5 does the same, except one can directly address the whole 825GB of SSD storage, not just 100GB. Oh, and yes, the PS5 has a file system developers can use if they wish.

Is any of this going to matter? Almost certainly not, and MS are working hard on their compression stuff to make sure it doesn't matter.

EDIT - I should probably point out the access rate of 2.4/5.5GB/s is achieved regardless of which access method is used on both systems - the difference is that a file lookup takes a little extra time while direct addressing avoids that overhead.
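A loose PC analogy of the difference being described (only an analogy, using a hypothetical packed asset file; it's not how either console actually exposes its storage): a conventional read pays for a path lookup, open and seek before any data moves, while a memory-mapped view is set up once and then indexed directly, like slow RAM.

```python
import mmap

PACK = "assets.pak"   # hypothetical packed asset file
OFFSET, SIZE = 1024 * 1024, 64 * 1024

# Conventional route: filesystem lookup + open + seek + read for each request.
with open(PACK, "rb") as f:
    f.seek(OFFSET)
    blob = f.read(SIZE)

# "Direct address" route: map the pack once, then just slice into it like memory.
with open(PACK, "rb") as f:
    view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    blob = view[OFFSET : OFFSET + SIZE]
    view.close()
```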
 
Last edited:
Yeah, at 4.8 GB/s. You can't transfer faster than that (not counting 6 GB/s, because that's a theoretical max, like 22 is on the PS5). Physically it's impossible. It is what it is. It would go faster than 4.8 if MS had put the same speed in the XSX as Sony did in the PS5, but they didn't.
It's the same as when I'm trying to say: yeah, it's possible on the XSX to push RAM data at 650GB/s through a 560GB/s bus. No, physically, it's impossible.

6GB/s isn't a theoretical max. It is the stated MINIMUM decompression throughput rate of the hardware decompression block.

Much like the stated locked compute of the XSX CU array is 12.15 TF.

Upon some reflection and reading through this thread and others, I actually think I understand why they chose a decompression rate of 6GB/s... but it's simply random speculation, nothing more.
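One hedged way to read that 6 GB/s "minimum": it's roughly the output rate the decompression block would have to sustain so it never becomes the bottleneck when the drive feeds it 2.4 GB/s of well-compressed data. The ratios below are assumptions for illustration, not official figures:

```python
# Output the decompressor must emit = raw input rate * compression ratio of the stream.
RAW_GB_S = 2.4
for ratio in (2.0, 2.5):   # assumed average vs. favourable compression ratios
    print(f"{ratio}:1 data -> decompressor must sustain {RAW_GB_S * ratio:.1f} GB/s out")
```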
 