
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

jimbojim

Banned
You forgot to edit out the red part that you claimed was a lie. Please do some research before posting here. I already posted a tweet from an MSFT engineer with proof of the claim.

No, I forgot to paint red the rest of the bolded part of the sentence.
 

rntongo

Banned
Wait, what? Apologize to you, to Phil Spencer, to the Xbox Series X? I edited a post because I saw it says "or higher" on the official page. Before that, I thought it was 2x-3x because that figure is often mentioned here.

Did you ever apologize for spreading FUD on ERA? No, you didn't. You didn't even try to edit your posts. But whatever.

Sorry, XSX! I apologize. You're the most powerful console in the universe in every damn segment. Your SSD speed is like the speed of light. I'm so sorry that I've insulted MSFT and the Velocity Architecture. All hail rntongo.

Honestly, you talk incoherently. You misrepresented a pedantic exchange I had on ERA as FUD. It's a good thing you edited the reply, because you could have misled people. On the other hand, you forgot to edit out the red part you claimed was a lie. I would suggest you edit that as well.
 
Last edited:
No, we actually do not. No one yet understands how the 100 GB partition works. No one understands why MS went with a distinct VRAM solution at higher bandwidth alongside a slower SSD implementation (how do they intend to feed it?), etc. Clearly SFS and BCPACK are part of the XVA solution, along with DirectStorage and the hardware decompression block. ECC on the memory block? For what? It's a console, not a server!

If you know tell us. LOL

Don't you think it's kind of weird that Microsoft, while being so transparent, didn't go into much depth on their I/O system?

For the same reason Cerny avoided talking about the GPU much, while he didn't hesitate to give us details on the PS5 I/O.

Then there's this.

xboxseriesxvsps5.jpg
 

jimbojim

Banned
Honestly, you talk incoherently. You misrepresented a pedantic exchange I had on ERA as FUD. It's a good thing you edited the reply, because you could have misled people. On the other hand, you forgot to edit out the red part you claimed was a lie. I would suggest you edit that as well.

Look who's talking.

Btw, I mentioned 2x-3x in the first place because I saw this post of yours when I did some research. I typed 2x-3x into the search window, entered your name, and got this:

You're really confusing me. Microsoft stated that SFS will provide a 2-3x multiplier for RAM and SSD. When a Microsoft engineer working on SFS was asked, he declined to comment but gave a scenario of a 4x improvement from using only 25% of a texture. What more do you want? All the tweets have been posted here.

So don't blame me; I've even apologized to the XSX and saluted you.
 
Last edited:

ethomaz

Banned
Don't you think it's kind of weird that Microsoft, while being so transparent, didn't go into much depth on their I/O system?

For the same reason Cerny avoided talking about the GPU much, while he didn't hesitate to give us details on the PS5 I/O.

Then there's this.

xboxseriesxvsps5.jpg
I'm curious how the same feature ends up written in two different ways for PS5 and Xbox in a comparison table.
 
Last edited:

Ascend

Member
When MSFTWTFBBQ said on the official page it's the 2x-3x crap (or higher), but nevertheless it's much higher for some reason, it's 4x. Pack it up, Panajev.

Just throwing this out here again... Bolding the relevant parts.

As textures have ballooned in size to match 4K displays, efficiency in memory utilisation has got progressively worse - something Microsoft was able to confirm by building in special monitoring hardware into Xbox One X's Scorpio Engine SoC. "From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later. Microsoft considers these aspects of the Velocity Architecture to be a genuine game-changer, adding a multiplier to how physical memory is utilised.



And

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.

Point is, SFS will not give a constant multiplier, just like compressed data will not give a constant output value. It is variable and depends on the situation. 4x is not out of the question. Neither is 1.1x. They expect the average to be between 2x and 3x.
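The arithmetic behind that range is simple enough to sketch. A minimal, purely illustrative calculation (the function name and inputs are mine, not Microsoft's): if a game only ever touches a fraction of its resident texture pages, skipping the untouched pages multiplies the effective memory and I/O by the reciprocal of that fraction.

```python
# Illustrative only: the reciprocal relationship behind the quoted figures.
def effective_multiplier(fraction_of_pages_accessed):
    """Multiplier on effective memory/I/O if only the accessed pages are loaded."""
    if not 0.0 < fraction_of_pages_accessed <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    return 1.0 / fraction_of_pages_accessed

# Goossen's observation: games touch one-half to one-third of their pages.
print(effective_multiplier(0.5))    # 2.0 -> the 2x case
print(effective_multiplier(0.25))   # 4.0 -> the "only 25% of a texture" 4x case
```

A scene touching a third of its pages lands at roughly 3x, which is exactly why the claim is a range rather than a constant.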

People can argue all they want about this being done in the past. Whether it has or not, who cares? Obviously the majority of games are not doing it. The XSX has hardware to facilitate this. If it's easy to implement on the XSX, the majority of games will simply do it. If it's less easy on the PS5 and would use CPU resources as well, developers may prefer to simply use the raw bandwidth of the PS5 and implement the technique on the XSX only, reducing the practical gap between the two consoles. And if they decide to do it on the PS5 as well, well, good for them.

Saying this has been done before and is therefore unimportant is like saying "you hit a home run, but it was done before, so your home run doesn't actually count on the scoreboard". It's ridiculous.
 
Don't you think it's kind of weird that Microsoft, while being so transparent, didn't go into much depth on their I/O system?

For the same reason Cerny avoided talking about the GPU much, while he didn't hesitate to give us details on the PS5 I/O.

Then there's this.

xboxseriesxvsps5.jpg

All I know, and have stated, is that both companies are entering a season of deep dives on their projects. I'm sure there's lots more to know about the PS5's VRS, RT, and SF solutions. I also know that there has not yet been a Microsoft deep dive into the XVA.

In the I/O wars we know a lot (but not everything) about the PS5's solution. We know the names of the technologies within the XVA, but we don't have insight into how they work together, or numbers regarding their output, whether target or actual. So there's lots more to learn.

What doesn't work are assertions that MS is misleading people, or that people just "don't understand." That's not good personal or technical chat.
 
All I know, and have stated, is that both companies are entering a season of deep dives on their projects. I'm sure there's lots more to know about the PS5's VRS, RT, and SF solutions. I also know that there has not yet been a Microsoft deep dive into the XVA.

In the I/O wars we know a lot (but not everything) about the PS5's solution. We know the names of the technologies within the XVA, but we don't have insight into how they work together, or numbers regarding their output, whether target or actual. So there's lots more to learn.

What doesn't work are assertions that MS is misleading people, or that people just "don't understand." That's not good personal or technical chat.
It could also be because the XVA doesn't have much to talk about. In the end, Xbox wants to put all its games on PC the same day, and for the first two years they will release all their first-party titles on Xbox One as well. Even SF is compatible with Turing, so maybe that's all there is to it.
 

Segslack

Neo Member
Sorry guys, but it seems this thread doesn't have anything new. Every time I come here I see the same old tweets/info from March and April from the guys at MS... lol
 

Three

Member
Dude, I've read your previous replies and you seem very reasonable, but this is such a false equivalence. MSFT has made some bold claims about their texture streaming. Maybe Sony has something as good, or close, or maybe even better. But the fundamental idea is sound: if you can efficiently stream textures, you get more bang for your I/O throughput.

So are you going to stop calling people liars? Can you at least admit that SF is in older GPUs, not just the XSX?

They've not made a bold claim. It's their fans who are connecting the wrong dots, and MS are probably loving that; they didn't respond to those asking for clarification.

MS made a clear claim:

"This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

"So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

Any impartial person would understand that any current technology that loads only the parts of textures visible in the scene would already see the same benefit.

Instead this was turned into an MS-only feature because of someone pointing to a tweet not related to that figure. Then we got the "but is it in others? Because we don't know if it is." You can clearly deduce, from how they claim they are getting that 2x saving, that any other method doing the exact same thing (only loading visible texture data) does it for the same reason and stands to benefit the same.

Instead we have magic unknown tech.

Smithg mentions "sampling faster", but this would actually have the opposite effect on bandwidth (i.e. tax it more as you create your sampler textures) and no effect on memory usage, as it's still the same scene you're rendering. It would only affect framerate. The only thing that would make sense is if it somehow did a 2x better job than others at knowing which textures to evict, but no real good evidence of that has been presented here or compared against similar tech.
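For what it's worth, the "only load what's visible" idea the quoted passage describes is mechanical enough to sketch. This is a hypothetical toy (the tile size and scene data are invented here, not any console's implementation): given the set of texel coordinates a scene actually samples, you derive the set of texture tiles worth loading.

```python
TILE = 64  # tile edge in texels; illustrative, real PRT tiles are sized in KB

def tiles_needed(sampled_texels):
    """Map sampled texel coordinates to the set of texture tiles to load."""
    return {(x // TILE, y // TILE) for (x, y) in sampled_texels}

# A 4096x4096 texture where the scene only samples one small corner:
sampled = [(x, y) for x in range(256) for y in range(128)]
needed = tiles_needed(sampled)
total = (4096 // TILE) ** 2
print(f"load {len(needed)} of {total} tiles")  # load 8 of 4096 tiles
```

Whether that tile selection is driven by a shader-side approximation or by hardware sampler feedback, the saving comes from the same place, which is the point being argued: the 2-3x figure says more about how wasteful whole-mip loading is than about any one vendor's hardware.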
 
Last edited:
So are you going to stop calling people liars? Can you at least admit that SF is in older GPUs, not just the XSX?

They've not made a bold claim. It's their fans who are connecting the wrong dots, and MS are probably loving that; they didn't respond to those asking for clarification.

MS made a clear claim:

"This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

"So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

Any impartial person would understand that any current technology that loads only the parts of textures visible in the scene would already see the same benefit.

Instead this was turned into an MS-only feature because of someone pointing to a tweet not related to that figure. Then we got the "but is it in others? Because we don't know if it is." You can clearly deduce, from how they claim they are getting that 2x saving, that any other method doing the exact same thing (only loading visible texture data) does it for the same reason and stands to benefit the same.

Instead we have magic unknown tech.

Smithg mentions "sampling faster", but this would actually have the opposite effect on bandwidth (i.e. tax it more as you create your sampler textures) and no effect on memory usage, as it's still the same scene you're rendering. It would only affect framerate. The only thing that would make sense is if it somehow did a 2x better job than others at knowing which textures to evict, but no real good evidence of that has been presented here or compared against similar tech.

It's not like someone would show extreme examples when presenting a new technology:

Gn9yxa7.jpg


Now an example from Microsoft:
https://blogs.windows.com/windowsex...rces-enables-optimized-pc-gaming-experiences/

IgclpOI.jpg
 
Last edited:

rntongo

Banned
So are you going to stop calling people liars? Can you at least admit that SF is in older GPUs, not just the XSX?

They've not made a bold claim. It's their fans who are connecting the wrong dots, and MS are probably loving that; they didn't respond to those asking for clarification.

MS made a clear claim:

"This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

"So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

Any impartial person would understand that any current technology that loads only the parts of textures visible in the scene would already see the same benefit.

Instead this was turned into an MS-only feature because of someone pointing to a tweet not related to that figure. Then we got the "but is it in others? Because we don't know if it is." You can clearly deduce, from how they claim they are getting that 2x saving, that any other method doing the exact same thing (only loading visible texture data) does it for the same reason and stands to benefit the same.

Instead we have magic unknown tech.

Smithg mentions "sampling faster", but this would actually have the opposite effect on bandwidth (i.e. tax it more as you create your sampler textures) and no effect on memory usage, as it's still the same scene you're rendering. It would only affect framerate. The only thing that would make sense is if it somehow did a 2x better job than others at knowing which textures to evict, but no real good evidence of that has been presented here or compared against similar tech.

I don't understand what your argument is. MSFT's claim is that SF (which is already available elsewhere) is only one part of SFS; SF will be part of DX12U, but the XSX will have custom hardware for SF and also for SFS. The onus is on MSFT to prove their 2-3x figures, and on Sony to produce something equivalent to SFS (not SF), maybe even better. It's that simple.
 

rntongo

Banned
All I know, and have stated, is that both companies are entering a season of deep dives on their projects. I'm sure there's lots more to know about the PS5's VRS, RT, and SF solutions. I also know that there has not yet been a Microsoft deep dive into the XVA.

In the I/O wars we know a lot (but not everything) about the PS5's solution. We know the names of the technologies within the XVA, but we don't have insight into how they work together, or numbers regarding their output, whether target or actual. So there's lots more to learn.

What doesn't work are assertions that MS is misleading people, or that people just "don't understand." That's not good personal or technical chat.

Exactly. Spot on
 
Smithg mentions "sampling faster", but this would actually have the opposite effect on bandwidth (i.e. tax it more as you create your sampler textures) and no effect on memory usage, as it's still the same scene you're rendering. It would only affect framerate. The only thing that would make sense is if it somehow did a 2x better job than others at knowing which textures to evict, but no real good evidence of that has been presented here or compared against similar tech.

It’s kind of amazing to me that this is your takeaway from my position after all that I’ve said to you. It’s discouraging how little you seem to understand what I’ve been trying to say. I’m not saying you are unintelligent, and who knows, maybe I’m just wrong. But this is clearly an incredible communication failure for me personally.

I guess this just illustrates the futility of arguing on message boards.
 

Three

Member
I don't understand what your argument is. MSFT's claim is that SF (which is already available elsewhere) is only one part of SFS; SF will be part of DX12U, but the XSX will have custom hardware for SF and also for SFS. The onus is on MSFT to prove their 2-3x figures, and on Sony to produce something equivalent to SFS (not SF), maybe even better. It's that simple.
The tweet made it clear what that custom hardware is (i.e. a failsafe mip already in memory for when the texture fails to load in time), and it wasn't related to the figure; that was somebody connecting the wrong dots.
 

rntongo

Banned
The tweet made it clear what that custom hardware is (i.e. a failsafe mip already in memory for when the texture fails to load in time), and it wasn't related to the figure; that was somebody connecting the wrong dots.

SF, which is part of SFS, which MSFT claims brings about a 2-3x multiplier, doesn't play a part in the 2-3x figure? Do you see how ridiculous that sounds? I think you should go read about SFS in the Xbox Series X technology glossary, then read the tweets from JamesStanard on Twitter.
 
It’s kind of amazing to me that this is your takeaway from my position after all that I’ve said to you. It’s discouraging how little you seem to understand what I’ve been trying to say. I’m not saying you are unintelligent, and who knows, maybe I’m just wrong. But this is clearly an incredible communication failure for me personally.

I guess this just illustrates the futility of arguing on message boards.

It's not just you, my friend. It's hard to have a decent conversation when the key takeaways are always:

"Yeah, well, that's not anything new" when it is.

"Those numbers can't be real" when no one knows if any numbers are real.

"Microsoft power of the cloud" as some sort of dismissive reach back to an unrelated technology/service/device.

"That's just magic sauce" when engineers give direct specifications that you can source and cite in multiple places.

Yet they totally understand everything Cerny was talking about, and there can be no dispute as to whether the PS5's technology can actually achieve its claims, nor whether what MS has included in the box, beyond what's in the PS5, can match or exceed Sony's claims.

It's Sony or die around here.

Very strange for a technology forum.
 

rntongo

Banned
It’s kind of amazing to me that this is your takeaway from my position after all that I’ve said to you. It’s discouraging how little you seem to understand what I’ve been trying to say. I’m not saying you are unintelligent, and who knows, maybe I’m just wrong. But this is clearly an incredible communication failure for me personally.

I guess this just illustrates the futility of arguing on message boards.

It's honestly sad. I feel your pain. The good thing is we're soon going to get more info and see exactly how SFS performs.
 
Last edited:

Three

Member
SF, which is part of SFS, which MSFT claims brings about a 2-3x multiplier, doesn't play a part in the 2-3x figure? Do you see how ridiculous that sounds? I think you should go read about SFS in the Xbox Series X technology glossary, then read the tweets from JamesStanard on Twitter.
We have better information than the glossary. We have an interview with Goossen on Eurogamer where he describes where the figure comes from. The tweets have been discussed to death already. You're just muddying the waters: "virtual textures, tiled resources, which is part of SF, which is part of SFS, which is part of DX12U, which is using custom hardware on XSX." You're just confusing everything, needlessly or intentionally.

The part about the custom hardware on the XSX for SFS page misses is not related to the 2x figure. In fact, it assumes the page failed to load in time.
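That failsafe is easy to picture. A purely hypothetical sketch (my own function and mip numbering, not the actual hardware behavior): if the mip level the sampler wants isn't resident yet, clamp to the closest lower-resolution mip that is, so the frame renders blurry-but-complete instead of stalling.

```python
# Hypothetical illustration of a "failsafe mip" clamp (not console hardware).
def sample_with_fallback(wanted_mip, resident_mips):
    """Return the mip to actually sample: the wanted one, or the closest
    lower-resolution mip already in memory (higher index = blurrier)."""
    candidates = [m for m in resident_mips if m >= wanted_mip]
    if not candidates:
        raise RuntimeError("not even the lowest-resolution mip is resident")
    return min(candidates)

# Mips 3..6 are resident; the shader wants sharp mip 1, which hasn't
# streamed in yet, so it gets mip 3 this frame instead of stalling.
print(sample_with_fallback(1, {3, 4, 5, 6}))  # 3
print(sample_with_fallback(4, {3, 4, 5, 6}))  # 4
```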

It’s kind of amazing to me that this is your takeaway from my position after all that I’ve said to you. It’s discouraging how little you seem to understand what I’ve been trying to say. I’m not saying you are unintelligent, and who knows, maybe I’m just wrong. But this is clearly an incredible communication failure for me personally.

I guess this just illustrates the futility of arguing on message boards.

I've been reading and interpreting everything you've typed, and I can only apologise if I've misinterpreted something you've said, but you said this when asked what it could be doing better:

I didn't say all the texture calls. I said "more frequent". They both incur a cost (I think I've said that before in this thread).

This all boils down to a specific implementation, but I believe the general assertion is that this is a lot cheaper so you can do it a lot more. Also because of the precision involved you can rely on it more.
You said "more frequent". I already asked for clarification, but you didn't give any. More frequent sampling would actually cost more bandwidth.


My main points of contention from the beginning have been:
1) What is the baseline for the 2x memory AND bandwidth saving? It is likely based on old engines that rely on an HDD, and therefore assumes the game doesn't use PRT efficiently to stream from it.
2) If the baseline is the above, this is possible on old GPUs and almost certainly on new ones; it is not some custom XSX hardware delivering the 2x or 3x edge over other GPUs that some people are trying to make it out to be.

That's it. If you can make a case to the contrary, I would gladly take it on board.
 
Last edited:

oldergamer

Member
Sounds like it's a waste of time for people trying to prove it's an old feature. The hardware either has it or it doesn't. I'm betting the PS5 doesn't, or that it won't be able to compete on this feature without sacrificing performance elsewhere. Either way, this is a win for Xbox first-party titles if they use this feature.
 
You said "more frequent". I already asked for clarification, but you didn't give any. More frequent sampling would actually cost more bandwidth.

I have so little willpower in the face of an argument.

Texture sampling occurs for every pixel in every frame. That's just inherent in texture mapping for a 3D game. Sampler Feedback just exposes the results of this. It doesn't need to sample again, because the information is already there. That's the whole point.

This information (which texels were used), previously discarded (and then approximated again with a shader for texture streaming), is now exposed because new hardware allows it to be written back. No approximations needed!

You're applying the limitations of that old method to Sampler Feedback when they are fundamentally different. No additional sampling is required - it's already done when you mapped that texture! Do you see how it's different?
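A rough mental model of that write-back, with invented names and map layout (the real D3D12 feature records into feedback textures, not Python dicts): the feedback record is a side effect of sampling that already happens, so the streaming system learns which mip tiles were touched without a separate approximation pass.

```python
# Purely illustrative model of sampler feedback (not the real D3D12 API):
# the feedback map piggybacks on sampling that happens anyway, so the
# streaming system learns which mip regions were touched "for free".
from collections import defaultdict

feedback_map = defaultdict(int)  # (mip, tile_x, tile_y) -> times sampled

def sample_texture(u, v, mip, tex_size=4096, tile=64):
    """Ordinary texture sample; as a side effect, record what was touched."""
    x, y = int(u * tex_size), int(v * tex_size)
    # ... the actual texel fetch/filter would happen here ...
    feedback_map[(mip, x // tile, y // tile)] += 1  # the "written back" info

# Render a frame's worth of samples clustered in one corner of mip 2:
for i in range(1000):
    sample_texture(u=(i % 100) / 1000, v=(i % 100) / 1000, mip=2)

# Streaming pass: the touched tiles are exactly the residency we need.
print(sorted(feedback_map))
```

No extra sampling pass appears anywhere above; the record is made inside the sample that the renderer was issuing regardless, which is the distinction being drawn against the old shader-approximation method.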

As for the Microsoft claims about how great this is, who knows!

Not any of us!
 

just tray

Banned
I'm impressed with the PS5 SSD, but damn. Turn off the fanboyism and take a trip back to reality. People act as if GPUs, CPUs, and RAM don't matter. Both systems will be great, and the PS5 will have the best Sony exclusives of any of its consoles' lifespans.

I'm quite satisfied with both.
If only Nintendo would partner with Alienware for the next Switch; then it's a trifecta.
 

Tiamat2san

Member
So?
I'm trying to follow this topic, but it's becoming less clear as it progresses.

Can someone summarise in simple terms whether the technology Microsoft is using has a big benefit?
And how?
Explain it as if I were your great-grandmother.
 
Last edited:
So?
I'm trying to follow this topic, but it's becoming less clear as it progresses.

Can someone summarise in simple terms whether the technology Microsoft is using has a big benefit?
And how?
Explain it as if I were your great-grandmother.

In summary, there's a theory that because of BCPACK, SFS, and other features in the Xbox I/O system, it will be superior to Sony's I/O system.

I'm still not seeing how that's possible, since it goes against the spec sheets.

But I guess we will have to wait until June to find out if the rumors are true.
 
Last edited:

rntongo

Banned
We have better information than the glossary. We have an interview with Goossen on Eurogamer where he describes where the figure comes from. The tweets have been discussed to death already. You're just muddying the waters: "virtual textures, tiled resources, which is part of SF, which is part of SFS, which is part of DX12U, which is using custom hardware on XSX." You're just confusing everything, needlessly or intentionally.

The part about the custom hardware on the XSX for SFS page misses is not related to the 2x figure. In fact, it assumes the page failed to load in time.



I've been reading and interpreting everything you've typed, and I can only apologise if I've misinterpreted something you've said, but you said this when asked what it could be doing better:


You said "more frequent". I already asked for clarification, but you didn't give any. More frequent sampling would actually cost more bandwidth.


My main points of contention from the beginning have been:
1) What is the baseline for the 2x memory AND bandwidth saving? It is likely based on old engines that rely on an HDD, and therefore assumes the game doesn't use PRT efficiently to stream from it.
2) If the baseline is the above, this is possible on old GPUs and almost certainly on new ones; it is not some custom XSX hardware delivering the 2x or 3x edge over other GPUs that some people are trying to make it out to be.

That's it. If you can make a case to the contrary, I would gladly take it on board.

I'm honestly amazed by how much you have misunderstood this whole thing. You do realize all the custom hardware for texture streaming falls under SFS? And that SFS is responsible for efficient texture streaming?

Here is a quote from the Eurogamer article with Andrew Goossen, where he explains how the 2-3x gain is made using SFS and its features:

"From this, we found a game typically accessed at best only one-half to one-third of their allocated pages over long windows of time," says Goossen. "So if a game never had to load pages that are ultimately never actually used, that means a 2-3x multiplier on the effective amount of physical memory, and a 2-3x multiplier on our effective IO performance."

A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later.

Link to the Eurogamer article:
 

psorcerer

Banned
This information (which texels were used) that’s previously been discarded (and then approximated again with a shader for texture streaming) is now exposed because of new hardware allowing it to be written back.

This will heavily impair performance. You need to sync-write stuff back to VRAM. That's why MSFT suggests using only 1% of the feedback maps.
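The trade-off being described can be sketched numerically. All numbers and names below are hypothetical, invented for illustration: record feedback for only a small random fraction of samples, since residency decisions are statistical and don't need every sample's vote, and every skipped record is a VRAM write-back avoided.

```python
# Sketch of the write-back concern (all numbers hypothetical): writing
# feedback for every texture sample multiplies VRAM write traffic, so you
# record feedback for only a small random fraction of samples instead.
import random

def frame_feedback_writes(samples_per_frame, write_fraction):
    """Count how many feedback writes a frame issues at a given sampling rate."""
    return sum(1 for _ in range(samples_per_frame)
               if random.random() < write_fraction)

random.seed(0)  # deterministic for the example
full = frame_feedback_writes(1_000_000, 1.0)    # naive: one write per sample
sparse = frame_feedback_writes(1_000_000, 0.01) # ~1% as suggested above
print(full, sparse)
```

The sparse run issues roughly a hundredth of the write-backs while still touching every heavily-sampled tile with overwhelming probability, which is why a low sampling rate is good enough for streaming decisions.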
 
Don't you think it's kind of weird that Microsoft, while being so transparent, didn't go into much depth on their I/O system?

For the same reason Cerny avoided talking about the GPU much, while he didn't hesitate to give us details on the PS5 I/O.

Then there's this.

xboxseriesxvsps5.jpg

Because MS are doing a system architecture presentation in August where they'll probably go in-depth on the Velocity Architecture, if they haven't done so already between now and the July event. Also, FWIW, they've actually talked a lot about the I/O system, arguably as much as Sony has. However, much of MS's approach is software-driven, and the software implementation is probably being tuned even right now, so it would be premature to go in-depth on those aspects until they are finalized.

That graph also has a few things wrong. I'm nitpicking here, but the TF numbers should be 10.275 TF and 12.147 TF respectively. Also, if they're using "bit" to describe the buses, why not use it consistently across both columns? Just really small errors that don't mean much at the end of the day, but I like numbers and nomenclature to be clean and consistent with this kind of stuff, personally :p

In summary, there's a theory that because of BCPACK, SFS, and other features in the Xbox I/O system, it will be superior to Sony's I/O system.

I'm still not seeing how that's possible, since it goes against the spec sheets.

But I guess we will have to wait until June to find out if the rumors are true.

Wait, who's been suggesting this? It reads like a bad interpretation. The prevailing idea I've seen is that those things will help narrow the delta between the two I/O systems, which is perfectly plausible considering there are multiple ways of addressing the bottlenecks Cerny mentioned. Sony has taken their approach, and MS has taken theirs.

Now, if MS's hardware in the I/O stack were a little beefier, I suspect the software-optimized implementations would probably make the delta imperceptible, or perhaps even eliminate it. To my knowledge that isn't going to happen, but I can see a scenario where the delta in the paper specs translates to a smaller real-world delta in actual performance when all things are considered.

People are just trying to guess what that delta actually shrinks to. I'd think that for the benefit of multi-platform games, the smaller the delta the better. Something around 50%-75%, still favoring the PS5's approach, is probably likely. But we'll have a better picture of where it actually falls once they give a deeper system analysis, most likely in August, though some parts could be discussed before then.

My understanding is that, just like with the GPUs, it's perfectly fine to assume the paper specs of the two SSD I/O systems are not truly indicative of actual performance. There are some areas with the GPUs (as far as we know) where Sony has made decisions that help them punch a bit above their weight, such as the clocks, which help with things like pixel fillrate and cache speed (NOT cache bandwidth; that's something different).

All the same, we could have scenarios where the XSX's SSD I/O performs better than Sony's in selective aspects and punches above its weight, while Sony's still holds the overall advantage since the hardware is beefier. At the very least, MS seem confident their approach is competitive enough, so I would think there's more to their implementation than what their SSD I/O paper specs belie.

But as you said, we'll have to wait until more official information arrives.
 
Last edited:

Sw0pDiller

Banned
Funny to see all the theories saying that MS has some kind of software to balance out the hardware advantage the PS5 has, while all along saying Sony likely will not have these software-related advantages. (It's important, because if Sony did, the whole story would fall apart.)

Story around the web is that the API Sony has made is vastly superior to what MS has made with DirectX. Plus, anything can be altered on the software side going into the generation. Hardware, not so much.
 

rntongo

Banned
We have better information than the glossary. We have an interview with Goossen on Eurogamer where he describes where it's from. The tweets have been discussed to death already. You're just muddying the waters. "Virtual textures, tiled resources, which is part of SF, which is part of SFS, which is part of DX12U, which is using custom hardware on XSX." You're just confusing everything, needlessly or intentionally.

The part about the custom hardware on XSX for SFS page misses is not related to the 2x figure. In fact, it assumes the data failed to load in time.



I've been reading and interpreting everything you've typed, and I can only apologise if I've misinterpreted something you've said, but you said this when asked about what it could be doing better:


You said more frequent. I asked for clarification already but you didn't give any. More frequent sampling would actually be more costly on bandwidth.


My main points of contention from the beginning have been:
1) What is the baseline for the 2x memory AND bandwidth saving? It is likely based on old engines that rely on an HDD and therefore don't use PRT efficiently to stream from it.
2) If the baseline is the above, this is possible on old GPUs and almost certainly on new ones; it is not some custom XSX hardware granting the 2x or 3x edge over other GPUs that some people are trying to make it out to be.

That's it. If you can make a case contrary to that, I would gladly take it on board.
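To put a rough number on point 1, here is a hypothetical back-of-the-envelope comparison; the 4096x4096 size, 1 byte/texel format, and the assumption that only ~30% of the top mip ever gets sampled are mine, not from any MS figure:

```python
# Hypothetical numbers: memory for a 4096x4096 texture at 1 byte/texel if
# the whole mip chain is resident (HDD-era baseline) vs. only the sampled
# 64 KB tiles of the top mip (PRT-style streaming).
TILE = 64 * 1024  # tile size used by D3D12 tiled resources

def full_mip_chain_bytes(size=4096, bpp=1):
    total, s = 0, size
    while s >= 1:
        total += s * s * bpp
        s //= 2
    return total

def resident_tiles_bytes(size=4096, bpp=1, sampled_fraction=0.3):
    # Simplification: only the top mip is tiled; lower mips stay resident.
    top = size * size * bpp
    sampled = int(top * sampled_fraction)
    tiles = -(-sampled // TILE)  # ceiling division
    return tiles * TILE + (full_mip_chain_bytes(size, bpp) - top)

ratio = full_mip_chain_bytes() / resident_tiles_bytes()
print(round(ratio, 1))  # ~2.1x less memory under these assumptions
```

Under those (entirely assumed) inputs the saving already lands in the claimed 2x-3x range, which is why the baseline matters so much.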

You're grasping at straws and no amount of explanation can help you. Until the actual tech is showcased and explained you're going to maintain negative presumptions.
 

rntongo

Banned
Funny to see all the theories saying that MS has some kind of software to balance out the hardware advantage the PS5 has, while all along saying Sony likely will not have these software-related advantages. (It's important, because if Sony did, the whole story would fall apart.)

Story around the web is that the API Sony has made is vastly superior to what MS has made with DirectX. Plus, anything can be altered on the software side going into the generation. Hardware, not so much.

Just hear me out. The fundamental argument is simply that on the I/O side, MSFT has made an innovative solution by focusing on efficient texture streaming and this could offer overall similar performance to the PS5 SSD. On the other hand, it's a hybrid approach involving software and hardware. We've seen Sony do something similar.

For example, look at the power unit in the PS5 APU: it uses an algorithm to determine whether a workload is GPU- or CPU-intensive, then pushes the GPU to 2.23GHz if it's the former. In essence this enables the PS5 APU to maintain very high GPU clock speeds and lets the console effectively hit a 10 Tflop target with fewer CUs. If Sony had added just 8 more Compute Units to the GPU (36 to 44, for roughly 12.56 Tflops at the same clock), they would have had a more powerful GPU than the XSX despite having 8 fewer CUs than the XSX. And this would have been really impressive.

So those arguing for MSFT's I/O solution have a compelling case, similar to the PS5's variable processor clocks. Both are highly innovative solutions, and if they work as advertised they can augment the console's performance. It's just that the XSX still maintains a significant GPU advantage because Sony kept the PS5's GPU at 36 CUs. At the same time, if the PS5 has a solution close or equivalent to SFS, then it holds a significant advantage in terms of I/O.
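The clock-versus-CU trade-off is easy to sanity-check with the standard peak-throughput formula (CUs × 64 ALUs × 2 FLOPs/cycle × clock); note the 44-CU configuration is the hypothetical above, not a real SKU:

```python
# Peak FP32 throughput for an RDNA 2 GPU:
# CUs x 64 shader ALUs x 2 FLOPs/cycle (FMA) x clock in GHz -> TFLOPS.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(round(tflops(36, 2.23), 2))   # PS5: ~10.28 TF
print(round(tflops(52, 1.825), 2))  # XSX: ~12.15 TF
print(round(tflops(44, 2.23), 2))   # hypothetical 44-CU PS5: ~12.56 TF
```

Which shows the point being made: at 2.23GHz, 44 CUs would already clear the XSX's 12.15 TF.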
 

Ar¢tos

Member
Funny to see all the theories saying that MS has some kind of software to balance out the hardware advantage the PS5 has, while all along saying Sony likely will not have these software-related advantages. (It's important, because if Sony did, the whole story would fall apart.)

Story around the web is that the API Sony has made is vastly superior to what MS has made with DirectX. Plus, anything can be altered on the software side going into the generation. Hardware, not so much.
Xbox must be superior in everything... EVERYTHING!
PS5 must not be allowed to have any component better than XSX, not even by 1 bit/s.


DirectX has its flaws (it could use a serious cleanup, although the console version already got rid of a lot of useless legacy stuff), but it is easier to use than GNM.
GNM is lower level than DirectX, but it's harder to learn, and being lower level creates its own problems: it's easier to make things "break".

We don't know anything about the OSes yet. If MS still uses the three-OS system of the X1, then the XSX will very likely have a bigger overhead than the PS5, which will also impact performance.

I think there is too much unknown at the moment to even try to make sense of the real world differences between the consoles, but imagining magic ssd speed multipliers and hidden i/o compression boosters is definitely not the way to go.
 

Three

Member
I have so little will power in the face of an argument.

Texture sampling occurs for every pixel in every frame. That's just inherent to texture mapping in a 3D game. Sampler Feedback just exposes the results of it. It doesn't need to sample again because the information is already there. That's the whole point.

This information (which texels were used) that’s previously been discarded (and then approximated again with a shader for texture streaming) is now exposed because of new hardware allowing it to be written back. No approximations needed!

You’re applying the limitations of that old method to Sampler Feedback when they are fundamentally different. No additional sampling required - it’s already done when you mapped that texture! Do you see how it’s different?
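As a rough illustration of that point (plain Python, nothing like the real D3D12 API; the 256-texel region size and map layout are invented): the sampler already knows which region and mip it is fetching, so recording that into a feedback map is a by-product of sampling, not a second pass:

```python
# Conceptual sketch only (not the D3D12 sampler feedback API). The point:
# the (u, v, mip) used to fetch a texel already exists at sample time, so
# recording the finest mip requested per region costs one extra write,
# not a re-sampling or approximation pass.
REGION = 256  # feedback map granularity in texels (hypothetical)

def sample(texture, u, v, mip, feedback_map):
    x, y = int(u * texture["width"]), int(v * texture["height"])
    region = (x // REGION, y // REGION)
    # keep the finest (lowest-numbered) mip any shader invocation asked for
    feedback_map[region] = min(feedback_map.get(region, 99), mip)
    return texture["data"].get((x >> mip, y >> mip, mip), 0)

texture = {"width": 1024, "height": 1024, "data": {}}
feedback = {}
sample(texture, 0.10, 0.10, mip=3, feedback_map=feedback)
sample(texture, 0.12, 0.10, mip=1, feedback_map=feedback)
print(feedback)  # {(0, 0): 1} -- finest requested mip per region
```

The streaming system can then read that map to decide which tiles to load, instead of approximating texel usage with a separate shader pass.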

As for the Microsoft claims about how great this is, who knows!

Not any of us!

I think you've very clearly misunderstood this technology then.

The previous technology uses the same feedbackBuffer and feedback texture for selecting the textures to stream. Texture space shading is different.

Even if I assume that it is doing this 300% faster, what you are misunderstanding is that this isn't a difference in efficiency in streaming assets in and out of memory; it would be compute efficiency. Doing sampling "more frequently", as you said before, is worse for the memory.

For a given scene you have not explained how SF will result in loading 2x-3x fewer textures into memory. This is what you're not getting.

These are the exact steps for SFS streaming:
To adopt SFS, an application does the following:

  • Use a tiled texture (instead of a non-tiled texture), called a reserved texture resource in D3D12, for anything that needs to be streamed.
  • Along with each tiled texture, create a small “MinMip map” texture and small “feedback map” texture.
    • The MinMip map represents per-region mip level clamping values for the tiled texture; it represents what is actually loaded.
    • The feedback map represents the per-region desired mip level for the tiled texture; it represents what needs to be loaded.
  • Update the mip streaming engine to stream individual tiles instead of mips, using the feedback map contents to drive streaming decisions.
  • When tiles are made resident or nonresident by the streaming system, the corresponding texture’s MinMip map must be updated to reflect the updated tile residency, which will clamp the GPU’s accesses to that region of the texture.
  • Change shader code to read from MinMip maps and write to feedback maps. Feedback maps are written using special-purpose HLSL constructs.

You can look at those steps and point to the one that is somehow being done 3x better at deciding what should or shouldn't be resident in memory.
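For what it's worth, the steps in that list reduce to a fairly mundane reconciliation loop. Here is a sketch (plain Python, illustrative names, not D3D12 calls; the mip count and the 99 sentinel are assumptions) of a streamer comparing the feedback map against the MinMip map:

```python
# Illustrative sketch of the streaming loop described above (invented
# names, not the D3D12 API). Each region has a resident-mip clamp (MinMip
# map) and a desired mip (feedback map); the streamer loads or evicts
# tiles to close the gap, then updates the clamp so shaders never touch
# tiles that aren't resident.
MIP_COUNT = 10  # assumed number of mips in the chain

def stream_step(min_mip, feedback, load_tile, evict_tile):
    for region, desired in feedback.items():
        resident = min_mip.get(region, 99)  # 99 = no tiles resident yet
        if desired < resident:
            # finer detail requested than is resident: load missing tiles
            for mip in range(desired, min(resident, MIP_COUNT)):
                load_tile(region, mip)
            min_mip[region] = desired  # clamp now allows the finer mips
        elif desired > resident:
            # finest resident tiles no longer needed: evict them
            for mip in range(resident, desired):
                evict_tile(region, mip)
            min_mip[region] = desired

loaded, evicted = [], []
min_mip = {(0, 0): 5}        # only mip 5 and coarser are resident
feedback = {(0, 0): 3}       # shaders asked for mip 3
stream_step(min_mip, feedback,
            lambda r, m: loaded.append((r, m)),
            lambda r, m: evicted.append((r, m)))
# loads mips 3 and 4 for region (0, 0); MinMip clamp updated to 3
```

Nothing in that loop is exotic; the argument is about how much of it benefits from dedicated hardware versus running fine on any PRT-capable GPU.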

As for the Microsoft claims about how great this is, who knows!

Not any of us!
We know exactly what it is doing in that regard, so why are we still proclaiming this as magical tech that only MS knows how it works, while at the same time saying you do?
I'm honestly amazed by how much you have misunderstood this whole thing. You do realize all the custom hardware for texture streaming is under SFS? And that SFS is responsible for efficient texture streaming?

Here is a quote from the Eurogamer article with Andrew Goossen, where he explains how a 2-3x gain is made using SFS and its features:



link to the eurogamer article:

And what you fail to realise is that SFS uses SF, and SF is not from custom XSX hardware in any way.

Even MS themselves have mentioned this to you:



Like I said, you're intentionally muddying the water. They've clarified what the extra hardware does. It isn't related to streaming 2x or 3x fewer textures; it's related to framerate/GPU stalling when you FAIL to stream fast enough. A page miss.

You know what, I give up on this thread. Fine, you will get a 3x+ boost in memory and bandwidth from this mysterious yet known tech. Only on XSX, baby! Have fun.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Xbox must be superior in everything... EVERYTHING!
PS5 must not be allowed to have any component better than XSX, not even by 1 bit/s.


DirectX has its flaws (it could use a serious cleanup, although the console version already got rid of a lot of useless legacy stuff), but it is easier to use than GNM.
GNM is lower level than DirectX, but it's harder to learn, and being lower level creates its own problems: it's easier to make things "break".

We don't know anything about the OSes yet. If MS still uses the three-OS system of the X1, then the XSX will very likely have a bigger overhead than the PS5, which will also impact performance.

I think there is too much unknown at the moment to even try to make sense of the real world differences between the consoles, but imagining magic ssd speed multipliers and hidden i/o compression boosters is definitely not the way to go.

PS5 is allowed one superior component, even massively so, as long as it is wasteful and not able to bring a tangible user/developer benefit.

As soon as someone figures out a tangible real benefit, a secret XSX feature will be found that nullifies what was previously agreed to be a vast yet worthless gap.
Then a long thread of unsubstantiated bold claims will ensue, with some people obsessed, unable to listen to any criticism raised, posting as if repetition meant proof; suddenly the gap becomes up for grabs, the length of the discussion seemingly taken as proof that the matter is indeed up for debate and not settled.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Im pretty sure that’s not happening here.
I do not see how it is likely, though... I do not see evidence in what they are saying, their documentation, etc., that we should expect a monstrous 2-3x or more performance and memory-space improvement over the previous state-of-the-art texture streaming solutions. You are underselling how big that would be, and something that big would leave a little more concrete evidence than this.

It's honestly bias at this point.
Oh I agree :).
 

Deto

Banned
Dude, I've read your previous replies and you seem very reasonable, but this is such a false equivalence. MSFT has made some bold claims about their texture streaming. Maybe Sony has something as good, or close, or maybe even better. But the fundamental idea is sound: if you can efficiently stream textures, you get more bang for your I/O throughput.

It is exactly the same.
Stop this delirium of hidden power.

2013: Hidden GPU.
2020: Hidden SSD bandwidth.

I wonder why it is always on the Xbox side that we have this.

With the PS5 I didn't see anyone finding a hidden TF multiplier to give the PS5 more TF than the XSX, but for the XSX SSD we have a bandwidth multiplier to match the PS5.


"Hidden Xbox technique increases SSD power by 300%."
"Hidden PS5 TF technique increases GPU power by 30%, GPU with 13TF"

Which of the two above is most likely to be real?

And yet I don't see anyone raving about a hidden TF multiplier.
 
Last edited:
Top Bottom