
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

Three

Member
Sorry if that comes across as condescending. You may very well have more knowledge and experience than me in this area, so I don't mean to speak down to you. It's just that it seems like you're being pretty obtuse to first gloss over her clear statement that this is a hardware feature, and now you're quibbling over the definition of "hardware feature".

She's talking about the texture sampling process, so my guess is this is some extra hardware in the TMU. They're not exactly showing a block diagram of where the hardware is, probably because it sounds like there is more than one implementation in hardware. Here is someone talking about this on the Nvidia hardware side (in relation to the texture space shading feature that Nvidia promoted in 2018):




I probably don't know more, but from my understanding of the subject I'm trying to make the point that there is nothing custom in the XSX's GPU hardware that gains this 2x-3x streaming advantage over hardware that's already out there.

From reading what you posted, what is being discussed is procedural virtual textures on Turing cards. The part you highlighted is related to texture space shading. It is something SF can be used for, but it is the second scenario mentioned in your SF video, not the streaming scenario. It saves you the shader calculations for tracking which texels are affected, but it doesn't save you memory or storage bandwidth as far as I know. It can be done on older cards, but you would have to keep track manually.
 
ToadMan and rnlval, the guy in this video argues that DF using an RTX 2080 to compare against the XSX is a cherry-picked benchmark.


I have watched this video before. I thought he made an interesting point, and said so in the Spec OT. He basically says the XSX beats a 5700 XT rather than matching a 2080, if I remember correctly, at least in that example.
 
Last edited:
From reading what you posted, what is being discussed is procedural virtual textures on Turing cards. The part you highlighted is related to texture space shading. It is something SF can be used for, but it is the second scenario mentioned in your SF video, not the streaming scenario. It saves you the shader calculations for tracking which texels are affected, but it doesn't save you memory or storage bandwidth as far as I know. It can be done on older cards, but you would have to keep track manually.

I'm in a rush, so I'll give you some quick unorganized thoughts to reply to this:

  • According to Microsoft, Sampler Feedback is a new hardware feature that enables two use-cases: texture streaming and texture space shading.
  • The hardware is present only in Turing and RDNA2.
  • The hardware that enabled Turing's Texture Space Shading is the same as what allows Turing to support Sampler Feedback.
  • The description from that Texture Space Shading article of what the new hardware does is precisely what Sampler Feedback does. Without shader-calculated approximations, you can directly write back from the texture sampling hardware precisely what part of which tile/mip level/whatever was sampled.
  • Not having to calculate with a shader what texels were used for texture sampling is the whole point of Sampler Feedback!
  • The bandwidth savings for texture streaming (supposedly) come from the fact that you can get more frequent and more precise readings on what mip level/tile is needed.
I think there's an argument to be made about whether Sampler Feedback as an improvement to texture streaming can provide a 2-3X improvement in efficiency. I have a positive opinion about that, but I'm generally an optimist, so maybe I'm wrong. What I'm pretty sure I'm not wrong about is that "Sampler Feedback" is definitely not old (pre-2018) hardware features wrapped in new clothes.
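To make the "feedback straight from the sampler" idea concrete, here's a minimal CPU-side sketch of how a MinMip-style feedback map turns into tile requests. This is only an illustration of the concept under assumed names and sizes, not the D3D12 API (which exposes the map to shaders via FeedbackTexture2D<SAMPLER_FEEDBACK_MIN_MIP> and WriteSamplerFeedback in Shader Model 6.5); the point is that the requested mips come straight from what the sampler touched, with no shader-side approximation.

```cpp
// Toy CPU-side model: each cell of the feedback map holds the finest mip level
// the sampler actually touched for that region of the texture (0xFF = region
// was never sampled this frame). The streamer turns that directly into the set
// of tiles that must be resident. Names, region size and tile size are made up.
#include <cstdint>
#include <cstdio>
#include <set>
#include <tuple>
#include <vector>

struct TileKey {
    uint32_t mip, x, y;
    bool operator<(const TileKey& o) const {
        return std::tie(mip, x, y) < std::tie(o.mip, o.x, o.y);
    }
};

std::set<TileKey> TilesNeeded(const std::vector<uint8_t>& minMip,
                              uint32_t mapW, uint32_t mapH,
                              uint32_t regionSize,  // texels per cell at mip 0
                              uint32_t tileSize) {  // tile width in texels
    std::set<TileKey> needed;
    for (uint32_t y = 0; y < mapH; ++y)
        for (uint32_t x = 0; x < mapW; ++x) {
            uint8_t mip = minMip[y * mapW + x];
            if (mip == 0xFF) continue;              // never sampled: skip entirely
            uint32_t tx = (x * regionSize) >> mip;  // region origin at that mip
            uint32_t ty = (y * regionSize) >> mip;
            needed.insert({mip, tx / tileSize, ty / tileSize});
        }
    return needed;
}

int main() {
    // 4x4 feedback map over a 1024x1024 texture: most visible regions only
    // needed mip 2, one region needed full-res mip 0, one column was unused.
    std::vector<uint8_t> minMip = {
        2,    2,    2,    0xFF,
        2,    0,    2,    0xFF,
        2,    2,    2,    0xFF,
        0xFF, 0xFF, 0xFF, 0xFF,
    };
    for (const TileKey& t : TilesNeeded(minMip, 4, 4, 256, 128))
        std::printf("keep resident: mip %u, tile (%u, %u)\n", t.mip, t.x, t.y);
    return 0;
}
```

Only five tiles come out of that map (four mip-2 tiles plus the single mip-0 tile for the one region that needed full resolution); everything else can stay on the SSD, which is where the claimed memory and bandwidth savings would come from.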
 
Last edited:
I'm in a rush, so I'll give you some quick unorganized thoughts to reply to this:

  • According to Microsoft, Sampler Feedback is a new hardware feature that enables two use-cases: texture streaming and texture space shading.
  • The hardware is present only in Turing and RDNA2.
  • The hardware that enabled Turing's Texture Space Shading is the same as what allows Turing to support Sampler Feedback.
  • The description from that Texture Space Shading article of what the new hardware does is precisely what Sampler Feedback does. Without shader-calculated approximations, you can directly write back from the texture sampling hardware precisely what part of which tile/mip level/whatever was sampled.
  • Not having to calculate with a shader what texels were used for texture sampling is the whole point of Sampler Feedback!
  • The bandwidth savings for texture streaming (supposedly) come from the fact that you can get more frequent and more precise readings on what mip level/tile is needed.
I think there's an argument to be made about whether Sampler Feedback as an improvement to texture streaming can provide a 2-3X improvement in efficiency. I have a positive opinion about that, but I'm generally an optimist, so maybe I'm wrong. What I'm pretty sure I'm not wrong about is that "Sampler Feedback" is definitely not old (pre-2018) hardware features wrapped in new clothes.
You're only missing the last point: it can't be used for all the texture calls, which is why I disagree that it improves performance or memory use that much.

 
You're only missing the last point: it can't be used for all the texture calls, which is why I disagree that it improves performance or memory use that much.

I didn't say all the texture calls. I said "more frequent". They both incur a cost (I think I've said that before in this thread).

This all boils down to a specific implementation, but I believe the general assertion is that this is a lot cheaper so you can do it a lot more. Also because of the precision involved you can rely on it more.
 
I've been explaining this multiple times on the forum. Increasing the power budget for one decreases what's available for the other and the clock speed it can reach.
I know. Nothing you said is wrong.
Unless it's a really badly optimized and utilized application. But in that case you have power budget headroom anyway.
 

rnlval

Member
ToadMan and rnlval, the guy in this video argues that DF using an RTX 2080 to compare against the XSX is a cherry-picked benchmark.

FACTS: the XSX GPU is about 25% higher in TFLOPS, TMUs, TFUs, SRAM, and memory bandwidth when compared to the RX 5700 XT at its 9.66 TFLOPS average.

The RX 5700 XT has a 1887 MHz average clock speed, hence 9.66 TFLOPS average.

The RTX 2080 happens to land at RX 5700 XT + 25% for Gears 5.


[TechPowerUp relative performance chart, 3840x2160]


In terms of TPU's benchmark average, there's about a 23% gap between the RX 5700 XT and the RTX 2080. There are games that exceed the 25% gap between the RX 5700 XT and the RTX 2080.

Gears 5 is not an AMD-GPU-friendly game like Battlefield V, Far Cry 5, Forza Motorsport 7, Hitman, Titanfall, Killer Instinct, etc.
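For what it's worth, the arithmetic behind those two numbers (using the usual 2 FLOPs per ALU per clock, 2560 ALUs for the RX 5700 XT, and 3328 ALUs at 1825 MHz for the XSX):

$$2 \times 2560 \times 1.887\,\text{GHz} \approx 9.66\ \text{TFLOPS} \quad (\text{RX 5700 XT})$$
$$2 \times 3328 \times 1.825\,\text{GHz} \approx 12.15\ \text{TFLOPS} \quad (\text{XSX}), \qquad 12.15 / 9.66 \approx 1.26$$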
 
Last edited:
Looks like you are trying to downplay the actual advantages.

There isn’t going to be much “matching” or small differences going on.

I don’t have anything else to say here until the games start proving it.

I think some people are about to get woken up though.
I'm not saying there won't be any differences, but I think these developers/engineers are talented enough to make it work. Work enough to where you probably won't notice the difference. Well, you might, but really that's just pixel peeping. If you're shown the back-end and know what's going on, then I'm sure you'd know the differences, like: oh, instead of 10 million triangles in this scene there are only 7 million in this version; oh, we dialed back the GI a little in this version, but turned this other thing up to bring it to about equal.
 

Three

Member
  • The bandwidth savings for texture streaming (supposedly) come from the fact that you can get more frequent and more precise readings on what mip level/tile is needed.

And this is where we disagree strongly since there has been zero evidence given, just wishful thinking and hype. They can't. If so how does it achieve this? That's what we're discussing. There is zero evidence of this.

I don't think anyone claimed all cards prior to 2018 could do SF as efficiently as 2020 cards (they can do it, though). I'm pretty sure even GCN cards can. Hardware does get better incrementally, no doubt, just not suddenly 2x better from something that we're pretending is secret sauce yet at the same time is known.

So think logically. When the RTX cards get this DX12U update, does the introduction of SF on these 2018 cards result in needing half the VRAM on Nvidia cards? Meaning they can store 2x as much and have 2x as much bandwidth? Would it suddenly free up all these resources in all the games you say use this streaming tech from this gen, meaning they can do twice as much now?

We would have heard about that, right? Nvidia would be shouting about the update from the rooftops. Why are we suddenly finding out about this hyped, mysterious yet somehow known feature that will offer 2x-3x the memory and bandwidth savings after a UE5 demo? I know why.

I'm saying this isn't XSX secret sauce, as it's in GPUs from 2018; it isn't even patented by MS. I'm not sure what you're expecting from it, but as somebody already said, I think you will be disappointed by the incremental jump of this particular technology. You will not get 2x-3x as much memory savings and bandwidth, and certainly not in comparison to other GPUs that are years old, let alone new ones that are going to come out. You will, however, get new games and engines that are more memory efficient and stream a hell of a lot better due to SSDs. That's the burger; you won't get the exclusive 3x-as-efficient hardware sauce.
 
Last edited:
This all boils down to a specific implementation, but I believe the general assertion is that this is a lot cheaper so you can do it a lot more. Also because of the precision involved you can rely on it more.
I don't follow you here; could you please explain yourself?
 

rnlval

Member
Yes, but what matters here is that in a normal game, even a AAA one, workloads where the CPU and GPU both use 100% of their processing power are very rare.

The thing is, the XSX follows the normal approach of fixed clocks in order to keep temperature under control. Cerny follows another approach: he just tracks the power consumption of the SoC to do that.

That is why, when DF asked what would happen if a workload that used 100% of the CPU and GPU occurred on the PS4, he answered that the console would shut down.

Many people (I don't mean you guys) confuse clock with workload. Yes, if the PS5 were a kind of server that had to be at 100% all the time, that would be a disadvantage, but a normal scenario doesn't work like that anyway.
Clock speed may not reflect the actual workload, e.g. 128-bit SSE can trigger max clock speed with less work than 256-bit AVX2, which has higher energy consumption.

When the cooling solution is sufficient, SmartShift = shared VRM power.

MS has budgeted sufficient VRM power feed for both CPU and GPU at their clock speed specs with peak usage.
 
FACTS: the XSX GPU is about 25% higher in TFLOPS, TMUs, TFUs, SRAM, and memory bandwidth when compared to the RX 5700 XT at its 9.66 TFLOPS average.

The RX 5700 XT has a 1887 MHz average clock speed, hence 9.66 TFLOPS average.

The RTX 2080 happens to land at RX 5700 XT + 25% for Gears 5.


[TechPowerUp relative performance chart, 3840x2160]


In terms of TPU's benchmark average, there's about a 23% gap between the RX 5700 XT and the RTX 2080. There are games that exceed the 25% gap between the RX 5700 XT and the RTX 2080.

Gears 5 is not an AMD-GPU-friendly game like Battlefield V, Far Cry 5, Forza Motorsport 7, Hitman, Titanfall, Killer Instinct, etc.
I agree with you on that, but you'll see this usually only happens with PC gamers who believe a console like the XSX cannot have TF equivalent to a 2080, or the PS5 compared to a 2070. It's not exactly the same performance, but it's close, also because both systems are easier to optimize for compared to just developing for PC.

They love to believe their 700 USD card actually costs that and isn't a big profit margin for NVIDIA.

That obviously doesn't mean the consoles will be superior to a high-end PC in raw graphics; both AMD and NVIDIA will launch a new generation of GPUs this year.
 

rnlval

Member
I agree with you on that, but you'll see this usually only happens with PC gamers who believe a console like the XSX cannot have TF equivalent to a 2080, or the PS5 compared to a 2070. It's not exactly the same performance, but it's close, also because both systems are easier to optimize for compared to just developing for PC.

They love to believe their 700 USD card actually costs that and isn't a big profit margin for NVIDIA.

That obviously doesn't mean the consoles will be superior to a high-end PC in raw graphics; both AMD and NVIDIA will launch a new generation of GPUs this year.
The XSX GPU has a 56 CU design, hence the PC version has potential for a 56 CU XT variant, e.g. RX 6800 XT, while the RX 6800 has 52 CUs.
The PS5 GPU has a 40 CU design, hence the PC version has potential for a 40 CU XT variant, e.g. RX 6700 XT, while the RX 6700 has 36 CUs. The 40 CU RX 5700 XT and 36 CU RX 5700 need to be refreshed with DX12U-capable models, e.g. the RX 6700 XT and RX 6700 respectively.

That's not factoring in the rumored "Big Navi" with 2X scaling from the RX 5700 XT.
 
MS has budgeted sufficient VRM power feed for both CPU and GPU at their clock speed specs with peak usage.
I am not completely sure about that. The reason is simple: just as with the PS5, games are not test benches that use 100% of both the CPU and GPU all the time, or for long periods.

Cerny said the PS4 would shut down under a similar workload, and I haven't heard of a game that shuts down your PS4; that's because, if it happens, it's only in very specific moments for a short time.

Microsoft follows the normal console approach, and that is not bad. I am not saying they won't work both chips to the maximum; it's just that doing so for long stretches is not realistic as a normal workload for a game.
 

rnlval

Member
I am not completely sure about that. The reason is simple: just as with the PS5, games are not test benches that use 100% of both the CPU and GPU all the time, or for long periods.

Cerny said the PS4 would shut down under a similar workload, and I haven't heard of a game that shuts down your PS4; that's because, if it happens, it's only in very specific moments for a short time.

Microsoft follows the normal console approach, and that is not bad. I am not saying they won't work both chips to the maximum; it's just that doing so for long stretches is not realistic as a normal workload for a game.
Is my RTX 2080 Ti + 9900K a joke to you when playing games at ultra/uber graphics details? PC RTS games with many AI units can have heavy CPU and GPU usage.
 
Last edited:

rntongo

Banned
You are the one that claimed concrete proof from devkits :). Leadbetter is hardly a credible source when he interprets Sony unless he is reporting actual quotes, but his DF history (him and Battaglia) speaks for itself.

According to you, Rich is not credible, even though he was one of, if not the only, tech journalists Cerny spoke to after the Road to PS5. Make it make sense.
 
Is my RTX 2080 Ti + 9900K a joke to you when playing games at ultra/uber graphics details? PC RTS games with many AI units can have heavy CPU and GPU usage.
How do you know if your GPU and CPU are at 100% all the time? We are talking about workload, not clock; programs like Afterburner usually only show the clock, not the workload.

Show me a game running at 60 fps, or even 120 fps if you want, using 100% of both for long periods of time. I am sure that, with luck, you will find 99-100% GPU and 90% CPU only in clocks, not in workload.
 
How do you know if your GPU and CPU are at 100% all the time? We are talking about workload, not clock; programs like Afterburner usually only show the clock, not the workload.

Show me a game running at 60 fps, or even 120 fps if you want, using 100% of both for long periods of time. I am sure that, with luck, you will find 99-100% GPU and 90% CPU only in clocks, not in workload.

Could a lower CPU clock POTENTIALLY affect I/O efficiency/SSD rate during heavy load?
 
Could a lower CPU clock POTENTIALLY affect I/O efficiency/SSD rate during heavy load?
In the last consoles, yes; in the XSX and PS5, not so much, since they move much of this workload to the I/O hardware.

For example, the Nintendo Switch only maxes out its CPU on loading screens, but at that moment the GPU usually doesn't need as much power, so it doesn't matter.
 
The worst thing PlayStation did was place "variable" in front of their clock speed. If they hadn't placed that there, we would have never known and it wouldn't matter. Even when the man himself, yes Mark Cerny, said there won't be much of a drop and that the drop doesn't scale, aka a 10% drop in frequency doesn't mean a 10% drop in performance... we still don't get it or believe him. Because the only thing running in our heads now is "omg, it's going to drop to under 2 GHz, it has to. It could go even lower than that." Our tiny brains can't comprehend it :messenger_pensive:
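For what it's worth, Cerny's actual claim in the Road to PS5 talk was about power, not performance: shedding roughly 10% of power takes only a couple percent of frequency. A rough back-of-envelope using the common assumption that dynamic power scales about cubically with clock (voltage rising alongside frequency) lands in the same ballpark:

$$\frac{P(0.97f)}{P(f)} \approx 0.97^{3} \approx 0.91$$

i.e. roughly a 3% clock reduction for about 9-10% less power, while the worst-case performance cost of that reduction is the same ~3%.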
 

Three

Member
I didn't say all the texture calls. I said "more frequent". They both incur a cost (I think I've said that before in this thread).

This all boils down to a specific implementation, but I believe the general assertion is that this is a lot cheaper so you can do it a lot more. Also because of the precision involved you can rely on it more.

What you're saying doesn't really make sense if you think about it. What I'm getting from your initial comment is that it can generate the sampler feedback texture more frequently. There is no evidence of this, and even if there were, this does not affect the amount of loaded textures in memory or the streaming efficiency. It is not related.
 
I don't think anyone claimed all cards prior to 2018 could do SF as efficiently as 2020 cards (they can do it, though). I'm pretty sure even GCN cards can.

To be honest, and I don't mean to offend, but I don't think it's worth debating whether Sampler Feedback is a new/old/hardware/software feature anymore. I believe I've made a pretty strong case that Sampler Feedback is a specific GPU hardware feature that didn't exist in hardware before Turing. Yes, it does a job that has been done before in other ways (calculated with a shader), but it can now just be read directly from the texture sampler because the sampler hardware has been extended to provide this feedback. It's unavoidably obvious from the TSS hardware description compared with the Sampler Feedback hardware description. It is literally the same thing (a list of texels sampled by the texture sampler, made available through hardware).

There's really nothing else to say on that subject.

So think logically. When the RTX cards get this DX12U update, does the introduction of SF on these 2018 cards result in needing half the VRAM on Nvidia cards? Meaning they can store 2x as much and have 2x as much bandwidth? Would it suddenly free up all these resources in all the games you say use this streaming tech from this gen, meaning they can do twice as much now?

No, I never claimed that this would suddenly transform existing games. For one thing they actually have to use the new API resources.

As I mentioned in reply to Metroiddarks, the details of the improvement come down to the implementation. Texture streaming in an engine isn't done in just one all-or-nothing way. There are trade-offs in how aggressively you evict mip levels and tiles from memory, how frequently you get feedback on what tiles/mips are needed, etc. I doubt there are engines out there that evict 100% of unused tiles/mip levels. This is the gap Microsoft is trying to close with SFS. They want to make acquiring mip requests cheaper and more accurate, and then (with that filter hardware that's been discussed) make it less painful to have a miss.
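As a rough illustration of those knobs (a toy model only, not how any shipping engine or the XSX actually does it): a residency manager has at least two tunables, how often it reads feedback back from the GPU and how long an unreferenced tile survives before eviction. Cheaper, more precise feedback is what would let you turn both dials harder. The names and numbers below are invented.

```cpp
// Toy residency manager showing two texture-streaming trade-offs:
//   feedbackInterval : frames between reads of the (GPU) sampling feedback
//   evictAfterFrames : how long an unreferenced tile stays resident
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

class ResidencyManager {
public:
    ResidencyManager(uint32_t feedbackInterval, uint32_t evictAfterFrames)
        : interval_(feedbackInterval), evictAfter_(evictAfterFrames) {}

    void Tick(uint32_t frame, const std::vector<uint64_t>& sampledTiles) {
        if (frame % interval_ == 0) {               // only read feedback sometimes
            for (uint64_t t : sampledTiles) {
                if (!lastUsed_.count(t)) ++loads_;  // miss: count a load from SSD
                lastUsed_[t] = frame;               // refresh residency
            }
        }
        // Evict tiles that haven't been referenced recently enough.
        for (auto it = lastUsed_.begin(); it != lastUsed_.end();) {
            if (frame - it->second > evictAfter_) it = lastUsed_.erase(it);
            else ++it;
        }
    }

    size_t Resident() const { return lastUsed_.size(); }
    size_t Loads() const { return loads_; }

private:
    uint32_t interval_, evictAfter_;
    std::unordered_map<uint64_t, uint32_t> lastUsed_;  // tile id -> last frame seen
    size_t loads_ = 0;
};

int main() {
    // Same synthetic camera pan fed to two tunings: infrequent feedback with a
    // long residency window vs. per-frame feedback with aggressive eviction.
    ResidencyManager lazy(/*feedbackInterval=*/8, /*evictAfterFrames=*/120);
    ResidencyManager eager(/*feedbackInterval=*/1, /*evictAfterFrames=*/8);
    for (uint32_t frame = 0; frame < 300; ++frame) {
        std::vector<uint64_t> sampled;
        for (uint64_t t = 0; t < 64; ++t) sampled.push_back(frame / 4 + t);
        lazy.Tick(frame, sampled);
        eager.Tick(frame, sampled);
    }
    std::printf("lazy : %zu tiles resident, %zu loads\n", lazy.Resident(), lazy.Loads());
    std::printf("eager: %zu tiles resident, %zu loads\n", eager.Resident(), eager.Loads());
    return 0;
}
```

With a one-way pan like this, both configurations end up loading each tile exactly once, so the difference shows up in the resident-set size; with a camera that revisits areas, the aggressive-eviction config would also pay extra re-loads. SFS is essentially a bet that better feedback plus cheaper misses moves the sweet spot of those dials.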

Does that pan out to a 2X-3X difference? I don't know! It's just fun to speculate till we can see some results.

To partially repeat myself from earlier in this thread, there are three open questions as far as I'm concerned, and these get at some of our disagreements:

1) Does the 2X-3X claim compare against games that use texture streaming as an optimization or games that do not use it? Meaning, how much of that benefit is coming from new features enabled by SFS and related hardware vs. generalized streaming techniques paired with PRT and an SSD?
2) Assuming the claim in #1 is all about "SFS and related hardware", can they actually pull off 2X-3X?
3) Will the RDNA 2 feature set in the PS5 include the Sampler Feedback hardware?

I know we disagree on these points but I think this part of the debate just comes down to speculation and guessing, and I don't blame you for taking the other side on any of this.
 

Panajev2001a

GAF's Pleasant Genius
According to you, Rich is not credible, even though he was one of, if not the only, tech journalists Cerny spoke to after the Road to PS5. Make it make sense.

He has a following, and DF did the exposé on the XSX and was one of the popular tech outlets most responsible for helping these rumours spread. Possibly that's why, after the two Wired interviews and the Road to PS5, he went and did a livestream with DF too. The two things are orthogonal; they (Leadbetter and Battaglia) are still quite green-leaning, so to speak, but not going there and antagonising them would be way worse, and they still have people like Tom and John L. who are solid and trustworthy blokes.
 

Ailynn

Faith - Hope - Love
I'm by no means a tech-savvy person, so most of the ins and outs of the Xbox Series X and PlayStation 5 specs just go over my head. :lollipop_grinning_sweat:

It just makes me happy to see people talking about all these tech details, even if I don't fully understand the architecture and what it means for the art of videogames. The vast jumps in technology that people have managed to create in just a few decades is incredible and inspiring.

It's exciting times, and I can only imagine what the next 20 years will bring. Much love, everyone. 🤗
 

Panajev2001a

GAF's Pleasant Genius
To be honest, and I don't mean to offend, but I don't think it's worth debating whether Sampler Feedback is a new/old/hardware/software feature anymore. I believe I've made a pretty strong case that Sampler Feedback is a specific GPU hardware feature that didn't exist in hardware before Turing. Yes, it does a job that has been done before in other ways (calculated with a shader), but it can now just be read directly from the texture sampler because the sampler hardware has been extended to provide this feedback. It's unavoidably obvious from the TSS hardware description compared with the Sampler Feedback hardware description. It is literally the same thing (a list of texels sampled by the texture sampler, made available through hardware).

There's really nothing else to say on that subject.



No, I never claimed that this would suddenly transform existing games. For one thing they actually have to use the new API resources.

As I mentioned in reply to Metroiddarks, the details of the improvement come down to the implementation. Texture streaming in an engine isn't done in just one all-or-nothing way. There are trade-offs in how aggressively you evict mip levels and tiles from memory, how frequently you get feedback on what tiles/mips are needed, etc. I doubt there are engines out there that evict 100% of unused tiles/mip levels. This is the gap Microsoft is trying to close with SFS. They want to make acquiring mip requests cheaper and more accurate, and then (with that filter hardware that's been discussed) make it less painful to have a miss.

Does that pan out to a 2X-3X difference? I don't know! It's just fun to speculate till we can see some results.

To partially repeat myself from earlier in this thread, there are three open questions as far as I'm concerned, and these get at some of our disagreements:

1) Does the 2X-3X claim compare against games that use texture streaming as an optimization or games that do not use it? Meaning, how much of that benefit is coming from new features enabled by SFS and related hardware vs. generalized streaming techniques paired with PRT and an SSD?
2) Assuming the claim in #1 is all about "SFS and related hardware", can they actually pull off 2X-3X?
3) Will the RDNA 2 feature set in the PS5 include the Sampler Feedback hardware?

I know we disagree on these points but I think this part of the debate just comes down to speculation and guessing, and I don't blame you for taking the other side on any of this.

You, I, and many others have said quite enough on SFS, SF, PRT, MegaTextures, and other virtual texturing schemes.

Nobody is saying SFS is without its advantages, but the likelihood that the average game with a decent virtual texturing scheme gets a 2-3x speed-up in effective I/O bandwidth is something I can only describe as very low, if not zero.

At this point, with such a burden of proof, someone could say that the cache scrubbers and coherency engines improve bandwidth and latency by 4-5x if not more and it would have equal merit then...
 
Last edited:
At this point, with such a burden of proof, someone could say that the cache scrubbers and coherency engines improve bandwidth and latency by 4-5x if not more and it would have equal merit then...

The difference is that I'm not saying it's definitely 2X-3X better. I'm saying that I'm pretty sure Microsoft is saying that. It's not a number I invented and am attaching to SFS. Microsoft did that and everyone just explains it away because it sounds like too much to them. Maybe it's not?
 

rntongo

Banned
The difference is that I'm not saying it's definitely 2X-3X better. I'm saying that I'm pretty sure Microsoft is saying that. It's not a number I invented and am attaching to SFS. Microsoft did that and everyone just explains it away because it sounds like too much to them. Maybe it's not?

You should stop arguing with the guy. MSFT said 2 or 3 times, or even higher. And an MSFT engineer gave an example of using only 25% of a texture, which is 4x. We need to wait and see if it lives up to these claims, but there's no need to argue with someone whose mind is made up.
 

Deto

Banned
I wonder if Sony came up with some fancy name for something that tripled the PS5's TF.

"sample teraflopback TF, makes the PS5 30TF"

what would be the reaction of the internet?

Alex would have his ass pouting to say "madness" on twitter.
 

Panajev2001a

GAF's Pleasant Genius
The difference is that I'm not saying it's definitely 2X-3X better. I'm saying that I'm pretty sure Microsoft is saying that. It's not a number I invented and am attaching to SFS. Microsoft did that and everyone just explains it away because it sounds like too much to them. Maybe it's not?

I think MS said what they said, the way they said it, because they wanted to imply something that would give good ammunition to their fan base and hype the console up without committing to something they could be called out on... which is why they were very detailed on some other bits and not at all on what mattered there: 2-3x faster vs. what baseline? Don't know...
 

Panajev2001a

GAF's Pleasant Genius
I wonder if Sony came up with some fancy name for something that tripled the PS5's TF.

"sample teraflopback TF, makes the PS5 30TF"

what would be the reaction of the internet?

Alex would have his ass pouting to say "madness" on twitter.

Look at when they dared to speak about RPM, double-rate FP16 processing, for the PS4 Pro.

They said something factual with a clear baseline and clear context, but the professional concern trolls still try to bring it up as their anti-Cerny trump card...
 
I'm by no means a tech-savvy person, so most of the ins and outs of the Xbox Series X and PlayStation 5 specs just go over my head. :lollipop_grinning_sweat:

It just makes me happy to see people talking about all these tech details, even if I don't fully understand the architecture and what it means for the art of videogames. The vast jumps in technology that people have managed to create in just a few decades is incredible and inspiring.

It's exciting times, and I can only imagine what the next 20 years will bring. Much love, everyone. 🤗

 

Panajev2001a

GAF's Pleasant Genius
You should stop arguing with the guy. MSFT said 2 or 3 times, or even higher. And an MSFT engineer gave an example of using only 25% of a texture, which is 4x. We need to wait and see if it lives up to these claims, but there's no need to argue with someone whose mind is made up.

Yes, you are using a lot of tech terms and quoting a lot of examples, trying to do research, but again, the way you are connecting them and the maths you get out of them are still not backed up by those sources.

You keep saying “but MS said 3x the bandwidth” and lots of people tried to nudge in the reality check that likely this is not against games that already take advantage of virtual texturing / PRT. You are convinced this is not the case, but the burden of proof (whether you are right or wrong) is on you here.
 

FireFly

Member
This slide is always great to show that the Ps5 max CPU and GPU clocks are mutually exclusive.
But Sony is at fault for the misunderstanding. Cerny was always talking about Clock speeds and not power consumption. But at the end of the day, the increased power target will lead to higher clocks for one or the other.
The slide doesn't show that, though, since whether or not both the GPU and CPU will be able to hit their maximum clocks is determined not only by the power budget available, but also by how power-intensive the code is.
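A toy model of that point (every constant here is invented purely for illustration): dynamic power scales roughly with switching activity × frequency × voltage², and voltage has to rise with frequency, so for a fixed budget the sustainable clock falls as the code gets more power-hungry. Light code can sit at the cap; power-virus-style code cannot.

```cpp
// Toy model: P ~ k * activity * f^3 (activity * f * V^2, with V rising ~ f).
// Given a fixed power budget, solve for the highest sustainable clock,
// capped at the part's nominal maximum. All constants are made up.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <initializer_list>

double SustainableClockGHz(double activity, double budgetWatts,
                           double maxClockGHz, double k = 12.0) {
    double f = std::cbrt(budgetWatts / (k * activity));
    return std::min(f, maxClockGHz);
}

int main() {
    const double maxClock = 2.23;   // GHz cap (the PS5 GPU's advertised max)
    const double budget   = 100.0;  // watts for this block -- invented number
    for (double activity : {0.4, 0.7, 1.0})   // light -> heavy workloads
        std::printf("activity %.1f -> sustainable clock %.2f GHz\n",
                    activity, SustainableClockGHz(activity, budget, maxClock));
    return 0;
}
```

With these made-up constants, the two lighter workloads sit at the 2.23 GHz cap while the heaviest one settles a bit below it, which is exactly the "it depends on the code, not just the budget" point.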
 

Deto

Banned
Look at when they dared to speak about RPM, double-rate FP16 processing, for the PS4 Pro.

They said something factual with a clear baseline and clear context, but the professional concern trolls still try to bring it up as their anti-Cerny trump card...

I've never seen an Xbox fan complaining about the promise of the cloud that would triple the power of the Xbox, but I've seen many say that Sony promised 8 TF "true 4K" on the PS4 Pro.
 
Last edited:
I think MS said what they said, the way they said it, because they wanted to imply something that would give good ammunition to their fan base and hype the console up without committing to something they could be called out on... which is why they were very detailed on some other bits and not at all on what mattered there: 2-3x faster vs. what baseline? Don't know...

I guess we'll find out! Exciting times.
 

rntongo

Banned
I wonder if Sony came up with some fancy name for something that tripled the PS5's TF.

"sample teraflopback TF, makes the PS5 30TF"

what would be the reaction of the internet?

Alex would have his ass pouting to say "madness" on twitter.

Dude, I've read your previous replies and you seem very reasonable, but this is such a false equivalence. MSFT has made some bold claims about their texture streaming. Maybe Sony has something as good, or close, or maybe even better. But the fundamental idea is sound: if you can efficiently stream textures, you can get more bang for your I/O throughput.
 

rntongo

Banned
I've been explaining this multiple times on the forum. Increasing the power budget for one decreases what's available for the other and the clock speed it can reach.

It's so obvious; people just don't want to believe it because it doesn't fit with their idea. It's impressive because the PS5 will always be able to hit 2.23 GHz when it needs to and revert back to, say, 2 GHz when it doesn't. It's a boost clock based on the workload that determines which processor gets more power.
 
The I/O system isn't equal though. One of them is faster than the other.

Actually, we DO NOT know this, and it's the ENTIRE reason for this long-ass thread. The entirety of the I/O solution is not well understood by the public for either system. We only know the stated speeds of the SSDs and controllers. We do NOT know system I/O speeds. The XSX developers clearly prioritized giving the XSX GPU as much bandwidth as possible by declaring it has 560 GB/s of bandwidth. Why would they go through the effort of doing that when they could have gone straight 448 GB/s system-wide? Obviously they have an insight which we do not, that they will be able to feed the GPU at that rate, which includes the XVA. We just don't understand how yet.
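For reference, the published bus widths those bandwidth figures fall out of (GDDR6 at 14 Gbps per pin on both consoles):

$$320\ \text{bits} \times 14\ \text{Gbps} \div 8 = 560\ \text{GB/s} \quad (\text{XSX, 10 GB GPU-optimal pool})$$
$$192\ \text{bits} \times 14\ \text{Gbps} \div 8 = 336\ \text{GB/s} \quad (\text{XSX, remaining 6 GB})$$
$$256\ \text{bits} \times 14\ \text{Gbps} \div 8 = 448\ \text{GB/s} \quad (\text{PS5, all 16 GB})$$

So "560" and "448" aren't directly comparable system-wide numbers: the XSX figure applies only to its 10 GB GPU-optimal pool, while the PS5 figure covers its entire memory.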
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Actually, we DO NOT know this, and it's the ENTIRE reason for this long-ass thread.

Since when did this go up for grabs?! Are 12 TFLOPS max on one side vs 10 TFLOPS on the other also something we can turn upside down?

If a bunch of people start a thread saying that, given some bold unsubstantiated claim X, the XSX is actually slower than the PS5, and they keep the thread going for ages, refusing to see that they are just patching unrelated pieces of info together, does it mean the XSX having faster shader throughput is up for debate?
 
Nobody is saying SFS is without its advantages, but the likelihood that the average game with a decent virtual texturing scheme gets a 2-3x speed-up in effective I/O bandwidth is something I can only describe as very low, if not zero.

Unfortunately, though, this is simply your pronouncement, not based on knowledge of the hardware testing involved to get to the number as expressed by the Microsoft engineers who built the system. I would think that a number like that would come from profiling software use cases ad nauseam in order to be able to make a public statement like that.

I'm not sure why you think people should believe your gut rather than what is, for all intents and purposes, a factual statement by system architects. It really is just bias on full display rather than any honest critique.

Just say "I don't understand how they accomplished that" and leave it at that.
 
Since when did this go up for grabs?! Are 12 TFLOPS max on one side vs 10 TFLOPS on the other also something we can turn upside down?

If a bunch of people start a thread saying that, given some bold unsubstantiated claim X, the XSX is actually slower than the PS5, and they keep the thread going for ages, refusing to see that they are just patching unrelated pieces of info together, does it mean the XSX having faster shader throughput is up for debate?

What are you even talking about? I know that you cannot explain the benchmarks for either system, as we only know the bits and pieces that have been exposed, and you have a system preference. What about us not knowing the full extent of the I/O systems is subject to interpretation?

There are future deep dives on both systems coming... for a reason.
 
Actually, we DO NOT know this, and it's the ENTIRE reason for this long-ass thread.

Actually, we already do know the answer to this. What people are debating is whether or not SFS and BCPack will put the Xbox Series X's I/O system on par with the PS5's. This goes beyond just comparing the specifications that Microsoft and Sony gave us.
 

jimbojim

Banned
You, I, and many others have said quite enough on SFS, SF, PRT, MegaTextures, and other virtual texturing schemes.

Nobody is saying SFS is without its advantages, but the likelihood that the average game with a decent virtual texturing scheme gets a 2-3x speed-up in effective I/O bandwidth is something I can only describe as very low, if not zero.

At this point, with such a burden of proof, someone could say that the cache scrubbers and coherency engines improve bandwidth and latency by 4-5x if not more and it would have equal merit then...

Dude, you made up stuff. I dunno why people are arguing with you when your mind is made up. But check the bolded part below.

You should stop arguing with the guy. MSFT said 2 or 3 times, or even higher. And an MSFT engineer gave an example of using only 25% of a texture, which is 4x. We need to wait and see if it lives up to these claims, but there's no need to argue with someone whose mind is made up.

When MSFTWTFBBQ said on the official page it is this 2x-3x crap (or higher), but nevertheless it's much higher for some reason, it's 4x. Pack it up, Panajev.
 
Last edited:

rntongo

Banned
Dude, you made up stuff. I dunno why people are arguing with you when your mind is made up. But check the bolded part below.



When MSFTWTFBBQ said on official page it is 2x-3x crap, but nevertheless, it's much higher for some reason, it's 4x.

You respond without researching, then you start spreading lies based on your biases. They said 2 or 3 or even higher.

This is from the xsx technology glossary:
“it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.“
 
Last edited:

jimbojim

Banned
You respond without researching before you start spreading lies based off your biases. They said 2 or 3 or even higher.

This is from the xsx technology glossary:
“it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.“

I've edited it before you made the post. But you need to prove your "biases" with 4x. Like you tried on ERA to push 4.6-6 GB/s. Luckily, people stopped you there.
 
Last edited:

rntongo

Banned
I've edited it before you made the post. But you need to prove your "biases" with 4x. Like you tried on ERA to push 4.6-6 GB/s. Luckily, people stopped you there.
So you edited your post where you lied and didn’t even have the decency to apologize, just doubled down on an ad hominem.

You forgot to edit out the red part that you claimed was a lie. Please do some research before posting here. I already posted a tweet from an MSFT engineer with proof of the claim.
 
Last edited:

jimbojim

Banned
So you edited your post where you lied and didn’t even have the decency to apologize, just doubled down on an ad hominem.

Wait, what? Apologize to you, to Phil Spencer, to the Xbox Series X? I edited the post because I saw it says "or higher" on the official page. But before that I thought it was 2x-3x, because that's what is often mentioned here.

Did you ever apologize for spreading FUD on ERA? No, you didn't. You didn't even try to edit your posts. But whatever.

Sorry, XSX! I apologize. You're the most powerful console in the universe in every damn segment. Your SSD is as fast as the speed of light. I'm so sorry that I've insulted MSFT and Velocity. All hail rntongo.
 
Last edited:
Actually, we already do know the answer to this. What people are debating is whether or not SFS and BCPack will put the Xbox Series X's I/O system on par with the PS5's. This goes beyond just comparing the specifications that Microsoft and Sony gave us.

No, we actually do not. NO one yet understands how the 100 GB partition works. No one understands why MS went with a split VRAM solution with a higher-bandwidth pool alongside a slower SSD implementation (how do they intend to feed that?), etc. Clearly SFS and BCPack are part of the XVA solution, along with DirectStorage and the hardware decompression block. ECC on the memory block...? For what? It's a console, not a server!

If you know, tell us. LOL
 
Last edited:

jimbojim

Banned
No, we actually do not. NO one yet understands how the 100 GB partition works. No one understands why MS went with a split VRAM solution with a higher-bandwidth pool alongside a slower SSD implementation (how do they intend to feed that?), etc. If you know, tell us. LOL

Are you sure about that? Think twice. I know who knows.
 
Last edited: