
HW| Does Xbox Series X really have 12 Teraflops?

DeepEnigma

Gold Member
Sony's libraries and development tools are the best in the industry. Devs are very familiar with it from the PS4 generation.
It's no secret that DX12U is newer and has teething issues. You can see this not only with the Xbox; PC releases have also sucked arse.

It may come back to issues with the GDK, not sure.
MS should use Vulkan and relieve everyone of the headaches. But oh, you know...
 
This generation has proved that Teraflops, within a certain range (less than a 30% difference), are basically nothing more than a marketing gimmick. And I believe that's what Microsoft was chasing with the Xbox Series X.

I don't get the feeling that the Series X was made to be efficiently produced from a cost or performance standpoint. They wanted a bigger number than Sony. They felt that if they got it, it would convince the console audience to shift, just like it did during the PS4 era when the PS4 was more popular. They would then somehow get a "pincer" movement with the Series S to attract price-conscious consumers.

Clearly, that marketing strategy was a big failure. The PS5 runs most games better despite being "only" 10 TF. We can speculate why, but IMHO it signals that Sony found their approach to clock speeds, unified RAM, and lower-level software access enabled a console that could achieve better results than a brute-force 12 TF paired with a lot of head-scratching memory and software decisions.

As a result, Sony struck the right balance of cost and power, delivering what in "reality" has been the "world's most powerful console where it counts" at an attractive starting price of $399. Cost-conscious consumers in the console market are not as important as they used to be. I feel like this is the "Apple effect" taking hold for console purchases: this is my primary gaming machine for the next 5+ years, so I'd gladly pay up for the product I want.
MS set the 12 and 4 numbers without regard to the PS5. Let's be honest here: they are both using AMD Zen CPUs and the same AMD GPU tech, so the power number was always going to come down to which console put in the most CUs at the best speed. There was no secret sauce in the power numbers.
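To put some numbers on that, the headline figure really is just arithmetic: CUs x 64 ALUs x 2 ops per clock x clock speed. A quick sketch using the public launch specs (nothing here is insider info, just the paper math):

```python
# Paper TFLOPS = CUs * 64 ALUs per CU * 2 FLOP per clock per ALU * clock in GHz / 1000.
# Public launch specs: XSX 52 CUs at a fixed 1.825 GHz; PS5 36 CUs at up to 2.23 GHz.

def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000

xsx = tflops(52, 1.825)   # ~12.15 TF
ps5 = tflops(36, 2.23)    # ~10.28 TF
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> {100 * (xsx / ps5 - 1):.0f}% paper gap")
```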

Sony's approach to their console was set around backwards compatibility. The way they did it was around speed profile settings and CU count. The PS4 had 16 CUs. The Pro had 32 in a butterfly design where, when someone put in a PS4 game, half the GPU would shut down and the GPU speed would match the PS4 GPU speed.
Similarly, the PS5 kept the same CU count, and when a PS4 or Pro game is used the GPU drops down to the GPU speed profile of that console.
In keeping with Sony's prior way of keeping BC, the next PS will have 72 CUs in a butterfly design.

The Xbox went to a totally different GPU setup from the XO, a totally new development kit (from the XDK to the GDK), and a new API in DX12U, with new tools and libraries which are nowhere near as mature as the PlayStation ones.

As for the slight differences between the two consoles which might add efficiencies or performance: there are the cache scrubbers on the PS5, which don't add to performance to any degree, and which could have been adopted by AMD, but they chose not to.
Primitive Shaders are basically the same as Mesh Shaders, and while no game on Xbox has used Mesh Shaders, some Sony first-party games have used Primitive Shaders.
The GE is AMD tech and not Sony's, regardless of what people may wish to think.
On the XSX there is Sampler Feedback Streaming, which absolutely will help with RAM usage and optimisation, but which, again, hasn't been used yet. VRS has been used, and as VRS 2.0 is adopted further it will help with performance. But don't forget, VRS doesn't add any power to a console; it's just robbing Peter to pay Paul.
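To make that trade concrete, here's a toy software sketch of the Tier 2 idea (illustrative tile size and threshold, not the actual D3D12 API): flat tiles get shaded once per 2x2 block and the result is reused, so you save shader invocations but pay in detail.

```python
# Toy variable-rate shading: pick a shading rate per 8x8 tile, shade coarse
# blocks once and replicate the result (a real GPU reads the rate from an
# app-supplied image; the contrast heuristic here is just for illustration).
import numpy as np

def shade(x, y):
    # stand-in for an expensive pixel shader
    return 0.5 + 0.5 * np.sin(0.1 * x) * np.cos(0.1 * y)

H = W = 64
TILE = 8
out = np.zeros((H, W))
invocations = 0

for ty in range(0, H, TILE):
    for tx in range(0, W, TILE):
        # crude importance test: contrast between two corners of the tile
        a, b = shade(tx, ty), shade(tx + TILE - 1, ty + TILE - 1)
        rate = 1 if abs(a - b) > 0.1 else 2          # 1x1 or 2x2 shading rate
        for y in range(ty, ty + TILE, rate):
            for x in range(tx, tx + TILE, rate):
                invocations += 1
                out[y:y + rate, x:x + rate] = shade(x, y)  # coarse block reuses one result

print(f"{invocations} shader invocations for {H * W} pixels")
```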
There is also DirectML and lower-precision ML in the XSX, which hasn't been used, may never be used, and might not do anything of value if it is used.

Software is a far bigger influence on performance this gen than the hardware is, and as it stands, Sony has by far the better software than MS at this point.
MS will develop DX12 further, so who knows what that will entail, or when.
 

onQ123

Member
MS set the 12 and 4 numbers without regard to the PS5. Let's be honest here: they are both using AMD Zen CPUs and the same AMD GPU tech, so the power number was always going to come down to which console put in the most CUs at the best speed. There was no secret sauce in the power numbers.

Sony's approach to their console was set around backwards compatibility. The way they did it was around speed profile settings and CU count. The PS4 had 16 CUs. The Pro had 32 in a butterfly design where, when someone put in a PS4 game, half the GPU would shut down and the GPU speed would match the PS4 GPU speed.
Similarly, the PS5 kept the same CU count, and when a PS4 or Pro game is used the GPU drops down to the GPU speed profile of that console.
In keeping with Sony's prior way of keeping BC, the next PS will have 72 CUs in a butterfly design.

The Xbox went to a totally different GPU setup from the XO, a totally new development kit (from the XDK to the GDK), and a new API in DX12U, with new tools and libraries which are nowhere near as mature as the PlayStation ones.

As for the slight differences between the two consoles which might add efficiencies or performance: there are the cache scrubbers on the PS5, which don't add to performance to any degree, and which could have been adopted by AMD, but they chose not to.
Primitive Shaders are basically the same as Mesh Shaders, and while no game on Xbox has used Mesh Shaders, some Sony first-party games have used Primitive Shaders.
The GE is AMD tech and not Sony's, regardless of what people may wish to think.
On the XSX there is Sampler Feedback Streaming, which absolutely will help with RAM usage and optimisation, but which, again, hasn't been used yet. VRS has been used, and as VRS 2.0 is adopted further it will help with performance. But don't forget, VRS doesn't add any power to a console; it's just robbing Peter to pay Paul.
There is also DirectML and lower-precision ML in the XSX, which hasn't been used, may never be used, and might not do anything of value if it is used.

Software is a far bigger influence on performance this gen than the hardware is, and as it stands, Sony has by far the better software than MS at this point.
MS will develop DX12 further, so who knows what that will entail, or when.
You know your numbers are wrong, right?
 

Topher

Identifies as young
MS should use Vulkan and relieve everyone of the headaches. But oh, you know...

Would be hilarious if Naughty Dog released a statement like Arkane did with Redfall, but instead of talking about working through covid and whatnot, brought up having to work with DirectX 12.

 

SlimySnake

Flashless at the Golden Globes
This generation has proved that Teraflops, within a certain range (less than a 30% difference), are basically nothing more than a marketing gimmick. And I believe that's what Microsoft was chasing with the Xbox Series X.

I don't get the feeling that the Series X was made to be efficiently produced from a cost or performance standpoint. They wanted a bigger number than Sony. They felt that if they got it, it would convince the console audience to shift, just like it did during the PS4 era when the PS4 was more popular. They would then somehow get a "pincer" movement with the Series S to attract price-conscious consumers.

Clearly, that marketing strategy was a big failure. The PS5 runs most games better despite being "only" 10 TF. We can speculate why, but IMHO it signals that Sony found their approach to clock speeds, unified RAM, and lower-level software access enabled a console that could achieve better results than a brute-force 12 TF paired with a lot of head-scratching memory and software decisions.

As a result, Sony struck the right balance of cost and power, delivering what in "reality" has been the "world's most powerful console where it counts" at an attractive starting price of $399. Cost-conscious consumers in the console market are not as important as they used to be. I feel like this is the "Apple effect" taking hold for console purchases: this is my primary gaming machine for the next 5+ years, so I'd gladly pay up for the product I want.
They are losing $200 on the Series X; it was definitely not designed to be efficiently produced. I took one look at that ridiculous dual-motherboard design and knew it was going to be goddamn expensive. They just wanted to have 12 TFLOPs and went with a wide-and-slow design when they knew RDNA 2.0 was going to hit much higher clocks. The 40 CU 6700 XT hits 2.7 GHz on PC. They could've just gone with the X1X design and saved on not just the cost of the chip but also the cooling.

The sandwich strategy was also quite dumb. You are not Nintendo. You are not going to get people to buy a gimped 4 TFLOPs console. Again, a better solution would've been to copy what Sony did and release a discless $399 SKU with the same TFLOPs. A lower-clocked 8 TFLOPs GPU would've been a lot better as well and would have allowed them to save on cooling costs.

That said, I have seen enough XSX games perform better than the PS5 to know that this is mostly a driver issue. MS has found themselves in the same place Sony was in thanks to the Cell almost two decades ago. I can see the higher-clocked PS5 GPU keeping up with the 18% faster GPU in some games that might favor higher clocks, but to see the XSX perform up to 20% WORSE makes no sense. That would mean the 12 TFLOPs XSX is performing like an 8 TFLOPs GPU, and I'm not buying that.

It is definitely not getting any kind of optimization, and after playing the PC ports it's obvious that the same issues are also impacting PC ports. MS needs to take an active role like Sony did with the Cell and send their engineers to all of these third-party studios to ensure games ship with the same performance as the PS5 versions. It took Sony a couple of years, but by 2009 most games were performing roughly identically on both the PS3 and 360. All thanks to those nameless engineers who went around teaching people about the Cell architecture.

It's time for MS to do the same thing.
 

Zoej

Member
I agree, the difference between the machines is small, like 17% in compute power, so I doubt most people will notice much difference apart from pixel counters blowing up 400x screenshots.
I'd say Sony's clock-speed advantage does well scaling up last-gen engines: the same 36 CU setup as the PS4 Pro, with big upgrades in memory and CPU, gives devs an easy upgrade path from the PS4 Pro.
That doesn't quite hold true for Xbox coming from the One X, with its very different memory setup.
As we get into UE5 heavily and games become more compute-heavy, I expect Series X to come into its own, as shown by the games I've mentioned from PC-centric developers already.
Still waiting, huh?
 

Soosa

Banned
They are pretty similar on graphics.

The PS5 has had more impressive graphics in my opinion, but most games are still last-gen poop and there are no big differences.

PS5 loads faster and has the superior controller; XSX is quiet (coil whine on my PS5), and the cheap VPN trick for Game Pass is nice.

But they both complement each other.

30 FPS games are unplayable on OLED (micro lag), so I use performance mode; if there are differences in RT/quality mode, I don't know.

But getting seasick from 30 FPS isn't an option.
 
Oh boy, a teraflops thread, just like the good ol' days 😂 I remember how optimistic many were over paper specs and Phil's juicy ray-traced specular pecs and buns sporting sub-surface scattering goodness. I remember many claiming Cerny was simply spouting marketing mumbo-jumbo. I have a 3060 and I'm not getting the performance I expect based on the specs; it's almost like things are more complicated than theoretical paper wank. How is Phil anyway? I heard things are going swell at his camp. 😂 C'mon Starfield!
 

skit_data

Member
Threads like this make me miss the next-gen speculation thread.
The expectations of huge disparities in res/framerate due to the TFLOP difference.
The variable overclock FUD.
The Cerny secret sauce vs VRS.
The expectations of an Infinity Cache on the PS5 GPU.
The magic Velocity Architecture vs the magic SSD.
The overemphasis on wording in regards to 4k at 60fps vs 4k at up to 60fps (that was my personal favourite, the lengths people went to).
Everything Penello said.
Tom Warren’s warrin’.
”The packaging wars” and its casualties.
Melting consoles on both sides.
Meltdowns on both sides.

Good times.
 
Threads like this make me miss the next-gen speculation thread.
The expectations of huge disparities in res/framerate due to the TFLOP difference.
The variable overclock FUD.
The Cerny secret sauce vs VRS.
The expectations of an Infinity Cache on the PS5 GPU.
The magic Velocity Architecture vs the magic SSD.
The overemphasis on wording in regards to 4k at 60fps vs 4k at up to 60fps (that was my personal favourite, the lengths people went to).
Everything Penello said.
Tom Warren’s warrin’.
”The packaging wars” and its casualties.
Melting consoles on both sides.
Meltdowns on both sides.

Good times.
I used to love to read those threads.
We had the fake insiders writing in riddles: one got banned, was unbanned when he got the XSX right, then banned again when he got the PS5 wrong.
 

skit_data

Member
I used to love to read those threads.
We had the fake insiders writing in riddles: one got banned, was unbanned when he got the XSX right, then banned again when he got the PS5 wrong.
Wow, that’s an impressive feat! I can only imagine the mods’ frustration.

Edit: Thinking about it, I sort of recognize the part about writing in riddles. Do you remember the user’s name?
 

Max_Po

Banned
I moved on from this TFLOP BS within the first 12 months.

They both reconstruct the image to 4K, and the PS5 always has steady FPS...

When in doubt, pick Cerny!!!

Oh yea... I remember fanboys saying that Cerny panicked after the 12 TFLOP reveal and overclocked the 8.9 TFLOP PS5 to make it come closer to the Xbox... rofl
 

Elog

Member
That said, I have seen enough XSX games perform better than the PS5 to know that this is mostly a driver issue. MS has found themselves in the same place Sony was in thanks to the Cell almost two decades ago. I can see the higher-clocked PS5 GPU keeping up with the 18% faster GPU in some games that might favor higher clocks, but to see the XSX perform up to 20% WORSE makes no sense. That would mean the 12 TFLOPs XSX is performing like an 8 TFLOPs GPU, and I'm not buying that.
I know you write 'mostly' but I just want to say that I see that as an over-simplification.

For a long time, TFLOPS was the metric that determined FPS. The geometrical complexity of scenes was low and all textures were available in VRAM all the time. I think many here still do not realize that this has largely changed. Let me go through an illustrative (and very much simplified) example (covering only the impact of increased geometrical complexity and post-processing tasks):

To calculate something, hardware needs to fetch the raw data from memory, do the calculation in a low-level cache environment, and then move the data back to memory. We can simplify this with a schematic:

Data in memory -> (Move 1) -> Calculation (this is where TFLOPs matter) -> (Move 2) -> Data back in memory

Let's assume that Move 1 and Move 2 take 10 ns each per vertex ('I/O') and that a frame requires 0.01 TFLOPs of computational power. For a single-CU GPU with 10 TFLOPs, the following output is then produced as FPS:

10 vertices result in roughly 1,000 FPS (TFLOP limited)
100 vertices result in roughly 1,000 FPS (TFLOP limited)
1,000 vertices result in roughly 1,000 FPS (TFLOP limited)
10,000 vertices result in roughly 1,000 FPS (TFLOP limited)
100,000 vertices result in roughly 500 FPS (I/O limited)

If you increase the CU count at the same TFLOPs, the I/O limitation kicks in even earlier, since it increases the number of moves.

Almost without exception, the new crop of games is I/O-limited in terms of FPS due to geometrical complexity (which drives up the vertex count), and your CUs are almost never at 100% capacity since they are waiting for data. This was Cerny's point in his presentation.
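Here is the same toy model in code, so the crossover is easy to play with (the 20 ns per-vertex move cost and the 0.01 TFLOP per frame are the illustrative figures from above, not real hardware values):

```python
# Frame time is whichever is larger: compute time or I/O time.
TFLOPS = 10.0                  # single-CU GPU from the example
FLOP_PER_FRAME = 0.01e12       # 0.01 TFLOP of shading work per frame
IO_NS_PER_VERTEX = 20.0        # Move 1 + Move 2 at 10 ns each

def fps(vertices: int):
    compute_s = FLOP_PER_FRAME / (TFLOPS * 1e12)   # 1 ms, independent of geometry
    io_s = vertices * IO_NS_PER_VERTEX * 1e-9      # grows with scene complexity
    limit = "I/O limited" if io_s > compute_s else "TFLOP limited"
    return 1.0 / max(compute_s, io_s), limit

for v in (10, 100, 1_000, 10_000, 100_000):
    rate, limit = fps(v)
    print(f"{v:>7} vertices -> {rate:,.0f} FPS ({limit})")
```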

What determines I/O? Latency and memory frequency of the GPU cache environment, how easily the hardware and the API can evict old data from the cache pool, et cetera.

The XSX has a TFLOP advantage over the PS5. The PS5 has an I/O advantage (higher cache frequency and a lower CU count) over the XSX. Most people really miss how I/O-limited games are and have become.

(Then we have the increased use of a high number of high-resolution textures, which results in VRAM being used dynamically during scenes, but that is a separate story.)
 
I don't even have a PS5, but I recognised the smart decisions Cerny employed and saw right through the FUD storm. Phil does have immaculate buns though; he has texture-space diffusion to die for. Always nice to fall back on when let down by Redfall. 😂
 

azertydu91

Hard to Kill
Wow, that’s an impressive feat! I can only imagine the mods’ frustration.

Edit: Thinking about it, I sort of recognize the part about writing in riddles. Do you remember the user’s name?
The two that I can remember are osirisblack, or something like that, but he has since been banned, and the other was @odium, now called Gavon West, but he has been banned too for being an Xbox warrior.
 

Gaiff

SBI’s Resident Gaslighter
They are losing $200 on the Series X; it was definitely not designed to be efficiently produced.
It's a bunch of bullshit to appeal to regulators, to make their position in the industry seem more precarious than it really is. They probably include the cost of R&D in that. If anyone really believes that Sony is turning a slight profit while Microsoft loses $200 per Xbox, I have a bridge to sell them.
 
All this FUD talk just had me remembering this nugget from one Andrew Goossen, an Xbox system architect, during the pre-launch interviews, which shows you are all wrong: XSX doesn't really have 12 TF at all and is actually up to 25 TF. Checkmate, 3070 :messenger_smirking:

Without hardware acceleration, this work could have been done in the shaders but would have consumed over 13 TFLOPs alone. For the Xbox Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing.

Xbox Series X goes even further than the PC standard in offering more power and flexibility to developers. In grand console tradition, we also support direct to the metal programming including support for offline BVH construction and optimisation. With these building blocks, we expect ray tracing to be an area of incredible visuals and great innovation by developers over the course of the console's lifetime.
 

Mr.Phoenix

Member
MS set the 12 and 4 numbers without regard to the PS5. Let's be honest here: they are both using AMD Zen CPUs and the same AMD GPU tech, so the power number was always going to come down to which console put in the most CUs at the best speed. There was no secret sauce in the power numbers.

Sony's approach to their console was set around backwards compatibility. The way they did it was around speed profile settings and CU count. The PS4 had 16 CUs. The Pro had 32 in a butterfly design where, when someone put in a PS4 game, half the GPU would shut down and the GPU speed would match the PS4 GPU speed.
Similarly, the PS5 kept the same CU count, and when a PS4 or Pro game is used the GPU drops down to the GPU speed profile of that console.
In keeping with Sony's prior way of keeping BC, the next PS will have 72 CUs in a butterfly design.

The Xbox went to a totally different GPU setup from the XO, a totally new development kit (from the XDK to the GDK), and a new API in DX12U, with new tools and libraries which are nowhere near as mature as the PlayStation ones.

As for the slight differences between the two consoles which might add efficiencies or performance: there are the cache scrubbers on the PS5, which don't add to performance to any degree, and which could have been adopted by AMD, but they chose not to.
Primitive Shaders are basically the same as Mesh Shaders, and while no game on Xbox has used Mesh Shaders, some Sony first-party games have used Primitive Shaders.
The GE is AMD tech and not Sony's, regardless of what people may wish to think.
On the XSX there is Sampler Feedback Streaming, which absolutely will help with RAM usage and optimisation, but which, again, hasn't been used yet. VRS has been used, and as VRS 2.0 is adopted further it will help with performance. But don't forget, VRS doesn't add any power to a console; it's just robbing Peter to pay Paul.
There is also DirectML and lower-precision ML in the XSX, which hasn't been used, may never be used, and might not do anything of value if it is used.

Software is a far bigger influence on performance this gen than the hardware is, and as it stands, Sony has by far the better software than MS at this point.
MS will develop DX12 further, so who knows what that will entail, or when.
Just some things to clear up.

AMD didn't use cache scrubbers, but not for the reason you are implying here. The cache scrubbers are controlled by the PS5's coherency engine, which is part of the PS5's I/O complex. No AMD hardware, be it CPU, GPU or APU, has that I/O complex, and hence none has cache scrubbers. Their function is to find inactive data and evict it from cache quickly (funny enough, the exact same thing Sampler Feedback tries to accomplish, but on VRAM instead of cache), freeing up space for actually usable data, which in turn makes the APU more efficient and also happens to help with RAM usage (obviously). And if you know what cache is or why it is important in any chip, then you cannot say that it has no performance benefits.

AMD would circumvent this by simply adding more cache (Infinity Cache), but doing so wouldn't be cost-effective for a console. For AMD, using Infinity Cache, even though it means they need a bigger chip, offers more benefits for their use case, as their GPUs and CPUs do not have to control data I/O the way the PS5's APU does.

Sampler Feedback allows data to be pulled from the SSD as quickly as possible, as it tries to guess what data may be needed soon. It's basically trying to solve a problem that the PS5 doesn't have, because the PS5 has an overall faster data throughput from SSD to RAM.
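For the shape of the idea (a concept sketch, not the actual DirectX API): the GPU records which texture tiles the shaders actually touched last frame, and the streamer then loads only those tiles instead of whole mip levels.

```python
# Sketch of sampler-feedback-style streaming: shaders log the tiles they
# sample; between frames, only the missing tiles get queued for an SSD read.
from collections import deque

TILE = 64                 # texels per tile edge (illustrative)
feedback = set()          # tiles touched during rendering
resident = set()          # tiles currently in VRAM
load_queue = deque()

def sample(u, v, mip):
    """Called by the 'shader': record the (tile, mip) it needed."""
    feedback.add((int(u // TILE), int(v // TILE), mip))

def stream_step():
    """Between frames: request only what was sampled but is not resident."""
    for tile in feedback - resident:
        load_queue.append(tile)              # would become an async SSD read
    feedback.clear()
    while load_queue:
        resident.add(load_queue.popleft())   # pretend the read completed

# one frame touches a handful of tiles out of a huge texture
for u, v in [(10, 20), (700, 20), (701, 21)]:
    sample(u, v, mip=0)
stream_step()
print("resident tiles:", sorted(resident))   # 2 tiles loaded, not the whole mip
```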

Also, I don't know if it's intentional, but your post reads like that thing people tend to do where they dismiss or disregard the pros of one thing and then try and talk up the pros of another. E.g., saying Primitive Shaders are the same as Mesh Shaders; they are not. Mesh Shaders are a fixed-function geometry pipeline that does... yeah, mesh shading. Primitive Shaders are basically programmable geometry, one of whose programs could be mesh shading. Primitive Shaders are not new, just not the same as Mesh Shaders.
 
Just some things to clear up.

AMD didn't use cache scrubbers, but not for the reason you are implying here. The cache scrubbers are controlled by the PS5's coherency engine, which is part of the PS5's I/O complex. No AMD hardware, be it CPU, GPU or APU, has that I/O complex, and hence none has cache scrubbers. Their function is to find inactive data and evict it from cache quickly (funny enough, the exact same thing Sampler Feedback tries to accomplish, but on VRAM instead of cache), freeing up space for actually usable data, which in turn makes the APU more efficient and also happens to help with RAM usage (obviously). And if you know what cache is or why it is important in any chip, then you cannot say that it has no performance benefits.

AMD would circumvent this by simply adding more cache (Infinity Cache), but doing so wouldn't be cost-effective for a console. For AMD, using Infinity Cache, even though it means they need a bigger chip, offers more benefits for their use case, as their GPUs and CPUs do not have to control data I/O the way the PS5's APU does.

Sampler Feedback allows data to be pulled from the SSD as quickly as possible, as it tries to guess what data may be needed soon. It's basically trying to solve a problem that the PS5 doesn't have, because the PS5 has an overall faster data throughput from SSD to RAM.

Also, I don't know if it's intentional, but your post reads like that thing people tend to do where they dismiss or disregard the pros of one thing and then try and talk up the pros of another. E.g., saying Primitive Shaders are the same as Mesh Shaders; they are not. Mesh Shaders are a fixed-function geometry pipeline that does... yeah, mesh shading. Primitive Shaders are basically programmable geometry, one of whose programs could be mesh shading. Primitive Shaders are not new, just not the same as Mesh Shaders.
The cache scrubbers have a couple of purposes.
One is to help manage the temperature of the console's solid-state drive (SSD) by clearing out any old or unnecessary data from its cache. By doing this, the cache scrubbers can reduce the workload on the SSD, which can help to keep its temperature within safe operating limits.

Additionally, the cache scrubbers can also help to optimize system performance by ensuring that the cache contains the most frequently used data. As the PS5 runs programs and games, it stores frequently used data in the cache to make it faster to access in the future. However, if the cache becomes too full or contains outdated data, it can actually slow down system performance. The cache scrubbers help to prevent this by continuously monitoring and clearing out old data from the cache, which helps to maintain optimal system performance.

So they would have an application outside of the SSD, and AMD could have adopted them if they wished, as this was part of the agreement Sony had with AMD.

The point is that the cache scrubbers don't add any real performance to the console.

The XSX has Sampler Feedback Streaming, not just Sampler feedback. They are not the exact same thing.

As for Primitive vs Mesh Shaders, they were AMD's and Nvidia's competing solutions for a new geometry pipeline. They both had the same end goal but went about it a bit differently. AMD decided to adopt Mesh Shaders to standardise the process and make it easier for developers.

We don't need to theorise about it; this interview from AMD goes into it way beyond our knowledge.
Interestingly, AMD says that Sony went with Primitive Shaders, while the Xbox is capable of doing both Primitive and Mesh Shaders.
Also of note is that Sony's first-party studios have already used Primitive Shaders in their games, while none have used Mesh Shaders on Xbox.

https://www-4gamer-net.translate.go...=ja&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
 
As for Primitive vs Mesh Shaders, they were AMD's and Nvidia's competing solutions for a new geometry pipeline. They both had the same end goal but went about it a bit differently. AMD decided to adopt Mesh Shaders to standardise the process and make it easier for developers.

We don't need to theorise about it; this interview from AMD goes into it way beyond our knowledge.
Interestingly, AMD says that Sony went with Primitive Shaders, while the Xbox is capable of doing both Primitive and Mesh Shaders.
Also of note is that Sony's first-party studios have already used Primitive Shaders in their games, while none have used Mesh Shaders on Xbox.

https://www-4gamer-net.translate.go...=ja&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
Well, yes and no. This is tricky. From AMD's recent interview, "Primitive Shaders" is both the name of the hardware units (in all AMD GPUs since Vega) and, allegedly, of Sony's own API using this hardware (again, supposedly working very similarly to Mesh Shaders, with the purpose of greatly accelerating triangle rendering).

On Xbox Series, "Mesh Shaders" is a DirectX API (shamelessly copied from Nvidia's own API) that uses the "Primitive Shaders" hardware units in AMD GPUs.

There are no "Mesh Shaders hardware units" in RDNA2 GPUs. All RDNA GPUs have the same thing the PS5 has: Primitive Shaders, which are already used in some PS5 titles. AFAIK the first game running on Xbox Series using the Primitive Shaders hardware units (via the Mesh Shaders API) is Fortnite (because of Nanite).
 

Mr.Phoenix

Member
The cache scrubbers have a couple of purposes.
One is to help manage the temperature of the console's solid-state drive (SSD) by clearing out any old or unnecessary data from its cache. By doing this, the cache scrubbers can reduce the workload on the SSD, which can help to keep its temperature within safe operating limits.

Additionally, the cache scrubbers can also help to optimize system performance by ensuring that the cache contains the most frequently used data. As the PS5 runs programs and games, it stores frequently used data in the cache to make it faster to access in the future. However, if the cache becomes too full or contains outdated data, it can actually slow down system performance. The cache scrubbers help to prevent this by continuously monitoring and clearing out old data from the cache, which helps to maintain optimal system performance.

So they would have an application outside of the SSD, and AMD could have adopted them if they wished, as this was part of the agreement Sony had with AMD.

The point is that the cache scrubbers don't add any real performance to the console.
This is the part I'm afraid I have to disagree with. You cannot say this. If you have something that makes the overall throughput of your APU more efficient and saves you time in replacing data, etc., you cannot say that it is not adding any performance. If I can start rendering something 0.5 ms sooner because I spent less time scrubbing through the cache and ultimately having to pull data from RAM, that is performance gained.

And again, AMD does not have to use this, because the benefits are not things that they have to worry about. They have Infinity Cache, which does way more for them than having cache scrubbers would, and that also means they don't have to worry as much about cache congestion, since they have a lot of it. And as for the SSD benefits, they leave stuff like that entirely to the controllers found in the SSD.

You focus on cache scrubbers reducing SSD activity as if that were the primary function. It's not; it's just a beneficial by-product. The primary function is to reduce time spent going through the cache and the delays of having to pull data from RAM to feed the cache. It allows the APU to literally spend more of its time doing and less of it waiting.
The XSX has Sampler Feedback Streaming, not just Sampler feedback. They are not the exact same thing.
Sampler Feedback deals with data at the VRAM and SSD level, not at the cache level. Its primary function is to make data acquisition from the SSD more efficient. And that is the point I was making: that is not a problem the PS5 needs to worry about, because the PS5 (even though it still has the coherency engines that do something similar for it) has twice the data throughput of the Xbox.
As for Primitive vs Mesh Shaders, they were AMD's and Nvidia's competing solutions for a new geometry pipeline. They both had the same end goal but went about it a bit differently. AMD decided to adopt Mesh Shaders to standardise the process and make it easier for developers.

We don't need to theorise about it; this interview from AMD goes into it way beyond our knowledge.
Interestingly, AMD says that Sony went with Primitive Shaders, while the Xbox is capable of doing both Primitive and Mesh Shaders.
Also of note is that Sony's first-party studios have already used Primitive Shaders in their games, while none have used Mesh Shaders on Xbox.

https://www-4gamer-net.translate.go...=ja&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
Yes, they are very similar: two different ways to arrive at solving a particular problem. But I think you are getting it backward. Mesh Shaders are the Primitive Shaders in the RDNA spec repurposed to specifically do mesh shading. Primitive Shaders can do more than mesh shading, albeit they are more complicated to use.
 

RaySoft

Member
Name one game that saturates all CUs at all times.
TF is just a ballpark way of distinguishing theoretical power. Other specs in the system actually matter more.
 

SlimySnake

Flashless at the Golden Globes
Yikes.

Sounds hard to believe. Source?
MS told the CMA this. I am sure it will come up when you Google it. They said $100-200, so I'm assuming $100 on Series S and $200 on Series X.

It's a bunch of bullshit to appeal to regulators to make their position in the industry seem more precarious than it really is. They probably include the cost of R&D in that. If anyone really believes that Sony is turning in a slight profit while Microsoft loses $200 per Xbox, I have a bridge to sell them.
I mean, if we are going to say they are lying to regulators, then sure.

What we know is that they have never made a profit on Xbox consoles. This was said at a court hearing under oath, so the person could be charged with perjury. Not even the $500 X1X, which came out a year after the $399 Pro, using Polaris GPUs that had already been out for a year and a half by then. Why was it $100 more if it only had 4 extra CUs and slightly faster clocks? The extra 4GB of faster RAM was probably the biggest cost increase, but GDDR5 RAM back then wasn't really that expensive. I remember Penello on Era saying that the X1X vapor chamber cooling cost $30 and being shocked that Sony's PS5 solution was just a few dollars and was considered too expensive by Sony.

MS for whatever reason just can't make cheaper consoles like Sony. This goes back to 2013, when Sony was the one with a 50% faster GPU and GDDR5 RAM, and yet still managed to come in $100 cheaper.

Is it really that hard to believe that a 20% bigger chip, when chip costs have almost doubled since last gen, plus a really expensive vapor chamber cooling solution + faster VRAM + two motherboards would cost an extra $200, when the last two Sony consoles came in $100 cheaper?

BTW, I remember how Sony used to manufacture their own consoles. Not all of them, but they had built themselves a new factory to mass-produce millions of PS4s. That would also cut assembly costs.
 

Gaiff

SBI’s Resident Gaslighter
MS told the CMA this. I am sure it will come up when you Google it. They said $100-200, so I'm assuming $100 on Series S and $200 on Series X.


I mean, if we are going to say they are lying to regulators, then sure.
Oh, they aren't lying, just bending the truth. If you include the R&D costs and a bunch of other shit, I'm sure Sony is also selling at a loss. Microsoft is trying to look weak before the CMA and will do whatever they can to make their situation seem more precarious than it really is, so they can get the ruling in their favor.
What we know is that they have never made a profit on Xbox consoles. This was said at a court hearing under oath, so the person could be charged with perjury. Not even the $500 X1X, which came out a year after the $399 Pro, using Polaris GPUs that had already been out for a year and a half by then. Why was it $100 more if it only had 4 extra CUs and slightly faster clocks? The extra 4GB of faster RAM was probably the biggest cost increase, but GDDR5 RAM back then wasn't really that expensive. I remember Penello on Era saying that the X1X vapor chamber cooling cost $30 and being shocked that Sony's PS5 solution was just a few dollars and was considered too expensive by Sony.

MS for whatever reason just can't make cheaper consoles like Sony. This goes back to 2013, when Sony was the one with a 50% faster GPU and GDDR5 RAM, and yet still managed to come in $100 cheaper.

Is it really that hard to believe that a 20% bigger chip, when chip costs have almost doubled since last gen, plus a really expensive vapor chamber cooling solution + faster VRAM + two motherboards would cost an extra $200, when the last two Sony consoles came in $100 cheaper?

BTW, I remember how Sony used to manufacture their own consoles. Not all of them, but they had built themselves a new factory to mass-produce millions of PS4s. That would also cut assembly costs.
The Xbox One was $100 more because of the Kinect peripheral. Otherwise, it was the same price as the PS4 without it, giving further credence that they're full of shit.

And yes, it's really unbelievable that the SX would cost $200 more. That Microsoft loses $50, $60 while Sony turns a profit? Yeah, I'd believe you. That a Series S costs more to manufacture than a PS5? Yeah, no.
 

Elog

Member
Just to be clear: Mesh Shaders are just a way to reduce the vertex/primitive count that goes through the rest of the pipeline, by grouping chunks of vertices/primitives and selecting what is processed and what is not at a crude geometry level, instead of having to make decisions for each individual vertex/primitive. There is an almost infinite number of ways to do this. That is why the XSX marketing was misleading, since both the XSX and the PS5 allow for the same function; the XSX chose to go with the AMD prespecified process while the PS5 did their own variant.

Same function, different names. The XSX marketing that implied the PS5 did not have this functionality was simply a lie. Is there an advantage for one of them over the other? No clue.
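A toy sketch of that grouping idea (pure illustration, no GPU API involved): chunk triangles into 'meshlets', precompute a bound per chunk, and accept or reject a whole chunk with one cheap test instead of deciding per vertex.

```python
# Group triangles, give each group a bounding sphere, cull whole groups at once.
import math

def make_meshlets(vertices, tris, group_size=64):
    """Chunk triangles into groups and precompute a bounding sphere per group."""
    meshlets = []
    for i in range(0, len(tris), group_size):
        group = tris[i:i + group_size]
        pts = [vertices[j] for t in group for j in t]
        center = tuple(sum(p[k] for p in pts) / len(pts) for k in range(3))
        radius = max(math.dist(center, p) for p in pts)
        meshlets.append((center, radius, group))
    return meshlets

def visible(center, radius, plane):
    """Sphere vs one frustum plane: keep if not fully on the negative side."""
    n, d = plane
    return n[0]*center[0] + n[1]*center[1] + n[2]*center[2] + d >= -radius

def cull(meshlets, plane):
    # one test decides the fate of the whole group of triangles
    return [m for m in meshlets if visible(m[0], m[1], plane)]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (50, 50, 50), (51, 50, 50), (50, 51, 50)]
tris = [(0, 1, 2), (3, 4, 5)]
plane = ((0.0, 0.0, 1.0), -10.0)     # keeps geometry with z >= ~10
ms = make_meshlets(verts, tris, group_size=1)
print(len(cull(ms, plane)), "of", len(ms), "meshlets survive")
```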
 

Riky

$MSFT
The difference between Mesh Shaders and Primitive Shaders is on the developer side, as Mesh Shaders give more granular control of the pipeline. It remains to be seen if that gives any performance advantage, but it's helpful to developers.
 
The difference between Mesh Shaders and Primitive Shaders is on the developer side, as Mesh Shaders give more granular control of the pipeline. It remains to be seen if that gives any performance advantage, but it's helpful to developers.
That still remains to be seen. Usually Sony gives more granular control to developers than the DirectX equivalent does. We know that's true of their respective RT APIs.
 

Topher

Identifies as young
Just to be clear: Mesh Shaders are just a way to reduce the vertex/primitive count that goes through the rest of the pipeline, by grouping chunks of vertices/primitives and selecting what is processed and what is not at a crude geometry level, instead of having to make decisions for each individual vertex/primitive. There is an almost infinite number of ways to do this. That is why the XSX marketing was misleading, since both the XSX and the PS5 allow for the same function; the XSX chose to go with the AMD prespecified process while the PS5 did their own variant.

Same function, different names. The XSX marketing that implied the PS5 did not have this functionality was simply a lie. Is there an advantage for one of them over the other? No clue.

I'm a non-gaming developer, but what you are describing sounds a lot like the various data structures that are common in development: multiple ways of doing the same thing, whether it is a basic array, a list, a stack, a queue, and on and on. Each is a variation of the same structures in memory. A list, for example, might have built-in functions for sorting, but that doesn't mean a vanilla array cannot be sorted.

Interesting stuff. Thanks for the explanation.
 

Fafalada

Fafracer forever
One is to help manage the temperature of the console's solid-state drive (SSD) by clearing out any old or unnecessary data from its cache. By doing this, the cache scrubbers can reduce the workload on the SSD, which can help to keep its temperature within safe operating limits.

Additionally, the cache scrubbers can also help to optimize system performance by ensuring that the cache contains the most frequently used data. As the PS5 runs programs and games, it stores frequently used data in the cache to make it faster to access in the future. However, if the cache becomes too full or contains outdated data, it can actually slow down system performance. The cache scrubbers help to prevent this by continuously monitoring and clearing out old data from the cache, which helps to maintain optimal system performance.

I'm not gonna go down the rabbit hole of explaining how caches work here, but the PS5's 'scrubbers' basically have one purpose: to reduce pipeline stalls caused by cache flushes and reloads. So they pretty much exist only to improve performance.
On PC, such granular control over cache behaviour is impractical, since there are infinite configurations to deal with, but on consoles it can be a nice win.
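A toy illustration of that point (made-up numbers): when a streamed load overwrites data some cache lines still point at, a scrubber invalidates just those lines, while the safe fallback without one is a full flush and the reload stalls that follow.

```python
# Compare the stall cost of a full cache flush vs targeted scrubbing.
RELOAD_COST = 100                # cycles to refill one line from memory

def full_flush(live_lines):
    # every live line must be reloaded on next use
    return len(live_lines) * RELOAD_COST

def scrub(live_lines, stale_lines):
    # only lines whose backing data actually changed are evicted
    return len(live_lines & stale_lines) * RELOAD_COST

live = set(range(800))           # 800 cache lines hold useful data
stale = set(range(760, 792))     # 32 lines were overwritten by a new SSD load

print("full flush stall cycles:", full_flush(live))   # 80000
print("scrubbed stall cycles:  ", scrub(live, stale)) # 3200
```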
 
Is Callisto Protocol still running better on PS5, or has that been patched? What about Hogwarts? Did The Touryst get 8K 60 FPS on Series X? Jesus, I've been away a while. 😂 I only play Alien: Isolation on ultra, although I'm interested in the Dead Space remake and Callisto.
 

Crayon

Member
Thread title reminds me of when people used to ask if you have the internet.

Oh, they aren't lying, just bending the truth. If you include the R&D costs and a bunch of other shit, I'm sure Sony is also selling at a loss. Microsoft is trying to look weak before the CMA and will do whatever they can to make their situation seem more precarious than it really is, so they can get the ruling in their favor.

The Xbox One was $100 more because of the Kinect peripheral. Otherwise, it was the same price as the PS4 without it, giving further credence that they're full of shit.

And yes, it's really unbelievable that the SX would cost $200 more. That Microsoft loses $50, $60 while Sony turns a profit? Yeah, I'd believe you. That a Series S costs more to manufacture than a PS5? Yeah, no.

If MS was rolling R&D and other costs in, Sony would have done the same thing. They were both crying poormouth. There's probably someone who was following the deal more closely and could help clear it up, but nobody's chimed in yet, so maybe not.
 

Riky

$MSFT
"Excited to see Tier 2 Variable Rate Shading is part of this release as well! We worked closely with Epic to carry over our VRS implementation from Gears 5 and it has been a huge win for us with UE5."



Right on time.
 

DenchDeckard

Moderated wildly
"Excited to see Tier 2 Variable Rate Shading is part of this release as well! We worked closely with Epic to carry over our VRS implementation from Gears 5 and it has been a huge win for us with UE5."



Right on time.

Was this someone from the Coalition?

They really are some of the best Unreal devs out there.
 