
Microsoft Xbox Series X's AMD Architecture Deep Dive at Hot Chips 2020

Allandor

Member
LOL, you do realise that MS has the higher clock on the chip, at 3.8 GHz?

The only reason GPUs are clocked lower than CPUs is that they have a longer logic pipeline, and also because they take up a large percentage of the die, which drives the cooling requirements.

Do you think 2.23 GHz is high, or 1.825 GHz is low? What do you think PC cards for RDNA2 will come in at :messenger_sunglasses:

People talking about yields and how final test affects yield parameters makes for an amusing read, though you were correct about yields over time.
You are mixing up CPU frequencies and GPU frequencies. Just take a look at the die shot. The CPU logic is quite small, so higher frequencies are much easier to reach. GPUs are much more complicated because you have many, many small cores; the result is that you reach lower frequencies. GPUs therefore have much higher power requirements. Higher frequencies -> much higher power requirements. E.g. a GPU can easily draw a constant 300 W (if the chip is big enough to spread the heat). A CPU would melt at that constant power draw, because CPUs are much smaller nowadays.

Well you're missing the fact that Sony has its own factories. For example the one in Japan churning out a PS4 every 30 seconds. If they start repurposing that for PS5?
Sony has no factories for those chips. At best they have assembly factories where they stick the boards into a box, nothing more. They still rely (like everyone else) on buying all the components that go inside the box.
 
Last edited:

geordiemp

Member
You are mixing up CPU frequencies and GPU frequencies. Just take a look at the die shot. The CPU logic is quite small, so higher frequencies are much easier to reach. GPUs are much more complicated because you have many, many small cores; the result is that you reach lower frequencies.

I am not mixing anything up. You stated yields vs frequency for the GPU, trying to imply XSX yields would be better as its GPU is 1.825 GHz... vs 2.23 GHz for the PS5.

I was pointing out that the CPU on XSX goes to 3.8 GHz, and a FinFET gate is the same on the die, therefore frequencies of around 2 GHz have NOTHING to do with yields; it's heat capacity due to the GPU area of the die. And if the PS5 can handle that heat by design, there is no big yield concern.

The main yield impact for EUV litho will be particulate and die-size related (the number of dies per 300 mm wafer) rather than parametric.
 
Last edited:

YoodlePro

Member
You are mixing up CPU frequencies and GPU frequencies. Just take a look at the die shot. The CPU logic is quite small, so higher frequencies are much easier to reach. GPUs are much more complicated because you have many, many small cores; the result is that you reach lower frequencies. GPUs therefore have much higher power requirements. Higher frequencies -> much higher power requirements. E.g. a GPU can easily draw a constant 300 W (if the chip is big enough to spread the heat). A CPU would melt at that constant power draw, because CPUs are much smaller nowadays.


Sony has no factories for those chips. At best they have assembly factories where they stick the boards into a box, nothing more. They still rely (like everyone else) on buying all the components that go inside the box.
Aye, but do we know where the bottleneck is? Is it the chip or the assembly? We don't really know.
But out of the two MS will 100% struggle to produce as many units as Sony.
 

Bo_Hazem

Banned
It's a good effect and strangely, I can experience it right now.

Bo, I don't understand, how is this audio magic possible without Sony's world changing Tempest Engine?

Because it's pre-recorded. Games are interactive, and that's why they're pretty expensive calculation-wise. You can watch 2080 Ti ultra 4K settings gameplay on your smart TV on YouTube, but that doesn't mean you can play it.
 
Last edited:

M1chl

Currently Gif and Meme Champion
I am not mixing anything up. You stated yields vs frequency for the GPU, trying to imply XSX yields would be better as its GPU is 1.825 GHz... vs 2.23 GHz for the PS5.

I was pointing out that the CPU on XSX goes to 3.8 GHz, and a FinFET gate is the same on the die, therefore frequencies of around 2 GHz have NOTHING to do with yields; it's heat capacity due to the GPU area of the die. And if the PS5 can handle that heat by design, there is no big yield concern.

The main yield impact for EUV litho will be particulate and die-size related (the number of dies per 300 mm wafer) rather than parametric.
I gotta say that taking clocks into account with yields is misguided, because there is only one type of chip for the console. There's no XSX APU at 1 GHz, 500 MHz, etc. Besides, the APU contains differently clocked parts anyway. So the PS5 has an inherent advantage, because both of them are bound by that CPU cluster there.

And maybe Sony lowered the clocks of the CPU part to be more tolerant of not-so-great yields.


Because it's pre-recorded. Games are interactive, and that's why they're pretty expensive calculation-wise. You can watch 2080 Ti ultra 4K settings gameplay on your smart TV on YouTube, but that doesn't mean you can play it.
Good point there chief.
 
Last edited:
"Furthermore If we go down the rabbit hole of CU vs MHZ, recall that increasing the CU count to increase TFLOP is not a linear increase either. Compare the 2080 vs the 2080ti, 50% more CUs for 17% extra performance. For AMD, just compare the R9 390X vs the Fury X, 41% increased CU count for 23% increased performance. This is partially related to a concept in computing called Amdahl's law. You can read more about it here. Basically the higher the parallelization of a workload (such as thousands of GPU CUs) the harder it is to extract perfect performance from it.

The 18% number that is being thrown around for the difference in performance in the XSX and PS5 is purely theoretical and is based on the raw TFLOP numbers. In reality, the difference might even be smaller then that. The PS5 GPU does have advantages in Pixel fill rate and triangle culling as well."

T H I S
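As a rough illustration of the Amdahl's law point in that quote, here is a minimal Python sketch. The 10% non-parallel share is a made-up, hypothetical figure (real workloads vary widely); the CU counts and clocks are the publicly quoted console specs.

```python
# Minimal Amdahl's law sketch: adding CUs only speeds up the part of the
# frame that actually parallelizes across CUs, so the ideal CU-count
# ratio overstates the real gain.

def amdahl_speedup(parallel_fraction: float, scale: float) -> float:
    """Speedup when only the parallel fraction benefits from 'scale'."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / scale)

P = 0.90                    # assumed parallel fraction (hypothetical)
cu_ratio = 52 / 36          # XSX vs PS5 CU count, ~1.44x
clk_ratio = 2.23 / 1.825    # PS5 vs XSX GPU clock, ~1.22x

print(f"Ideal CU-count scaling:    {cu_ratio:.2f}x")
print(f"Amdahl-limited CU scaling: {amdahl_speedup(P, cu_ratio):.2f}x")
print(f"Clock ratio (scales the whole GPU): {clk_ratio:.2f}x")
```

Under that assumed 10% serial share, the 44% CU advantage shrinks to roughly 38%, which is the shape of the argument being quoted; the exact numbers depend entirely on the workload.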
 

Allandor

Member
I am not mixing anything up, you stated yields vs frequency for GPU trying to ply XSX yields would be better as its GPU is 1.825 GHz....vs 2.23 Ghz Ps5

I was pointing out the CPU on XSX goes to 3.8 Ghz and a FinFET gate is the same on the die and therefor frequencies of around 2 GHz have NOTHING to do with yields, its heat capacity due to the GPU area of the die. And if Ps5 can handle that heat by design, there is no big yield concern.

The main yield impact will be, for EUV Litho, Particulate and die size related and number die per 300 mm wafer rather than parametric.
:messenger_dizzy:
Really... the CPU frequencies have almost nothing to do with the GPU frequencies or the yields.
Yes, some chips might not be usable because the CPU doesn't reach the higher frequency. But this is Zen 2, which easily reaches 3.8 GHz on any Zen 2 CPU out there. So I don't think we have yield problems because of the CPU part.
The GPU is much more complex and bigger than the CPU, and for a GPU 2.23 GHz is a really high frequency. Much smaller chips (like the Intel GPUs) reach those clocks, but they are far less complex. MS already took the safe path for the GPU part with 1.8 GHz, while Sony went with peak clocks (defined by usage), which are really not good for the end result in the production of those chips. And MS has the bigger chip, which also has its problems with yields. I didn't say that anybody has an advantage in yields, just that both have yield problems (one because of GPU frequencies and one because of GPU size).

Btw, smaller chip -> higher frequencies -> more heat to spread over a smaller die area -> cooling problem. Otherwise they would not use SmartShift. SmartShift is designed for mobile APUs where heat (power usage) can become a problem. For a stationary console it would not be a problem to just increase the power supply a bit and hold those frequencies fixed. A problem only occurs if the heat cannot be spread fast enough to the cooling solution.

PS:
Man, those Sony fanboys are really the worst. Every time you don't praise something from Sony to the heavens they react like you hit them with something.
 
Last edited:

M1chl

Currently Gif and Meme Champion
I don't think MS needs to produce as many units as Sony ;)
The demand on Sony's side is much higher.
Well yeah, but that does not mean that they are not going to bleed money this way; the wafer still has to be paid for either way.

Maybe it has something to do with the fact that with CPUs you can scale a chip down and still use it by selling it in a lower-tier product, a situation which does not exist here.

Or something along those lines. I think the APU situation is really complex and we would need some TSMC data to draw any conclusions. That was speculation on my part; we really don't know.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius


See above.





You're right it is, but take it somewhere else.
64 FP Ops per cycle * 2.23 GHz > 100 GFLOPS... weird... XSX equivalent programmable DSP (which may be as flexible or less flexible than Tempest and I also doubt that Tempest is the only sound co-processor in there too) is about 8 + 8 + 4 (or 8 if those FP units can do Fused MADD’s too) per cycle so 20-24 FP ops per cycle.
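For anyone who wants to check that arithmetic, a quick sketch. The per-cycle figures are the estimates from this thread, and the XSX DSP clock is an assumption (I'm using the XSX GPU clock as a stand-in; the real DSP clock hasn't been stated):

```python
# Back-of-the-envelope peak FP throughput from the per-cycle estimates
# discussed in this thread. These are forum estimates, not official
# numbers from Sony or Microsoft.

tempest_ops_per_cycle = 64        # quoted for the Tempest engine
tempest_clock_ghz = 2.23          # PS5 GPU clock it is said to track

xsx_dsp_ops_per_cycle = (20, 24)  # 8 + 8 + 4 (or 8), per the post above
xsx_dsp_clock_ghz = 1.825         # ASSUMPTION: reusing the XSX GPU clock

print(f"Tempest: {tempest_ops_per_cycle * tempest_clock_ghz:.1f} GFLOPS")
for ops in xsx_dsp_ops_per_cycle:
    print(f"XSX audio DSP @ {ops} ops/cycle: {ops * xsx_dsp_clock_ghz:.1f} GFLOPS")
```

That puts Tempest at ~142.7 GFLOPS, consistent with the ">100 GFLOPS" wording, and the XSX DSP somewhere around 36-44 GFLOPS under the assumed clock.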
 

M1chl

Currently Gif and Meme Champion
64 FP Ops per cycle * 2.23 GHz > 100 GFLOPS... weird... XSX equivalent programmable DSP (which may be as flexible or less flexible than Tempest and I also doubt that Tempest is the only sound co-processor in there too) is about 8 + 8 + 4 (or 8 if those FP units can do Fused MADD’s too) per cycle so 20-24 FP ops per cycle.
Just out of curiosity, where do you get that 8 + 8 + 4? Just asking so I know how it's calculated... because I don't know it :(
 
Last edited:

geordiemp

Member
:messenger_dizzy:
Really... the CPU frequencies have almost nothing to do with the GPU frequencies or the yields.
Yes, some chips might not be usable because the CPU doesn't reach the higher frequency. But this is Zen 2, which easily reaches 3.8 GHz on any Zen 2 CPU out there. So I don't think we have yield problems because of the CPU part.
The GPU is much more complex and bigger than the CPU, and for a GPU 2.23 GHz is a really high frequency. Much smaller chips (like the Intel GPUs) reach those clocks, but they are far less complex. MS already took the safe path for the GPU part with 1.8 GHz, while Sony went with peak clocks (defined by usage), which are really not good for the end result in the production of those chips. And MS has the bigger chip, which also has its problems with yields. I didn't say that anybody has an advantage in yields, just that both have yield problems (one because of GPU frequencies and one because of GPU size).

Btw, smaller chip -> higher frequencies -> more heat to spread over a smaller die area -> cooling problem. Otherwise they would not use SmartShift. SmartShift is designed for mobile APUs where heat (power usage) can become a problem. For a stationary console it would not be a problem to just increase the power supply a bit and hold those frequencies fixed. A problem only occurs if the heat cannot be spread fast enough to the cooling solution.

PS:
Man, those Sony fanboys are really the worst. Every time you don't praise something from Sony to the heavens they react like you hit them with something.

There will be no big yield difference due to the GPU frequencies or parametric differences; it will mainly be die size and particulate.

I work in semiconductors, thanks.
 
Last edited:

Azurro

Banned
Obviously Cerny is doing mental gymnastics and doesn't hold a candle to the mighty Azurro Azurro .

That's taking the statement out of context. It is capable of handling non audio tasks as well, though not as well as a regular CU.

Or perhaps I'm wrong, who knows. I've definitely never seen audio components being rated on TFLOPs though. My point was, you don't understand what that means either, and probably shouldn't be making comparisons, when you have an obvious agenda.
 
Last edited:

M1chl

Currently Gif and Meme Champion
There will be no big yield difference due to the GPU frequencies or parametric differences; it will mainly be die size and particulate.

I work in semiconductors, thanks.
Was I wrong when I said that you can salvage not-so-great yields as lower-tier chips from the same line of CPUs/GPUs?
 

geordiemp

Member
Was I wrong when I said that you can salvage not-so-great yields as lower-tier chips from the same line of CPUs/GPUs?

Yes, you're correct: by disabling part of the GPU when a particle lands on circuitry and kills it (again, if unlucky).

It's the big frequency concern about yields, which was a reach for RDNA2 and EUV litho on FinFETs, that I was toning down.
 
Last edited:

M1chl

Currently Gif and Meme Champion
Yes, you're correct: by disabling part of the GPU when a particle lands on circuitry and kills it (again, if unlucky).

It's the big frequency concern about yields, which was a reach for RDNA2 and EUV litho on FinFETs, that I was toning down.
There was this situation, that's why I ask.
 

geordiemp

Member
There was this situation, that's why I ask.


If a layer is made badly it is detected from test-wafer parametrics and statistics.

When a chip fails final test it just gets binned to scrap, or to a lower spec if you can use that chip with that circuit missing. You only know why if you take it apart layer by layer and find the rogue particle or defect, which is like looking for one bad car in a city from above. So most of the time all that is known is that part of the binned chip's circuit failed, and most failures are due to particulate.

As 7nm+ is using EUV litho, probably around the gates (TSMC trade secrets and special sauce), the variance in critical dimensions will be much better.
 
Last edited:

M1chl

Currently Gif and Meme Champion
If a layer is made badly it is detected from test-wafer parametrics and statistics.

When a chip fails final test it just gets binned to scrap, or to a lower spec if you can use that chip with that circuit missing. You only know why if you take it apart layer by layer and find the rogue particle or defect, which is like looking for one bad car in a city from above. So most of the time all that is known is that part of the binned chip's circuit failed, and most failures are due to particulate.
And the layers are all patterned with light, I guess; you don't glue layers of silicon together... if my understanding is correct, a salvaged chip is the same size as a higher-tier one, you don't cut down a finished, albeit badly produced, one. That's why chiplets exist. Hope I'm correct.
 

Dodkrake

Banned
Except this time it's basically a watered-down version of the same HW, with the same feature set, etc.






Hmm, that's not it. One thing I know for sure is that displays on Android are a non-issue, while iOS layouts are a nightmare even with the few displays they have.

I was talking displays as in display quality, color accuracy, etc. Lower-level access to firmware allows for finer tuning of your display settings. A random off-brand Android will stick with the preset, with no optimization for it.
 

Dodkrake

Banned
I am not mixing anything up. You stated yields vs frequency for the GPU, trying to imply XSX yields would be better as its GPU is 1.825 GHz... vs 2.23 GHz for the PS5.

I was pointing out that the CPU on XSX goes to 3.8 GHz, and a FinFET gate is the same on the die, therefore frequencies of around 2 GHz have NOTHING to do with yields; it's heat capacity due to the GPU area of the die. And if the PS5 can handle that heat by design, there is no big yield concern.

The main yield impact for EUV litho will be particulate and die-size related (the number of dies per 300 mm wafer) rather than parametric.

Goes to 3.8 with multithreading disabled. Funny you forgot that asterisk.
 

Ascend

Member
umm do they mean last gen as in gcn or last gen as in rdna 1.0?

even if it is rdna 1.0, it will be on par with rtx 2080 ti which is a 17 tflops card at ingame clocks. actually it would be 15 tflops, so still not on par with rtx 2080ti. but very close which is insane.
Comparing TF between different architectures is useless. Let me tell you how I arrived at this...
The 5700 XT has a game clock about the same as the XSX's, so that means we don't need to account for clocks.
25% faster per CU means that, at 40 CUs, the XSX GPU would be 25% faster. But it has 52 rather than 40, so it has 30% more CUs. 30% more CUs with 25% more performance per CU puts you at an overall performance increase of about 62%. The 2080 Ti is about 34% faster than a 5700 XT, so... yeah.

But, you do have a very valid point. I was comparing the CU increase over RDNA1, which after thinking about it, was a stupid mistake on my part. They are comparing it to last console generation, not last AMD GPU generation. So that makes us assume that they are comparing it to the Xbox One X, which basically uses Polaris CUs.

So if it is 25% performance per CU over Polaris, that actually sucks, and, I don't think that's correct. Navi 10 (i.e. the 5700XT) has an average of 39% higher performance per CU compared to Polaris at 1440p, as tested by Computerbase.de

I guess we're back to simply assuming a similar IPC to the 5700XT. In that case, the XSX GPU is about on par with the 2080 Ti.
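A quick sketch of that compounding arithmetic; the per-CU uplift and the 2080 Ti delta are the figures assumed in the post above, not measured results:

```python
# Compounding an assumed per-CU uplift with the CU-count increase.
# The inputs are the assumptions from the post above, not benchmarks.

baseline_cus = 40        # 5700 XT
xsx_cus = 52             # Series X
per_cu_uplift = 1.25     # assumed +25% performance per CU

cu_ratio = xsx_cus / baseline_cus      # 1.30
combined = per_cu_uplift * cu_ratio    # ~1.625, i.e. ~62% faster overall

print(f"CU ratio: {cu_ratio:.2f}x")
print(f"Combined uplift vs 5700 XT: {combined:.3f}x")
print("Assumed 2080 Ti vs 5700 XT: 1.34x")
```

With clocks held roughly equal, the combined ~1.625x comfortably clears the assumed 1.34x, which is the basis of the "about on par with a 2080 Ti" conclusion above.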
 

Panajev2001a

GAF's Pleasant Genius
Just out of curiosity where you get that 8 + 8 + 4? Just asking so I know how is it calculated...cause I don't know it : (

If you look at the slides on the previous page, they talk about two 4-way FP SIMD units and four FP units (dedicated to complex math ops, think exponentials or trigonometry).

The 4-way SIMD units (single instruction, multiple data) have 128-bit vectors partitioned into 4x32-bit lanes (32 bits matches single-precision "standard" float values) and are able to perform the same operation on the 4 parallel lanes at the same time (assuming ideal throughput, one new SIMD instruction can be started each cycle).

Generally, to calculate FP performance you take the number of operations per cycle when processing fused multiply-add instructions, which are able to perform an add and a multiply in the same "cycle", so a single MADD instruction is actually two ops (multiply and add) in one: r = a * b + c. If you can execute the same instruction on four different sets of operands at the same time you have a max throughput of 8 operations per cycle.

Given that the other separate scalar FP units quoted are likely used for complex math instructions, I am not sure if we should count their throughput with fused multiply-add operations (they should support them, but it is not a given): hence why I was not sure whether to count 4 or 8 operations.

So: ((4 SIMD lanes * 2 operations per lane) * 2 units) + (4 FP scalar engines * 1_or_2) = 20-24 FP ops per cycle
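As a sanity check on that formula, a minimal sketch; the unit counts are the ones read off the slide above, and whether the scalar units do fused MADDs is the open question, hence the range:

```python
# Per-cycle FP op estimate for the XSX audio DSP, per the breakdown above.
# Whether the 4 scalar FP units can issue fused multiply-adds is unknown,
# hence 1 or 2 ops per scalar unit and the resulting 20-24 range.

SIMD_UNITS = 2      # two 4-way FP SIMD units
SIMD_LANES = 4      # 128-bit vectors = 4 x 32-bit lanes
OPS_PER_FMA = 2     # a fused multiply-add counts as two ops
SCALAR_UNITS = 4    # scalar FP units for complex math

simd_ops = SIMD_UNITS * SIMD_LANES * OPS_PER_FMA   # 16 ops/cycle
for scalar_ops in (1, 2):
    total = simd_ops + SCALAR_UNITS * scalar_ops
    print(f"scalar units at {scalar_ops} op/cycle -> {total} FP ops/cycle")
```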
 
Last edited:

M1chl

Currently Gif and Meme Champion
I was talking displays as in display quality, color accuracy, etc. Lower-level access to firmware allows for finer tuning of your display settings. A random off-brand Android will stick with the preset, with no optimization for it.
Well, we are now putting two different things together. I was speaking just from the developer perspective, and to a certain degree you are correct; however, when games are also created for PCs, the situation has changed from previous gens. But yeah, to a certain degree you are correct. However, Apple produces multiple devices in one gen and also supports old devices for a long time, so it's not exactly the best way to describe it.

The iOS situation actually better illustrates the XSX and XSS situation, if there were actually games just for them, since there aren't any... well, at least I hope that the SSD situation and how it's accessed is not going to be gimped further because of PCs.

If you look at the slides on the previous page, they talk about two 4-way FP SIMD units and four FP units (dedicated to complex math ops, think exponentials or trigonometry).

The 4-way SIMD units (single instruction, multiple data) have 128-bit vectors partitioned into 4x32-bit lanes (32 bits matches single-precision "standard" float values) and are able to perform the same operation on the 4 parallel lanes at the same time (assuming ideal throughput, one new SIMD instruction can be started each cycle).

Generally, to calculate FP performance you take the number of operations per cycle when processing fused multiply-add instructions, which are able to perform an add and a multiply in the same "cycle", so a single MADD instruction is actually two ops (multiply and add) in one: r = a * b + c. If you can execute the same instruction on four different sets of operands at the same time you have a max throughput of 8 operations per cycle.

Given that the other separate scalar FP units quoted are likely used for complex math instructions, I am not sure if we should count their throughput with fused multiply-add operations (they should support them, but it is not a given): hence why I was not sure whether to count 4 or 8 operations.

So: ((4 SIMD lanes * 2 operations per lane) * 2 units) + (4 FP scalar engines * 1_or_2) = 20-24 FP ops per cycle
And this is under the presumption that MS is also using GPU-CU-type silicon for audio decoding? Because it could totally be custom silicon. I don't see any diagram of an audio processing unit...
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Well, we are now putting two different things together. I was speaking just from the developer perspective, and to a certain degree you are correct; however, when games are also created for PCs, the situation has changed from previous gens. But yeah, to a certain degree you are correct. However, Apple produces multiple devices in one gen and also supports old devices for a long time, so it's not exactly the best way to describe it.

The iOS situation actually better illustrates the XSX and XSS situation, if there were actually games just for them, since there aren't any... well, at least I hope that the SSD situation and how it's accessed is not going to be gimped further because of PCs.


And this is under the presumption that MS is also using GPU-CU-type silicon for audio decoding? Because it could totally be custom silicon. I don't see any diagram of an audio processing unit...

They detail their audio solution here:
ZQMxXU2.jpg
 

Redlight

Member
Because it's pre-recorded. Games are interactive, and that's why they're pretty expensive calculation-wise. You can watch 2080 Ti ultra 4K settings gameplay on your smart TV on YouTube, but that doesn't mean you can play it.
Sure, but this technique exists outside of the 'Tempest engine' doesn't it? Nor does it require sending in photos of your ears for algorithmic analysis. You'll be able to get the same result on any system of comparable power.
 

SlimySnake

Flashless at the Golden Globes
Comparing TF between different architectures is useless. Let me tell you how I arrived at this...
The 5700 XT has a game clock about the same as the XSX's, so that means we don't need to account for clocks.
25% faster per CU means that, at 40 CUs, the XSX GPU would be 25% faster. But it has 52 rather than 40, so it has 30% more CUs. 30% more CUs with 25% more performance per CU puts you at an overall performance increase of about 62%. The 2080 Ti is about 34% faster than a 5700 XT, so... yeah.

But, you do have a very valid point. I was comparing the CU increase over RDNA1, which after thinking about it, was a stupid mistake on my part. They are comparing it to last console generation, not last AMD GPU generation. So that makes us assume that they are comparing it to the Xbox One X, which basically uses Polaris CUs.

So if it is 25% performance per CU over Polaris, that actually sucks, and, I don't think that's correct. Navi 10 (i.e. the 5700XT) has an average of 39% higher performance per CU compared to Polaris at 1440p, as tested by Computerbase.de

I guess we're back to simply assuming a similar IPC to the 5700XT. In that case, the XSX GPU is about on par with the 2080 Ti.
On average, DF found that RDNA 1.0 gains over Polaris were roughly 25% which is exactly the number AMD gave when they first revealed RDNA 1.0 last year.

But yes, it seems there are no IPC gains going over to RDNA 2.0.
 

Bo_Hazem

Banned
Sure, but this technique exists outside of the 'Tempest engine' doesn't it? Nor does it require sending in photos of your ears for algorithmic analysis. You'll be able to get the same result on any system of comparable power.

Yes, if you mean pre-recorded, but not in gaming except for cutscenes (if they're pre-rendered; lately they've been done in real time, so even that is doubtful, as pre-rendering adds a big chunk of unnecessary GBs to your game size).

But for further accuracy, you'll need to have your very own HRTF measured. Sadly, I don't think we have any in my country.

HRTF-Testing.jpg




But they've already collected hundreds of them and made 5 presets to choose from. They might as well use a technique similar to the one in Sony's 360 Reality Audio app, where you take pictures of your ears to get a better recommendation of which preset to choose:

sony-360ra-ear-photos-100828881-large.jpg
 
64 FP Ops per cycle * 2.23 GHz > 100 GFLOPS... weird... XSX equivalent programmable DSP (which may be as flexible or less flexible than Tempest and I also doubt that Tempest is the only sound co-processor in there too) is about 8 + 8 + 4 (or 8 if those FP units can do Fused MADD’s too) per cycle so 20-24 FP ops per cycle.
Yes, I think Cerny just rounded 142 GFLOPS down to ~100 GFLOPS. Which is a correct rounding to the nearest hundred, actually.
 
Both are. This SONY fanboy crap has got you blinded. Can’t say anything positive about the competition.
At least he is 9 years old; he should be absolutely the last person to be called out.
Let's start with the grown-ups, I think we'll be busy for years.
Maybe even aceofspades will be an adult before we get those other "adults" straightened up.


When is Sony going to show their full spec and interaction spreadsheets?
I guess
tenor.gif
 
Last edited:

MrFunSocks

Banned
I think that's exactly Sony's point: to offer 3D/surround sound to a much broader audience by enabling it on decent headphones that cost only a fraction of an expensive soundbar/surround setup.
Dolby Atmos does that already on the XB1 though. 3D sound is nothing new.
 

Redlight

Member
Yes, if you mean pre-recorded, but not in gaming except for cutscenes (if they're pre-rendered; lately they've been done in real time, so even that is doubtful, as pre-rendering adds a big chunk of unnecessary GBs to your game size).

But for further accuracy, you'll need to have your very own HRTF measured. Sadly, I don't think we have any in my country.

HRTF-Testing.jpg




But they've already collected hundreds of them and made 5 presets to choose from. They might as well use a technique similar to the one in Sony's 360 Reality Audio app, where you take pictures of your ears to get a better recommendation of which preset to choose:

sony-360ra-ear-photos-100828881-large.jpg
It's interesting. Sony's 360 reality audio is primarily aimed at music and headphone use and I expect that there could be benefits for headphone users on consoles if they use a similar system. Dolby does pretty much the same thing though.

I'm still dubious about the ear pics; that sounds very much like a gimmick when you could just flip between the five settings and decide for yourself. In some demo setups they actually place microphones inside your ears to customise the delivery of sound; this won't be anything like that, so the user experience will vary.

Unfortunately you won't get any real 'immersion' benefit through the typical TV speaker set-up or a stereo soundbar, and, let's be honest, the whole thing is really just Dolby Atmos under a different name. Luckily both consoles have similar audio processing power available to them, so there's some chance that this kind of stuff will be supported beyond a handful of titles.
 

Panajev2001a

GAF's Pleasant Genius
Yes, I think Cerny just rounded 142 GFLOPS down to ~100 GFLOPS. Which is a correct rounding to the nearest hundred, actually.

Kind of expected from him, but if I said it out loud it seems like gushing over him hehe ;). Seriously, leaving almost 50 GFLOPS out of the total count to do exact rounding seems pretty honest and humble.
 
Last edited:

Deto

Banned
I am not mixing anything up. You stated yields vs frequency for the GPU, trying to imply XSX yields would be better as its GPU is 1.825 GHz... vs 2.23 GHz for the PS5.

I was pointing out that the CPU on XSX goes to 3.8 GHz, and a FinFET gate is the same on the die, therefore frequencies of around 2 GHz have NOTHING to do with yields; it's heat capacity due to the GPU area of the die. And if the PS5 can handle that heat by design, there is no big yield concern.

The main yield impact for EUV litho will be particulate and die-size related (the number of dies per 300 mm wafer) rather than parametric.

Penello has already said that the cost of silicon is all based on the area; frequency doesn't factor in. Not that Penello has a lot of brains, but I think he at least knows how to read an Excel spreadsheet.



It seems like an attempt to keep the narrative that the SX and PS5 will be the same price. No, that will not happen.

The SX has the most expensive APU,
the most expensive memory,
and the most expensive motherboard, because of the 320-bit bus.
 

Marlenus

Member
I think MS would be mad to charge more than $500 for the Series X. I also think that, with the rumoured Series S specs, charging more than $250 is a bit much.

At these prices both consoles will be loss leaders, but the S more so. If it is digital-only, though, and is there as a platform to grow the install base for more Game Pass subs and digital purchases, it could be really profitable.
 

SlimySnake

Flashless at the Golden Globes

Hit the link for the RDNA 1 post.

@Mod of War since the 9 teraflop troll bait is not allowed, could the RDNA 1 bait be treated the same? These next-gen threads go in circles 😵
I really don't understand this level of insecurity. I added like a million qualifiers and you still managed to get upset. And it's not like the thread became about RDNA 1 after my post like you suggested. A couple of people replied to disagree with me, and that was the end of that. Yet you made it seem like all Xbox fans are bringing this up when it was a Sony fan who made that post.

And if you had bothered to read that post to the end, you would've seen that despite all the generous concessions and assumptions, you still don't get to the 100% gap in performance suggested by Dusk Golem. So if anything, that post, with all that math, refutes those rumors, but I guess reading comprehension isn't your strongest suit.

I think you would be better off directing your outrage at people like that Tom's Hardware author who literally just said that the PS5 is not full RDNA 2 based on "feelings". We wouldn't be discussing this stuff if it wasn't for people in the industry bringing this shit up all the time.

DM9WLKS.jpg


GqAEuEA.jpg
 

Deto

Banned
More interesting is that MS did not imagine a digital-only PS5 to compete with the SS.

MS, like its fanboys, also believed in the GitHub "PS5 8TF" at 400 USD, just like every idiot who underestimates Sony believed.

MS's dream would be:

SX: 600 USD, 12 TF, 50% more power than the PS5 and 50% more expensive
SS: 200 USD, 4 TF, 50% less power, 50% less cost than the PS5.

Little did MS, which underestimates Sony just like its fanboys do, know that Sony was pushing the GPU to 2.23 GHz and would launch a digital-only console.

PS5 10 TF: 500 USD
PS5 DE: 400 USD.

That just destroyed the Xbox's pricing; the price no longer scales with the increased power.

PS5 DE: more than 2x the power, 2x the price
PS5: 20% less power, 33% less price.

It turns out that Sony is not an idiot, contrary to what the idiotic Xbox fanboys thought while burping "MS master plan, Sony underestimated MS", just like the idiots at Windows Central who, for nothing, took a beating on Halo Infinite vs Horizon Forbidden West.
 

Neo_game

Member
Where the heck do those numbers come from? By the most optimistic assessment (comparing max clock to guaranteed clock) it is more than 20% behind.

Wake up, XSeX has 44% more CUs, the gap is MASSIVE. The fact that they managed to cut it in half via OCing is also remarkable on Sony side.

If you consider the current gen: the PS4 had 50% more CUs, or was about 40% more powerful going by TFLOPs. But the biggest bottleneck was not the graphics but the RAM: the Xbox only had 32 MB of fast memory (the ESRAM). I am not sure there is going to be much difference this time. Probably the Xbox will render 20% more pixels than the PS5 in some games?
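For reference, the paper-spec math behind those gaps; a quick sketch using the standard peak-FP32 formula (CUs x 64 lanes x 2 ops for an FMA x clock) and the publicly quoted CU counts and clocks, with the PS5 at its maximum boost clock:

```python
# Peak FP32 throughput for GCN/RDNA-style GPUs:
#   TFLOPS = CUs * 64 shader lanes * 2 ops (FMA) * clock_GHz / 1000

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

ps4, xb1 = tflops(18, 0.800), tflops(12, 0.853)   # last gen
xsx, ps5 = tflops(52, 1.825), tflops(36, 2.23)    # this gen (PS5 at max boost)

print(f"PS4 {ps4:.2f} TF vs XB1 {xb1:.2f} TF -> {ps4 / xb1 - 1:.0%} gap")
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> {xsx / ps5 - 1:.0%} gap")
```

That reproduces the roughly 40% paper gap last gen and the roughly 18% paper gap this gen that the thread keeps citing; it says nothing about real-world performance.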
 

IntentionalPun

Ask me about my wife's perfect butthole
More interesting is that MS did not imagine a digital-only PS5 to compete with the SS.

MS, like its fanboys, also believed in the GitHub "PS5 8TF" at 400 USD, just like every idiot who underestimates Sony believed.

MS's dream would be:

SX: 600 USD, 12 TF, 50% more power than the PS5 and 50% more expensive
SS: 200 USD, 4 TF, 50% less power, 50% less cost than the PS5.

Little did MS, which underestimates Sony just like its fanboys do, know that Sony was pushing the GPU to 2.23 GHz and would launch a digital-only console.

PS5 10 TF: 500 USD
PS5 DE: 400 USD.

That just destroyed the Xbox's pricing; the price no longer scales with the increased power.

PS5 DE: more than 2x the power, 2x the price
PS5: 20% less power, 33% less price.

It turns out that Sony is not an idiot, contrary to what the idiotic Xbox fanboys thought while burping "MS master plan, Sony underestimated MS", just like the idiots at Windows Central who, for nothing, took a beating on Halo Infinite vs Horizon Forbidden West.
Wait, what? We know the console prices?
 

Journey

Banned
When is Sony going to show their full spec and interaction spreadsheets?



Microsoft has been pretty confident with the Series X. It was the reverse in 2013, when Sony, for the first time ever in consoles, went in honest and laid out the exact numbers for the PS4 and how they got those numbers (with the PS3 they pulled a "2 TF machine" figure out of their ass lol). MS, with the Xbox One, would only mention 8 GB of RAM but wouldn't list the type, and wanted to only talk about transistors because it's the only thing they had more of LMAO.

Now MS has a box that's superior in every way except the SSD, so they'll sing it from the rooftops, while Sony will stay very quiet and focus on the SSD whenever possible, capitalizing on developers being super happy with the incredible boost in IOPS that SSDs bring. But the thing is, the Xbox Series X also benefits from exactly the things developers have been craving for years: not needing to pad their games with redundant data to optimize loading when data sits in sub-optimal areas of a spinning HDD, which is tedious in and of itself and needlessly increases the size of the game.

The PS4 and X1 have 5400 rpm HDDs; when looking at the graph below comparing an SSD to 10,000 and even 15,000 rpm HDDs, imagine the boost in IOPS compared to last gen.

IOps_mean_comparison_EN.gif
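As a rough illustration of why spindle speed caps HDD IOPS, here is a minimal sketch. The seek times are typical textbook figures, not measurements of the actual console drives:

```python
# Rough random-read IOPS ceiling for a spinning HDD:
#   time per I/O ~= average seek time + half a platter rotation
# Seek times below are generic textbook values, not measured numbers.

def hdd_iops(rpm: int, avg_seek_ms: float) -> float:
    half_rotation_ms = 0.5 * 60_000 / rpm   # ms for half a revolution
    return 1000 / (avg_seek_ms + half_rotation_ms)

for rpm, seek_ms in [(5_400, 12.0), (10_000, 7.0), (15_000, 4.0)]:
    print(f"{rpm:>6} rpm: ~{hdd_iops(rpm, seek_ms):.0f} random IOPS")

# SSDs have no moving parts, so they reach tens or hundreds of
# thousands of IOPS, which is the gap the graph above illustrates.
```

Under those assumed seek times, a 5400 rpm drive tops out around 55-60 random IOPS, which is why the last-gen consoles benefit even more from the jump to NVMe than the 10k/15k drives in the graph.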
 
"Furthermore If we go down the rabbit hole of CU vs MHZ, recall that increasing the CU count to increase TFLOP is not a linear increase either. Compare the 2080 vs the 2080ti, 50% more CUs for 17% extra performance. For AMD, just compare the R9 390X vs the Fury X, 41% increased CU count for 23% increased performance. This is partially related to a concept in computing called Amdahl's law. You can read more about it here. Basically the higher the parallelization of a workload (such as thousands of GPU CUs) the harder it is to extract perfect performance from it.

The 18% number that is being thrown around for the difference in performance in the XSX and PS5 is purely theoretical and is based on the raw TFLOP numbers. In reality, the difference might even be smaller then that. The PS5 GPU does have advantages in Pixel fill rate and triangle culling as well."

T H I S
Think of it as a midrange vs. high-end GPU comparison: midrange GPUs get the clock advantage while high-end GPUs get the overall raw-power advantage.
 