
Let's Design The Mid-Gen Refreshes, Part 1: OBTAINING SOME CRITICAL PS5 DATA

WARNING: Very technical discussion incoming. Skip to the TL;DR if you'd like. If you'd like to know the methodology behind the numbers there, read through the rest.

------------



Even with the next-gen systems arriving next month, some of us are already anticipating the mid-gen refreshes. Some of us might even be anticipating the 10th-gen consoles (PlayStation 6, Xbox Series X-Next)...and I'll get around to doing something like this for those, too. But for now, and primarily inspired by this thread over on B3D (I'll try cross-posting the PS6/Series X-Next parts there too, to stay in the general discussion over on that site; plus I'm interested to see if any posters there have thoughts, counter-points etc. to add to whatever I end up posting), I started thinking about what the next-next gen systems could be in terms of specs, design philosophy, business strategy, features etc. There's been a LOT of research and number-crunching involved, and I realized that in order to best guess at what those could be, we need a better understanding of things as they are right now, and of what things could be with the mid-gen refreshes.

So where do we start? Well, we have the PS5 and Series X (and Series S) releasing in a little over a month from now, and while I have some very interesting ideas regarding mid-gen refreshes for the latter two as well, the realization hit that it'd actually be easier to start with PS5. Why? Well, it's true there's still a lot about the system we don't know yet, but we do technically have a roughly equivalent GPU on the market that can serve as a reference: the 5700 XT. Now before anyone starts: no, this isn't a good reference because of any silly "RDNA 1.5" gossip; I think we should all accept that we're dealing with an RDNA2 architecture when it comes to PS5's GPU.



Rather, the reason the 5700 XT is a good reference is that it's essentially the same size as the PS5's GPU: 40 CUs, with all 40 active. The PS5's GPU is also 40 CUs, but 4 are disabled for yields, leaving 36 active. We already know a lot of comparable data between the two chips: clock speeds, IPC, peak TF etc. We can also infer certain PS5 GPU features from what the 5700 XT (and indeed the Series X) has, such as 64 ROPs and perhaps L0$ and L1$ amounts roughly equivalent to the 5700 XT's.

There are also a few things regarding the PS5 GPU we can roughly infer from guesstimates of other things. For example, we know the Series X APU is about 360 mm^2, and estimates pin the PS5's APU at about 320 mm^2. Estimates from the Series X die diagrams (via Hot Chips) peg its GPU at roughly 45% of the total APU size. We also know the Series X has 16 more CUs than the PS5, i.e. 44% more than the PS5's 36. Therefore, if the Series X's GPU is roughly 162 mm^2 (360 mm^2 × 0.45 = 162 mm^2), we could naively scale it down by that 44% (162 mm^2 − 71.28 mm^2), giving us 90.72 mm^2....although this isn't an accurate number. First, strictly speaking the PS5 has about 31% fewer active CUs than the Series X (16 fewer out of 52), not 44% fewer. Second, we don't necessarily know the size of a given CU on PS5 compared to Series X, and there are parts of the GPU which are not CUs and would not scale down with CU count; fewer CUs doesn't mean you suddenly lose a Primitive Shader unit or display unit! Therefore it may be more accurate to say that the PS5's GPU as a whole is probably only about 35% smaller than Series X's. That would give us 162 mm^2 × 0.65 = 105.3 mm^2, which sounds like a more plausible number.
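For anyone who wants to sanity-check the arithmetic, here's the die-area estimate as a quick Python sketch (the 45% GPU share and the 35% adjusted reduction are my guesstimates from above, not confirmed figures):

```python
# Rough PS5 GPU die-area estimate, following the reasoning above.
XSX_APU_AREA = 360.0      # mm^2, Series X APU (approximate)
GPU_SHARE_OF_APU = 0.45   # from the Hot Chips die-diagram estimates

xsx_gpu_area = XSX_APU_AREA * GPU_SHARE_OF_APU   # ~162 mm^2

# Naive scale-down by the CU-count gap (too aggressive, as noted above).
print(xsx_gpu_area * (1 - 0.44))                 # 90.72 mm^2

# Adjusted: non-CU blocks (front end, display, etc.) don't shrink with
# CU count, so assume the GPU block is only ~35% smaller overall.
print(xsx_gpu_area * (1 - 0.35))                 # 105.3 mm^2
```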

One of the biggest questions surrounding PS5's GPU, though, is how much power it's actually consuming. This probably isn't a number that will ever be readily provided, but we can try to calculate it roughly. One thing we should keep in mind immediately is that the PS5 has a 350 watt PSU. We can infer that Sony are at least aiming for a cooling solution as good as the Xbox One X's (most likely better), and if we take a look at that console, its max power consumption was 170 watts on a 245 watt PSU. That gives a power consumption headroom of 75 watts. You might then say that if the PS5's cooling solution aims to be better than that console's, it should need less headroom, right? Welll.....

If the next-gen consoles were mainframes...


See, both PS5 and Series X are packing much higher transistor densities into smaller spaces thanks to massively reduced nodes. And the relationship between power (electricity) and heat, in practice, is not a simple linear one: a given cut in power consumption doesn't guarantee an equivalent cut in heat concentration, and a denser chip can be harder to cool even at a lower total draw. So even with the benefits of a 7nm process and the power consumption savings it brings, that doesn't guarantee your heat generation and heat dissipation requirements scale down with it! So even if the next-gen consoles had "only" a One X's level of cooling in place, it's very likely they would need a bit more than 75 watts of headroom on the PSU. This is important to keep in mind because I'm going to give an estimate for the PS5's system TDP later on.

Now, in order to try figuring out the PS5 GPU's TDP, I looked at the 5700 XT's TDP (courtesy of TechPowerUp), listed at 225 watts. Keep in mind this also includes the 8 GB of GDDR6 RAM, which we shouldn't carry forward. A common GDDR6 module consumes 1 to 1.5 watts. Taking the most lenient route, I did (8 × 1.5) to shave off 12 watts, leaving the GPU itself at 213 watts. Now, to figure a few other things out, I took another big liberty; seeing as the PS5 only has 36 active CUs, I needed to estimate the wattage per CU on the 5700 XT. (213 watts / 40 CUs) gave me 5.325 watts per CU. This isn't a perfect metric, but it's a "good enough" one and, again, pretty lenient. That further shaved the TDP down to 191.7 watts (213 − (5.325 × 4)).
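As a sketch, here's that same subtraction in Python (the 1.5 W per module and the flat per-CU split are the lenient assumptions named above):

```python
# Backing out a 36-CU TDP figure from the 5700 XT's board TDP.
BOARD_TDP = 225.0        # watts, 5700 XT (TechPowerUp listing)
GDDR6_WATTS = 8 * 1.5    # 8 modules at the lenient 1.5 W each

gpu_tdp = BOARD_TDP - GDDR6_WATTS        # 213.0 W for the GPU alone
watts_per_cu = gpu_tdp / 40              # 5.325 W per CU (crude flat split)
tdp_36cu = gpu_tdp - 4 * watts_per_cu    # 191.7 W for a 36-CU configuration
print(gpu_tdp, watts_per_cu, tdp_36cu)
```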

From there it got a bit trickier; the PS5 is an RDNA2 chip, so it's going to have SOME power consumption reduction over the RDNA1 5700 XT. Going by some logical guesses, we can probably say that the PS5 (and most likely Series X and S) are 7nm DUV Enhanced chips. This means they likely won't enjoy the full power consumption reduction over 7nm that a 7nm EUV chip would, but a happy middle-ground is possible. 7nm EUV brings a 15% power consumption reduction over 7nm, so I'll put 7nm DUV Enhanced at half that: a 7.5% reduction. Decent enough. However, this reduction is ONLY clock-for-clock, so if the two chips run at different clocks, you'd have to adjust. And we know the PS5's GPU is clocked much HIGHER than a 5700 XT's, to the point where this 7.5% is likely negated. Thankfully, there is still a sizable power consumption reduction for RDNA2 over RDNA1...though maybe not as high as AMD's claimed 50% PPW increase.

If we look at RDNA1's gains over Vega, we saw a 14% performance increase with a 23% power consumption reduction clock-for-clock. RDNA1 also saw a 25% performance increase over GCN (to which Vega belonged, albeit as a later generation of GCN). IIRC, AMD have claimed a 50% PPW increase and a 50% IPC gain for RDNA2 over RDNA1; I think at least the 50% IPC claim is slightly exaggerated. The 50% PPW improvement might be realistic, but I imagine it's measured on 7nm EUV chips, and I don't think the PS5 and Series systems are on that exact process (additionally, we don't know what frequency range, i.e. in-sweetspot or out-of-sweetspot, that 50% PPW improvement is based on). All that said, I clearly cannot see AMD having gone backwards, so I figured I'd be friendly (if slightly reserved) and give RDNA2 a 30% power consumption reduction and a 25% IPC gain over RDNA1, on 7nm DUV Enhanced, clock-for-clock. These percentages will serve as the basis for both Sony's and MS's systems going forward.



The last (and most challenging) thing to figure out here is how the power-to-frequency curve works for RDNA2 beyond its sweetspot. Part of the challenge is that we don't actually know where RDNA2's sweetspot range sits. However, I think this is somewhat easy to estimate. We can apply the 7nm DUV Enhanced benefit assumed earlier (7.5%) as an uplift to RDNA1's known sweetspot low and high (1.7 GHz, 1.8 GHz), to get:

>1.7 GHz × 1.075 = 1.8275 GHz

>1.8 GHz × 1.075 = 1.935 GHz

These are what I suspect are the new sweetspot points for a 7nm RDNA2 GPU on the DUV Enhanced process. As we can see, the 5700 XT, if it were an RDNA2 GPU, would fall right near the upper end of this new sweetspot, while the PS5, as an RDNA2 GPU, actually exceeds it by some 295 MHz. The last piece of the puzzle, then, is what the power-to-frequency ratio curve looks like across that 295 MHz range. We know the ratio at the very peak of that range, given to us by Cerny himself: 5:1. But that doesn't mean it stays 5:1 along the entire curve; it would most likely ramp up exponentially across the spread from 1.935 GHz to 2.23 GHz. That spread is probably more "clustered," i.e. sharper, for a 7nm DUV Enhanced part than for a 7nm EUV part, as well. Unfortunately, AFAIK there's no graphic showing a comparable power-to-frequency spread on a curve graph for the 5700 XT. We can assume that at its peak the 5700 XT probably had a somewhat harsher power-to-frequency ratio, perhaps 6:1 or even 7:1, and that was with a narrow clock range to push up into.
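Purely as a toy model, here's one way to express that assumed curve in Python. The exponential ramp shape is my own assumption; only the 5:1 endpoint comes from Cerny, and the sweetspot bounds are the speculative ones derived above:

```python
# Speculative RDNA2 sweetspot shift plus an assumed power-to-frequency ramp.
RDNA1_SWEETSPOT = (1.70, 1.80)   # GHz, RDNA1's known sweetspot bounds
DUV_FACTOR = 1.075               # assumed 7.5% benefit for 7nm DUV Enhanced

lo, hi = (f * DUV_FACTOR for f in RDNA1_SWEETSPOT)
print(lo, hi)                    # 1.8275, 1.935 GHz

def power_freq_ratio(freq_ghz, peak=2.23, peak_ratio=5.0, base=1.935):
    """Toy curve: ~1:1 inside the sweetspot, ramping exponentially to 5:1."""
    if freq_ghz <= base:
        return 1.0
    t = (freq_ghz - base) / (peak - base)   # 0..1 across the 295 MHz range
    return peak_ratio ** t                  # hits 5:1 exactly at 2.23 GHz

for f in (1.935, 2.0, 2.1, 2.23):
    print(f, round(power_freq_ratio(f), 2))
```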

After a hard day (or some measure of time) of crunching numbers, eh?.......but we ain't done yet!!

With these other numbers settled, we can start to actually figure out where the PS5 GPU's power consumption likely sits. Going back to the 191.7 watt number (the 5700 XT's GPU @ 1.905 GHz with 36 CUs active), we simply apply our (somewhat conservative, but likely probable) RDNA2-over-RDNA1 power reduction on 7nm DUV Enhanced, 30%, as a modifier. That gives us 191.7 watts × 0.30 = 57.51 watts, and 191.7 − 57.51 = 134.19 watts, which we could round down to 134 watts. However, keep in mind that's clock-for-clock, and the PS5 GPU's clock is not the same as the 5700 XT's; it's much faster. 295 MHz faster, in fact. Dividing 2230 MHz by 295 MHz gives roughly 7.56, which I'll treat as 7.6% here. Now again, we have to take a bit of liberty. Being very lenient and assuming 1:1 power-to-frequency (which, for anything beyond the sweetspot, would not be the case; we'll correct for this in a moment), we can convert that 7.6% (representing the additional frequency as a wattage increase in a theoretical 1:1 relationship) into a wattage amount of roughly 7.56 watts and add it to the 134.19 watts calculated earlier, giving us 141.75 watts.
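Again as a quick sketch (the 30% reduction and the 1:1 clock-to-watts conversion are the stated leniencies, not measured figures):

```python
# Assumed RDNA2 power reduction, then the lenient 1:1 clock adjustment.
base_36cu = 191.7                # W, from the 5700 XT derivation above
rdna2 = base_36cu * (1 - 0.30)   # 134.19 W, clock-for-clock
clock_adj = rdna2 + 7.56         # treating the ~7.6% figure as watts, 1:1
print(rdna2, clock_adj)          # 134.19, 141.75
```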

However, it doesn't end there. We still have to account for the gain the PS5 GPU has over a 5700 XT when their CU counts are evened out, at their native clocks. A 5700 XT @ 1.905 GHz (Boost clock) with 36 CUs active gives 8.778 TF. Meanwhile, the PS5 with 36 CUs active and a "sustained boost" (Cerny's own words) of 2.23 GHz gives 10.275 TF. That's a difference of roughly 1.4976 TF; divided into the PS5's TF total, that gives a ratio of about 6.86. Again, we're going to be a bit lenient and assume a 1:1 linear relationship here, this time between clock frequency and floating-point operations per second. However, in this case we should ideally treat the 5700 XT as if it were an RDNA2 chip, meaning we should check whether its actual Boost clock exceeds the new sweetspot peak. It does not, so we should instead calculate its performance @ the 1.935 GHz sweetspot peak, since we're trying to capture the power cost of the clock difference between a 5700 XT inside the sweetspot on 7nm DUV Enhanced and a PS5 outside the sweetspot on the same process.

36 CUs × 64 stream processors per CU × 2 FLOPs per clock × 1.935 GHz gives 8.91648 TF. Subtract this from PS5's 10.275 TF and we're left with 1.35936 TF, which divided into 10.275 TF gives a ratio of...7.559. So this step nudges the figure from the previous paragraph up slightly (from ~6.86 to ~7.56), and conveniently lands right on the 7.6% we used for the clock adjustment. Simplifying, we can round 7.559 up to 7.56 without being egregious. In any case, we now add this 7.56% as if it represented a wattage increase (assuming a linear ratio between performance and power, which we know isn't the case beyond the sweetspot). 141.75 watts + 7.56 watts = 149.31 watts. From here we simply add a final assumed wattage bump to account for the genuinely non-linear power-to-frequency scaling beyond the sweetspot, which I'll figure at about 10 watts, 11 watts tops: take the aforementioned 7.6% and 7.56% figures, add them, then multiply by 2/3. I wouldn't expect such a correction to draw evenly on both figures, but anything approaching their full sum would likely be too aggressive.
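The TF arithmetic itself is easy to verify (64 stream processors per CU and 2 FLOPs per clock are standard for RDNA):

```python
# RDNA compute throughput: CUs x 64 SPs x 2 FLOPs/clock (FMA) x clock (GHz).
def rdna_tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000.0

ps5 = rdna_tflops(36, 2.23)               # ~10.276 TF
xt_at_sweetspot = rdna_tflops(36, 1.935)  # ~8.916 TF
diff = ps5 - xt_at_sweetspot              # ~1.359 TF
print(round(ps5 / diff, 3))               # ~7.559, the ratio used above
```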

In total, adding the ~11 watts to the ~149 watts, we get a PS5 GPU TDP of about 160 watts. In context, this fits nicely once we do some rough calculations for the other system components and their power costs to arrive at a total system TDP. Start with the aforementioned 12 watts for the GDDR6 (the PS5's 16 GB, presumably eight 2 GB modules at ~1.5 watts each), add about 22 watts for the Zen 2 processor (both PS5 and Series X are likely comparable to the 4800U Zen 2 APU, whose CPU portion is rated at 25 watts with a max clock of 4.2 GHz; the PS5's CPU clock is a bit lower at 3.5 GHz, so I shave off 3 watts), 8 watts for the Tempest Engine, 15 watts for the SSD I/O block (it's been stated to be equivalent to 9 Zen 2 cores in processing power, though presumably with some things an actual Zen 2 CPU would have removed, so I shave 10 watts off the 4800U's CPU TDP on the assumption that the I/O block comparison references this particular CPU), and 6 watts for the NAND modules (assuming half a watt for each of the twelve 64 GB modules). That sums to 223 watts, which I'll round up to a 225 watt total system TDP budget.
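Summed up (every line here is one of the estimates above, not a disclosed figure):

```python
# Rough PS5 system TDP budget from the component estimates above.
components = {
    "GPU":                      160,
    "GDDR6 (8 x 2 GB modules)":  12,
    "Zen 2 CPU @ 3.5 GHz":       22,
    "Tempest Engine":             8,
    "SSD I/O block":             15,
    "NAND (12 modules)":          6,
}
total = sum(components.values())
print(total)   # 223 W, rounded up to the 225 W budget used in the TL;DR
```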

SO, a TL;DR:

>PS5 GPU Power Consumption: 160 watts

>PS5 GPU Die Area: ~ 105 mm^2

>PS5 System Power Consumption: 225 watts

>PS5 PSU: 350 watts



Aaand we did it, gang. We're done with the numbuahz!!

Considering the 5700 XT alone has the same 225 watt TDP (counting nothing beyond its GDDR6) while delivering much lower performance than PS5 (especially once you convert PS5's output to RDNA1-equivalent terms: about 12.84 TF, using the assumed 25% IPC gain), this is very impressive. It also leaves 125 watts of headroom on the PSU, which sounds about right considering what we covered at the beginning: these systems likely can't scale heat generation and dissipation down in step with their power consumption savings, since heat and electricity don't share a linear relationship.
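Both conversions in that paragraph lean on earlier assumptions (the 25% IPC gain and the 225 watt system estimate):

```python
# RDNA1-equivalent performance (assumed 25% IPC gain) and PSU headroom.
ps5_tf = 10.275
print(ps5_tf * 1.25)   # ~12.84 "RDNA1-equivalent" TF
print(350 - 225)       # 125 W of PSU headroom
```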

It's likely I've undershot the TDP in some places, mainly because we don't know exactly what power consumption looks like across that 295 MHz range beyond the adjusted sweetspot peak. If so, however, I highly doubt it's been undershot by more than 25 watts on the extreme end, which would still leave 100 watts of PSU headroom. Hopefully people can now see why it was worth hammering out these speculative calculations on the PS5's power consumption: it helped us estimate the likely sweetspot range for RDNA2 on the 7nm DUV Enhanced process, see some of the relationships (linear and non-linear) between a few things, and even roughly size the GPU portion of the APU. Some of the Series X info we already have was helpful here, but the 5700 XT info was even more so, providing just enough to work this stuff out.

With that out of the way, we can finally move on to the more interesting stuff: the PlayStation 5 Pro, and Series X-2 and Series S refresh. For the latter two, there'll be some (very brief) number work on some aspects of Series X included just to serve as a basis of reference there, but by and large it's extremely easy to figure out things like the die size and GPU portion sizes both because MS have outright provided such numbers (through listings and graphs), and because some of the other stuff can simply be deduced by what we know about Series X, such as how we deduced the PS5's GPU size here from the Series X's GPU size. This also applies to GPU power consumption amounts, which for something like Series X, surprisingly, might actually fall roughly in range with PS5's despite some of their other differences, for reasons I'll touch on when we get to those mid-gen refreshes.

So yeah, I hope I'll have Part 2: PS5 Pro, Series X-2 And Series S-2 Mid-Gen Refreshes up very soon (will also link that one here when it's ready).

Hope you all enjoyed this and weren't put off by all the data crunching. Give it a try yourselves, or just share what ballpark ranges you personally think some aspects of PS5's GPU (power consumption, power-to-frequency scaling ranges, assumed RDNA2 sweetspot ranges etc.) fall in. While you're at it, what ideas do you have for the mid-gen PS5, Series X and Series S refreshes? Are you expecting any at all? Do you think they'll replicate the PS4 Pro and One X or go in a different direction? Sound off!
 
Why assume they will stick to GDDR6? Why not HBM?

This isn't actually for any of the mid-gen refreshes (yet), just a way to try figuring out some metrics regarding PS5 that haven't been disclosed. Because it's important to figure some of these things out before getting to the mid-gen refreshes.

But in terms of the GDDR6 question, I think one of the reasons they might stick with GDDR6 is because by then they can simply get faster modules for roughly the same cost they're getting the 14 Gbps ones now, and I'm personally not actually expecting a massive jump in TF performance for the mid-gen refreshes, for reasons I'll explain in the second part. So if there's not a huge TF increase, they won't necessarily need a huge bandwidth increase, and faster GDDR6 modules can probably provide enough of a bandwidth increase on their own.

Plus, HBM would require more work, there may be additional costs due to the interposers, and it might be more likely MS and Sony save HBM for 10th-gen systems, very likely HBM3/HBMNext....

...for the most part. There's something mid-gen refresh wise that might benefit from HBM, but I'd like to talk about it when I do Part 2 of this.
 

DESTROYA

Member
I'm not technical, sorry. But I wonder if we'll see a mid-gen refresh at all. Without doubt, both the Pro and X were introduced to open up the 4K market. With the PS5 and XSX already capable of 4K, is a mid-gen refresh necessary?
Maybe not a "Pro/X" model, but a nice slim version, definitely.
 

Andodalf

Banned
Just seems a bit premature to talk about refreshes or next-next gen before we even get to experience this upcoming gen 🤷‍♂️
Carry on then

Sony and MS are certainly working on them, so we might as well speculate about 'em.

Why assume they will stick to GDDR6? Why not HBM?

The benefits aren't really there for gaming rn, especially compared to GDDR6X, which AMD should be working with by the time of the refresh.
 

geordiemp

Member
[Quoted the OP in full; snipped.]

So I skim-read it, and the flaws are that you're assuming PS5 and XSX are using exactly the same process steps, and that PS5 has an outlier clock for its GPU.

If PS5 and PC RDNA2 are all ~2.2 GHz on the GPU and XSX is at 1.8 GHz, what's your response?

Note some PC parts are also rumoured to be 1.8-1.9 GHz. Why do you think some can go to 2.5 GHz while others sit around 1.8-1.9 GHz?

Answers on a postcard.
 

truth411

Member
[Quoted the OP in full; snipped.]
Dude!!! Lol. You're making it overly complicated.

PS5 Pro will double the CUs to 80 (72 active), be on TSMC 3nm EUV, and be clocked higher.
TSMC 3nm will be used in mobile phones and tablets in 2022; we know Apple is using it. Most likely for AMD graphics cards and consoles in 2023. On 3nm EUV I expect both a PS5 Pro and a PS5 Slim(mer).
Nvidia, I believe, are stuck with Samsung's fabs; we'll have to see how that plays out, but with DLSS the Nintendo Switch 2 could be really special.
 
So I skim-read it, and the flaws are that you're assuming PS5 and XSX are using exactly the same process steps, and that PS5 has an outlier clock for its GPU.

If PS5 and PC RDNA2 are all ~2.2 GHz on the GPU and XSX is at 1.8 GHz, what's your response?

Note some PC parts are also rumoured to be 1.8-1.9 GHz. Why do you think some can go to 2.5 GHz while others sit around 1.8-1.9 GHz?

Answers on a postcard.

I did mention at the beginning my speculative reasons for why they're likely on 7nm DUV Enhanced; in fact I've held this position for months now, and other things seem to affirm it, at least IMO.

I also gave some reasoning for the 2.2 GHz and even 2.5 GHz clocks on the rumored Big Navi cards: they're both likely using 7nm EUV, and those are Boost clocks. From the 5700 XT's Boost clock of 1.905 GHz and PS5's "continuous boost" (as Cerny put it) clock of 2.23 GHz, we can see that you don't have to respect any rules of linear power to get linear performance gains. Boost clocks really aren't too concerned with that, especially on the PC GPU side.

If you look at the sweetspot range I pegged for RDNA2, Series X falls right around the lower end of it, so it's a very conservative clock; they may have preferred that to keep cooling simple yet efficient while still hitting their performance targets. By necessity of having a smaller GPU, Sony would've needed a higher clock frequency to reach what they felt were good performance targets. I'm simply using this first part (in a series) to figure out how much power they'd need to do it, and I feel what's in here is a satisfactory figure, especially considering what the PSU is listed at.

As to why some PC GPUs can go up to 2.5 GHz and others can't, that could be for any number of reasons; it's not strictly indicative of the process node or its limitations. I don't think there's a universally applicable answer to that question, but this thread wasn't really trying to answer it, either.

PS5 Pro will double the CUs to 80 (72 active), be on TSMC 3nm EUV, and be clocked higher.

Interesting speculation; I personally don't see this happening at all, tbh, but I'll have to refine my points as to why for a later date.

One reason I don't see it happening: it's too conventional an approach for power gains, not particularly efficient at extracting more performance in and of itself, and still too reliant on brute-forcing Moore's Law.
 

Falc67

Member

One of the biggest questions surrounding PS5's GPU, though, is exactly how much power it actually consumes. This probably isn't a number that will ever be readily provided, but we can try to calculate it roughly. One thing to keep in mind immediately: the PS5 has a 350-watt PSU. We can infer that Sony is at least aiming for a cooling solution as good as the Xbox One X's (most likely better), and looking at that console, its max power consumption was 170 watts on a 245-watt PSU. That gives a power consumption headroom of 75 watts. We could then say that if the PS5's cooling solution aims to be better than that console's, it should need less headroom, right? Welll.....

Electricity.jpg


If the next-gen consoles were mainframes...


See, both PS5 and Series X pack much higher densities into smaller spaces thanks to massively reduced nodes, and there's no neat linear correlation between power (electricity) and heat handling: a given power saving doesn't automatically buy you an equivalent reduction in heat generation, or in how hard that heat is to dissipate. So even with the benefits of a 7nm process and the power consumption savings it brings, your heat generation and heat dissipation requirements aren't guaranteed to scale down with it. Even if the next-gen consoles had "only" a One X's level of cooling in place, they'd very likely need a bit more than 75 watts of headroom on the PSU. Keep this in mind, because I'm going to give an estimate for the PS5's system TDP later on.

Now, to try figuring out the PS5 GPU's TDP, I looked at the 5700 XT's TDP (courtesy of TechPowerUp), listed at 225 watts. Keep in mind that figure also includes the 8 GB of GDDR6 RAM, which we shouldn't carry forward. A common GDDR6 module consumes 1 to 1.5 watts, so taking the most lenient route (8 * 1.5), I shaved off 12 watts, leaving the GPU itself at 213 watts. To go further I took another big liberty: since the PS5 only has 36 active CUs, I needed a wattage-per-CU figure for the 5700 XT, and 213 watts / 40 CUs gives 5.325 watts per CU. That isn't a perfect metric, but it's a "good enough" one and, again, pretty lenient. Deactivating 4 CUs shaves the TDP down further, to 191.7 watts (213 - (5.325 * 4)).
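
If you want to replay that chain of arithmetic yourself, here it is in Python (the 1.5-watts-per-module GDDR6 figure and the flat per-CU split are my own lenient assumptions, as noted above):

Code:
board_tdp = 225.0              # 5700 XT board power per TechPowerUp, in watts
gddr6 = 8 * 1.5                # assume 1.5 W per GDDR6 module, 8 modules = 12 W
gpu_tdp = board_tdp - gddr6    # 213 W for the GPU silicon alone
watts_per_cu = gpu_tdp / 40    # 5.325 W per CU (crude flat split across CUs)
tdp_36_cu = gpu_tdp - 4 * watts_per_cu
print(watts_per_cu, tdp_36_cu)  # 5.325, ~191.7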

From there it gets a bit trickier; the PS5 is an RDNA2 chip, so it should see SOME power consumption reduction over the RDNA1 5700 XT. Going by some logical guesses, we can probably say the PS5 (and most likely Series X and S) are 7nm DUV Enhanced chips. That means they likely won't enjoy the full power reduction over 7nm that a 7nm EUV chip would, but a happy middle ground is possible: 7nm EUV brings a 15% power consumption reduction over 7nm, so let's put 7nm DUV Enhanced at half of that, a 7.5% reduction. Decent enough. However, that reduction only holds clock-for-clock, so if two parts run at different clocks, you have to adjust. And we know the PS5's GPU is clocked much HIGHER than a 5700 XT's, to the point where this 7.5% is likely negated. Thankfully, there's still a sizable power consumption reduction for RDNA2 over RDNA1...though maybe not as large as AMD's claimed 50% performance-per-watt increase.

Looking at RDNA1's gains over Vega, we saw a 14% performance increase with a 23% power consumption reduction clock-for-clock. RDNA1 also saw a 25% performance increase over GCN (which Vega belonged to, albeit as a later-gen GCN). IIRC, AMD has claimed a 50% PPW increase and a 50% IPC gain for RDNA2 over RDNA1; I think at least the 50% IPC claim is slightly exaggerated. The 50% PPW improvement might be realistic, but I imagine it applies to 7nm EUV chips, and I don't think the PS5 and Series systems are on that exact process (we also don't know what frequency range, i.e. in-sweetspot or out-of-sweetspot, that 50% PPW figure is based on). That said, I can't see AMD having gone backwards, so to be friendly (if slightly reserved) I'm giving RDNA2 a 30% power consumption reduction and a 25% IPC gain over RDNA1, on 7nm DUV Enhanced, clock-for-clock. These percentages will serve as the basis for both Sony's and MS's systems going forward.

AMD-Navi-RDNA-2.jpg


The last (and most challenging) thing to figure out is how the power-to-frequency curve behaves for RDNA2 beyond its sweetspot. Part of the challenge is that we don't actually know RDNA2's sweetspot range. However, I think we can approximate it by applying the assumed 7nm DUV Enhanced benefit (7.5%) as an uplift to RDNA1's known sweetspot low and high (1.7 GHz, 1.8 GHz), to get:

>1.7 GHz + 7.5% = 1.8275 GHz

>1.8 GHz + 7.5% = 1.935 GHz

These are what I suspect are the new sweetspot endpoints for an RDNA2 GPU on the 7nm DUV Enhanced process. As we can see, the 5700 XT, if it were an RDNA2 GPU, would fall right near the upper end of this new sweetspot, while the PS5, as an RDNA2 GPU, actually exceeds it by some 295 MHz. The last piece of the puzzle, then, is what the power-to-frequency ratio curve looks like within that 295 MHz range. We know the ratio at the very peak of the range, given to us by Cerny himself: 5:1. But it doesn't stay 5:1 across the entire curve; it most likely ramps up non-linearly across the spread from 1.935 GHz to 2.23 GHz, and that ramp is probably sharper ("more clustered") on a 7nm DUV Enhanced part than on a 7nm EUV part. Unfortunately, there's no equivalent power-to-frequency curve published for the 5700 XT, AFAIK. We can assume that at its peak the 5700 XT probably had a somewhat harsher ratio, perhaps 6:1 or even 7:1, and that was with a narrower MHz range to push up through.
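
As a quick sanity check on those endpoints (treating my assumed 7.5% DUV Enhanced benefit as a straight frequency uplift, which is speculation on my part, not anything AMD has stated):

Code:
rdna1_sweetspot = (1.700, 1.800)   # GHz, RDNA1's rough sweetspot range
uplift = 1.075                     # assumed 7.5% benefit of 7nm DUV Enhanced
low, high = (round(f * uplift, 4) for f in rdna1_sweetspot)
print(low, high)                   # 1.8275 1.935
print(round(2.230 - high, 3))      # 0.295 GHz of PS5 clock beyond the sweetspot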

GettyImages-647221380.jpg


After a hard day's... well, some measure of time's... worth of crunching numbers, eh?.......but we ain't done yet!!

With these other numbers settled, we can start figuring out where the PS5 GPU's power consumption likely sits. Going back to the 191.7-watt figure (a 5700 XT GPU @ 1.905 GHz with 36 CUs active), we apply our somewhat conservative but probable RDNA2-over-RDNA1 power reduction (30%, on 7nm DUV Enhanced) as a modifier: 191.7 watts - 30% = 191.7 - 57.51 = 134.19 watts. Keep in mind, though, that this is clock-for-clock, and the PS5 GPU's clock is much faster; 295 MHz faster than the adjusted 1.935 GHz sweetspot peak, in fact. Dividing 2230 MHz by that 295 MHz difference gives a ratio of about 7.56, which I'll treat as a 7.6% figure. Again, we have to take things with a bit of liberty here. Being very lenient and assuming 1:1 power-to-frequency scaling (which wouldn't hold beyond the sweetspot; we'll correct for that in a moment), we convert that 7.6% into a flat 7.6-watt addition to the 134.19 watts calculated earlier, giving us roughly 141.8 watts.

However, it doesn't end there. We still have to account for the performance gain the PS5 GPU has over a 5700 XT once their CU counts are evened up, at their native clocks. A 5700 XT @ 1.905 GHz (Boost clock) with 36 CUs active gives 8.778 TF, while the PS5's 36 CUs at a "sustained boost" (Cerny's own words) of 2.23 GHz give about 10.276 TF, a difference of roughly 1.4976 TF. Again I'll be lenient and assume a 1:1 linear relationship, this time between clock frequency and floating-point throughput. In this case, though, we should really treat the 5700 XT as if it were an RDNA2 chip, which means checking whether its Boost clock sits beyond the new sweetspot peak. It doesn't, so we should instead calculate its performance here @ the 1.935 GHz sweetspot peak, since what we're after is the power cost of the clock difference between a 5700 XT inside the sweetspot on 7nm DUV Enhanced and a PS5 outside the sweetspot on the same process.

36 CUs * 64 shaders per CU * 2 FLOPs per clock * 1.935 GHz gives 8.91648 TF. Subtract that from PS5's 10.27584 TF and we're left with 1.35936 TF; divide that difference into 10.27584 TF and you get a ratio of about 7.56, which, as before, I'll treat as 7.56%. We now add this 7.56% as if it represented the wattage increase needed to obtain that extra performance (again assuming a linear performance-to-power relationship, which we know breaks down beyond the sweetspot): 141.8 watts + 7.56 watts is roughly 149.4 watts, call it 149. From here we just need a final wattage bump to account for the actual non-linear power-to-frequency scaling beyond the sweetspot, which I'll figure at about 10 watts, 11 tops: take the aforementioned 7.6% and 7.56% figures, add them, and multiply by 2/3, since a correction driven by the full sum of those figures would likely be too aggressive.
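
All the TF figures above come from the same formula, so here's a small helper for anyone who wants to try other clock/CU combinations (64 shaders per CU and 2 FLOPs per clock are standard for these RDNA parts):

Code:
def teraflops(cus, ghz, shaders_per_cu=64, flops_per_clock=2):
    """Peak FP32 throughput: CUs * shaders/CU * FLOPs/clock * GHz / 1000."""
    return cus * shaders_per_cu * flops_per_clock * ghz / 1000

print(teraflops(36, 1.935))   # ~8.916 TF, a 5700 XT-sized GPU at the assumed sweetspot peak
print(teraflops(36, 2.230))   # ~10.276 TF, the PS5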

In total, adding those roughly 11 watts to 149 watts gives a PS5 GPU TDP of about 160 watts. That number fits nicely once we rough out the other system components' power costs to arrive at a probable total system TDP. Start with the aforementioned 12 watts for the GDDR6 (PS5's 16 GB sits across 8 modules, so the 8 * 1.5 figure carries over), add about 22 watts for the Zen 2 CPU (both PS5 and Series X are likely based on the same Zen 2 cores as the 4800U APU, whose CPU is rated at 25 watts with a max clock of 4.2 GHz; PS5's CPU clock is a bit lower at 3.5 GHz, so I shave off 3 watts), 8 watts for the Tempest Engine, 15 watts for the SSD I/O block (it's been stated to be equivalent to 9 Zen 2 cores in processing power, though presumably with some hardware an actual Zen 2 CPU would have stripped out, so I shave 10 watts off the 4800U CPU figure on the assumption that the comparison references this class of CPU), and 6 watts for the NAND modules (half a watt for each of the 12 modules), and you get roughly 223 watts, which I'll round up to a 225-watt total system TDP budget.
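
To make the whole chain reproducible in one place, here's the arithmetic end to end. Everything in it is speculative: the 30% RDNA2 saving, the 1:1 leniencies, and the 2/3 correction factor are all my assumptions from above.

Code:
gpu = 191.7                    # 36-CU 5700 XT figure from earlier, in watts
gpu *= 1 - 0.30                # assumed 30% RDNA2 power saving, clock-for-clock -> 134.19
gpu += 7.6                     # lenient 1:1 add for the 295 MHz past the sweetspot
gpu += 7.56                    # lenient 1:1 add for the perf delta at matched CUs -> ~149.4
gpu += (7.6 + 7.56) * 2 / 3    # ~10 W (11 tops) correction for non-linear scaling
print(round(gpu))              # prints 159, i.e. the ~160 W ballpark

# Rough system budget on top (component figures are my guesses from above):
system = 160 + 12 + 22 + 8 + 15 + 6   # GPU, GDDR6, CPU, Tempest, I/O block, NAND
print(system)                         # 223 W, rounded up to a 225 W budget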

SO, a TL;DR:

>PS5 GPU Power Consumption: 160 watts

>PS5 GPU Die Area: ~ 105 mm^2

>PS5 System Power Consumption: 225 watts

>PS5 PSU: 350 watts

the-moment-when-5af3fe.jpg


Aaand we did it, gang. We're done with the numbuahz!!


TLDR
 

geordiemp

Member

Yes, I agree (edit).

XSX is likely DUV Enhanced, and PS5 / some PC parts EUV around the FinFET layers.

I don't for one second think MS just clocked lower "because reasons"; nobody leaves potential on the table.

There's another potential reason for the lower XSX clocks: propagation delay may be different, since XSX isn't laid out the same (14 CUs per shader array).

Or it could be a bit of both.
 

LordOfChaos

Member
Allegedly Apple booked out ALL of TSMC's 5nm, though I'm not sure for how long. For a substantial mid-gen refresh they'll probably want at least two node shrinks, so maybe 3nm in 2023, which is about right for "mid" gen.
 

ZywyPL

Banned



What would be the purpose of mid-gen refresh models? Without one, there's no business reason to even start thinking about such models. Realistically, the mindset of a typical console gamer is: A) 30FPS is enough, B) 4K is a waste of resources, C) RT is a gimmick. So what would the hypothetically more powerful models do? 8K? 120FPS? Nobody wants it. My personal opinion? Console plebs don't deserve anything more than 1080p30; give them that at $199 and you'll swim in money.
 
Allegedly Apple booked out ALL of TSMC's 5nm, though I'm not sure for how long. For a substantial mid-gen refresh they'll probably want at least two node shrinks, so maybe 3nm in 2023, which is about right for "mid" gen.

So where does that leave PS6 and Series X-Next? Anything beyond 3nm isn't even realistically known to be possible without massive breakthroughs in a few areas, because once you push past 3nm you're getting into near-atom-sized territory and physics eventually takes over.

I don't think you need 3nm for mid-gen refreshes, honestly, especially if they land in 2023 like some think. Mid-gen will likely focus more on pushing efficiency in things like RT, ML and compute, and on experimenting with various 3D packaging methodologies to push up some performance.

Plus I also think both MS and Sony will push for somewhat cheaper mid-gen refreshes, but again, hopefully I can explain my own perspective on this in Part 2 whenever I can post it.

I think there is a cost vs. benefit to which process steps are chosen: pick more EUV around the FinFET layers and you can clock higher, but at more cost; go DUV and it's cheaper, but with more variance in critical dimensions and lower clock POTENTIAL.

It would explain why PS5 and XSX are the same price with vastly different approaches. Sony wanted speed, MS wanted CU count.

Regarding pricing, there are other things that could be factoring in. We know PS5 has a smaller APU, but we also know it has more NAND chips, a faster decompressor and flash memory controller (which would at least cost more in R&D), a more feature-packed controller etc. Those costs add up.

I should clarify: I don't think either system is on default 7nm, but on DUV Enhanced. AMD showed this in a listing at a shareholder event earlier this year, and it sits between 7nm and 7nm EUV. At that same showing they mentioned a range of processes for RDNA2 GPUs: 7nm, 7nm DUV Enhanced, 7nm EUV. I'd guess that depending on the market sector, availability, and what the client wants, we'll be getting GPUs across that range for a good while.

Falc67 I had that section in the OP down near the end man :LOL:
 

geordiemp

Member

I believe it's up to 4 or so critical layers that have the option of EUV, with the rest DUV; it's all marketing, and we'll never know, as TSMC is more secretive than North Korea lol.

Also, TSMC 7nm is supposedly a minimum of 6 nm or less at the lower end, and around 40 nm a few layers above (metallics); "7nm" is just PR and approximate density.
 

LordOfChaos

Member


Below 3nm, transistor scaling gets iffy... Good thing, then, that fabrication process names are all marketing malarkey and density still has a long way to go. "3nm" would denote the minimum feature size, but that's not the same thing as density gains, i.e. mega-transistors per mm^2.

If this guy says there's room to go for a long while, I'm with him.

How this works right now: if a single feature can be 3nm, they call it a 3nm node. It doesn't matter if the rest of the die looks every bit like 10nm in fin widths and other measures. There's a long way to go, considering these are three-dimensional objects.
 

Great Hair

Banned
PlayStation 6 2026 (heavily insinuated by the site)

ITO:
Indeed, in the past, the cycle for a new platform was 7 to 10 years, but in view of the very rapid development and evolution of technology, it’s really a six to seven year platform cycle. Then we cannot fully catch up with the rapid development of the technology, therefore our thinking is that as far as a platform is concerned for the PS5, it’s a cycle of maybe six to seven years. But doing that, a platform lifecycle, we should be able to change the hardware itself and try to incorporate advancements in technology. That was the thinking behind it, and the test case of that thinking was the PS4 Pro that launched in the midway of the PS4 launch cycle.

My take:

There won't be a PS5 Pro, as I don't see much improvement within the next 3 or 4 years. An extra ~50% more teraflops, twice the SSD space, and that's it.

15 vs. 10 TF ain't gonna make a huge difference, and for 8K we need way more than 15 TF... an RTX 3090 with 36 TF is too weak for native 8K (at ultra settings).
 

geordiemp

Member


Yup, 7nm is more like 40 nm above the FinFET gates; the minimum dimension is rumoured to be 6nm or so :messenger_beaming:

Also, "3nm" depends on the material. There are a lot of exotic gate materials and low-K dielectrics that have been used for a while, and they keep getting more exotic. Hence you can have a gate acting like 3nm while being physically larger.
 

geordiemp

Member

If there is a Pro, it will be for enhanced ray tracing at 60 FPS, or GI/effects, so you can play graphics mode at 60. It won't be a necessity.
 

RaySoft

Member
I'm sure you wrote a real nice wall of text per usual. I'm sorry, but I'm going straight for your thread title without reading your post (reason being I'm 95% certain of what I'd find). My take is that Sony won't release a mid-gen refresh. MS WILL. That's the new landscape. MS is in full-on PC mode; all their games are designed to run on a multitude of hardware, which they will upgrade down the line. That's the REAL "generations" Sony talked about. MS is now in PC territory. Sony has designed a hell of a console that most haven't understood just yet. That I/O is KEY for real next-gen titles, and I'm not talking load times here. As I've said since Cerny's talk (go ahead, look it up), this I/O will change how games are made. Period. We just have to wait a few years to see the benefits.

Edit: I've read enough of your post now to, again, understand that you come from a PC angle. You compare PC GPUs to console GPUs etc. This is actually irrelevant for now; the PC has some catching up to do first. The next-gen GPUs (Ampere/RDNA2) are so good that they're actually held back more by other critical systems. Your first fault is trying to compare console GPUs (APUs) to dedicated PC GPUs. A whole system is more than its GPU, and a PC GPU, albeit stronger than a console one, shouldn't be the centerpiece of your post. You need to think of the whole ecosystem; a chain is never stronger than its weakest link. This gen, console GPU + I/O will be king. I'm certain this gen will achieve much more than last gen ever did. People keep underestimating what "instant" availability of data may bring to games. I see people counting seconds of load times and such... it's not about that. It's about how you can shorten your pipeline of refreshed data; if it's fast enough, you can start to redesign your whole pipe. And that's exactly what's going to happen with PS5's ridiculous I/O bandwidth.
 

truth411

Member

TSMC 2nm is virtually a lock for 2024 for phones, and probably 2025 for desktop cards.
After that, I don't know.
 

Well, going by what LordOfChaos linked in their post, "2nm" may not actually be 2nm in full. Honestly, I'm not sure how I feel about that yet; it makes the companies sound very disingenuous, maybe even open to lawsuits in a just society for deceitful marketing tactics. Though I know there's some truth to what they posted and linked; there are already plenty of PoP (package-on-package) setups with mixed-node components, so it isn't a stretch to speculate the same could hold for various silicon wafer products.

FWIW, what's ready for top-of-the-line smartphones and desktop GPUs isn't necessarily ready for mid-gen refreshes or even next-next-gen consoles. We'll have to wait and see.

RaySoft Insightful stuff, but I'd like to clear up a misconception. The OP was actually just me trying to figure out some as-yet-unknown PS5 system metrics by drawing on similar metrics we do know from other next-gen consoles (such as Series X) and from GPUs with some similarities to PS5's (the 5700 XT). Those metrics are the probable sweetspot frequency range, the GPU's silicon die area, and the GPU's power consumption on a given process node (7nm DUV Enhanced).

There were no actual comparisons with Series X in the OP, and anything PC-centric there was simply using the 5700 XT as a reference point to jump off from for calculations. WRT what you mention in other areas, there was no comparison between PS5's GPU and PC GPUs meant to indicate which is better. Again, the 5700 XT's purpose here was as a reference for figuring out the PS5's die area and power consumption; there was no focus on things like the SSD until I needed a realistic number for the system's total power consumption.

While it can't be denied Sony has developed a really capable SSD I/O solution, I think some people make the mistake of believing it stands head and shoulders above what other companies are providing in their own next-gen systems or next-gen PC GPUs. That isn't actually the case. There will be small differences and edge cases where certain solutions provide better results here and there, but nothing where any one solution routinely outperforms the others at data I/O. All of the various approaches are immensely capable and backed by MANY years of R&D and, in some cases, actual integration in real-world products in the data markets. Here's a post from dobwal on B3D that gives a glimpse of how much real-world R&D these companies have invested in solving data I/O bottlenecks:

GZip does not normally write the compressed size of each block in its header, so finding the position of the next block requires decompressing the current one, precluding multithreaded decompression. Fortunately, GZip supports additional, custom fields known as EXTRA fields. When writing a compressed file, MiGz adds an EXTRA field with the compressed size of the block; this field will be ignored by other GZip decompressors, but MiGz uses it to determine the location of the next block without having to decompress the current block. By reading a block, handing it to another thread for decompression, reading the next block and repeating, MiGz is able to decompress the file in parallel.

This isn't a patent or some research solution that hasn't been applied in the real world. It's being used by a major corporation, namely Microsoft.

There are also variants of gzip similar to MiGz, like GZinga and BGZip, that offer random access. GZinga is used by eBay.

https://tech.ebayinc.com/engineering/gzinga-seekable-and-splittable-gzip/

Who uses BGZip? It's used to compress bioinformatics data. Guess whose hardware can support this format to accelerate bioinformatics apps? Nvidia. BGZip is supported on Nvidia GPUs using CUDA.

http://nvlabs.github.io/nvbio/
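
To make the MiGz trick a bit more concrete: a gzip member's header can carry optional EXTRA subfields (per RFC 1952), so a reader can pull a compressed-block-size hint out of the header without decompressing anything. Here's a minimal header-parsing sketch in Python; the MiGz subfield ID shown at the end is hypothetical, and the real MiGz layout may differ:

Code:
import struct

def gzip_extra_subfields(f):
    """Parse one gzip member header; return its EXTRA subfields as {(si1, si2): bytes}.
    Sketch only: assumes f is a binary file positioned at the start of a member."""
    header = f.read(10)                   # magic, CM, FLG, MTIME, XFL, OS
    if header[:2] != b"\x1f\x8b":
        raise ValueError("not a gzip member")
    fields = {}
    if header[3] & 0x04:                  # FEXTRA flag set
        (xlen,) = struct.unpack("<H", f.read(2))
        extra = f.read(xlen)
        pos = 0
        while pos + 4 <= len(extra):      # each subfield: SI1, SI2, LEN, data
            si1, si2 = extra[pos], extra[pos + 1]
            (length,) = struct.unpack_from("<H", extra, pos + 2)
            fields[(si1, si2)] = extra[pos + 4 : pos + 4 + length]
            pos += 4 + length
    return fields

# A MiGz-style reader would look up its own subfield (ID hypothetical), read the
# stored compressed size, then hand that many bytes to a worker thread to inflate:
# block_size = int.from_bytes(fields[(ord("M"), ord("Z"))], "little")

Each worker then inflates its block independently, which is all "multithreaded decompression" means here.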

There are MANY perfectly valid approaches to solving data I/O bottlenecks, and Sony's is just one of them. Companies like Microsoft and Nvidia have invested heavily in technologies that address many of these same problems, and it's not a stretch to assume they've integrated a lot of that into their next-gen gaming offerings. So I don't know how true the notion is that Sony's I/O solution is so far ahead of everything else that no one understands it yet, though I do agree that (just as on the Series systems, at least) it'll mainly be the 1st-party titles that best show off each system's data I/O capabilities.

If there is a Pro, it will be for enhanced ray tracing at 60 FPS, or GI/effects so you can play graphics mode at 60. It won't be a necessity.

Kind of where I am with mid-gen refreshes, too. There's no new mass-market resolution standard to push this time the way 4K was last time. There might be room for things tied to big VR/AR advances and such, though.
 
I'll give my two cents here.

PS5 Pro should land somewhere around 24-30TF; PS4 Pro was roughly 2.3x the TF of the base PS4 (4.2 vs. 1.84 TFLOPs).

I'm not sure AMD's solutions compare to Nvidia's dedicated RT cores and DLSS. Let's see what RDNA2 GPUs offer, or maybe PS5 Pro will go to RDNA3.

However, I think the PS5 is fine at its current price and performance for at least the next 3 years.
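For reference, here's that scaling math with the commonly cited FP32 figures (1.84 / 4.2 / 10.28 TF are my assumed inputs); the PS4-to-Pro jump was closer to 2.3x than 3x, which is why I'd peg the low end around 24TF:

```python
ps4, ps4_pro, ps5 = 1.84, 4.2, 10.28   # FP32 TFLOPs, commonly cited figures
mult = ps4_pro / ps4                   # ~2.28x, the actual PS4 -> Pro jump
print(f"PS4 -> PS4 Pro multiplier: {mult:.2f}x")
print(f"Same multiplier on PS5:    {ps5 * mult:.1f} TF")   # ~23.5 TF
```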
 

VFXVeteran

Banned
If there is a Pro, it will be for enhanced ray tracing at 60 FPS, or GI/effects so you can play graphics mode at 60. It won't be a necessity.


Can you give a description of what enhanced ray tracing at 60FPS would look like, when 3090s can't even do 60FPS "enhanced" RT? You're rationalizing a jump from 10TFLOPs to over 30TFLOPs in just 3 years' time? At what price? If that tech were so close to ready (3 years is way too soon for a complete revamp of the hardware), why not just release a scaled-down version now? Or better yet, we should be seeing AMD announce boards killing it in RT with games running at 60FPS, right?
 

NinjaBoiX

Member
I'm not technical, sorry. But I wonder if we'll see a mid-term refresh. Without doubt both the Pro and One X were introduced to open up the 4K market. With the PS5 and XBX already capable of 4K, is a mid-term refresh necessary?
I’d imagine there’d be a minor boost to horsepower along with efficiency savings and a smaller case.

It’ll be somewhere between this gen and last gen.
 

Andodalf

Banned
I'm not technical, sorry. But I wonder if we'll see a mid-term refresh. Without doubt both the Pro and One X were introduced to open up the 4K market. With the PS5 and XBX already capable of 4K, is a mid-term refresh necessary?

It's a good opportunity to get a die shrink in, which means you can make a more efficient console and maybe even cut die area, reducing cost. Additionally, they could potentially shift to GDDR7, which would be nice, as the GDDR6 (not even the X variant) found in the new consoles will eventually fall out of favor and rise in price. And with MS having declared generations dead, they could be putting out an entirely new console in 2023/24, and Sony would need something to answer with, even if it's not an entirely new console.
 
Can you give a description of what enhanced ray tracing at 60FPS would look like, when 3090s can't even do 60FPS "enhanced" RT? You're rationalizing a jump from 10TFLOPs to over 30TFLOPs in just 3 years' time? At what price? If that tech were so close to ready (3 years is way too soon for a complete revamp of the hardware), why not just release a scaled-down version now? Or better yet, we should be seeing AMD announce boards killing it in RT with games running at 60FPS, right?

Yeah, this is roughly the camp I'm in when it comes to next-gen mid-gen refreshes. The Pro and One X were exceptions, massive power boosts, because the base consoles were so conservative. I think the 3080 and 3090 have skewed people's perspectives into viewing the next-gen consoles as similarly conservative in some respects, but that's short-changing those systems IMHO.

I see mid-gen refreshes focusing mainly on lowering power consumption, refining specific aspects of the graphics pipeline (RT, ML, compute, DLSS-style techniques, etc.), and maybe one of them experimenting with a chiplet design for the GPU.

It's a good opportunity to get a die shrink in, which means you can make a more efficient console and maybe even cut die area, reducing cost. Additionally, they could potentially shift to GDDR7, which would be nice, as the GDDR6 (not even the X variant) found in the new consoles will eventually fall out of favor and rise in price. And with MS having declared generations dead, they could be putting out an entirely new console in 2023/24, and Sony would need something to answer with, even if it's not an entirely new console.

Are there any JEDEC specifications/roadmaps for GDDR7 yet? I'd love to know if there's anything around to look at.
 

Andodalf

Banned
Yeah, this is roughly the camp I'm in when it comes to next-gen mid-gen refreshes. The Pro and One X were exceptions, massive power boosts, because the base consoles were so conservative. I think the 3080 and 3090 have skewed people's perspectives into viewing the next-gen consoles as similarly conservative in some respects, but that's short-changing those systems IMHO.

I see mid-gen refreshes focusing mainly on lowering power consumption, refining specific aspects of the graphics pipeline (RT, ML, compute, DLSS-style techniques, etc.), and maybe one of them experimenting with a chiplet design for the GPU.



Are there any JEDEC specifications/roadmaps for GDDR7 yet? I'd love to know if there's anything around to look at.

No spec yet, but here are two links with possible early info.




To add my own input: DDR3 hit consumers in '07, with GDDR5 following in '08. DDR4 came in 2014, with GDDR6 following in 2018. With LPDDR5 already in phones and Intel targeting it for next year, 2023-2024 seems like a very possible target for GDDR7, but for that to hold we'd need to see an official specification soon.
 

cryptoadam

Banned
The systems haven't even launched and we're already talking about Pro versions.

And we all know they're coming. In a way it's pointless to buy a system now unless you really need 4K gaming and a Pro or One X doesn't scratch the itch for you.

I wonder how many day-1 buyers who shell out $499 are gonna shell out another $499 or $599 in 2 years for the Series X Turbo and PS5 Pro.
 

EverydayBeast

ChatGPT 0.1
Video games are kinda established in the "mid-gen refresh" cycle now, like phones and cars; the PS4 Pro worked well for Sony.
 

VFXVeteran

Banned
Yeah, this is roughly the camp I'm in when it comes to next-gen mid-gen refreshes. The Pro and One X were exceptions, massive power boosts, because the base consoles were so conservative. I think the 3080 and 3090 have skewed people's perspectives into viewing the next-gen consoles as similarly conservative in some respects, but that's short-changing those systems IMHO.

People have done this for years against high-end Nvidia hardware. They somehow think the "next" iteration is going to be the monster they've all been wishing for, instead of looking realistically at the facts: 1) AMD is not at the forefront of tech like Nvidia; 2) the cost of owning a console is rising to the brink of unaffordability; 3) tech takes LOTS of time; and 4) consoles will never really "lead" in any tech advances (i.e., whatever they ship will most likely already have been iterated on through some other means). The sooner we come to grips with how this stuff pans out, by taking the last two generations (PS4/PS5) into account and looking at the outcomes there, the better off we'll be at having a reasonable conversation about future hardware. As it is now, it's not even worth entertaining the conversation, since the wishlist is way out in left field, like the majority of the Speculation thread.
 

Komatsu

Member
Very good thread and an enjoyable read. I agree with most of your assumptions, though my suspicion is that the whole conversation about “continuous boost” is smoke & mirrors, not altogether different from some of the claims made about the Cell back in 2005.

The reason we got mid-gen refreshes last time, I believe, is that both the X360 and PS3 stuck around for almost eight years (2005-2013), for reasons not worth getting into in this particular thread, so the following generation came out underpowered (1.3/1.8 TFLOPs) just as display technology began to shift. Not sure we'll see the same move from the major players this time around.
 