
PS5 Die Shot has been revealed

Lysandros

Member
Wrong.

Big Navi, the 80 CU die aka Navi21, took the 40 CU die of Navi10 and doubled the number of shader engines, from 2 to 4.

The upcoming Navi22 die has the same 40 CUs as the PS5.

PS4, PS4 Pro, PS5 and Navi10, Navi21, Navi22 all use the same layout of 5 DCUs per SA and 2 SAs per SE. Navi21 just has 4 SEs, not 2.

XSX uses the same 2 shader engines as the PS5, Navi22 and Navi10, but adds 2 extra DCUs per shader array. Not like Navi21.


RDNA1 Navi10 - 40 CUs (RX 5700XT, 5700)
5 Dual Compute Units per Shader Array
2 Shader Arrays per Shader Engine
2 Shader Engines
5x2x2x2=40 CUs
2560 shaders
64 ROPs
160 TMUs

RDNA2 Navi21 - 80 CUs (RX 6900XT, 6800XT, 6800)
5 Dual Compute Units per Shader Array
2 Shader Arrays per Shader Engine
4 Shader Engines
5x2x2x4=80 CUs
5120 shaders
128 ROPs
320 TMUs

RDNA2 Navi22 - 40 CUs (RX 6700XT, 6700)
5 Dual Compute Units per Shader Array
2 Shader Arrays per Shader Engine
2 Shader Engines
5x2x2x2=40
2560 shaders
64 ROPs
160 TMUs

PS5 GPU - 40 CUs
5 Dual Compute Units per Shader Array
2 Shader Arrays per Shader Engine
2 Shader Engines
5x2x2x2=40
2560 shaders
64 ROPs
160 TMUs

XSX GPU - 56 CUs
7 Dual Compute Units per Shader Array
2 Shader Arrays per Shader Engine
2 Shader Engines
7x2x2x2=56
3584 shaders
64 ROPs
224 TMUs

XSS GPU - 24 CUs
6 Dual Compute Units per Shader Array
2 Shader Arrays per Shader Engine
1 Shader Engine
6x2x2x1=24
1536 shaders
32 ROPs
96 TMUs

And of course the consoles disable 4 CUs for yields, which also disables 16 TMUs.
And some of the PC GPUs get cut down for segmentation and yields.
Is it possible that PS5 has slightly more TMUs than 160 as a customization?
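The spec blocks above all follow the same arithmetic (2 CUs per DCU, 64 shaders and 4 TMUs per CU, and, going by every part listed, 16 ROPs per shader array). A quick illustrative calculator, assuming those ratios hold:

```python
def gpu_config(dcus_per_sa: int, sas_per_se: int, shader_engines: int) -> dict:
    """Derive headline specs from an RDNA-style shader layout.

    Assumes 2 CUs per Dual Compute Unit, 64 shaders and 4 TMUs per CU,
    and 16 ROPs per shader array (ratios inferred from the parts above).
    """
    cus = dcus_per_sa * 2 * sas_per_se * shader_engines
    shader_arrays = sas_per_se * shader_engines
    return {
        "CUs": cus,
        "shaders": cus * 64,
        "TMUs": cus * 4,
        "ROPs": shader_arrays * 16,
    }

# Navi10 / Navi22 / PS5 layout: 5 DCUs per SA, 2 SAs per SE, 2 SEs
print(gpu_config(5, 2, 2))  # {'CUs': 40, 'shaders': 2560, 'TMUs': 160, 'ROPs': 64}
# XSX layout: 7 DCUs per SA, 2 SAs per SE, 2 SEs
print(gpu_config(7, 2, 2))  # {'CUs': 56, 'shaders': 3584, 'TMUs': 224, 'ROPs': 64}
# Disabling 4 CUs for yields also removes 4 * 4 = 16 TMUs, as noted above.
```

Note this only models the full silicon; the consoles then fuse off 4 CUs for yields.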
 
Oh, I know your graph like the back of my hand, I've posted it so many times lol.

The higher power consumption is what I was referring to. Your graph showed 220W for a 5700 XT at just 2.15 GHz. That would have been almost 250W for a 2.23 GHz GPU, just for the GPU die. So to me, whatever architecture or node improvements allowed AMD to reduce that level of power consumption are in the PS5 GPU. I am not well versed enough in GPU die speculation to tell you what those improvements are, but they aren't node improvements, since AMD said they are not using 7nm+ for RDNA 2.0 chips.

The PS5 seems to be outperforming the 5700 XT despite its lower CU count, due to its higher clocks/teraflops. So I wouldn't be surprised if its Fire Strike score is somewhere in the 28k-30k region despite having 4 fewer CUs.
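The "almost 250W" figure follows from the common back-of-the-envelope rule that GPU power scales roughly with the cube of frequency when voltage rises along with the clock (a rough model, not a measurement):

```python
def scaled_power(base_watts: float, base_ghz: float, target_ghz: float) -> float:
    """Rough cubic frequency/power scaling: P ~ f * V^2, with V rising ~ f."""
    return base_watts * (target_ghz / base_ghz) ** 3

# 220W at 2.15 GHz (the 5700 XT data point) extrapolated to 2.23 GHz
print(round(scaled_power(220, 2.15, 2.23)))  # 245 -- i.e. "almost 250W"
```

Real silicon deviates from this (leakage, voltage floors, binning), so treat it as an order-of-magnitude sanity check only.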


yeah, don't get me wrong. i was in no way trying to say that consoles are RDNA 1 because of their power profile. i pretty much disproved that with my gonzalo analysis thread. PS5 would have been well north of 300W at the wall if that were the case.

what i was trying to say: infinity cache saves lots and lots of power by preventing data from being moved around unnecessarily. so equivalent rdna2 GPUs should be even more power efficient than the consoles.

the fact that PS5 is even hitting these clock speeds must mean there was a massive pipeline rework compared to rdna 1.

xsx fits the rdna 1 power profile better, but is still more efficient than what you could expect from a comparably wide rdna 1 equivalent.
 

MonarchJT

Banned
1.) I saw some articles which put forward "RDNA1.1" or "RDNA1.5" but that was not my creation or claim.
I obviously have no control about how people cite my commentary or (mis)interpret my statements.
There is the possibility for further clarification and getting in touch with newswriters to recommend adjustments but personally I don't have the energy and time to do that on a wide scale.

2.) I may do some speculative PS5 analysis, because there is no IP list for the PS5, so we can't tell for sure how it's built, as we can for the Xbox Series X.
But yes, the PS5 is also a mixed tech chip.

3.) The render frontend setup is the same as on RDNA1 GPUs, however this doesn't necessarily mean that it's identical.
The Compute Units have the physical design improvements from RDNA2, so even if there were no difference from a functional perspective, calling them RDNA1 could be seen as an oversimplified statement.
And if you exclude the TMUs with Texture Sampler Feedback and Ray Tracing acceleration, there wouldn't be much difference between RDNA1 and RDNA2 either way.
I would put the CUs in a mixed domain as well.


I took no offense because it was rather obvious that it ties to the 1.) point above but since I was directly addressed I felt the need to lay down my actual standpoint.
First of all, hi! And thanks for all the explanations and the time you dedicate to making the technology clearer to non-experts. So, to understand better: do you believe that the difference between RDNA1 and RDNA2 CUs would be negligible?
And as a last question (if you are not bored): do you think the increase in clocks is mainly due to the node jump, or more to architecture improvements?
 

SlimySnake

Flashless at the Golden Globes
@Bill O'Rights

With such an overwhelming amount of evidence from official sources, can the admins take action against anyone calling either system anything other than a custom RDNA2 one?

This reminds me of all the claims of the PS5 running at 8TFs, or the brute-force XSX ones that we had in the past. It might be a good idea to take some action, otherwise many discussions will get derailed over and over again.

Thank you for looking at this.
this is absolutely absurd. Don't be a discussion Nazi. The discussion is literally based around tweets from people much smarter than us talking about RDNA 1.5 or RDNA 1.xx, i.e., what is custom RDNA and what is FULL, and banning people for engaging in that discussion is very Era-like.





So what's next? Should we start banning users for posting these tweets? Or should we just ban the users who extrapolate on the tweets? Should we ban the discussion of what custom means? By that logic, we should ban all discussion of the PS5 GPU, since we all know it's super custom compared to any other RDNA 2.0 GPU out there. Let's ban everyone who brings up the I/O capabilities of the PS5, since that's also custom.
 

3liteDragon

Member
I forgot to add the frontend...

Render Frontend
PS5: RDNA 2?
Series: RDNA

Render Backends
PS5: RDNA
Series: RDNA 2

Compute Units
PS5: RDNA 2
Series: RDNA


Ray-tracing / TMUs
PS5: RDNA 2
Series: RDNA 2
Yes, I was waiting for this, lmao... One thing Cerny was explicit about mentioning was RDNA 2 CUs. We know for a fact that's in the PS5, if nothing else.
This is the part I don't get: how are the Series X CUs RDNA 1 instead of 2? The reason Cerny gave for the PS5's CUs being RDNA 2 was the SIZE of the CUs (as in increased transistor count per CU, not the physical size of the CU on the die), so since the Series X APU is also built on the 7nm node, just with lower GPU clocks (compared to PS5), aren't its CUs still RDNA 2?
 

SlimySnake

Flashless at the Golden Globes
This is the part I don't get: how are the Series X CUs RDNA 1 instead of 2? The reason Cerny gave for the PS5's CUs being RDNA 2 was the SIZE of the CUs (as in increased transistor count per CU, not the physical size of the CU on the die), so since the Series X APU is also built on the 7nm node, just with lower GPU clocks (compared to PS5), aren't its CUs still RDNA 2?
the xsx CUs have hardware-level support for ray tracing built into the TMUs. They are bound to be bigger and closer to RDNA 2.0 CUs than RDNA 1.0 CUs. If the XSX CUs were RDNA 1 CUs, they would not support hardware-accelerated ray tracing.
 

SlimySnake

Flashless at the Golden Globes
what i was trying to say: infinity cache saves lots and lots of power by preventing data from being moved around unnecessarily. so equivalent rdna2 GPUs should be even more power efficient than the consoles.
Ah, I see. I wonder if the PS5 Pro will go with a smaller cache, like the Xbox One's 32MB ESRAM. I always assumed that the extra transistors for the RAM would have drastically increased power consumption.
 

ethomaz

Banned
This is the part I don't get: how are the Series X CUs RDNA 1 instead of 2? The reason Cerny gave for the PS5's CUs being RDNA 2 was the SIZE of the CUs (as in increased transistor count per CU, not the physical size of the CU on the die), so since the Series X APU is also built on the 7nm node, just with lower GPU clocks (compared to PS5), aren't its CUs still RDNA 2?
Cerny compared the increase with GCN... (PS5 vs PS4).

the xsx CUs have hardware-level support for ray tracing built into the TMUs. They are bound to be bigger and closer to RDNA 2.0 CUs than RDNA 1.0 CUs. If the XSX CUs were RDNA 1 CUs, they would not support hardware-accelerated ray tracing.
Ray-tracing is only in the TMUs.
CU and TMU are different silicon parts.
 
Last edited:
this is absolutely absurd. Don't be a discussion Nazi. The discussion is literally based around tweets from people much smarter than us talking about RDNA 1.5 or RDNA 1.xx, i.e., what is custom RDNA and what is FULL, and banning people for engaging in that discussion is very Era-like.





So what's next? Should we start banning users for posting these tweets? Or should we just ban the users who extrapolate on the tweets? Should we ban the discussion of what custom means? By that logic, we should ban all discussion of the PS5 GPU, since we all know it's super custom compared to any other RDNA 2.0 GPU out there. Let's ban everyone who brings up the I/O capabilities of the PS5, since that's also custom.


Well, neither system is RDNA1, and there's plenty of evidence from official sources that disproves that. It's fine to discuss how they customized their RDNA2 GPUs, but it's another thing to say that either of them is RDNA1 when official sources say the opposite. If you're fine with the RDNA1 claims, then you're also fine with the 8TF and brute-forcing claims. Neither of which is true, btw.

I'm just asking the admins' opinion on this. If they are fine with it, then I'll just accept their decision.

Also, I would prefer you not use "Nazi" with me. I know it's just a saying, but I have a sentimental attachment to that word. Not because I am a Nazi, but because I almost didn't exist because of them. Let's leave it at that.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Well, neither system is RDNA1, and there's plenty of evidence from official sources that disproves that. It's fine to discuss how they customized their RDNA2 GPUs, but it's another thing to say that either of them is RDNA1 when official sources say the opposite. If you're fine with the RDNA1 claims, then you're also fine with the 8TF and brute-forcing claims. Neither of which is true, btw.

I'm just asking the admins' opinion on this. If they are fine with it, then I'll just accept their decision.
The question isn't which system is RDNA 1; the question is which system is full RDNA 2.0, and the answer from me, and from pretty much everyone in this thread and the tweets we are linking to, is that neither is fully RDNA 2.0.

The Locuza tweet mentioning that he thinks it's closer to RDNA 1.8 was curious to me, which is why I created that table of known RDNA 2.0 features to see if the XSX or PS5 came closer to RDNA 1.8 or RDNA 2.0. Neither did. I don't know why you think that's a crime that needs Bill O'Rights brought in to determine if it's constitutional.
 
Last edited:
Ah, I see. I wonder if the PS5 Pro will go with a smaller cache, like the Xbox One's 32MB ESRAM. I always assumed that the extra transistors for the RAM would have drastically increased power consumption.

in computer science there is a key metric: pJ per bit per mm,
as in the energy required to move information a certain distance.
energetically it's always cheaper to let information sit in one place. that comes with disadvantages like die area requirements and rising latency for caches as they grow in size (that's why you have different cache levels). chipmakers have to pick their poison.
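That pJ-per-bit-per-mm metric can be turned into a toy estimate. The coefficient and distances below are made-up illustrative values, not real process numbers; the point is only that movement energy scales linearly with distance:

```python
def move_energy_joules(bits: int, distance_mm: float, pj_per_bit_mm: float) -> float:
    """Energy to move `bits` across `distance_mm` on-die, given a cost
    in picojoules per bit per millimetre (illustrative model only)."""
    return bits * distance_mm * pj_per_bit_mm * 1e-12

# Hypothetical comparison: a 256-byte line served by a nearby cache
# versus fetched across the die toward the memory interface.
cache_hit = move_energy_joules(bits=256 * 8, distance_mm=2.0, pj_per_bit_mm=0.1)
dram_trip = move_energy_joules(bits=256 * 8, distance_mm=20.0, pj_per_bit_mm=0.1)
print(dram_trip / cache_hit)  # roughly 10x -- linear in distance in this model
```

This is why a large on-die cache (Infinity Cache) can cut power: the same bits travel far shorter distances on average, at the cost of die area and growing cache latency.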
 
Last edited:
The question isn't which system is RDNA 1; the question is which system is full RDNA 2.0, and the answer from me, and from pretty much everyone in this thread and the tweets we are linking to, is that neither is fully RDNA 2.0.

The Locuza tweet mentioning that he thinks it's closer to RDNA 1.8 was curious to me, which is why I created that table of known RDNA 2.0 features to see if the XSX or PS5 came closer to RDNA 1.8 or RDNA 2.0. Neither did. I don't know why you think that's a crime that needs Bill O'Rights brought in to determine if it's constitutional.

I don't think it's wrong to discuss how custom the RDNA2 GPUs are. But I think it's wrong that some people are saying they are RDNA1 GPUs when it's confirmed by official sources that they are not.

It's kind of like if I said the PS5 is using a GCN GPU with Jaguar cores.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
But I think it's wrong that some people are saying they are RDNA1 GPUs when it's confirmed by official sources that they are not.
I think that's wrong as well. But you are taking issue with a post of mine that literally said this:
The XSX might take the lead once devs start utilizing mesh shaders and VRS, but the PS5 is probably doing better at the moment because it's hitting those higher RDNA 2.0 clocks, whereas MS used those perf-per-watt gains to get more CUs on the GPU.

This clearly shows that I think both consoles are using RDNA 2.0 features. You are asking the mods to look at something that does not exist.
 
man, you people all have to watch the locuza video again.

his point was that there are no clear rdna 1 / 2 borders, but iterations of different feature sets in the specific GPU components. even the advancement of those iterations is not always reflected correctly by their version number.
 
Last edited:

MonarchJT

Banned
I don't think it's wrong to discuss how custom the RDNA2 GPUs are. But I think it's wrong that some people are saying they are RDNA1 GPUs when it's confirmed by official sources that they are not.

It's kind of like if I said the PS5 is using a GCN GPU with Jaguar cores.
The thing is that "rdna x.x" means nothing, especially if we are talking about customized GPUs. We can endlessly discuss nomenclature without ever coming to an end. But there is only one way to figure out which one comes closest to being full rdna2: the features.
 
Last edited:
I think that's wrong as well. But you are taking issue with a post of mine that literally said this:


This clearly shows that I think both consoles are using RDNA 2.0 features. You are asking the mods to look at something that does not exist.

I don't know, I've seen people claim that the PS5 is RDNA1 when it isn't. I know both are a combination of the two, but that doesn't make it correct to say that one system is RDNA1 while the other is RDNA2. It gets really irritating to see the same misinformation spread over and over again.
 

ethomaz

Banned
hmmm... I thought the CUs were made up of Shader Processors, Texture Mapping Units and ROPs.
CUs are CUs and indeed have the Shader Processors inside... linked to them there are TMUs and cache... they are close to each other, but AMD's diagrams show they are different units.

ROPs are usually in the center of the GPU... they are called the Render Backend in AMD chips... close to the GE, Command Processor, etc.
That's a bit farther away from the CUs and TMUs.
 
Last edited:

ethomaz

Banned

There is more talk if you guys want to read... I showed the clocks of the RX 6800 and RX 5700.
We still need more evidence (that is why I say "guess", but it seems like nobody takes that).

I will check the difference between the silicons if I have time today... I'm a bit late with my work, hehe.

but aren't the CUs distributed in groups of different numbers on rdna2?
You mean the number of CUs per Shader Array? It is the same between RDNA and RDNA2.
 
Last edited:

ethomaz

Banned
lol, the guy tried to get an answer from experts and got a non-answer lol
Console warriors on Twitter are even stronger than here on GAF (I never stated a fact, only a guess).




More about the sizes:



The GE and TMUs seem to be way bigger on PS5.
 
Last edited:

DJ12

Member
Look, it matters little to me whether the PS5 has the full RDNA2 set of capabilities. I'm just quoting MS's statement here, take it or leave it. PlayStation fans should go and ask Cerny why Microsoft is lying here, since that's what you guys believe, right?

The quote from Oct 2020 is online:
Xbox Series X|S are the only next-generation consoles with full hardware support for all the RDNA 2 capabilities AMD showcased today.
Seems like it matters a lot to you, actually.
 
I personally don't find Locuza's Twitter posts very constructive, with all that fixation and speculation on "RDNA1", "true RDNA2", "cut-down FPU", "true Zen 2", "128-bit Zen 1 FPU" etc. It reminds me of earlier Microsoft PR statements.
The cut-down FPU aspect is interesting; despite the console warring going on around the topic, what was the trade-off for this sacrifice? Is there any downside in the context of a console?

Even if Sony made the right choice, it only shows they considered different options to optimize the cost/performance of their machine... So far it seems like it works; anything about "future secret GPU boost/cloud/dx12u/second GPU/tools" is pure speculation, has been seen before, etc. Could happen, but it could be on either side.
 

Lysandros

Member
The cut-down FPU aspect is interesting; despite the console warring going on around the topic, what was the trade-off for this sacrifice? Is there any downside in the context of a console?

Even if Sony made the right choice, it only shows they considered different options to optimize the cost/performance of their machine... So far it seems like it works; anything about "future secret GPU boost/cloud/dx12u/second GPU/tools" is pure speculation, has been seen before, etc. Could happen, but it could be on either side.
As I said in my discussion with Locuza, my post was a bit rushed due to a misunderstanding. To my knowledge the whole FPU situation seems fuzzy for now; all we know for sure is that it looks to be (slightly) smaller compared to the PC/XSX one. The precise nature of the tweaking remains unknown. I personally don't think it's something that will affect performance to a noticeable degree, if at all. Maybe his future posts will cast some more light on the matter.
 
Last edited:

Hashi

Member
PS5 doesn't need VRS (Variable Rate Shading) because every pixel on PS5 is rendered in real-time.
I know, I'm writing something false.
:messenger_smiling_with_eyes:
 

Locuza

Member
First of all, hi! And thanks for all the explanations and the time you dedicate to making the technology clearer to non-experts. So, to understand better: do you believe that the difference between RDNA1 and RDNA2 CUs would be negligible?
And as a last question (if you are not bored): do you think the increase in clocks is mainly due to the node jump, or more to architecture improvements?
If you exclude the TMUs, the re-pipelining and other physical design optimizations, then there isn't a huge difference between RDNA1 and RDNA2 Compute Units.
However, that's not how reality works, nor how the PS5 or Xbox Series X are built.

The increase in clock speeds relative to RDNA1 GPUs comes from a lot of design work.
Each circuit design has a limit to how fast it can run.
For example, if you have 100 building blocks and 99 achieve 2GHz but one only 1GHz, then the whole design has to run at 1GHz if all of them live under one clock domain.
As an architect you would look at this one block and try to raise that clock limit by re-designing the circuitry.
There are multiple options to improve the clock speed, with different compromises.

Beyond the "simple" clock speed increase, the efficiency of the whole design was improved by a large factor.
Both the Xbox Series and PS5 are obviously leveraging a lot, if not all, of that RDNA2 work.
Otherwise both would consume much more power, and even the Xbox Series wouldn't run at 1.825GHz.
The 5700XT runs at similar clock rates in games, but AMD can bin the best chips to run at higher frequencies; there is also the 5700, which is slower.
For consoles you can't bin the chips into different classes; all of them have to hit 1.825GHz, which is why console designs are usually less aggressively clocked than top-of-stack SKUs for desktop or mobile products.
Since the Xbox Series X is already running at 1.825GHz, showcases RDNA2-level energy efficiency and includes the new Render Backend design, which was also optimized for higher clock rates, you can expect that the silicon can also achieve well over 2GHz, which wouldn't have been possible with RDNA1 GPUs.
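The 100-blocks example above boils down to a min() over per-block frequency limits; a toy illustration:

```python
# Toy model of a single clock domain: the slowest block sets the ceiling.
block_limits_ghz = [2.0] * 99 + [1.0]  # 99 blocks reach 2GHz, one only 1GHz

domain_clock = min(block_limits_ghz)
print(domain_clock)  # 1.0 -- the whole domain is held back by the worst block

# "Re-designing the circuitry" of that one weak block lifts the whole domain:
block_limits_ghz[-1] = 2.0
print(min(block_limits_ghz))  # 2.0
```

The block list and its limits are of course invented for illustration; real designs have thousands of timing paths, and the same logic applies per clock domain.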


Cerny compared the increase with GCN... (PS5 vs PS4).


Ray-tracing is only in the TMUs.
CU and TMU are different silicon parts.

hmmm... I thought the CUs were made up of Shader Processors, Texture Mapping Units and ROPs.
If you go by AMD's definition, CUs include the Shader Processors and Texture Mapping Units, but not the ROPs.
[AMD Radeon RX 6000 series Compute Unit diagram]
 

ToTTenTranz

Banned
Update on my guesses.

Render Frontend
PS5: RDNA
Series: RDNA

Render Backends
PS5: RDNA
Series: RDNA 2

Compute Units
PS5: RDNA 2
Series: RDNA 2

Ray-tracing / TMUs
PS5: RDNA 2
Series: RDNA 2

The Compute Unit looks to be RDNA 2, per James Stanard.


I would be wary of interpretations of low-level changes in a 12-15 billion transistor chip that are made from a 9-megapixel photo.
 
If you exclude the TMUs, the re-pipelining and other physical design optimizations, then there isn't a huge difference between RDNA1 and RDNA2 Compute Units.
However, that's not how reality works, nor how the PS5 or Xbox Series X are built.

The increase in clock speeds relative to RDNA1 GPUs comes from a lot of design work.
Each circuit design has a limit to how fast it can run.
For example, if you have 100 building blocks and 99 achieve 2GHz but one only 1GHz, then the whole design has to run at 1GHz if all of them live under one clock domain.
As an architect you would look at this one block and try to raise that clock limit by re-designing the circuitry.
There are multiple options to improve the clock speed, with different compromises.

Beyond the "simple" clock speed increase, the efficiency of the whole design was improved by a large factor.
Both the Xbox Series and PS5 are obviously leveraging a lot, if not all, of that RDNA2 work.
Otherwise both would consume much more power, and even the Xbox Series wouldn't run at 1.825GHz.
The 5700XT runs at similar clock rates in games, but AMD can bin the best chips to run at higher frequencies; there is also the 5700, which is slower.
For consoles you can't bin the chips into different classes; all of them have to hit 1.825GHz, which is why console designs are usually less aggressively clocked than top-of-stack SKUs for desktop or mobile products.
Since the Xbox Series X is already running at 1.825GHz, showcases RDNA2-level energy efficiency and includes the new Render Backend design, which was also optimized for higher clock rates, you can expect that the silicon can also achieve well over 2GHz, which wouldn't have been possible with RDNA1 GPUs.





If you go by AMD's definition, CUs include the Shader Processors and Texture Mapping Units, but not the ROPs.
[AMD Radeon RX 6000 series Compute Unit diagram]

Very interesting.

I would appreciate your opinion on something if you don't mind.

So it seems pretty obvious that Sony could have selected the features they wanted for their GPU. But why decide not to include dedicated hardware for SFS, VRS and Mesh Shaders?

It just seems a bit odd that Sony wouldn't have those features unless they could substitute them with something else.
 

MonarchJT

Banned
If you exclude the TMUs, the re-pipelining and other physical design optimizations, then there isn't a huge difference between RDNA1 and RDNA2 Compute Units.
However, that's not how reality works, nor how the PS5 or Xbox Series X are built.

The increase in clock speeds relative to RDNA1 GPUs comes from a lot of design work.
Each circuit design has a limit to how fast it can run.
For example, if you have 100 building blocks and 99 achieve 2GHz but one only 1GHz, then the whole design has to run at 1GHz if all of them live under one clock domain.
As an architect you would look at this one block and try to raise that clock limit by re-designing the circuitry.
There are multiple options to improve the clock speed, with different compromises.

Beyond the "simple" clock speed increase, the efficiency of the whole design was improved by a large factor.
Both the Xbox Series and PS5 are obviously leveraging a lot, if not all, of that RDNA2 work.
Otherwise both would consume much more power, and even the Xbox Series wouldn't run at 1.825GHz.
The 5700XT runs at similar clock rates in games, but AMD can bin the best chips to run at higher frequencies; there is also the 5700, which is slower.
For consoles you can't bin the chips into different classes; all of them have to hit 1.825GHz, which is why console designs are usually less aggressively clocked than top-of-stack SKUs for desktop or mobile products.
Since the Xbox Series X is already running at 1.825GHz, showcases RDNA2-level energy efficiency and includes the new Render Backend design, which was also optimized for higher clock rates, you can expect that the silicon can also achieve well over 2GHz, which wouldn't have been possible with RDNA1 GPUs.





If you go by AMD's definition, CUs include the Shader Processors and Texture Mapping Units, but not the ROPs.
[AMD Radeon RX 6000 series Compute Unit diagram]
As always, very, very interesting... thanks.
And (I promise this is the last question!!): about ML, do you think there will be a difference between the two consoles (at the hardware level)?
I'm asking because I think this could be bigger than the lack of something like VRS.
 
Last edited:
So technically, whatever ray tracing hardware AMD has included would be in the CUs?

it's inside the texture units, which are part of the CUs. but what does that matter?

this has been known since the AMD RT patent from well over a year ago.


but i'd like to stress that this is not the same as the "amd is doing ray tracing in shaders" moronic trope certain imbeciles have been pushing here for ages. functionally, the ray accelerators are doing the same parts of the RT pipeline as in nvidia GPUs. i think in turing and ampere the RT cores are also directly associated with the TMUs, but Locuza might correct me on that.
 
Last edited:
lol, the guy tried to get an answer from experts and got a non-answer lol
Console warriors on Twitter are even stronger than here on GAF (I never stated a fact, only a guess).




More about the sizes:



The GE and TMUs seem to be way bigger on PS5.

I think we'll need some other experts' opinions on why the GE and TMUs seem bigger on PS5. The people in that Twitter thread don't seem interested in that matter and seem focused on the downgraded FPU, the missing VRS and anything RDNA1 on PS5.

The most interesting aspects of PS5 are the Geometry Engine (why is it bigger?), the Tempest Engine and the cache scrubbers.
 
Last edited:

Popup

Member
So, is every part of the PS5 die now understood/mapped, or are there still unknown or unusual customisations/differences to consider and speculate on?
 
Update on my guesses.

Render Frontend
PS5: RDNA
Series: RDNA

Render Backends
PS5: RDNA
Series: RDNA 2

Compute Units
PS5: RDNA 2
Series: RDNA 2

Ray-tracing / TMUs
PS5: RDNA 2
Series: RDNA 2

The Compute Unit looks to be RDNA 2, per James Stanard.
Yeah, in general that seems to be correct.
 