PS5 Die Shot has been revealed

Somebody already jumped on the made-up article lol


Eue3LJrXUAEZ8bG
 
Somebody already jumped on the made-up article lol


Eue3LJrXUAEZ8bG
Like clockwork.

Some ppl just can't help themselves, lol. Maybe all this will get Cerny to give out more info. The post below... anyone with a passing interest in tech should know this (not the 1.1 part, but that ray tracing isn't on RDNA 1). It's like some ultimate quest to make the PS5 look weak in comparison. Instead it makes some ppl look foolish. Like mentioned earlier, RDNA 1 wouldn't be able to hit the clock speeds the PS5 hits.
 
Last edited:
OK, after reading more... here are some tweets from the article's author:








He is the only guy saying that RDNA 1.1 is a term used internally by AMD engineers.
 
Don't both the XSX and the PS5 have the same ROPs?
In numbers yes.

But in RDNA 2 it is redesigned:


image-135-1024x608.png



image-147-1536x876.png


If what he is saying is true, then the only difference at the hardware level between the Series X and PS5 GPUs is the Render Backends (or ROPs)... excluding of course the different number of CUs.
 
Nope. If they did PS5 would have hardware VRS. Bah, it's all confusing. Let's just focus on the games lol. We'll go in circles forever.
The AMD slide says it expands it to 2x1, 1x1 and 2x2... that means the old RBs support VRS at the hardware level, just not these three expanded modes.

That is what I understand.

Edit - A quote from the same guy on RX 6800 launch:

"Each RB+ can process 8 32-bit pixels, a 2x increase compared to RDNA 1 and 1.1. This is primarily the result of the doubled 32bpp color rate. The new multi-precision RBs are also supplied to the shader engine at twice the rate, primarily improving the performance with mixed-precision workloads such as VRS."

It improves VRS... so it didn't exist at the hardware level in previous ROPs.
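For intuition on what those shading-rate modes actually buy you, here's a toy Python sketch (my own illustration, not AMD's implementation) counting pixel-shader invocations for a tile at each coarse shading rate:

```python
def shader_invocations(width, height, rate_x, rate_y):
    """Shading invocations for a tile at a rate_x x rate_y coarse rate.

    At 1x1 every pixel gets its own invocation; at 2x2 one invocation
    covers a 2x2 block of pixels, quartering the shading work.
    """
    cols = -(-width // rate_x)   # ceiling division
    rows = -(-height // rate_y)
    return cols * rows

tile = (8, 8)
for rx, ry in [(1, 1), (2, 1), (2, 2)]:
    print(f"{rx}x{ry}: {shader_invocations(*tile, rx, ry)} invocations")
# 1x1: 64, 2x1: 32, 2x2: 16 — the coarser modes are pure shading savings
```

That's the whole appeal of VRS: same pixels rasterized, fewer pixel-shader invocations where detail won't be noticed.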
 
The AMD slide says it expands it to 2x1, 1x1 and 2x2... that means the old RBs support VRS at the hardware level, just not these three expanded modes.

Ahh ok so the PS5 does support hardware VRS if it's using RDNA1.1 ROPs. Just not the latest VRS technology. It's better than not having it at all.

But still there's a ton of similarities between RDNA1 and RDNA2 and both systems seem to have some sort of mix between the two.
 
Ahh ok so the PS5 does support hardware VRS if it's using RDNA1.1 ROPs. Just not the latest VRS technology. It's better than not having it at all.

But still there's a ton of similarities between RDNA1 and RDNA2 and both systems seem to have some sort of mix between the two.
Keep in mind we are talking about RDNA, the supposed RDNA 1.1, and RDNA 2.0.

What the PS5 uses from each one is still unclear... it is probably like the Series X: some pre-RDNA parts, some RDNA, others RDNA 2, and some custom.

For example, "RDNA 1.1" doesn't have ray tracing but the PS5 does... that means the PS5 uses RDNA 2 TMUs imo.
 
Ahh ok so the PS5 does support hardware VRS if it's using RDNA1.1 ROPs. Just not the latest VRS technology. It's better than not having it at all.

But still there's a ton of similarities between RDNA1 and RDNA2 and both systems seem to have some sort of mix between the two.

It definitely doesn't have hardware VRS. It's just a custom collection of features based on RDNA 2. I was initially ready to believe the whole 1.1 stuff, but too much doesn't line up. Some features simply didn't make the cut. The reason I decided to no longer go off that RDNA 1.1 chart is that even RX 6000 doesn't support INT8 and INT4. The Series X does, however. And we already had a Sony engineer say no ML on PS5. And I see no other site out there with that information, even though he said he heard it from AMD engineers.

As ethomaz said, the PS5 has ray tracing, so the TMUs must be RDNA 2 as well. And another thing that doesn't line up with that hardwaretimes site is his claim that VRS and Sampler Feedback are not hardware features and work on all GPU architectures. If that's so, why does RDNA 1 even lack VRS? Yeah, I'm no longer taking anything on hardwaretimes seriously. I was initially believing it.
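As an aside on why INT8/INT4 support matters for ML at all: inference can run weights and activations as low-precision integers, trading a little accuracy for much higher throughput. A minimal Python sketch of symmetric int8 quantization (illustrative only, nothing console-specific):

```python
def quantize_int8(values):
    """Symmetric quantization: map floats into integer range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dot_dequant(qa, qb, sa, sb):
    """Integer dot product, rescaled back to float afterwards."""
    return sum(x * y for x, y in zip(qa, qb)) * sa * sb

a = [0.5, -1.0, 0.25, 0.75]
b = [1.0, 0.5, -0.5, 0.25]
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
exact = sum(x * y for x, y in zip(a, b))
approx = dot_dequant(qa, qb, sa, sb)
print(exact, approx)  # the int8 result lands very close to the float one
```

The inner loop is all integer math, which is why hardware with fast INT8/INT4 dot products can run this class of workload so much quicker than FP32.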
 
You know what guys, play video games on your favourite console and stop discussing things you don't understand. I work in the semiconductor industry, and I can't tell the version of the block I am implementing even by looking at it with EDA tools and access to individual standard cells, unless I go through its architecture documents. Shape doesn't mean that much, and you can implement different versions of a block with next to no change in its shape. People here are trying to identify logical functionality from die shots.
 
Well there it is..
To me it still looks like Sony wanted to launch in 2019, while MS waited extremely long for all the RDNA 2 features.

Understand that it's a greatly designed console, doing what Sony needs it to do. It's always the combo of software and hardware: they have great first-party studios that create great games with clever tricks and art.
 
If I have to guess, we are probably seeing the following situation:

Render Backends
PS5: RDNA
Series: RDNA 2

Compute Units
PS5: RDNA 2
Series: RDNA

Ray-tracing / TMUs
PS5: RDNA 2
Series: RDNA 2
 
In numbers yes.

But in RDNA 2 it is redesigned:


image-135-1024x608.png



image-147-1536x876.png


If what he is saying is true, then the only difference at the hardware level between the Series X and PS5 GPUs is the Render Backends (or ROPs)... excluding of course the different number of CUs.

This kinda ties in with MS's hint about waiting for something. Is this hardware VRS through improved ROPs that something!?

With better ROPs, does the SX get better VRS and more efficient fillrate that can counter the PS5's raw fillrate from its higher clocks?

Really excited to glean more info with the die release!
 
RDNA 2.0 Feature     | PlayStation 5 | Xbox Series X
Ray tracing          | Yes           | Yes
VRS                  | No            | Yes
Mesh Shaders         | No            | Yes
Machine Learning     | No            | Yes
Performance Per Watt | Yes           | Yes, but at RDNA 1.0 clocks
Infinity Cache       | No            | No

Using this metric, I would say the PS5 is RDNA 1.33 and the XSX is RDNA 1.66.

The XSX might take the lead once devs start utilizing mesh shaders and VRS, but the PS5 is probably doing better at the moment because it's hitting those higher RDNA 2.0 clocks, whereas MS used those perf-per-watt gains to get more CUs on the GPU.

Both are lacking Infinity Cache, which is the secret sauce and the key to the 6800 XT keeping pace with the 3080.

Then there is this: the PS5's higher clocks give it several advantages over the XSX, which is probably why we have seen it perform better or the same in several games during the cross-gen period.

s0n39Hi.png
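For what it's worth, the "1.33" and "1.66" above are just 1 plus the fraction of the six listed features each console checks off, truncated to two decimals. A quick sketch reproducing that back-of-envelope math (the feature sets mirror the table; this is just the poster's metric, not any official versioning):

```python
FEATURES = ["Ray tracing", "VRS", "Mesh Shaders",
            "Machine Learning", "Perf/Watt", "Infinity Cache"]

# Features each console fully checks off per the table above; the XSX's
# "Yes, but at RDNA 1.0 clocks" on Perf/Watt is counted as a miss here,
# since that's the only way the arithmetic lands on 1.66.
support = {
    "PS5": {"Ray tracing", "Perf/Watt"},
    "XSX": {"Ray tracing", "VRS", "Mesh Shaders", "Machine Learning"},
}

def rdna_score(console):
    """1.0 (baseline RDNA) plus the fraction of RDNA 2 features present,
    truncated to two decimals to match the post."""
    frac = len(support[console]) / len(FEATURES)
    return int((1.0 + frac) * 100) / 100

for c in support:
    print(c, rdna_score(c))
```

Obviously all six features don't weigh equally in real workloads, which is the main weakness of this kind of score.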
 
RDNA 2.0 Feature     | PlayStation 5 | Xbox Series X
Ray tracing          | Yes           | Yes
VRS                  | No            | Yes
Mesh Shaders         | No            | Yes
Machine Learning     | No            | Yes
Performance Per Watt | Yes           | Yes, but at RDNA 1.0 clocks
Infinity Cache       | No            | No

Using this metric, I would say the PS5 is RDNA 1.33 and the XSX is RDNA 1.66.

The XSX might take the lead once devs start utilizing mesh shaders and VRS, but the PS5 is probably doing better at the moment because it's hitting those higher RDNA 2.0 clocks, whereas MS used those perf-per-watt gains to get more CUs on the GPU.

Both are lacking Infinity Cache, which is the secret sauce and the key to the 6800 XT keeping pace with the 3080.

Then there is this: the PS5's higher clocks give it several advantages over the XSX, which is probably why we have seen it perform better or the same in several games during the cross-gen period.

s0n39Hi.png


If that RDNA 1.1 is true then ML is the same.

EufJ0crXYAEvsIg
 
I personally don't find Locuza's Twitter posts very constructive, with all that fixation and speculation about "RDNA 1.1", "true RDNA 2", "cut-down FPU", "true Zen 2", "128-bit Zen 1 FPU", etc. It reminds me of earlier Microsoft PR statements.
 
RDNA 2.0 Feature     | PlayStation 5 | Xbox Series X
Ray tracing          | Yes           | Yes
VRS                  | No            | Yes
Mesh Shaders         | No            | Yes
Machine Learning     | No            | Yes
Performance Per Watt | Yes           | Yes, but at RDNA 1.0 clocks
Infinity Cache       | No            | No

Using this metric, I would say the PS5 is RDNA 1.33 and the XSX is RDNA 1.66.

The XSX might take the lead once devs start utilizing mesh shaders and VRS, but the PS5 is probably doing better at the moment because it's hitting those higher RDNA 2.0 clocks, whereas MS used those perf-per-watt gains to get more CUs on the GPU.

Both are lacking Infinity Cache, which is the secret sauce and the key to the 6800 XT keeping pace with the 3080.

Then there is this: the PS5's higher clocks give it several advantages over the XSX, which is probably why we have seen it perform better or the same in several games during the cross-gen period.

s0n39Hi.png
Now that we know the SX has better ROPs, I wonder if that helps to even out the raw-number lead in pixel fillrate there.

Also, PS5 RDNA 2 boost clocks are dependent on workload, whereas SX RDNA 2 game clocks stay sustained. It seems Sony has already discarded the full AVX-256 hardware for something lighter.

Meaning you should tweak that part about "RDNA 1 clocks"; it should be changed to "RDNA 2 game clock".
 
We know it has machine learning support. The second Wired article mentioned machine learning on the PS5. We just don't know if it's something legacy, like primitive shaders, or part of the RDNA 2.0 feature set.
That is what Leviathan is saying: ML support for RDNA 1.1 and RDNA 2.0 is identical.
 
RDNA 2.0 Feature     | PlayStation 5 | Xbox Series X
Ray tracing          | Yes           | Yes
VRS                  | No            | Yes
Mesh Shaders         | No            | Yes
Machine Learning     | No            | Yes
Performance Per Watt | Yes           | Yes, but at RDNA 1.0 clocks
Infinity Cache       | No            | No

Using this metric, I would say the PS5 is RDNA 1.33 and the XSX is RDNA 1.66.

The XSX might take the lead once devs start utilizing mesh shaders and VRS, but the PS5 is probably doing better at the moment because it's hitting those higher RDNA 2.0 clocks, whereas MS used those perf-per-watt gains to get more CUs on the GPU.

Both are lacking Infinity Cache, which is the secret sauce and the key to the 6800 XT keeping pace with the 3080.

Then there is this: the PS5's higher clocks give it several advantages over the XSX, which is probably why we have seen it perform better or the same in several games during the cross-gen period.

s0n39Hi.png

Infinity Cache is contributing a lot to perf/W in RDNA 2 GPUs, so I'm pretty confident that neither console has real parity in that regard. Sadly, I didn't find the time to do my power-scaling tests when I had the 6800 lying around here. We will know for sure when the RX 6700 series launches, I guess.

As a reminder, this was how RDNA 1 behaved:

powerscalinggpuonlyuljwr.png
 

pQ3gFbu.jpg

Considering we are talking about 256-bit registers, I can see that cutting it a bit might have benefits, and the compiler would not mind (the FPU has access to a portion of the register file, and the rest is used for out-of-order execution / register renaming, to allow pausing stalled work streams and starting new ones... architectural register count < physical register count).
 
RDNA 2.0 Feature     | PlayStation 5 | Xbox Series X
Ray tracing          | Yes           | Yes
VRS                  | No            | Yes
Mesh Shaders         | No            | Yes
Machine Learning     | No            | Yes
Performance Per Watt | Yes           | Yes, but at RDNA 1.0 clocks
Infinity Cache       | No            | No

Using this metric, I would say the PS5 is RDNA 1.33 and the XSX is RDNA 1.66.

The XSX might take the lead once devs start utilizing mesh shaders and VRS, but the PS5 is probably doing better at the moment because it's hitting those higher RDNA 2.0 clocks, whereas MS used those perf-per-watt gains to get more CUs on the GPU.

Both are lacking Infinity Cache, which is the secret sauce and the key to the 6800 XT keeping pace with the 3080.

Then there is this: the PS5's higher clocks give it several advantages over the XSX, which is probably why we have seen it perform better or the same in several games during the cross-gen period.

s0n39Hi.png
Like I said, the higher clocks will offset it with mostly last-gen engine designs. But they won't offset it if games use VRS 2.0 / SF and mesh shading all together.
 
Now that we know the SX has better ROPs, I wonder if that helps to even out the raw-number lead in pixel fillrate there.

Also, PS5 RDNA 2 boost clocks are dependent on workload, whereas SX RDNA 2 game clocks stay sustained. It seems Sony has already discarded the full AVX-256 hardware for something lighter.

Meaning you should tweak that part about "RDNA 1 clocks"; it should be changed to "RDNA 2 game clock".
You want to look at the boost clocks for RDNA 2.0 cards instead of the game clocks. Everything from the 6800 to the 6900 is hitting over 2100 MHz boost clocks. When you run a game, it will run at those boost clocks, or very close to them, and get the benefit of running every CU at higher clocks. Same as the PS5.

This video compares several games, and the boost clock is being used in all of them at all times, not the much lower game clock.

 
pQ3gFbu.jpg

Considering we are talking about 256-bit registers, I can see that cutting it a bit might have benefits, and the compiler would not mind (the FPU has access to a portion of the register file, and the rest is used for out-of-order execution / register renaming, to allow pausing stalled work streams and starting new ones... architectural register count < physical register count).
From that AMD diagram, do you know which parts are cut? So it becomes 80 registers?
 
You want to look at the boost clocks for RDNA 2.0 cards instead of the game clocks. Everything from the 6800 to the 6900 is hitting over 2100 MHz boost clocks. When you run a game, it will run at those boost clocks, or very close to them, and get the benefit of running every CU at higher clocks. Same as the PS5.

This video compares several games, and the boost clock is being used in all of them at all times, not the much lower game clock.


Yes, but that's because a 6800/6900 has exclusive access to all 250-280 W of power, helped by a kilogram-plus of heatsink, on older game engines.
 
The AMD slide says it expands it to 2x1, 1x1 and 2x2... that means the old RBs support VRS at the hardware level, just not these three expanded modes.

That is what I understand.

Edit - A quote from the same guy on RX 6800 launch:

"Each RB+ can process 8 32-bit pixels, a 2x increase compared to RDNA 1 and 1.1. This is primarily the result of the doubled 32bpp color rate. The new multi-precision RBs are also supplied to the shader engine at twice the rate, primarily improving the performance with mixed-precision workloads such as VRS."

It improves VRS... so it didn't exist at the hardware level in previous ROPs.

I think the new ROPs are designed to work with the mixed rasterisation rates VRS enables further up the pipeline (as per AMD's slides): a similar workload is present if multiple rendering precisions are enabled at the GE level, or with a variation of multi-resolution render targets.

w8CIuqD.jpg

lI7pFWN.jpg

Vs.
k5127Wd.png

Vs.
6WtZttW.jpg

PS5's die shot shows 16 RBs / 64 ROPs (an RB is like a group of ROPs; RB is the AMD naming convention, ROPs the nVIDIA one, generally IIRC).

The XSX hits about 116 GPixels/s at about 1.825 GHz, which means 64 pixels per clock. The fillrate is the same per clock.

Unless we saw a rather big customisation in the RBs (on top of modifications MS might have made for VRS support), I am inclined to believe we have the same per-clock fillrate between the two consoles.
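The per-clock math above is easy to sanity-check: peak pixel fillrate is just ROPs × clock, so with the same 64 ROPs the gap comes entirely from frequency. A quick sketch (using the public clock figures, nothing else):

```python
def pixel_fillrate_gpix(rops, clock_ghz):
    """Peak pixel fillrate in GPixels/s: pixels written per clock times clock."""
    return rops * clock_ghz

xsx = pixel_fillrate_gpix(64, 1.825)  # XSX sustained clock
ps5 = pixel_fillrate_gpix(64, 2.23)   # PS5 peak variable clock
print(f"XSX: {xsx:.1f} GPix/s, PS5: {ps5:.1f} GPix/s")
# XSX: 116.8 GPix/s, PS5: 142.7 GPix/s
```

Which matches the "about 116 GPixels/s" figure, and shows why identical per-clock ROP counts still leave the PS5 ahead on raw peak fillrate.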
 
More like jumping to conclusions lol



The article is basically built from tweets without any basis... the fact that RDNA 1.1 doesn't exist tells you a lot.





BTW I updated.

Locuza said RDNA 1.1 doesn't exist... it's just a made-up classification, just like the "RDNA 1.5" that Xbros love to say.

He should personally say the PS5 is "RDNA 1.8".
So basically like the PS5 software engineer said, and what all the Sony fanboys didn't accept? ...If we had a bet I would have won, and the mods, after they soft-banned/silenced me from that thread and not someone else, should revisit the decision ;)
 
So I don't understand all this tech stuff, but let me get this straight: RDNA 2 is better than RDNA 1, 1.1, 1.5, 1.8 or whatever; the XSX has RDNA 2; the PS5 supposedly has a lower version; and yet for the most part games on PS5 perform almost the same, and some games are even better on PS5 than on Xbox... and Xbox fans are celebrating?

SpectacularSadAddax-size_restricted.gif
 
The PlayStation 5 has a custom GPU based on AMD's RDNA 2 architecture.

There is no such thing as RDNA 1.5 or whatever.

The PS5 GPU is literally a custom RDNA 2 based GPU.

Any other label for the PS5 GPU (like RDNA 1.x) is false.

RDNA 2 added higher clock speeds, over 2.0 GHz, while keeping power consumption relatively low.
PS5 GPU frequency is over 2.2 GHz, and power draw is also relatively low.
This is not possible with RDNA 1.

RDNA 2 added hardware acceleration for ray tracing.
PS5 GPU has hardware acceleration for ray tracing.
This is not possible with RDNA 1.


pJHzTZ5.jpg





IN1JowF.jpg
jrMuPRi.jpg


 
That is what Leviathan is saying: ML support for RDNA 1.1 and RDNA 2.0 is identical.
I doubt it. I think it's a highly modified RDNA 1 GPU (well, it's wrong to put it just like that; basically they choose which IP blocks to put into the GPU, so naming it makes very little sense), but I think they used the same base as the PS4 Pro, heavily modified for compatibility, adding their own versions of the most important RDNA 2 things... RT, GE.
 
Someone correct me if I'm wrong, but aren't those RDNA 2 PC card clocks boost clocks, meaning not sustained game clocks? And on the PS5 it is a continuous boost clock, IIRC from the Road to PS5 wording, so not actually an AMD boost clock or an AMD game clock, no?

Without the RDNA 2 architecture there is no way to get those clock speeds, among other things. Other than that, the PS5 has no boost mode: it's constantly variable depending on the power draw of the GPU and CPU, and 2.23 GHz is a normal operating parameter.
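That "constantly variable" behaviour can be pictured as a fixed power budget with frequency as the free variable. A toy Python model (assuming, purely for illustration, that dynamic power scales roughly with f³; the constants here are made up, not measured PS5 numbers):

```python
def gpu_clock_ghz(power_budget_w, activity, f_max=2.23, k=9.0):
    """Toy power-budget governor: run at f_max unless the workload's
    power at that frequency would exceed the budget, then back off.

    Assumes power ~ k * activity * f**3, where `activity` stands in for
    how power-hungry the current workload is (illustrative values only).
    """
    f = (power_budget_w / (k * activity)) ** (1 / 3)
    return min(f_max, f)

print(gpu_clock_ghz(100, activity=0.8))  # light workload: full 2.23 GHz
print(gpu_clock_ghz(100, activity=1.2))  # heavy workload: the clock dips
```

The point of the cubic relationship is that backing off the clock only a few percent recovers a lot of power headroom, which is why a "couple percent" downclock under worst-case load is plausible.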

He was going on for quite a while, but help me out with what I'm supposed to be seeing there. I understand various sections or functional blocks of a GPU may have version numbers and such, but often those version numbers aren't even settled and can change. For the record, Locuza himself readily acknowledges he is no expert on these things either.

What I do know, however, is this.

The Xbox Series X is packing the same DirectX 12 Ultimate feature support as all RX 6000 GPUs, with built-in hardware support for every new feature AMD highlighted at their reveal. Sampler Feedback Streaming, as built for the Series X, is actually not a default DX12 Ultimate feature and has additional customizations on top, according to a Graphics R&D & Engine Architect at Microsoft; Sampler Feedback is just the core feature of what Microsoft built custom for the Series X. Not only does the Series X cover the full DX12 Ultimate feature set, it actually exceeds the DX12 Ultimate spec for mesh shader thread group size: the max on the RX 6000 series is 128, while the Series X goes up to 256, and 256 on the Series X does indeed produce superior results to all other thread sizes. The RX 6000's main advantage would be that it's a much larger GPU, but the Series X actually has a more advanced mesh shader implementation.

What else? The Xbox Series X has machine learning acceleration hardware support, whereas the RX 6000 does not. So the Series X isn't only RDNA 2; by all documented accounts, it actually exceeds it.

fY8iQ71.png








The Xbox Series X goes beyond the standard DX12 Ultimate Sampler Feedback feature and has custom hardware built into the GPU to make it even better for streaming purposes. Drop this GPU on PC, give it as many compute units as the RX 6800 XT, free it from the power constraints of a console, and it's likely the superior chip in the long run once DX12 Ultimate becomes more prominent. Oh, and as a desktop chip it would also have IC. :p


Be careful: ML is the new "Power of the Cloud". Don't expect much from it; it's something new that people are still working on. Think of this iteration as a stepping stone to further advances, and a nice-to-have feature.


Like clockwork.

Some ppl just can't help themselves, lol. Maybe all this will get Cerny to give out more info. The post below... anyone with a passing interest in tech should know this (not the 1.1 part, but that ray tracing isn't on RDNA 1). It's like some ultimate quest to make the PS5 look weak in comparison. Instead it makes some ppl look foolish. Like mentioned earlier, RDNA 1 wouldn't be able to hit the clock speeds the PS5 hits.

OK, after reading more... here are some tweets from the article's author:








He is the only guy saying that RDNA 1.1 is a term used internally by AMD engineers.


So basically like the PS5 software engineer said, and what all the Sony fanboys didn't accept? ...If we had a bet I would have won, and the mods, after they soft-banned/silenced me from that thread and not someone else, should revisit the decision ;)

Not this shit again.(TM)

People who know nothing of IT and engineering talk to an engineer and try to run with what he said. The fact that there was, internally, an RDNA "1.1" design/test does not mean that the PS5 is 1.1. It means that revision "1.1" was the stepping stone to revision "2". Or more appropriately, once "1.1" got into the hands of the marketing department, they decided that "1.1" wasn't cool and pushed the more marketable "2".

Long story short: the PS5 is RDNA 2 with a lot of tweaks, while the XSX is RDNA 2 closer to the vanilla AMD iteration. Everything else is fanboys who have no idea how the corporate world of IT research and development works.
 
The PlayStation 5 has a custom GPU based on AMD's RDNA 2 architecture.

There is no such thing as RDNA 1.5 or whatever.

The PS5 GPU is literally a custom RDNA 2 based GPU.

Any other label for the PS5 GPU (like RDNA 1.x) is false.

RDNA 2 added higher clock speeds, over 2.0 GHz, while keeping power consumption relatively low.
PS5 GPU frequency is over 2.2 GHz, and power draw is also relatively low.
This is not possible with RDNA 1.

RDNA 2 added hardware acceleration for ray tracing.
PS5 GPU has hardware acceleration for ray tracing.
This is not possible with RDNA 1.


pJHzTZ5.jpg





IN1JowF.jpg
jrMuPRi.jpg



Since when are Lisa Su and Mark Cerny higher authorities than Locuza?..
 
Not this shit again.(TM)

People who know nothing of IT and engineering talk to an engineer and try to run with what he said. The fact that there was, internally, an RDNA "1.1" design/test does not mean that the PS5 is 1.1. It means that revision "1.1" was the stepping stone to revision "2". Or more appropriately, once "1.1" got into the hands of the marketing department, they decided that "1.1" wasn't cool and pushed the more marketable "2".

Long story short: the PS5 is RDNA 2 with a lot of tweaks, while the XSX is RDNA 2 closer to the vanilla AMD iteration. Everything else is fanboys who have no idea how the corporate world of IT research and development works.
That is basically what Locuza and the others are saying.
I'm the one saying not this shit again.
Please, not this fanboy shit again to muddy the water.
No one is saying that the PS5 is RDNA 2 after the die shot. The faster you accept it, the faster you will live better.
Nothing has changed; people before the die shot just weren't accepting lots of facts.
 
I doubt it. I think it's a highly modified RDNA 1 GPU (well, it's wrong to put it just like that; basically they choose which IP blocks to put into the GPU, so naming it makes very little sense), but I think they used the same base as the PS4 Pro, heavily modified for compatibility, adding their own versions of the most important RDNA 2 things... RT, GE.

I need to revisit the mesh shader implementation on AMD vs nVIDIA, as I am confused why MS calls this bit on the XSX architecture slides "Mesh Shading Geometry Engine"; I thought mesh shaders ran on the CUs like compute and fragment shaders do (but if they just want to highlight how their GE takes part in this, fine)...

nLxmWsJ.jpg
 
That is basically what Locuza and the others are saying.
I'm the one saying not this shit again.
Please, not this fanboy shit again to muddy the water.
No one is saying that the PS5 is RDNA 2 after the die shot. Accept it faster and you will live better.
Nothing has changed.
You are jumping to conclusions... it seems neither is "full" RDNA 2... as they said, RDNA 2 based (and I thought we even agreed on this the other day :LOL:), but keep warring.
 
I need to revisit the mesh shader implementation on AMD vs nVIDIA, as I am confused why MS calls this bit on the XSX architecture slides "Mesh Shading Geometry Engine"; I thought mesh shaders ran on the CUs like compute and fragment shaders do (but if they just want to highlight how their GE takes part in this, fine)...

nLxmWsJ.jpg
Guys, we'd need to know exactly what the GE is doing to have a clearer picture :/ Is there a way to learn more?
 
You are jumping to conclusions... it seems neither is "full" RDNA2... as they said RDNA2 based (and I thought we even agreed on this the other day :LOL:), but keep warring.
We agree, but I hate being attacked like that... I agreed with you that terminology isn't worth anything at this point... features are.
 
From that AMD diagram, do you know which parts are cut? So it becomes 80 registers?
I am not sure what they cut, but they can reduce the register file and remove some HW without noticeably reducing performance in moderately optimised code (which is expected on consoles). The number of registers the ISA demands is there; it can still be AVX-256 but perform slightly worse if there are lots of dependent instructions and the out-of-order execution engine cannot park any more work to find independent instructions to start on (a very slight IPC reduction for the floating-point pipe, but we are also not running the general-purpose random code you'd expect on PCs, hence why they made the cuts they did).
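To make the architectural vs. physical register point concrete, here's a minimal rename-table sketch in Python (the generic textbook mechanism, not Zen 2's actual implementation; all sizes are illustrative):

```python
class Renamer:
    """Maps architectural registers to physical ones from a free list,
    so back-to-back writes to the same architectural register can be in
    flight simultaneously (physical count > architectural count)."""

    def __init__(self, n_arch, n_phys):
        # Architectural regs r0..r(n_arch-1) start mapped to phys 0..n_arch-1;
        # the remaining physical registers form the free list.
        self.free = list(range(n_arch, n_phys))
        self.table = {f"r{i}": i for i in range(n_arch)}

    def rename_dest(self, arch_reg):
        phys = self.free.pop(0)  # a real core stalls if this runs dry
        self.table[arch_reg] = phys
        return phys

r = Renamer(n_arch=16, n_phys=160)
p1 = r.rename_dest("r0")  # first write to r0 gets one physical register
p2 = r.rename_dest("r0")  # second write gets a fresh one; both can be in flight
print(p1, p2)             # two different physical registers
```

Shrinking the physical pool (the free list here) just means the core runs out of rename slots sooner under long dependent chains, which is exactly the "very slight IPC reduction" trade-off described above.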
 
@Kerlurk I wouldn't necessarily use "blast processing" as an analogy here; the MegaDrive actually did have hardware support for direct DMA framebuffer access (something that wasn't standardized on consoles until the Atari Jaguar and then the 5th-gen systems; the 3DO might've had it, but I'm not sure). The problem was always in figuring out the timings, particularly for commercial software, without screwing up game logic.

Hackers and demoscene programmers have managed to figure out the timings though, and there are a lot of demos online showing the MegaDrive using "blast processing" in practice; it's actually quite awesome. Wish more commercial games had done it though.
Since we're talking DMA, the Amiga had DMA from '85 already, which predates the Sega Genesis by a good margin.
 