PS5 Die Shot has been revealed

For mesh shaders, all models are divided into meshlets - small meshes of up to a hundred polygons or so. Each meshlet has its own index and other metadata that the rasterizer needs. Mesh shaders are essentially compute shaders with direct access to the GPU rasterizer. Roughly speaking, this is the evolution of everything game engines have been doing with compute shaders for years.
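A minimal sketch (C++, purely illustrative field names, not any console's or API's actual format) of the kind of per-meshlet metadata such a pipeline carries around:

#include <cstdint>
#include <vector>

// Illustrative per-meshlet record: a small cluster of geometry (up to ~100
// polygons) plus the metadata the culling/rasterization stages need.
struct Meshlet {
    uint32_t vertexOffset;      // first entry in the shared vertex index list
    uint32_t vertexCount;       // number of unique vertices in this meshlet
    uint32_t triangleOffset;    // first entry in the meshlet-local triangle list
    uint32_t triangleCount;     // number of triangles, kept small by design
    float    boundingSphere[4]; // x, y, z, radius - used for coarse culling
    float    normalCone[4];     // average normal + cutoff - used for backface culling
};

// A model is then just the original vertex data plus a flat list of meshlets.
struct MeshletModel {
    std::vector<float>    vertices; // interleaved vertex attributes
    std::vector<uint32_t> indices;  // meshlet-local index data
    std::vector<Meshlet>  meshlets;
};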

It's still a relatively new capability for the geometry pipeline that has never been exploited to anywhere near this degree before. It first showed up at scale in Nvidia Turing, and to this point it hasn't seen widespread use. It's far more powerful and much more flexible than the existing, more fixed-function, linear pipeline, and it's much better for performance. It also plays better to the strengths of GPUs. No AMD GPU prior to RDNA 2 has had any support for the feature.

It's a very big deal, not a mere evolution. It's a game changer for videogames going forward as it gets more use.
 
I just wish we would stick to trying to uncover, speculate about, and understand the more bespoke features and design choices of this chip rather than arguing over semantics.

It all starts sounding a bit pointless if we are talking about C-3PO and everyone just responds with a reaction gif.
 
Probably until the next couple of games which fit her narrative.

He's an absolute clown. I actually had him on ignore and used "show content" because someone quoted it, and lo and behold, that's what was revealed - the same sentence regurgitated to infinity. I'm disappointed in myself for actually thinking it wouldn't be a clown post.
 
It's still a relatively new capability for the geometry pipeline
Yes, this is a big deal, and it is exactly an evolution. An evolution from the days when developers began to abandon the fixed-function hardware in the GPU and did polygon culling on the CPU. Eventually that turned into triangle discard being done in compute shaders via the async queue. For mesh shading, you have to rewrite the entire pipeline, unify the graphics and compute pipelines, and share access to the GPU memory cache. Therefore, it is incorrect to say that mesh shaders are in the geometry engine.
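To illustrate the kind of work that pre-mesh-shader engines moved onto the async compute queue, here is a minimal CPU-side sketch of per-triangle back-face culling with stream compaction (illustrative names and a simplified test; real engines also reject off-screen and zero-area triangles, and the sign convention depends on the winding order):

#include <cstdint>
#include <vector>

// One vertex position in view space (camera at the origin).
struct Vec3 { float x, y, z; };

// A triangle whose normal points away from the camera contributes nothing
// to the image and can be discarded before it reaches the rasterizer.
static bool isBackFacing(const Vec3& a, const Vec3& b, const Vec3& c)
{
    const Vec3 e1{b.x - a.x, b.y - a.y, b.z - a.z};
    const Vec3 e2{c.x - a.x, c.y - a.y, c.z - a.z};
    const Vec3 n{e1.y * e2.z - e1.z * e2.y,
                 e1.z * e2.x - e1.x * e2.z,
                 e1.x * e2.y - e1.y * e2.x};
    return n.x * a.x + n.y * a.y + n.z * a.z >= 0.0f;
}

// CPU mock-up of the compute pass: test every triangle and stream-compact the
// survivors into a new index buffer that the graphics queue then draws.
// On the GPU, each thread of the async-compute dispatch handles one triangle.
std::vector<uint32_t> cullTriangles(const std::vector<Vec3>& verts,
                                    const std::vector<uint32_t>& indices)
{
    std::vector<uint32_t> visible;
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        const uint32_t i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        if (!isBackFacing(verts[i0], verts[i1], verts[i2])) {
            visible.push_back(i0);
            visible.push_back(i1);
            visible.push_back(i2);
        }
    }
    return visible;
}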
 
Yes, this is a big deal, and it is exactly an evolution. An evolution from the days when developers began to abandon the fixed-function hardware in the GPU and did polygon culling on the CPU. Eventually that turned into triangle discard being done in compute shaders via the async queue. For mesh shading, you have to rewrite the entire pipeline, unify the graphics and compute pipelines, and share access to the GPU memory cache. Therefore, it is incorrect to say that mesh shaders are in the geometry engine.
Mesh Shaders, from what I understood in the Nvidia docs, use basically all the GPU units needed for shaders... that includes the Geometry Engine... what they do is change the execution path to save register memory... in simple terms, you do most of the execution steps with the same cached data.

Without Mesh Shaders, the registers had to be re-cached for each execution step.

That is the big performance improvement with Mesh Shaders... they still do all the graphical tasks in shaders... just an optimized execution path that reuses the same cached data.
 
Yes, this is a big deal, and it is exactly an evolution. An evolution from the days when developers began to abandon the fixed-function hardware in the GPU and did polygon culling on the CPU. Eventually that turned into triangle discard being done in compute shaders via the async queue. For mesh shading, you have to rewrite the entire pipeline, unify the graphics and compute pipelines, and share access to the GPU memory cache. Therefore, it is incorrect to say that mesh shaders are in the geometry engine.

The capability wouldn't be possible without specific hardware changes to the way the geometry engine works. It has to do with the handling of geometry, after all. Additional flexibility and capability were directly hard-coded into the silicon, and that's how we got Mesh Shaders. But you are correct that something like this has to involve more than just a change to the geometry engine. It's clear that specific instructions/algorithms are also implemented in the compute units and other relevant parts of the GPU to facilitate the new capability, which is then used through the relevant API.

I just wanted to simplify it by saying it's the Geometry Engine because Microsoft more or less signaled as much by calling it a "Mesh Shading Geometry Engine." That doesn't have to mean other important elements of the GPU don't also play a vital role.

Edit: It's not impossible this stuff is somehow done in software, of course, but the advantage of it being supported in hardware, with many instructions/algorithms already built in to manage how it works and make it more accessible, is that life is made easier for developers, which is what we all want instead of them always having to come up with their own solution or reinvent the wheel entirely. By providing this capability in hardware, supplying the API to make use of it, and doing the same on the PC side with a unified codebase, Microsoft is doing exactly what they have to.
 
Mesh Shaders, from what I understood in the Nvidia docs, use basically all the GPU units needed for shaders... that includes the Geometry Engine... what they do is change the execution path to save register memory... in simple terms, you do most of the execution steps with the same cached data.

Without Mesh Shaders, the registers had to be re-cached for each execution step.

That is the big performance improvement with Mesh Shaders... they still do all the graphical tasks in shaders... just an optimized execution path that reuses the same cached data.

Performance comes both from the reusable data and from the ability to take much better advantage of the parallelism and additional stream processors of the GPU, and the culling granularity is simply next level. Series X on the right running at 4K, RTX 2080 Ti on the left running at slightly under 1440p.

Very impressive culling on that high poly dragon.



Per-meshlet sphere culling alone improves performance by almost half, on top of the advantage he already got simply from rendering geometry using Mesh Shaders.

Then he shows he can add HW backface and SW frustum culling (triangle testing, as he calls it) on top of that. This is a combination of cool stuff. Mesh Shaders also help decrease RAM usage due to their compressibility, helping to counter the effects of Ray Tracing using more data. Lots of cool uses for these things going forward. Excited to see what developers do with them.
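For anyone wondering what per-meshlet sphere culling amounts to, here is a minimal sketch of the test (C++, assuming the usual plane representation with inward-facing normals; not the demo's actual code):

#include <array>

struct Sphere { float x, y, z, radius; };
// Frustum plane as (nx, ny, nz, d) with the normal pointing inwards,
// so a point p is inside the plane when nx*px + ny*py + nz*pz + d >= 0.
struct Plane  { float nx, ny, nz, d; };

// Returns true when the meshlet's bounding sphere lies completely outside one
// of the six frustum planes, so the whole meshlet can be rejected before any
// per-triangle work is done.
bool cullMeshletSphere(const Sphere& s, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum) {
        const float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
        if (dist < -s.radius)
            return true; // fully outside this plane -> cull the whole meshlet
    }
    return false; // potentially visible; finer per-triangle tests run later
}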
 
How can one be sure which functional blocks correspond to which colours? Presumably, the reds are RBs (render backends) due to the higher count at 8 blocks. We still have primitive units, rasterisers, scan converters, packers and so on below. And from the driver leaks, blocks in groups of 2s and 4s can easily be confused between Navi1x and Navi2x architectures.

[attached image: annotated die shot]
Many hardware blocks can have a different shape but you can be quite certain that the blocks with the same SRAM amount and cell types are the same hardware units.

The Render Backends also consist of two separate hardware instances.
One instance has 4 Color ROPs and the other 16 Depth/Stencil-ROPs.
In your picture you marked all RB blocks red but the left and right side are different hardware blocks.

Unless you are an expert, it's not easy to tell which hardware blocks are doing what.
You would need a deeper understanding of the design costs of rasterizer units, primitive units, and how hardware layouts can be done.
Without knowing multiple finer details, it's impossible to tell what each hardware block is actually responsible for.
 
https://pbs.twimg.com/media/EuN_KU0XIAMX6yH?format=jpg&name=4096x4096
(Above is a corrected version, my original post on twitter has a recycling error which stems from previous annotations.
I resized a color block, which was spanning across the whole L3$ area (4MiB), to a 2MiB section and I forgot to correct the text, sorry for the confusion)

Each CCX has 4MiB of L3$; with two CCXs, both consoles offer 8MiB of L3$ in total.
And yes, the CPU and GPU numbers are higher, if I didn't make that clear enough.
As said, the uOP cache can be counted as an additional 8x32KiB, and the branch target buffers add more in the range of 60-100KiB extra per core.
The same is true for the GPU, since I did not count the SRAM cells inside the TMUs and other structures.

There is no SSD "cache" on the processor chips.
SRAM buffers for decompression are included, but they are nowhere near 24MiB.
The corrected shot - versus the first OP's (LivingD3AD's) post quoting LordOfChaos's old picture link that I was using - certainly proves your point in terms of counting the L3 cache modules correctly as 8MB total. So thanks for clarifying the issue and unravelling my mistake :)

With the 2nd and 5th cores, and the 4th and 6th, being back-to-back mirrors - unless I'm looking at it wrong - is it possible that the cut-down FPUs that are back to back on those cores are wired together to act as two bigger FPUs, in an async FPU setup relative to cores 1, 3, 7 & 8? Just asking because when reading about SIMD/AVX2 CPU ray packet tracing, AVX-512 was mentioned as being even better for coherent rays.
 
The Render Backends also consist of two separate hardware instances.
One instance has 4 Color ROPs and the other 16 Depth/Stencil-ROPs.
In your picture you marked all RB blocks red but the left and right side are different hardware blocks.
[attached image: annotated die shot]


Yes, if you look at the 8 red blocks, I did notice differences between the left and right, but lumped them as 8 RBs, presumably.

Instead of 8 reds = 8 RBs, each consisting of colour ROPs and depth/stencil ROPs, if we split the left and right reds into colour RBs and depth/stencil RBs, we get 4 RBs per shader engine rather than 8 RBs per shader engine. And 8 RBs total instead of 16 RBs total in the GPU.

If we assume each RB has 4 colour ROPs as in RDNA1, that's 16 colour ROPs per shader engine, and with 2 shader engines in the GPU, that's 32 ROPs for colour in total. This conflicts with your block labels below, where you have 32 colour ROPs per shader engine, and 64 in total:

[attached image: block labels]


If these RBs are RDNA2 RB+, which have double the number of colour ROPs at 8 for every RB+, we get 8x8 = 64 colour ROPs as expected.
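Just to lay out the arithmetic of the two readings above (these counts are the post's assumptions, not confirmed hardware specs):

#include <cstdio>

int main() {
    const int shader_engines = 2;

    // Scenario A: classic RDNA 1-style RBs, 4 colour ROPs each,
    // 4 RBs per shader engine -> 16 per shader engine, 32 total.
    const int rdna1_rbs_per_se = 4, rops_per_rdna1_rb = 4;
    std::printf("RDNA1-style RBs: %d colour ROPs total\n",
                shader_engines * rdna1_rbs_per_se * rops_per_rdna1_rb); // 32

    // Scenario B: RDNA 2 RB+, 8 colour ROPs each, 8 RB+ blocks in the GPU -> 64.
    const int rb_plus_total = 8, rops_per_rb_plus = 8;
    std::printf("RB+ reading: %d colour ROPs total\n",
                rb_plus_total * rops_per_rb_plus); // 64
    return 0;
}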

Unless you are an expert, it's not easy to tell which hardware blocks are doing what.
You would need a deeper understanding of the design costs of rasterizer units, primitive units, and how hardware layouts can be done.
Without knowing multiple finer details, it's impossible to tell what each hardware block is actually responsible for.
Yes, we already have conflicting info as shown above, and we cannot be certain with the information we have on the functionality of these blocks.
 
People here need to come to grips with reality.

1. Having some RDNA 1 features doesn't mean the hardware can't be RDNA 2.
2. Both consoles have some GPU customization that isn't in the RDNA 2 spec. No, it doesn't mean they are RDNA 3 features!
3. Xbox includes more RDNA 2 features than PS5. Fact. This can't be disputed.
4. Stop using launch titles as a metric for optimal console performance. People keep saying "oh, so PS5 is RDNA 1? Well, it's beating the Xbox in all the games". That is bullshit, and launch titles are the worst games to gauge comparative performance between systems.
 
People here need to come to grips with reality.

1. Having some RDNA 1 features doesn't mean the hardware can't be RDNA 2.
2. Both consoles have some GPU customization that isn't in the RDNA 2 spec. No, it doesn't mean they are RDNA 3 features!
3. Xbox includes more RDNA 2 features than PS5. Fact. This can't be disputed.
4. Stop using launch titles as a metric for optimal console performance. People keep saying "oh, so PS5 is RDNA 1? Well, it's beating the Xbox in all the games". That is bullshit, and launch titles are the worst games to gauge comparative performance between systems.
Can you name these features?
 
People here need to come to grips with reality.

1. Having some RDNA 1 features doesn't mean the hardware can't be RDNA 2.
2. Both consoles have some GPU customization that isn't in the RDNA 2 spec. No, it doesn't mean they are RDNA 3 features!
3. Xbox includes more RDNA 2 features than PS5. Fact. This can't be disputed.
4. Stop using launch titles as a metric for optimal console performance. People keep saying "oh, so PS5 is RDNA 1? Well, it's beating the Xbox in all the games". That is bullshit, and launch titles are the worst games to gauge comparative performance between systems.
While I agree with everything, launch games have been used as one metric for performance since forever.

Why change now?
 
1. People here need to come to grips with reality.


4. Stop using launch titles as a metric for optimal console performance. People keep saying "oh, so PS5 is RDNA 1? Well, it's beating the Xbox in all the games". That is bullshit, and launch titles are the worst games to gauge comparative performance between systems.
a1.: YES

a4.: Show me one time it reversed mid-gen and I'll grant you the point on the spot. The closest I could come up with was the PS3 - that one had very, very specific issues at launch, but I don't think it ever truly matched the 360 in multi-plat titles (in some it did beat it; quirks do happen, they are not the trend).

Any add-ons / boost chips cannot be admitted in this kind of comparison, as they basically upgrade the system, negating "software optimization" as the source of the higher performance/better features.

- No 2600 game ever looked like an Intellivision or ColecoVision game
- No NES game ever looked or behaved like an SMS game
- Genesis / TG-16 / SNES always had the same tradeoffs and benefits compared to one another (obviously add-on chips and peripherals change the balance... but then you may as well count other systems)
- The 3DO always bested the Jaguar
- The Saturn and PSX always had the same strengths and weaknesses from day one (Panzer Dragoon is as good looking as any Saturn game, Battle Arena Toshinden pretty much showed us what was possible on the PSX)
- The N64 compared to other consoles in its generation, never matched the Dreamcast, but always had benefits/downsides compared to PS1 games
- The PS2 never matched the Dreamcast's texture pushing abilities, but it outclassed it in everything else, same for its relative performance compared to og xbox and GameCube
- PS3 vs 360 This may be the exception
- xbone vs PS4: 900p or less vs 1080p (+ more stable FPS) in multi-plat titles was pretty much set on day one
 
a1.: YES

a4.: Show me one time it reversed mid-gen and I'll grant you the point on the spot. The closest I could come up with was the PS3 - that one had very, very specific issues at launch, but I don't think it ever truly matched the 360 in multi-plat titles (in some it did beat it; quirks do happen, they are not the trend).

Any add-ons / boost chips cannot be admitted in this kind of comparison, as they basically upgrade the system, negating "software optimization" as the source of the higher performance/better features.

- No 2600 game ever looked like an Intellivision or ColecoVision game
- No NES game ever looked or behaved like an SMS game
- Genesis / TG-16 / SNES always had the same tradeoffs and benefits compared to one another (obviously add-on chips and peripherals change the balance... but then you may as well count other systems)
- The 3DO always bested the Jaguar
- The Saturn and PSX always had the same strengths and weaknesses from day one (Panzer Dragoon is as good looking as any Saturn game, Battle Arena Toshinden pretty much showed us what was possible on the PSX)
- The N64 compared to other consoles in its generation, never matched the Dreamcast, but always had benefits/downsides compared to PS1 games
- The PS2 never matched the Dreamcast's texture pushing abilities, but it outclassed it in everything else, same for its relative performance compared to og xbox and GameCube
- PS3 vs 360 This may be the exception
- xbone vs PS4: 900p or less vs 1080p (+ more stable FPS) in multi-plat titles was pretty much set on day one
You listed a whole lot of consoles that never had performance comparisons like today's. The best you could do is go back to the Xbox 360 and PS3, when performance and resolution were directly compared. Anything before that was purely anecdotal. Even still, the performance of games at launch on both systems was nowhere near what it was at the middle or end of that gen.

Nobody, and I mean nobody, refers back to the performance of launch titles to gauge overall performance.

Developers just aren't using the hardware to their advantage at the start of each gen. Look how many patches we have seen that directly impact performance. Some launch games are on patch 3 or 4.
 
Can you name these features?
Hardware VRS (up to Tier 2)
Sampler Feedback Streaming (expanded further for Xbox Series X with additional GPU customization that further benefits texture streaming)
Mesh Shaders (an implementation that actually goes beyond the spec's maximum thread group size of 128; the RX 6800 XT only goes up to 128, Series X goes to 256, and 256 produces superior results to 128)
Hardware Accelerated Machine Learning (will be used for AI, animation, resolution upscaling)

But talk is cheap, Microsoft has to prove it.
 
No they haven't.
So the past 2 generations never happened?

Interesting.

The base XBO never caught up to the base PS4.

The PS3/360 gen was an interesting case, and I think the PS3 never caught up to the 360 for overall 3rd party comparisons. A few cases here and there, which probably had more to do with the disc sizes than anything.
 
Seriously?? It's pretty easy to find. You can look it up if you're really interested.
Hardware VRS (up to Tier 2)
Sampler Feedback Streaming (expanded further for Xbox Series X with additional GPU customization that further benefits texture streaming)
Mesh Shaders (an implementation that actually goes beyond the spec's maximum thread group size of 128; the RX 6800 XT only goes up to 128, Series X goes to 256, and 256 produces superior results to 128)
Hardware Accelerated Machine Learning (will be used for AI, animation, resolution upscaling)

But talk is cheap, Microsoft has to prove it.
Sorry, I can't find any RDNA 2 features that are on XBSX but not on PS5.
But I did notice PS5 doesn't support DirectX.

[attached image]


But if you're talking about Machine Learning, PS5 has that.
"More generally, we're seeing the GPU be able to power Machine Learning for all sorts of really interesting advancements in the gameplay and other tools."
Laura Miele, Chief Studio Officer for EA.
Source: https://www.wired.com/story/exclusive-playstation-5/

Mesh Shaders? PS5 has that too, and it's the Geometry Engine.
"A new block that will give developers more control over triangles, primitives and geometry culling."
"Removing back-faced or off-screen vertices and triangles."
"More complex usage involves something called Primitive Shaders, which allow the game to synthesize geometry on the fly as it's being rendered."
Mesh shader meaning: a new type of shader that combines vertex and primitive processing.

VRS? Also handled by the Geometry Engine using Primitive Shaders.
"Using Primitive Shaders on PS5 will allow for a broad variety of techniques including smoothly varying level of detail, addition of procedural detail to close-up objects, and improvements to particle effects and other visual effects."
Variable Rate Shading meaning: a new rendering technique that works by varying the number of pixels that can be processed by a single pixel shader operation.

Sampler Feedback Streaming?
The PS5 I/O Complex says hi.
[attached image: PS5 I/O complex]


PS5 has the same features but under a different naming scheme.
e.g.
XBSX Velocity Architecture = PS5 SSD + Kraken Decompression
XBSX Direct Ray Tracing = just Ray Tracing for PS5

Just give it up, son. The XBSX isn't any more capable than the PS5.
 
Hardware VRS (up to Tier 2)
Sampler Feedback Streaming (expanded further for Xbox Series X with additional GPU customization that further benefits texture streaming)
Mesh Shaders (an implementation that actually goes beyond the spec's maximum thread group size of 128; the RX 6800 XT only goes up to 128, Series X goes to 256, and 256 produces superior results to 128)
Hardware Accelerated Machine Learning (will be used for AI, animation, resolution upscaling)

But talk is cheap, Microsoft has to prove it.

PS5 has VRS
PS5 has their own version of Mesh Shading/Geometry Engine
PS5 has Machine Learning (as talked about by Cerny last year)

 
PS5 has VRS
PS5 has their own version of Mesh Shading/Geometry Engine
PS5 has Machine Learning (as talked about by Cerny last year)



We know PS5 doesn't have Mesh Shaders because Mark Cerny confirmed the Geometry Engine and listed the Primitive Shader feature that was already in RDNA 1. If nothing changed about the Geometry Engine from RDNA 1 to RDNA 2, then the RX 5700 XT would support Mesh Shaders.

PS5 doesn't support VRS in hardware. They will try to utilize a software solution. That doesn't mean it can't be good, but ease of implementation by devs and performance both benefit from having built-in hardware for the task. If the GPU is apparently missing hardware VRS and Mesh Shaders, chances are there is no Sampler Feedback either.

PS5 doesn't have hardware accelerated Machine Learning. A Sony engineer literally already confirmed it doesn't, then tried to clarify and walk back his comments when he appeared to get into trouble.
 
We know PS5 doesn't have Mesh Shaders because Mark Cerny confirmed the Geometry Engine and listed the Primitive Shader feature that was already in RDNA 1. If nothing changed about the Geometry Engine from RDNA 1 to RDNA 2, then the RX 5700 XT would support Mesh Shaders.

PS5 doesn't support VRS in hardware. They will try to utilize a software solution. That doesn't mean it can't be good, but ease of implementation by devs and performance both benefit from having built-in hardware for the task. If the GPU is apparently missing hardware VRS and Mesh Shaders, chances are there is no Sampler Feedback either.

PS5 doesn't have hardware accelerated Machine Learning. A Sony engineer literally already confirmed it doesn't, then tried to clarify and walk back his comments when he appeared to get into trouble.

That Sony engineer (not a PS5 engineer) just said it doesn't have ML based on the public knowledge that Navi doesn't have ML.

XSX and RDNA 2 are Navi too.

Doesn't make a whole lot of sense, does it?
 
We know PS5 doesn't have Mesh Shaders because Mark Cerny confirmed the Geometry Engine and listed the Primitive Shader feature that was already in RDNA 1. If nothing changed about the Geometry Engine from RDNA 1 to RDNA 2, then the RX 5700 XT would support Mesh Shaders.

PS5 doesn't support VRS in hardware. They will try to utilize a software solution. That doesn't mean it can't be good, but ease of implementation by devs and performance both benefit from having built-in hardware for the task. If the GPU is apparently missing hardware VRS and Mesh Shaders, chances are there is no Sampler Feedback either.

PS5 doesn't have hardware accelerated Machine Learning. A Sony engineer literally already confirmed it doesn't, then tried to clarify and walk back his comments when he appeared to get into trouble.
I just provided official confirmation that the PS5 has those features, from official people working on or with the PS5.
And you're still talking about some random tweet that could have been made by some Xbox fan with a fake account claiming he works at Sony.

But what is even funnier is that you would believe that guy from Twitter over Mark Cerny.
Like, come on dude.
 

Xbox Series X|S are the only next-generation consoles with full hardware support for all the RDNA 2 capabilities AMD showcased today.

In case anyone forgets, the PS5 does NOT support all of the RDNA 2 hardware capabilities. PS5 has no hardware VRS, mesh shading, SFS, etc. That is known and not something to be disputed.
 
Sorry, I can't find any RDNA 2 features that are on XBSX but not on PS5.
But I did notice PS5 doesn't support DirectX.

[attached image]


But if you're talking about Machine Learning, PS5 has that.
"More generally, we're seeing the GPU be able to power Machine Learning for all sorts of really interesting advancements in the gameplay and other tools."
Laura Miele, Chief Studio Officer for EA.
Source: https://www.wired.com/story/exclusive-playstation-5/

Mesh Shaders? PS5 has that too, and it's the Geometry Engine.
"A new block that will give developers more control over triangles, primitives and geometry culling."
"Removing back-faced or off-screen vertices and triangles."
"More complex usage involves something called Primitive Shaders, which allow the game to synthesize geometry on the fly as it's being rendered."
Mesh shader meaning: a new type of shader that combines vertex and primitive processing.

VRS? Also handled by the Geometry Engine using Primitive Shaders.
"Using Primitive Shaders on PS5 will allow for a broad variety of techniques including smoothly varying level of detail, addition of procedural detail to close-up objects, and improvements to particle effects and other visual effects."
Variable Rate Shading meaning: a new rendering technique that works by varying the number of pixels that can be processed by a single pixel shader operation.

Sampler Feedback Streaming?
The PS5 I/O Complex says hi.
[attached image: PS5 I/O complex]


PS5 has the same features but under a different naming scheme.
e.g.
XBSX Velocity Architecture = PS5 SSD + Kraken Decompression
XBSX Direct Ray Tracing = just Ray Tracing for PS5

Just give it up, son. The XBSX isn't any more capable than the PS5.

Dude, varying level of detail has nothing to do with VRS in that context... that's literally just about changing geometry LOD levels when Cerny spoke about the geometry engine. VRS isn't even in the geometry engine, it's in the ROPs... He was referring to geometry when he made those statements. VRS is about shading rate, not about geometry performance or manipulation.
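For context, on the PC side this is literally all VRS is from the API's point of view: a shading rate set on the command list for the pixel pipeline, entirely separate from the geometry stages. A minimal D3D12 sketch (the draw parameters are placeholders):

#include <d3d12.h>

// Shade this draw at one result per 2x2 pixel block (Tier 1 VRS). Geometry,
// depth and coverage stay at full resolution; only the pixel-shading rate drops.
void drawWithCoarseShading(ID3D12GraphicsCommandList5* cmd, UINT vertexCount)
{
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    cmd->DrawInstanced(vertexCount, 1, 0, 0);                // placeholder draw
    cmd->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);  // restore full rate
}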

Primitive Shaders are not Mesh Shaders because Mesh Shaders, unlike Primitive Shaders, do not need to rely on fixed-function tessellation units. Primitive Shaders still rely on the hardware tessellation unit; Mesh Shaders do not need to, and can replace the fixed-function tessellator altogether using Amplification Shaders.

The PS5 I/O setup, while most impressive, is nothing close to being the exact same thing as Sampler Feedback Streaming. That only goes to show you don't understand what it actually is. Series X itself has a fast SSD (not as fast as PS5's on paper) and its own hardware decompression unit and advanced compression for textures, but Sampler Feedback Streaming is something else entirely that comes after all of those have played their role, or better yet before some of them have done their job. PS5's SSD I/O setup is, according to Cerny, meant to move data into RAM just in time as it's needed, and it can apparently do so really fast, but it's still doing it the more old-fashioned way, just faster.

Series X's Sampler Feedback Streaming is capable of a level of granularity that isn't supported on PS5, significantly cutting down on the amount of texture data that even needs to be inside VRAM or copied in the first place. Sampler Feedback Streaming makes sure texture data that was never needed for what's on screen never goes into RAM, leading to an effective RAM efficiency of up to 2.5x on Series X. What Sampler Feedback Streaming does is determine EXACTLY what's needed for the scene in such a precise, fast and accurate way that unnecessary data, which never needed to go through decompression and end up in system RAM in the first place, never actually ends up occupying system RAM. Thus your SSD transfer needs are significantly cut down and your RAM usage is significantly cut down, dramatically boosting effective usage. It's an absolute game changer that, believe it or not, goes much further than the PS5's SSD I/O setup.

PS5 is doing things the more old-fashioned way really, really fast. Series X's Sampler Feedback Streaming changes the very way texture data works for a GPU, giving an effective RAM improvement that devs have wanted for years. Cache Scrubbers are a feature that's much less necessary for Series X with Sampler Feedback Streaming, because a more intelligent streaming method, which is what Sampler Feedback is, will often do a better job of keeping useless texture data out of RAM, and thus out of the GPU caches, in the first place. It's one of the biggest and most slept-on game changers on either console. Devs commonly ask for more RAM over anything else. This feature is essentially giving them that. It has the potential to do for Series X a version of what Infinity Cache does for PC RDNA 2, by significantly cutting down the bandwidth requirements due to fewer unnecessary reads/writes and smaller chunks of information.
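A rough conceptual sketch of what that feedback-driven streaming loop amounts to (this is not the actual D3D12 sampler feedback API, just the idea: only request tiles the GPU reported it actually sampled):

#include <cstdint>
#include <vector>

// Conceptual only: the GPU writes a coarse "feedback" grid recording which
// texture tiles (at which mip level) were actually sampled last frame; the
// streaming system then loads just those tiles instead of whole mip chains.
struct TileId { uint32_t x, y, mip; };

std::vector<TileId> tilesToStream(const std::vector<uint8_t>& feedbackGrid,
                                  const std::vector<uint8_t>& residentGrid,
                                  uint32_t tilesX, uint32_t tilesY, uint32_t mip)
{
    std::vector<TileId> requests;
    for (uint32_t y = 0; y < tilesY; ++y)
        for (uint32_t x = 0; x < tilesX; ++x) {
            const size_t i = static_cast<size_t>(y) * tilesX + x;
            // Sampled this frame but not yet resident -> request just this tile.
            if (feedbackGrid[i] && !residentGrid[i])
                requests.push_back({x, y, mip});
        }
    return requests; // everything not requested never touches RAM at all
}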





You tried very hard with this one, but nearly everything you said is wrong.










And before someone goes saying it's just PRT: it's actually beyond that, though people do tend to use the terms interchangeably.

 
I just provided official confirmation that the PS5 has those features, from official people working on or with the PS5.
And you're still talking about some random tweet that could have been made by some Xbox fan with a fake account claiming he works at Sony.

But what is even funnier is that you would believe that guy from Twitter over Mark Cerny.
Like, come on dude.
A corporate figurehead's purpose is to promote and exaggerate to create hype. Like that old Sony guy from the PS3 days saying the PS3 would do 1080p games at 120 fps. A ton of hyperbolic crap. Out of the thousands of games released, you could probably count on your fingers the number that achieved this.

Around that same time, Sony also presented the PS3 as having 2 TF of GPU power. I don't think any PC video cards were that good in 2006.

Cerny was also the guy promoting Knack as a great game. The game got released, got grilled, and then disappeared.

Spencer, Mattrick and Greenberg would do the same shilling.
 
I just provided official confirmation that the PS5 has those features, from official people working on or with the PS5.
And you're still talking about some random tweet that could have been made by some Xbox fan with a fake account claiming he works at Sony.

But what is even funnier is that you would believe that guy from Twitter over Mark Cerny.
Like, come on dude.

I believe Mark Cerny never confirmed Variable Rate Shading, and he didn't.
I believe Mark Cerny confirmed Primitive Shaders (an RDNA 1 feature), but not the more advanced Mesh Shaders found in RTX 2000/3000 GPUs and PC RDNA 2 along with Series X|S.
I believe Mark Cerny never confirmed Sampler Feedback; you, on the other hand, seem to be confusing what he said about geometry with VRS. VRS has NOTHING to do with the Geometry Engine. It's in the ROPs.

And Mark Cerny didn't confirm Hardware Accelerated Machine Learning as a new GPU feature. See, I can go by what Microsoft has actually said. You, on the other hand, must extrapolate and invent facts that were never said by Mark Cerny. Mark Cerny even cautioned people to expect missing features when he stressed that new GPU features cost transistors, then he went on to state what they went for: Cache Scrubbers, Hardware Accelerated Ray Tracing, and the Geometry Engine (which enables the Primitive Shaders feature, also on the RX 5700 XT).

And btw, that tweet you claim was just some random dude was in fact a PS5 engineer, which is why he was forced to come out and clarify it after he said it. He was fired not long after. It's real, my guy. He said no ML.
 
And btw, that tweet you claim was just some random dude was in fact a PS5 engineer, which is why he was forced to come out and clarify it after he said it. He was fired not long after. It's real, my guy. He said no ML.
How do you know he was fired? Also, how do you know he was a "PS5 engineer"? It seemed he was a Sony software engineer.
 
And btw, that tweet you claim was just some random dude was in fact a PS5 engineer, which is why he was forced to come out and clarify it after he said it. He was fired not long after. It's real, my guy. He said no ML.
Which guy was this?

Was it that same guy last summer who tweeted back and forth with someone and said PS5 is more "like RDNA 1.5" and then deleted his tweet? I think he was a French guy and had a beard.

Googled it. This guy?

[attached image]
 
He's a graphics engineer and he wasn't fired

Sageboy talking out of his ass as always

So the CURRENTLY employed Sony engineer spoke on the PS5 GPU and confirmed it has no Machine Learning, is between RDNA 1 and RDNA 2, and is missing features. Alright, I rest my case. He's a current employee, so we can take him at his word even more, especially what he said before he got in trouble and had to clarify.
 
Which guy was this?

Was it that same guy last summer who tweeted back and forth with someone and said PS5 is more "like RDNA 1.5" and then deleted his tweet? I think he was a French guy and had a beard.

Googled it. This guy?

[attached image]


Yes, him, and I believe either he or another person also made a statement saying there was no Machine Learning on the PS5 GPU. That tweet, I think, is the follow-up to what he said prior.


found it.
[attached image]
 
So the CURRENTLY employed Sony engineer spoke on the PS5 GPU and confirmed it has no Machine Learning, is between RDNA 1 and RDNA 2, and is missing features. Alright, I rest my case. He's a current employee, so we can take him at his word even more, especially what he said before he got in trouble and had to clarify.

Why do you think employees know everything? Maybe he deleted them because he was wrong. The guy literally said it doesn't have ML based on the public fact that Navi doesn't have ML, when RDNA 2 is Navi.
 
Why do you think employees know everything? Maybe he deleted them because he was wrong. The guy literally said it doesn't have ML based on the public fact that Navi doesn't have ML, when RDNA 2 is Navi.

Mark Cerny had a deep-dive talk, and he thought bringing up cache scrubbers was more important than hardware VRS, Mesh Shaders, Sampler Feedback and Hardware Accelerated Machine Learning? The PS5 is known, guys. The cake is baked. That doesn't mean Sony's devs won't kick ass, or be cream of the crop yet again this gen as always. But the PS5 is not a more advanced piece of kit than the Xbox Series X, and it's pretty clear based on all the mounting evidence from official sources.

Even DF, who have excellent sources, have already said PS5 doesn't support features like VRS, and speculated Series X has support for the next version of Tiled Resources and has a programmable front end, or whatever it was he said. But going on official information, Mark Cerny hasn't confirmed or even hinted at many of the features people are trying to give the PS5.
 
Mark Cerny had a deep-dive talk, and he thought bringing up cache scrubbers was more important than hardware VRS, Mesh Shaders, Sampler Feedback and Hardware Accelerated Machine Learning? The PS5 is known, guys. The cake is baked. That doesn't mean Sony's devs won't kick ass, or be cream of the crop yet again this gen as always. But the PS5 is not a more advanced piece of kit than the Xbox Series X, and it's pretty clear based on all the mounting evidence from official sources.

Even DF, who have excellent sources, have already said PS5 doesn't support features like VRS, and speculated Series X has support for the next version of Tiled Resources and has a programmable front end, or whatever it was he said. But going on official information, Mark Cerny hasn't confirmed or even hinted at many of the features people are trying to give the PS5.

Absence of evidence is not evidence of absence.

Nice job dodging though
 
Wasn't the guy fired after the fact? I could have sworn he posted on Twitter about his situation after that took place. If I'm wrong, I stand corrected.
We need to stop parroting shit that isn't confirmed, that's why these threads get 40 pages long and convoluted as fuck. It also doesn't help credibility on the issue at hand either. No offense, just my general thoughts.

It would also be great if people could make clear what their opinion is on given issues so biases can be level-set. I'll start: I never would have bought my Series X if I didn't think it had at least a marginal performance benefit in 3rd party games.
 
He was going on for quite a while, but help me out with what I'm supposed to be seeing there. I understand various sections or functional blocks of the different GPUs may have version numbers and such, but oftentimes those version numbers aren't even settled on and can even change. For the record, Locuza readily acknowledges he himself is no expert on these things either.

What I do know, however, is this.

Xbox Series X packs all the same DirectX 12 Ultimate feature support as the RX 6000 GPUs, and has built-in hardware support for every new feature AMD highlighted at their reveal. Sampler Feedback Streaming, as built for Series X, is actually not a default DX12 Ultimate feature and has additional customizations on top of it, according to a Graphics R&D & Engine Architect at Microsoft. Sampler Feedback is just a core piece of what Microsoft built custom for Xbox Series X. Not only does Xbox Series X cover the full DX12 Ultimate feature set, it actually exceeds the DX12 Ultimate spec for Mesh Shader thread group size. The max on the RX 6000 series is 128; Series X goes up to 256, and 256 on Series X does indeed produce superior results to all other thread group sizes. RX 6000's main advantage would be that it's a much larger GPU, but Series X actually has a more advanced Mesh Shader implementation.

What else? Xbox Series X has Machine Learning Acceleration hardware support whereas RX 6000 does not. So Series X isn't only RDNA 2. By all documented accounts, it actually exceeds it.









Xbox Series X goes beyond the standard DX12 Ultimate Sampler Feedback feature, and has custom hardware built into the GPU to make it even better for streaming purposes. Drop this GPU on PC, give it as many compute units as the RX 6800 XT, and free it from the power constraints of a console, and it's likely the superior chip in the long run when DX12 Ultimate becomes more prominent. Oh, and as a desktop chip it would also have IC. :p

Nobody in the industry has praised DX12 unless they are MS-owned or exclusive to MS. As far as I can tell, studios seem to prefer open APIs over DX12.
 
Nobody, and I mean nobody, refers back to the performance of launch titles to gauge overall performance.
Name me that nobody :-/

Honestly, why did you even bring this up if it doesn't mean you think the comparative performance of the consoles will somehow shift? I gave you examples that go back to the 1970s, and all you answer with is an absolute statement that is useless to the conversation at hand.

Sure, developers get better over the generation, but once you had seen Killzone: Shadow Fall on the PS4 you pretty much knew what the ballpark would be for the generation. Sure, we got better AA and a couple of evolutions, but the ballpark was set; the Xbone was never going to match the PS4... and I would never expect the Series X or PS5 to leave its sibling in the dust on a regular basis (outliers are likely to exist).
 
With the 2nd and 5th cores, and the 4th and 6th, being back-to-back mirrors - unless I'm looking at it wrong - is it possible that the cut-down FPUs that are back to back on those cores are wired together to act as two bigger FPUs, in an async FPU setup relative to cores 1, 3, 7 & 8?
Short answer: no, the cores are not laid out that way.
In practice, the design would need to be able to schedule 256-bit instructions to a fused FPU, blocking one half for the core on the other side, then getting the results back and retiring the instructions.
That would yield no performance gains, would be unnecessarily complex, and would even be slower, because the execution units and the register file are laid out for minimum latency.
Having some hackjob cross-connections between two FPUs would not lead to good results.

Yes, if you look at the 8 red blocks, I did notice differences between the left and right, but lumped them as 8 RBs, presumably.

Instead of 8 reds = 8 RBs, each consisting of colour ROPs and depth/stencil ROPs, if we split the left and right reds into colour RBs and depth/stencil RBs, we get 4 RBs per shader engine rather than 8 RBs per shader engine. And 8 RBs total instead of 16 RBs total in the GPU.

If we assume each RB has 4 colour ROPs as in RDNA1, that's 16 colour ROPs per shader engine, and with 2 shader engines in the GPU, that's 32 ROPs for colour in total. This conflicts with your block labels below, where you have 32 colour ROPs per shader engine, and 64 in total:

[attached image: block labels]


If these RBs are RDNA2 RB+, which have double the number of colour ROPs at 8 for every RB+, we get 8x8 = 64 colour ROPs as expected.


Yes, we already have conflicting info as shown above, and we cannot be certain with the information we have on the functionality of these blocks.
You asked me what I find more accurate about Nemez's annotation; I put off that question because I was too lazy to look at the PS5 die shot and count SRAM blocks.
I still didn't count the SRAM blocks but by the look of it Nemez got it right.
There are more RBs/ROPs and the PS5 has the classic RB design from RDNA1 (or at least it's physically and from a configuration standpoint very close to it).
What I noted as 16 Color ROPs are actually just 8 -> 32 ROPs in total, but 32 more (+ extra blocks for yield) are in the black area you marked.

There are other details Nemez included (Packers, Rasterizer, Prim-Units) which I simply can't verify.
 
He still keeps spinning the PR marketing even though the silicon has old non-RDNA2 parts in the GPU.

He dodged the question 3 times and kept trying to talk about the SoC lol
I asked him three times why parts of the silicon are non-RDNA2... all three times he said the SoC has parts that are Zen 2, which is totally unrelated to the question.

He never answered what I asked.




His answers.



Another answer to another guy.



He never touched on what was asked.
He keeps saying the SoC has Zen 2 and other things lol

The talk was about the GPU silicon, not the whole SoC.
He did not answer why parts of the GPU are not RDNA2.

They are not 100% RDNA2.

The compliance guess is most likely right... and I just included 4 big blocks... there are several other blocks that are not RDNA2.
He did not answer why there are parts of the GPU silicon that are not RDNA2.
Nobody asked about the parts of the silicon that are not GPU. He threw Zen 2, non-GPU parts, etc. into the replies.

Maybe he got confused by the Twitter thread he was replying to, so in the next question I said GPU silicon to make it clear, and he replied again with Zen 2, non-GPU parts, etc.

I realized I wasn't going to get a straight reply from him.
That is not correct based on what the GPU die shot and drivers showed... that is the main reason why I asked.

We know the GPU is 100% RDNA2 functional... in simple terms it can do everything an RDNA2 card can... imo Infinity Cache doesn't affect that because it is only a performance feature, not a functional one.

That said, we are discussing the parts of the GPU silicon design that are not 100% RDNA2... that is what we were asking him about, to understand why they chose old parts instead of the new RDNA2 ones.
OK fan warrior. You got your answer from the MS guy. Series GPU is RDNA 2.

You can stop reaching and waiting for someone to say the grey solder in every Xbox chip is just that: solder. Which is technically RDNA 0. Then you'd post that you were right and that Xbox Series S and X aren't technically 100% RDNA 2.

We get it.

By the way. I'm not a tech engineer, but I'll let you in on a secret. The power plug and optical drives for all consoles aren't RDNA 2 either.
 
OK fan warrior. You got your answer from the MS guy. Series GPU is RDNA 2.

You can stop reaching and waiting for someone to say the grey solder in every Xbox chip is just that: solder. Which is technically RDNA 0. Then you'd post that you were right and that Xbox Series S and X aren't technically 100% RDNA 2.

We get it.

By the way. I'm not a tech engineer, but I'll let you in on a secret. The power plug and optical drives for all consoles aren't RDNA 2 either.
That is nice for you, but I still prefer to understand more than just the PR... so when the expert came to talk with me I took the opportunity to question him... sadly, in the end he did not reply to what I asked.

So it is better we discover it ourselves.
 
It's funny how people quote Rosario Leonardi but leave this out.

Having inevitably ended up in the middle of a fierce controversy, the engineer has clarified his statements to make it clear exactly how things stand. His new messages, also private and unfortunately shared on social networks, are very interesting:

"RDNA 2 is a commercial abbreviation to simplify the market, otherwise GPUs with completely random features would come out and it would be difficult for the average user to choose," wrote Leonardi.

"For example, support for ray tracing is not present in any AMD GPU currently on the market. (...) The PlayStation 5 GPU is unique, it is not classifiable as RDNA 1, 2, 3 or 4."

"It is based on RDNA 2, but it has more features and, I think, one less. That message came out badly, I was tired and I shouldn't have written the things I wrote," continued the engineer, complaining to have received insults for his statements.

Fanboys pick and choose what they want to hear.
 
We know PS5 doesn't have Mesh Shaders because Mark Cerny confirmed the Geometry Engine and listed the Primitive Shader feature that was already in RDNA 1. If nothing changed about the Geometry Engine from RDNA 1 to RDNA 2, then the RX 5700 XT would support Mesh Shaders.

I believe Mark Cerny confirmed Primitive Shaders (an RDNA 1 feature), but not the more advanced Mesh Shaders found in RTX 2000/3000 GPUs and PC RDNA 2 along with Series X|S.
Primitive shaders and mesh shaders don't share the same shader path. Yes, that's a fact. Mesh shaders take a completely separate path, and when they are ready they go straight to the pixel shaders. Culling is a big advantage: primitive shaders do it ahead of the assembly phase. Compared to a standard mesh shading pipeline, they can use fixed-function tessellation hardware, and this gives more control. The problem is not primitive versus mesh shaders, but what is on or off. Mesh shaders have not been programmed for, so AMD cards use primitive shaders, either explicitly or via recompilation, to improve front-end performance. I don't know what else Sony did to further tweak the GE, but this is also not a direct implementation of RDNA 1; it was not clear what AMD did with this part for RDNA 2.
 
That is nice for you, but I still prefer to understand more than just the PR... so when the expert came to talk with me I took the opportunity to question him... sadly, in the end he did not reply to what I asked.

So it is better we discover it ourselves.
That's because he already answered at length, with an expert explanation... which he didn't even have to do.
 
The PS5 has a more intelligently and gracefully designed chip. It's not just brute power and teraflops and other marketing names that dazzle the eyes. Aren't you tired of fighting, buddy? The games themselves will clearly demonstrate which console offers the more interesting hardware solution. After all, it was created for games. From that whole marketing list of features that gets you wet, only SFS is really interesting, and it's one the PS5 actually lacks, because it allows the memory footprint to be reduced, and that is really cool. And finally, you are also mistaken about the lack of ML inference acceleration in the PS5. It's there, take it or leave it. If you need to keep warring, go on.
I hope you are right, because this will be fucking embarrassing soon if you are wrong and the real next gen games show something different.

So far all we have seen are cross gen games splitting the difference, real next gen games are still to come.
 