Red Gaming Tech (RGT): The PS5 has a custom feature to be found in future RDNA 3 Cards.

What do you think? Is it true that the PS5 has a custom feature to be found in future RDNA 3 cards?


Ok, so what makes Sony's geometry engine better than Microsoft's? What customisations did they make and what improvements do they give? What customisations have Microsoft made, if any? What benefits do they bring?

What Sony has claimed publicly are three things (but there might be more of course):

1) The PS5 GE performs culling of geometry that is not visible (e.g. obscured behind another object on screen, etc.). This is done downstream of the GE in current GPUs by the mesh shaders. The advantage of front-loading it is that no other part of the rendering pipeline needs to perform a single calculation on that geometry, since it has already been removed.

2) The PS5 GE assigns priorities to the remaining geometry depending on how important it is in the final rasterised frame. This is done downstream of the GE in current GPUs through VRS. The advantage of front-loading it is that downstream parts of the pipeline do not need to do any geometry calculations - the priorities are already there and can be applied off the bat. It also means that all functions, such as RT and texture handling, can access those priorities.

3) The PS5 GE is built to handle geometry with very small, almost pixel-sized primitives. In a 'normal' rendering pipeline your primitives cannot be too small, or the functions that access the geometry, such as mesh shaders and VRS, become exponentially slower. Sony has tried to address this (partly through 1) above). A toy sketch of the front-loading idea in 1) and 2) follows below.
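To make points 1) and 2) concrete, here is a minimal, purely illustrative Python sketch. None of this is Sony's actual implementation; the Triangle class, the counts and the cost model are invented. It only shows why culling and priority-tagging geometry before the rest of the pipeline saves downstream work:

```python
# Toy sketch (not Sony's implementation) of front-loaded culling/tagging.
from dataclasses import dataclass

@dataclass
class Triangle:
    visible: bool       # would it survive occlusion/frustum culling?
    screen_area: float  # rough importance in the final frame

def expensive_shading(tri: Triangle) -> None:
    pass  # stand-in for vertex/pixel work, RT, texture sampling, ...

def pipeline_without_frontloading(tris):
    # Every downstream stage still touches hidden triangles before the
    # mesh-shader/VRS stage finally rejects or down-rates them.
    work = 0
    for tri in tris:
        work += 1                  # downstream stage pays for the triangle
        if not tri.visible:
            continue               # rejected late
        expensive_shading(tri)
    return work

def pipeline_with_frontloading(tris):
    # A front-loaded geometry stage culls and tags priorities once;
    # everything downstream only ever sees surviving, pre-tagged triangles.
    survivors = [(t, t.screen_area) for t in tris if t.visible]  # cull + tag
    for tri, priority in survivors:
        expensive_shading(tri)     # priority is "already there" for VRS/RT
    return len(survivors)

tris = [Triangle(visible=(i % 3 == 0), screen_area=1.0) for i in range(9000)]
print(pipeline_without_frontloading(tris))  # 9000 units of downstream work
print(pipeline_with_frontloading(tris))     # 3000 - hidden ones never arrive
```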

Customised doesn't mean better (or worse), it just means different. This whole "AMD will use Sony tech in RDNA3" is all just fanboy garbage at this point.

You are right. Please note that the text you wrote that I reacted to was your claim that the GE is the same - it is not. Does it perform better? We do not know. The early indicators are positive, however, and the dev buzz is very good.

I believe there might be challenges with some multi-plats in the short run, since they are built around the current way of rendering. That might mean worse performance on the PS5 - we will see.

Wow, did Mark Cerny tell you all that?

Why not ask your friend Mark to do a full Q&A with DF, so he can show off all his next-gen secret sauces? I figure it would make good official PR. Why has he been hesitating so long? 🤷‍♀️

Why so aggressive?

Because it is a complicated topic and the majority of people just see 12 and 10 TFLOPs and cannot think beyond that. Better to let games do the talking and then bring up the reason 'why'. So I expect some disclosures around how this works after games have been tested and shown on both the XSX and PS5.

As a side note - I also expect some PS5 cache information to be released. I believe we are in for a small treat there as well.
 
I would be as receptive as I was in 2013 when it was Microsoft shills spinning their secret sauce bullshit.



Funny how no one has an answer for the smallest space a PS5 could physically be placed in, because answering that would be admitting they made a console 50% larger than their competition.

This is a lot different to back then. A GPU in the power brick? That was a crazy fan rumour.

Penello said there was no way they were giving up a 30% power advantage to Sony; also the power of the cloud, etc.

There was obviously nothing in these claims.

This is completely different. Neither Sony nor any fans have come out and said this one feature levels a power gap; what brings them so close is already more or less known.

I already predicted way back that the power gap would be minuscule.

Each console should have slight advantages.

PS5: faster clocks leading to faster caches, with the whole system designed around speed and latency. In RT that should mean more ray bounces, I read, but fewer rays. There's a rumour of caching helping with RT.

Xbox has more theoretical compute, so it could have more rays for RT but fewer bounces. The only real advantage I see is in games using less than 10GB, which may benefit from the faster bandwidth. But there was always a question: could it use both the faster and slower pools at the same time, or did it have to wait? We had a Crytek dev say this was a potential issue.

People go on about full RDNA 2 features. Both are RDNA 2 and benefit from the architectural improvements. Both have what they want from it: Xbox with stock RDNA 2 features, Sony with, in some cases, its own custom implementations.

People have been going mad over TF but overlooking CU utilisation, which, given things like the cache scrubbers, is likely to be higher on PS5. Also fewer CUs per shader array, so more cache per CU.

Way faster I/O means less has to be loaded into RAM 'just in case', which means more RAM for what's on screen. More than 10GB available without running into potential issues. Much better streaming.

When you add all this up, plus the fact that it's harder to give more CUs meaningful work, it's quite easy to see how amazing efficiency, as per the dev leaks, has made any gap almost nonexistent - as the results so far suggest.

Something has to explain it. The rumour of a more advanced geometry engine is still unconfirmed, but RGT seems to think so.

As for the PS5's size: it's big, but the components occupy a similar space. It takes more room in a cabinet, but at least that gives it airflow. I would have loved last-gen-sized consoles, but with the power being pushed through them, no chance. The Xbox on its side looks toppled over with that base, and it actually doesn't fit the way most people have it - it's too tall on its side.
 
Could be; I don't see anything shocking about it. The geometry engine seems like very smart tech, maybe even better than VRS, and RDNA 3 could use it. Not sure where all the surprise is coming from.
 
Bullshit artists being bullshit artists.

He has legit sources. A few days before the RDNA 2 event, he said exactly what AMD would be showing: 10 games, with their card delivering better performance than the Nvidia cards, especially at 1440p. He also said how often the AMD cards came out on top and how often they tied.

RGT is one of the guys on YT I would follow, if you're interested in rumours.
 
What Sony has claimed publicly are three things (but there might be more of course):

1) The PS5 GE performs culling of geometry that is not visible (e.g. obscured behind another object on screen, etc.). This is done downstream of the GE in current GPUs by the mesh shaders. The advantage of front-loading it is that no other part of the rendering pipeline needs to perform a single calculation on that geometry, since it has already been removed.

2) The PS5 GE assigns priorities to the remaining geometry depending on how important it is in the final rasterised frame. This is done downstream of the GE in current GPUs through VRS. The advantage of front-loading it is that downstream parts of the pipeline do not need to do any geometry calculations - the priorities are already there and can be applied off the bat. It also means that all functions, such as RT and texture handling, can access those priorities.

3) The PS5 GE is built to handle geometry with very small, almost pixel-sized primitives. In a 'normal' rendering pipeline your primitives cannot be too small, or the functions that access the geometry, such as mesh shaders and VRS, become exponentially slower. Sony has tried to address this (partly through 1) above).




You are right. Please note that the text you wrote that I reacted to was your claim that the GE is the same - it is not. Does it perform better? We do not know. The early indicators are positive, however, and the dev buzz is very good.

I believe there might be challenges with some multi-plats in the short run, since they are built around the current way of rendering. That might mean worse performance on the PS5 - we will see.



Why so aggressive?

Because it is a complicated topic and the majority of people just see 12 and 10 TFLOPs and cannot think beyond that. Better to let games do the talking and then bring up the reason 'why'. So I expect some disclosures around how this works after games have been tested and shown on both the XSX and PS5.

As a side note - I also expect some PS5 cache information to be released. I believe we are in for a small treat there as well.
That's very interesting. Thanks for sharing. The part where you said the PS5's GE is built to handle geometry with small, almost pixel-sized primitives reminds me of the Nanite tech we saw in the UE5 demo running on PS5 - it is essentially a pixel-sized-polygon renderer. Also, this reminds me of Mark Cerny's quote from his 'Road to PS5':
Also, it's easier to fully use 36 CUs in parallel than it is to use 48 CUs. When triangles are small, it's much harder to fill all those CUs with useful work.
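One way to see Cerny's point is simple quantisation arithmetic: a fixed pile of work divides more evenly across fewer CUs. A toy Python calculation below; the wavefront counts are hypothetical and real GPU scheduling is far more complex, so treat this as an illustration only:

```python
# Toy arithmetic (hypothetical numbers): utilisation when a batch of
# wavefronts is spread across 36 vs 48 CUs. Tiny triangles mean small
# batches, and small batches quantise badly.
import math

def utilisation(wavefronts: int, cus: int) -> float:
    passes = math.ceil(wavefronts / cus)   # rounds of work needed
    return wavefronts / (passes * cus)     # busy slots / total slots

for wf in (72, 40, 20):                    # shrinking batches of work
    print(wf, f"36 CUs: {utilisation(wf, 36):.0%}",
              f"48 CUs: {utilisation(wf, 48):.0%}")
# 72 -> 36 CUs: 100%, 48 CUs: 75%
# 40 -> 36 CUs: 56%,  48 CUs: 83%   <- not monotonic: it's quantisation,
# 20 -> 36 CUs: 56%,  48 CUs: 42%      not a blanket rule
```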
 
Yes, yes it is. Your take is absolutely nonsensical. Everything in the console was built around it having very high frequencies from the get-go: the liquid metal addition, the giant heatsink, the tweaks at the architectural level to process data at top speed... You do not do that overnight, buddy. This has taken them years and years of prototyping and testing.
In fact you don't do it overnight; it took a year. And no, I don't think any hardware engineer would design a GPU with only 36 CUs clocked at an exaggerated level in 2020; in fact nobody else does, especially if the cost of the project does not demonstrate obvious economic savings. I think that from all the evidence I have brought you it is more logical to think like me than like you, so it is better to change the subject, because you will not convince me that Cerny deliberately designed a console with only 36 CUs while not even using the full set of technologies present in the new AMD architecture. I have been following the console market since its first dawn, and I can say with relative confidence that if MS had been in Sony's position (with 36 CUs clocked that high, a huge console and doubts about the architecture used) we would not even be having these discussions, because everything would be very evident and clear. Stop chasing the possible "different approaches" or "secret sauces".
 
In fact you don't do it overnight; it took a year. And no, I don't think any hardware engineer would design a GPU with only 36 CUs clocked at an exaggerated level in 2020; in fact nobody else does, especially if the cost of the project does not demonstrate obvious economic savings. I think that from all the evidence I have brought you it is more logical to think like me than like you, so it is better to change the subject, because you will not convince me that Cerny deliberately designed a console with only 36 CUs while not even using the full set of technologies present in the new AMD architecture. I have been following the console market since its first dawn, and I can say with relative confidence that if MS had been in Sony's position (with 36 CUs clocked that high, a huge console and doubts about the architecture used) we would not even be having these discussions, because everything would be very evident and clear. Stop chasing the possible "different approaches" or "secret sauces".

You do realise that the XSX is the outlier, having low clocks compared to the PS5 and the recently announced Big Navi cards, right?
 
Could be; I don't see anything shocking about it. The geometry engine seems like very smart tech, maybe even better than VRS, and RDNA 3 could use it. Not sure where all the surprise is coming from.
Again - the geometry engine is already in RDNA 2. The geometry engine is not some Sony-specific thing; the Xbox has it too, lol. People are now saying Sony has customised theirs, but without any actual evidence or proof or details, and if they have, then the possibility also exists that MS has customised theirs too.

This notion that MS simply took a stock, off-the-shelf APU while Sony made a big custom one has been thrown around a lot these last few days, since the "Xbox is the only full RDNA 2 console" news got a few people shook. That's no coincidence.
 
You do realise that the XSX is the outlier, having low clocks compared to the PS5 and the recently announced Big Navi cards, right?
"the outlier" it is not the xsx with a relatively lower "fixed" clock than it would be on a desktop gpu individually cooled by 3 fans. the new RX 6800 has a game clock of 1815 GHz and 2105 Ghz in boost the XSX has 52 cu (less than the 60 cus in brand new RX 6800 rdna2) and is clocked at 1825 GHz ....... the Ps5 it's only (this should be remembered more often) 36 cu's ... clocked 2233 GHz (remain to see how much it will keep them in the real world) pretty much as high as the RX 6900 XT (80 cu's!!!) which has a base game clock of 2015 GHz and only in boost reaches 2250 !!!! So don't talk to me about "outlier" I repeat the logic pushes everyone to think that the ps5 had to come out 1 year earlier and has been modified where possible to remain competitive. this is what I think
 
"the outlier" it is not the xsx with a relatively lower "fixed" clock than it would be on a desktop gpu individually cooled by 3 fans. the new RX 6800 has a game clock of 1815 GHz and 2105 Ghz in boost the XSX has 52 cu (less than the 60 cus in brand new RX 6800 rdna2) and is clocked at 1825 GHz ....... the Ps5 it's only (this should be remembered more often) 36 cu's ... clocked 2233 GHz (remain to see how much it will keep them in the real world) pretty much as good as the RX 6900 XT which has a base game clock of 2015 GHz and only in boost reaches 2250 !!!! So don't talk to me about "outlier" I repeat the logic pushes everyone to think that the ps5 had to come out 1 year earlier and has been modified where possible to remain competitive. this is what I think

You should be comparing boost clocks against the fixed console clock, as that is the maximum each can run at.

You'd have a point if the 6800's boost clock was 1825 MHz, but that's not the case here.

XSX has more in common with the 5000 series in that regard.
 
All GPUs have a piece of hardware dealing with geometry; AMD call this piece the geometry engine. Sony has redesigned the entire rendering pipeline around a heavily customised GE in the PS5. It is also rumoured to be part of the future RDNA roadmap. It is unclear to me why you write stuff like the above unless your intention is simply to troll.
Additionally, isn't the PS5's GE fully programmable, compared to fixed-function GEs, as hinted by Matt Hargett on the RedGamingTech podcast?
 
in fact you don't do it ovenight it toke an year long . and no i think no hw engineer would design a gpu with only 36cu clocked at an exaggerated level in 2020. in fact no one else does. Especially if the cost of the project does not demonstrate obvious economic savings.I think that from all the evidence that I have brought you it is more logical to think like me than like you .. so it is better to change the subject because you will not convince me that cerny has deliberately designed a console with only 36 cu not even using the full set of technologies present in the new amd architecture I have been following the console market since their first dawn and I can say with relative confidence that if Ms had been in the position of Sony (with a 36cu clocked that high with a huge console and doubts about the architecture used) we would not even have these discussions because everything would be very evident and clear.....stop chasing the possible "different approaches" or "secret sauces"

Everything points in the other direction. The frequencies are in line with RDNA 2, they redesigned the rendering pipeline around the new GE, and the cooling solution is there because it is not one but several ICs in the package, so additional cooling is required - a System in Package design. All of these things take years to develop from prototype onwards - it was the plan all along. And they wanted the high frequency because, when you do a lot of post-processing, time is as important a component as computational power - and time is measured in frequency.

I know I might have to back-track on this one, but after all the reading I have done I am certain that Sony is using a stacked chip, and that in that vertical stack there is at least a significant cache for the GPU that we currently do not know about. The Sony cooling patent has the following main-illustration language:

"In the example illustrated in FIG. 1, the integrated circuit apparatus 5 has two IC chips 5 c and 5 d that are vertically stacked one on top of the other."

And as everyone that has written patents knows, you start with broad claims but ultimately exemplify with your actual solution, so that you know your actual solution is covered by the patent as a minimum if there is any challenge/opposition. The patent's main illustration uses a vertically stacked chip with two IC layers.
 
You should be comparing boost clocks against the fixed console clock, as that is the maximum PC cards can run at.

You'd have a point if the 6800's boost clock was 1825 MHz, but that's not the case.
A fixed clock is easier in the console world... the PS5 is the first "outlier" console with a variable clock (and this brings us back to why, for the first time, they had to introduce a variable clock) - probably to gain some TF over the original project.
 
Everything points in the other direction. The frequencies are in line with RDNA 2, they redesigned the rendering pipeline around the new GE, and the cooling solution is there because it is not one but several ICs in the package, so additional cooling is required - a System in Package design. All of these things take years to develop from prototype onwards - it was the plan all along. And they wanted the high frequency because, when you do a lot of post-processing, time is as important a component as computational power - and time is measured in frequency.

I know I might have to back-track on this one, but after all the reading I have done I am certain that Sony is using a stacked chip, and that in that vertical stack there is at least a significant cache for the GPU that we currently do not know about. The Sony cooling patent has the following main-illustration language:

"In the example illustrated in FIG. 1, the integrated circuit apparatus 5 has two IC chips 5 c and 5 d that are vertically stacked one on top of the other."

And as everyone that has written patents knows, you start with broad claims but ultimately exemplify with your actual solution, so that you know your actual solution is covered by the patent as a minimum if there is any challenge/opposition. The patent's main illustration uses a vertically stacked chip with two IC layers.
Ok 👌 so Cerny, the same Cerny that we all consider brilliant, did all this crazy work to end up with what? A solution that costs the same (if not more), a physically huge console that is more difficult to cool and, more importantly, performs worse? Sorry, no, I don't believe it.
 
Ok 👌 so Cerny, the same Cerny that we all consider brilliant, did all this crazy work to end up with what? A solution that costs the same (if not more), a physically huge console that is more difficult to cool and, more importantly, performs worse? Sorry, no, I don't believe it.
Well, we can all argue back and forth til the cows come home. But not long to go til we find out.
 
Well, we can all argue back and forth til the cows come home. But not long to go til we find out.
Sure, like everyone else I can't wait to see the DF comparisons... but again, wait for what? To find out what? Whether the PS5 is more powerful than what the hardware shows? I don't think magic can transform what's inside the PS5, guys. We can only wait to see whether the differences are simply those of the hardware (around 20%), whether the variable clock isn't all that and increases the gap, or whether the customisations made by Sony do not correspond exactly to those of RDNA 2. This is what I personally expect.
 
Ok 👌 so Cerny, the same Cerny that we all consider brilliant, did all this crazy work to end up with what? A solution that costs the same (if not more), a physically huge console that is more difficult to cool and, more importantly, performs worse? Sorry, no, I don't believe it.

As to performance - we do not know yet :)

Question:

Why is the Vega 64 @ 12.66 TFLOPs performing like a 2070 @ 7.47 TFLOPs in most games at 1440p and 4K?

Answer:

CU utilisation is really poor on the Vega 64, and cache size is a major driver of that. Vega 64 has 16 KB per CU and 4 MB of L2 cache for 64 CUs; the RTX 2070 has 64 KB per SM and 4 MB of L2 cache for 36 SMs - i.e. the 2070 has 4x the cache at the lowest level and nearly 2x the cache per unit at the L2 level.

A theoretical TFLOPs peak is one measure, but it can say much less than we are sometimes led to believe. A sufficiently large cache is really important - the latest AMD cards show the same thing.

I might of course be all wrong, but if you assume I am right about the stacked chip and at least a significant cache (most likely Infinity Cache) in the stack, everything falls into place, including the 36 CUs.
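For what it's worth, the cache-per-unit arithmetic above checks out against the public spec sheets; a trivial Python sketch of the calculation:

```python
# Checking the cache arithmetic with public spec-sheet numbers.
KB, MB = 1024, 1024 * 1024

vega64 = {"units": 64, "l1_per_unit": 16 * KB, "l2": 4 * MB}   # CUs
rtx2070 = {"units": 36, "l1_per_unit": 64 * KB, "l2": 4 * MB}  # SMs

l1_ratio = rtx2070["l1_per_unit"] / vega64["l1_per_unit"]
l2_per_unit_ratio = ((rtx2070["l2"] / rtx2070["units"])
                     / (vega64["l2"] / vega64["units"]))

print(f"L1 per unit:  {l1_ratio:.0f}x in favour of the 2070")      # 4x
print(f"L2 per unit:  {l2_per_unit_ratio:.1f}x in favour of the 2070")  # ~1.8x
```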
 
A fixed clock is easier in the console world... the PS5 is the first "outlier" console with a variable clock (and this brings us back to why, for the first time, they had to introduce a variable clock) - probably to gain some TF over the original project.

There's detail on it here.

an innovation that essentially gives the system on chip a set power budget based on the thermal dissipation of the cooling assembly.

"The behaviour of all PS5s is the same," says Cerny. "If you play the same game and go to the same location in the game, it doesn't matter which custom chip you have and what its transistors are like. It doesn't matter if you put it in your stereo cabinet or your refrigerator, your PS5 will get the same frequencies for CPU and GPU as any other PS5."
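My reading of that description, as a toy Python model only (the wattage numbers and the linear scaling are invented, and Sony's actual power model is not public): the clock is a deterministic function of estimated workload activity against a fixed power budget, so chip temperature and silicon lottery never enter into it - which is why every PS5 lands on the same clocks for the same scene.

```python
# Toy model of deterministic power-budget clocking (NOT Sony's algorithm).
POWER_BUDGET_W = 200.0   # hypothetical fixed SoC budget
F_MAX_GHZ = 2.23         # capped GPU clock

def gpu_clock(activity: float) -> float:
    """activity: 0..1 estimate of how power-hungry the current workload is.
    Identical inputs give identical outputs on every console."""
    est_power = activity * 250.0  # hypothetical: watts this workload would draw at F_MAX
    if est_power <= POWER_BUDGET_W:
        return F_MAX_GHZ                                  # inside budget: full clock
    return F_MAX_GHZ * (POWER_BUDGET_W / est_power)       # shave frequency to fit

print(gpu_clock(0.7))  # light scene     -> 2.23
print(gpu_clock(1.0))  # worst-case scene -> ~1.78 (illustrative numbers only)
```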

 
Ok 👌 so Cerny the same cerny that we all consider brilliant did all this crazy work. to have in the end what? a solution that costs the same (if not more) a physically huge console and more difficult to cool and more importantly less performing. sorry no I don't believe it.
Cerny mentioned that they could run the GPU way over 2230 MHz, but they had to cap it so that on-chip logic operated properly. The cooling solution of the PS5 was designed after the GPU/CPU frequencies and the power levels were set. It was designed specifically for that power level. And they spent over 2 years preparing the adoption of liquid metal as their thermal interface material.

All of this wasn't an overnight decision.
 
Sure, like everyone else I can't wait to see the DF comparisons... but again, wait for what? To find out what? Whether the PS5 is more powerful than what the hardware shows? I don't think magic can transform what's inside the PS5, guys. We can only wait to see whether the differences are simply those of the hardware (around 20%), whether the variable clock isn't all that and increases the gap, or whether the customisations made by Sony do not correspond exactly to those of RDNA 2. This is what I personally expect.
Well, we'll see.:messenger_sunglasses:
 
In fact you don't do it overnight; it took a year. And no, I don't think any hardware engineer would design a GPU with only 36 CUs clocked at an exaggerated level in 2020; in fact nobody else does, especially if the cost of the project does not demonstrate obvious economic savings. I think that from all the evidence I have brought you it is more logical to think like me than like you, so it is better to change the subject, because you will not convince me that Cerny deliberately designed a console with only 36 CUs while not even using the full set of technologies present in the new AMD architecture. I have been following the console market since its first dawn, and I can say with relative confidence that if MS had been in Sony's position (with 36 CUs clocked that high, a huge console and doubts about the architecture used) we would not even be having these discussions, because everything would be very evident and clear. Stop chasing the possible "different approaches" or "secret sauces".

A wild developer appears! I stopped reading after that gem of a line. You did not check the latest AMD GPU line-up, did you?
 
You should be comparing boost clocks against the fixed console clock, as that is the maximum each can run at.

You'd have a point if the 6800's boost clock was 1825 MHz, but that's not the case here.

XSX has more in common with the 5000 series in that regard.

:messenger_downcast_sweat:

So you're part of the Sony discord, now going with the new FUD about the Series X running at 'non-RDNA2 clocks'?

People got banned for questioning PS5 GPU clocks (with what we've seen of the 6000 series unveiled, that is still a fair question that remains unverified).
 
I did. How many video cards have they presented with fewer than 60 CUs and with a game clock above 2230 MHz? ...Zero.

 
Exactly. If Sony wanted a powerful and less complicated console, it had much simpler and more profitable options than going with a console that is huge to build (and unpleasantly sized to look at), all because such a high clock pushed the designers to use problematic liquid metal (anyone who builds PCs knows the risks of this type of heat conductor) and a huge, very expensive heatsink, and to invent a whole new way to keep that clock as high as possible! All just because they had 36 CUs, a CU count far from today's standard and much, much closer to the previous AMD architecture.
Again... sorry, but I can't believe it at all. I'm very sorry to think differently from you.
 
I did. How many video cards have they presented with fewer than 60 CUs and with a game clock above 2230 MHz? ...Zero.

They have presented the top flagship die - only 1 die PRESENTED so far. All of them physically have 80 CUs; the differentiator is that some have CUs disabled and some have a whole SE disabled.

Only the reference edition has been presented by AMD, at conservative clocks; here is the Strix AIB:



Of course you realise bigger dies are clocked lower than smaller dies, but it won't be by much.

The 40 CU part is rumoured at 2.4 GHz; it's not available for sale yet, but that does not mean it does not exist, and it will be announced soon enough.

I am sure that with a gallium eutectic alloy, AIB options will go higher; peaks of 2.8 GHz have been reported.

The eutectic alloy has an additional benefit: it won't degrade over time like most simpler thermal pastes do. Most consoles get louder after 6-12 months or so; it would be nice if consoles remained quiet in years 2+.

Finally, there is no issue with eutectics that are sealed - zero risk. You would understand this if you gave it more thought.
 
They have presented the top flagship die - only 1 die PRESENTED so far. All of them physically have 80 CUs; the differentiator is that some have CUs disabled and some have a whole SE disabled.

Only the reference edition has been presented by AMD, at conservative clocks; here is the Strix AIB:



Of course you realise bigger dies are clocked lower than smaller dies, but it won't be by much.

The 40 CU part is rumoured at 2.4 GHz; it's not available for sale yet, but that does not mean it does not exist, and it will be announced soon enough.

I am sure that with a gallium eutectic alloy, AIB options will go higher; peaks of 2.8 GHz have been reported.

1) You're talking about boost, not game clock.
2) GPUs in a console are not dGPUs and are cooled very, very, very differently.
3) Substantial clock fluctuations like those that occur in the PC world between game and boost modes would create enormous problems for developers in the console world.
None of the new RDNA 2 GPUs have fewer than 60 CUs; the only outlier is the XSX GPU with 52, clocked basically the same as the RX 6800 in game mode.
 
Exactly. If Sony wanted a powerful and less complicated console, it had much simpler and more profitable options than going with a console that is huge to build (and unpleasantly sized to look at), all because such a high clock pushed the designers to use problematic liquid metal (anyone who builds PCs knows the risks of this type of heat conductor) and a huge, very expensive heatsink, and to invent a whole new way to keep that clock as high as possible! All just because they had 36 CUs, a CU count far from today's standard and much, much closer to the previous AMD architecture.
Again... sorry, but I can't believe it at all. I'm very sorry to think differently from you.
Some of the world's best engineers, spending millions on R&D for years on end, built the PS5.

But you've seen right through them...good job!
 
Some of the world's best engineers, spending millions on R&D for years on end, built the PS5.

But you've seen right through them...good job!
The thing is... the PS5 was rumoured to be a 2019/2020 console, or I don't know what, but everything points to them having had a design review. Everything. How can you not see it?
 
1) You're talking about boost, not game clock.
2) GPUs in a console are not dGPUs and are cooled very, very, very differently.
3) Substantial clock fluctuations like those that occur in the PC world between game and boost modes would create enormous problems for developers in the console world.
None of the new RDNA 2 GPUs have fewer than 60 CUs; the only outlier is the XSX GPU with 52, clocked basically the same as the RX 6800 in game mode.

1. The game clock for the 40 CU part will also be > 2.2 GHz; also, those were the range of game clocks tested on the AIB running GAMES. It's variable.

2. No, they are cooled with fans and thermal conduction; just because you can dedicate 2 fans to a card does not make it different, just more dedicated.

3. Haha, that's funny. I am sure Sony are really struggling. What a load of tripe.

MS clearly went with a solution that did not involve the fast, gated frequency control and the optimisations needed to get high clocks. It's clear; if you understood any of the below points we could discuss, but you clearly don't.




Note they are written in a way to obscure things, so as not to upset some partners (for now). The white paper will show that the fast CUs are designed with per-WGP clock gating, for starters.

Do you know what the other 2 points mean?
 
All just because they had 36 CUs, a CU count far from today's standard and much, much closer to the previous AMD architecture.
Yikes. Are you serious? Do you also look at the FX-8350, which is an octa-core, then look at the Ryzen 7 3700X, which is also an octa-core, and go "eh... 8 corezz same numerzz much mucchh clozerr...."?
 
The AMD event really hurt a lot of you. Now you're pushing this clown's rumours as facts for RDNA 3. He also said both consoles would have RDNA 3 features before, lol. How'd that turn out?

Either way, the guy's tweets read like a fanboy's. And this is nothing more than speculation, so it should be in the spec thread - the one you guys constantly make up Xbox conspiracies in all the time.
 
The thing is... the PS5 was rumoured to be a 2019/2020 console, or I don't know what, but everything points to them having had a design review. Everything. How can you not see it?
Nothing points to that. NOTHING.

From reading your posts, are you telling me that because Sony went with a 36 CU design, they somehow felt caught off guard by the Series X? And that in response they delayed the console, decided to boost the clock frequency to the max, and hoped for the best? Before I go any further I just want to get this straight: is that how you view this?
 
1) You're talking about boost, not game clock.
2) GPUs in a console are not dGPUs and are cooled very, very, very differently.
3) Substantial clock fluctuations like those that occur in the PC world between game and boost modes would create enormous problems for developers in the console world.
None of the new RDNA 2 GPUs have fewer than 60 CUs; the only outlier is the XSX GPU with 52, clocked basically the same as the RX 6800 in game mode.
So are you trying to say there won't be any RDNA 2 GPUs with fewer than 60 CUs? 40, 36 CUs or even lower?
 
1. The game clock for the 40 CU part will also be > 2.2 GHz

2. No, they are cooled with fans and thermal conduction; just because you can dedicate 2 fans to a card does not make it different, just more dedicated.

3. Haha, that's funny. I am sure Sony are really struggling. What a load of tripe.

MS clearly went with a solution that did not involve the fast, gated frequency control and the optimisations needed to get high clocks. It's clear; if you understood any of the below points we could discuss, but you clearly don't.




Note they are written in a way to obscure things, so as not to upset some partners (for now). The white paper will show that the fast CUs are designed with per-WGP clock gating, for starters.

Do you know what the other 2 points mean?

I minored in psychology. Some of you are in the grief and anger stages, and you are in freestyle trolling mode.

I understand 28 October was supposed to be the day of crow-eating, when the world was supposed to see super-high game clocks and special sauces. Too bad the event belonged to AMD and MS's DX12U. Too bad we see the best 7nm RDNA 2 dies running only slightly above 2 GHz at 300 W TGP.

You did not get the beautiful clocks, so now you are resorting to FUD that the Series X clocks are too slow and therefore not RDNA 2. :messenger_weary:

Never mind that it's been pointed out repeatedly; go look at the 6800, at 250 W.

I'm sure you know you are trolling and egging people on at this point. Grow up and stop polluting the forums.

I have a feeling you are from REE and trying to spoil NeoGAF. Go back to your socially, politically 'perfect' place and be 'normal.'
 
What Sony has claimed publicly are three things (but there might be more of course):
I doubt anyone competent enough would say that the mesh shader, which is part of a new render pipeline, is downstream of anything.
If you want to do culling, you'll do it at the first step, with an amplification shader...
e.g. in Turing: [task/mesh shader pipeline diagram]


Culling is just a simple example Cerny gave in Road to PS5 using primitive shaders... if it's a shader, input assembly is already past and you are on the SM/CU, and then I don't see how it can happen earlier than using an amplification shader.
I would add that even AMD's primitive shaders were executed on the shader array.
If you don't have a link saying explicitly otherwise, there is no reason to think the PS5 will execute shaders on something else (why would you? SMs/CUs are made for that...).


edit: good Era post. Even AMD's dumped solution was replacing fixed function with more programmable CUs.
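For the stage ordering being described, here is a conceptual Python pseudocode sketch of a DX12U/Turing-style mesh pipeline (the meshlet layout and the tests are invented stand-ins, not real shader code): culling sits in the amplification/task shader, the first programmable stage, so culled meshlets never reach the mesh shader at all.

```python
# Conceptual sketch only: Python stand-ins for GPU stages, not real shaders.

def intersects_frustum(bounds, camera):
    return bounds["z"] > camera["near"]   # stand-in for a sphere/frustum test

def occluded(bounds, camera):
    return False                          # stand-in for HiZ / occlusion feedback

def amplification_shader(meshlets, camera):
    # First programmable stage: decides per meshlet whether any further
    # work is launched. Culled meshlets never reach the mesh shader.
    return [m for m in meshlets
            if intersects_frustum(m["bounds"], camera)
            and not occluded(m["bounds"], camera)]

def mesh_shader(meshlet):
    return meshlet["triangles"]           # emits primitives for the rasteriser

def mesh_pipeline(meshlets, camera):
    surviving = amplification_shader(meshlets, camera)
    return [tri for m in surviving for tri in mesh_shader(m)]

camera = {"near": 0.1}
meshlets = [{"bounds": {"z": z}, "triangles": [("tri", z)]}
            for z in (-1.0, 0.5, 3.0)]
print(mesh_pipeline(meshlets, camera))    # only the meshlets in front survive
```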
 
I minored in psychology. Some of you are in the grief and anger stages, and you are in freestyle trolling mode.

I understand 28 October was supposed to be the day of crow-eating, when the world was supposed to see super-high game clocks and special sauces. Too bad the event belonged to AMD and MS's DX12U. Too bad we see the best 7nm RDNA 2 dies running only slightly above 2 GHz at 300 W TGP.

You did not get the beautiful clocks, so now you are resorting to FUD that the Series X clocks are too slow and therefore not RDNA 2. :messenger_weary:

Never mind that it's been pointed out repeatedly; go look at the 6800, at 250 W.

I'm sure you know you are trolling and egging people on at this point. Grow up and stop polluting the forums.

I have a feeling you are from REE and trying to spoil NeoGAF. Go back to your socially, politically 'perfect' place and be 'normal.'

Good Lord, this explains a lot.
 
I understand 28 October was supposed to be the day of crow-eating, when the world was supposed to see super-high game clocks and special sauces. Too bad the event belonged to AMD and MS's DX12U.

DX12U is important to the PC market but entirely irrelevant to Sony. It's the reason why MS are so keen on branding everything - they are in the business of creating standardised APIs, so they need to present any new technology in that manner. Customisations made solely for strategic partners like Sony don't warrant a mention, because until that tech is ready for "primetime" it doesn't warrant explanation, and reaching that status requires integration with the standard APIs...

The reality is that people were rabbiting on about RDNA 2 purely because of the 2 at the end, long before any information about the precise hows and whys of its advancements over RDNA 1 was known. That the PS5 APU might contain features not standard in RDNA 2 GPUs is pretty much a certainty, because it's a bespoke design for a large client; the relative significance of these deviations, however, only matters in terms of the intended usage parameters. For example, the PS4 Pro APU was modified to accelerate checkerboard rendering, a relatively uncommon technique in the wider PC space to this day.

In simple terms: Cerny will go to AMD's design team and say, "We would like provision to accelerate or expose this, this and this if possible. Can you help us?" Basically, the client (Sony) will have a set of desires and the merchant (AMD) will try to meet them at a price both can live with. If these strategic "nips and tucks" turn out to be more useful than anticipated, or simply lead to a fruitful path forward, then they will probably show up in future iterations of AMD's mass-market output. There is no conspiracy or game of one-upmanship; it's just work, and business building on that work.
 
Nothing points to that. NOTHING.

From reading your posts, are you telling me that because Sony went with a 36 CU design, they somehow felt caught off guard by the Series X? And that in response they delayed the console, decided to boost the clock frequency to the max, and hoped for the best? Before I go any further I just want to get this straight: is that how you view this?
I think that the design of the PS5 clearly demonstrates a reworking of the initial project. As I have already said, nobody in their right mind would approve a console of that size, with that CU/clock setup, at that price, unless forced to by some obscure reason - especially if, on paper, even before release, it is not at the top of the performance charts compared to the competition.
 
MS are going to be privy to any graphics techniques being developed by AMD; they have to be compatible with future DirectX releases. This requires communication and collaboration - this stuff doesn't get developed in a vacuum. I don't believe Paul has a source, as no one is stepping up to independently corroborate his "intel". At best, Paul is just reading the patents filed by Sony that are applicable to advanced culling techniques and extrapolating for YouTube views. He is well aware of the huge number of clicks that videos like this receive in the build-up to a console launch or the launch of similar products.

There is no doubt that Sony has put some engineering into extracting as much power out of its 36 CUs as possible. This is a given, and there is always more than one way to skin a cat. What Sony has shown so far has been impressive.
 
I think that the design of the PS5 clearly demonstrates a reworking of the initial project. As I have already said, nobody in their right mind would approve a console of that size, with that CU/clock setup, at that price, unless forced to by some obscure reason - especially if, on paper, even before release, it is not at the top of the performance charts compared to the competition.
First and foremost, you have not explained how the PS5 demonstrates a reworking.

Mark Cerny, in the Road to PS5, clearly stated his preference for the highest clocks possible. They also clearly did not want to repeat the PS4 Pro's mistakes as they relate to cooling. All of this stuff was in the Road to PS5 talk, and your suspicions of a secret reworking exist only in your head. Not to mention the final hardware inside the PS5 was decided two years ago.

It's hilarious to think Sony somehow didn't know that Microsoft would be going with the largest GPU they could. The PS4 Pro has 36 compute units and so does the PS5; it was clearly obvious to them that Xbox would have more in their next-gen system. In fact, in the Road to PS5, Cerny talks about testing different GPU configurations (ones with more CUs). They clearly felt that, based on the clocks they wanted to hit and the combination of the rest of the hardware along with their customisations, it would provide the next-gen experience they've been seeking.

It's also entirely possible, based on their hardware design, that they didn't feel it was worth spending the money on a larger GPU - the performance jump may not have been worth it. I guess we'll see soon, but pretending there's some large gap between the two consoles, as if we're talking Splinter Cell on Xbox versus PS2 here, is nonsense. There's a reason why Sony went with a narrower but faster GPU design compared to the Series X, and you have to look at the entire system holistically to suss that out. A console is more than just its compute units.
 