
(*) Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: tweets/article removed)

Note that a developer/programmer is a software engineer. Cerny is a software engineer.

Krazy Ken's background is an electronics degree. Krazy Ken is an electronics engineer.

Yeah, just having 'engineer' in someone's title is meaningless on its own. I wouldn't want a civil engineer weighing in on software.
 

rnlval

Member
Again.

VRS = Variable Rate Shading.
Variable Rate Shading is done at Geometry Engine time, before you draw.

[Image: activision.jpg]


Sampler Feedback (it is software logic, not a hardware feature) has only two uses:
Streaming.
Texture Space Shading.

So I don't think we are talking about texture streaming here (that is a non-issue with faster SSDs), so it is basically used for Texture Space Shading.
Anyway, both texture streaming and Texture Space Shading can be done without Sampler Feedback logic.
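To make the streaming use concrete, here's a minimal CPU-side sketch of the idea: a resolved min-mip feedback map tells you, per screen tile, the most detailed mip the GPU actually sampled last frame, and the streamer diffs that against what is resident. The types and function below are purely illustrative assumptions, not any console's or D3D12's actual API.

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: a resolved MIN_MIP feedback map says, per screen-space
// tile, the lowest (most detailed) mip level the GPU actually sampled last
// frame. A streaming system can diff that against what is resident and queue
// loads. Names and structure here are hypothetical.
struct FeedbackTile { uint8_t requestedMinMip; };

struct MipRequest { uint32_t textureId; uint8_t mipLevel; };

std::vector<MipRequest> BuildStreamingRequests(uint32_t textureId,
                                               const std::vector<FeedbackTile>& feedback,
                                               uint8_t residentMinMip)
{
    // Find the most detailed mip any tile asked for this frame.
    uint8_t neededMinMip = 0xFF;
    for (const FeedbackTile& t : feedback)
        if (t.requestedMinMip < neededMinMip)
            neededMinMip = t.requestedMinMip;

    // Queue every mip between what is resident and what was actually sampled.
    std::vector<MipRequest> requests;
    for (uint8_t mip = residentMinMip; mip-- > neededMinMip; )
        requests.push_back({ textureId, mip });
    return requests;
}
```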

Mesh Shaders are Primitive Shaders on AMD's side.
They are handled by the Geometry Engine.

All these features are related to/handled by the Geometry Engine on AMD hardware.
The Geometry Engine is not a single hardware unit... there are a lot of small units that do this work before the draw... for example, there are 4 Primitive Shader units inside the Geometry Engine.
ShutterMunster's post is about where a shader task sits in the pipeline, not about RDNA's "Geometry Engine" hardware, e.g.

[Image: small_navi-stats.jpg]



[Image: meshlets_pipeline.png]


Notice the optional task shader before the "mesh generation" stage.


Mesh shader replaces multi-stage geometry shaders.
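For reference, this is roughly what that replacement looks like on the API side. A minimal PC D3D12 sketch (a recent SDK and a mesh-shader-capable PSO are assumed; this is not a console SDK example): with a mesh shading pipeline bound, the fixed vertex/tessellation/geometry front end is bypassed and you dispatch meshlet threadgroups directly.

```cpp
#include <d3d12.h>

// Minimal sketch: launching geometry work through the mesh-shader path.
// The PSO is assumed to have been created with MS (and optionally AS) stages.
void DrawWithMeshShaders(ID3D12GraphicsCommandList6* cmdList,
                         ID3D12PipelineState* meshPso,
                         UINT meshletCount)
{
    cmdList->SetPipelineState(meshPso);
    // Analogous to a compute dispatch: here, one threadgroup per meshlet
    // (the one-meshlet-per-group split is just an illustrative choice).
    cmdList->DispatchMesh(meshletCount, 1, 1);
}
```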


Variable Rate Shading example

[Image: maxresdefault.jpg]
 

ethomaz

Banned
ShutterMunster's post is about where a shader task sits in the pipeline, not about RDNA's "Geometry Engine" hardware, e.g.

[Image: small_navi-stats.jpg]



[Image: meshlets_pipeline.png]


Notice the optional task shader before the "mesh generation" stage.


Mesh shader replaces multi-stage geometry shaders.


Variable Rate Shading example

[Image: maxresdefault.jpg]
Vertex Attribute Shader, Vertex Shader, Tess. Control Shader, Tessellation, Tess. Evaluation Shader, Geometry Shader... are all done in the Geometry Engine on AMD hardware.
That is replaced by Mesh Generation and the Mesh Shader, which are done in the Geometry Engine on AMD hardware.

[Image: GCN_Geometry_Processors.svg]


NVIDIA basically put into a chart what is done by the Geometry Engine on AMD hardware.

VRS happens at Geometry Engine time on AMD hardware.

Edit - Yep, the Vertex Attribute Shader and Vertex Shader are done by the Geometry Engine too on AMD hardware.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
I've been saying this all this time. Cerny put about 6 custom chips in the APU to remove all possible bottlenecks for SSD usage. Those are 6 hardware blocks. Meanwhile, MS has only a decompression block that is not even half as fast as the one in the PS5.

It's not just about the raw speed. It's the overall I/O throughput.

This is what some people are missing. Sony made a trade. They decided to use their hardware budget on different things, instead of putting the biggest GPU in the console. This is why the PS5 will probably cost the same as the XSX. Those 6 hardware blocks are actual silicon in the PS5. Not just some software A.I.
 

Gediminas

Banned
This is what some people are missing. Sony made a trade. They decided to use their hardware budget on different things, instead of putting the biggest GPU in the console. This is why the PS5 will probably cost the same as the XSX. Those 6 hardware blocks are actual silicon in the PS5. Not just some software A.I.
I think so too. There is no way it is a $399 console, nor, I think, $449. Look at the controller. We only have about half the info on the PS5, maybe a little more than half.
 

rnlval

Member
Tess. Control Shader, Tessellation, Tess. Evaluation Shader, Geometry Shader... are all done in the Geometry Engine on AMD hardware (I believe the Vertex Attribute Shader and Vertex Shader are done by the Geometry Engine on AMD hardware too, but I need to check).
That is replaced by Mesh Generation and the Mesh Shader, which are done in the Geometry Engine on AMD hardware.

VRS happens at Geometry Engine time on AMD hardware.
That's not correct.

RDNA's Geometry Engine's workload coverage

[Image: 2019-08-02-image.png]


Good luck with RDNA v1 supporting DirectX 12 Ultimate LOL.
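For context, "DirectX 12 Ultimate" boils down to four queryable features (DXR 1.1, VRS Tier 2, mesh shaders, sampler feedback). A minimal PC D3D12 sketch of that runtime check, error handling omitted:

```cpp
#include <d3d12.h>

// Minimal sketch (PC D3D12): the four "DirectX 12 Ultimate" features are
// queryable per-GPU at runtime, which is where an RDNA1 card comes up short.
bool SupportsDX12Ultimate(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 o5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 o6 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 o7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &o5, sizeof(o5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &o6, sizeof(o6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &o7, sizeof(o7));

    return o5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1
        && o6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2
        && o7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1
        && o7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;
}
```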
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Note that a developer/programmer is a software engineer. Cerny is a software engineer.

Krazy Ken's background is an electronics degree. Krazy Ken is an electronics engineer.

Good point! Ken's thinking was so much more physical, if that makes any sense. Ease of coding and why that matters never seemed to click with him. Technically he was correct that once you tamed the Cell processor, the results would beat anything any PC or console of the time could do. But he never considered the cost (both financial and mental) that it would have on devs.

He wanted devs to take pride in fighting the Cell and defeating it. It was totally anti-developer in nature. But to an electronics engineer (which he was), the Cell processor was a thing of beauty. Engineers are still using some of the Cell's ideas today in newer processors.
 

rnlval

Member
Vertex Attribute Shader, Vertex Shader, Tess. Control Shader, Tessellation, Tess. Evaluation Shader, Geometry Shader... are all done in the Geometry Engine on AMD hardware.
That is replaced by Mesh Generation and the Mesh Shader, which are done in the Geometry Engine on AMD hardware.

[Image: GCN_Geometry_Processors.svg]


NVIDIA basically put into a chart what is done by the Geometry Engine on AMD hardware.

VRS happens at Geometry Engine time on AMD hardware.

Edit - Yep, the Vertex Attribute Shader and Vertex Shader are done by the Geometry Engine too on AMD hardware.
Notice the bi-directional link between the Compute Units (shaders, textures) and the Geometry Processors.

XYZ shaders are done on shader units. LOL.
 

ethomaz

Banned
Notice the bi-directional link between the Compute Units (shaders, textures) and the Geometry Processors.

XYZ shaders are done on shader units. LOL.
Pixel Shader, yes... but the tasks NVIDIA described are done in the Geometry Engine, from what I understood.
BTW, the AMD VRS patent shows the traditional pipeline for AMD hardware.

Page 4.
 

rnlval

Member
That is what I said: all these tasks are done by the Geometry Engine in RDNA, like that picture shows.
RDNA's GE feature list places a limit on its workload scope. Geometry-related shaders are done on the CUs (shaders), which communicate with the geometry processor in a bi-directional (ping-pong) fashion.
 

ethomaz

Banned
RDNA's GE feature list places a limit on its workload scope. Geometry-related shaders are done on the CUs (shaders), which communicate with the geometry processor in a bi-directional (ping-pong) fashion.
Ok.

I read the whole document.

The shading rate is determined in the rasterizer stage.
"Also at step 402, the rasterizer stage 314 determines one or more shading rates for the samples of the triangle."

The shading rate can be determined/applied via:
1) "a tile-based rate determination technique, in which the render target is divided into a set of shading rate tiles and each shading rate tile is assigned a particular shading rate"
2) "a triangle-based rate determination technique, in which a particular shading rate is assigned to each primitive "
3) "state-based rate determination technique, in which shading rate state changes propagate through the pipeline, and, at the rasterizer stage"

From what I understood, 2 is done in the Geometry Engine (only the GE works with primitive shaders)... 3 can happen at any stage of the pipeline... and for 1 I'm still trying to determine where it is applied.
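For comparison, the PC-side D3D12 exposure of those same three sources looks roughly like this. A minimal sketch for a Tier 2 VRS device; the patent describes AMD's hardware pipeline, not this API, so treat it only as an analogy:

```cpp
#include <d3d12.h>

// Minimal sketch (PC D3D12, VRS Tier 2) of the three rate sources discussed:
// a state/per-draw rate, a per-primitive rate, and a screen-space tile image,
// which the rasterizer combines when it shades each triangle.
void SetShadingRates(ID3D12GraphicsCommandList5* cmdList,
                     ID3D12Resource* shadingRateImage /* tile-based rates, may be null */)
{
    // 1) State/per-draw base rate (e.g. shade each 2x2 pixel block once).
    //    Passing nullptr keeps the default combiner behaviour.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);

    // 2) Per-primitive rates are exported from the geometry front end via the
    //    SV_ShadingRate semantic, so there is nothing extra to set here.

    // 3) Tile-based rates: bind a screen-space image whose texels each hold
    //    the rate for one shading-rate tile.
    cmdList->RSSetShadingRateImage(shadingRateImage);
}
```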
 
Thanks for the info.

Like I said, they are aiming for two goals:

- Campaign: 4K, Ultra, added effects, 60fps (if they don't reach 4K all the time, it will be pretty close to it).
- Multiplayer: Ultra at 120fps (the resolution here will drop below 4K most of the time, to whatever it needs to hold 120fps).

The drastic drops in resolution in the campaign to hold 60fps are down to optimization still being needed... the actual performance without resolution scaling is near an RTX 2080, which means around 40fps... I can see them reaching 60fps at fixed 4K, or with a small resolution drop in heavy scenes.

That is an example of how just creating a profile to scale for the stronger machine doesn't work; it needs a lot of optimization...
the magic of effortless scaling across machines doesn't exist.

Yep, so it seems the Gears 5 port was just a rough gauge of what the system can do before optimization, so they went ahead and threw everything they had at the system with dynamic resolution to better understand where to aim their optimizations. The Gears 5 4K Ultra settings benchmark pushing RTX 2080-equivalent performance is not a bad result at all.

Ultimately, you were right. I can acknowledge when I'm wrong. :messenger_fistbump:
 

rnlval

Member
Pixel Shader, yes... but the tasks NVIDIA described are done in the Geometry Engine, from what I understood.
BTW, the AMD VRS patent shows the traditional pipeline for AMD hardware.

Page 4.
NVIDIA's PolyMorph engine covers a similar workload scope to AMD's Geometry Processor solution, but NVIDIA has more Geometry Processor units since they scale with the SM count.

AMD scales CUs and forgets to properly scale the Geometry Processors, hence creating a geometry-related bottleneck. With RDNA v1, AMD managed to get geometry culling to work. LOL

Let's see if RDNA 2 fixes this bottleneck.


[Image: fermipipeline.png]


The above PolyMorph block diagram shows the limits of its workload scope.

NVIDIA has a Geometry Processor (PolyMorph) at the CU level (aka SM). AMD places its Geometry Processor at the GPC-equivalent level, aka the Shader Engine container.

NVIDIA's CUDA cores (shaders) ping-pong with PolyMorph units in greater numbers, hence NVIDIA has superior scaling.
 

ethomaz

Banned
NVIDIA's PolyMorph engine covers a similar workload scope to AMD's Geometry Processor solution, but NVIDIA has more Geometry Processor units since they scale with the SM count.

AMD scales CUs and forgets to properly scale the Geometry Processors, hence creating a geometry-related bottleneck. With RDNA v1, AMD managed to get geometry culling to work. LOL

Let's see if RDNA 2 fixes this bottleneck.


[Image: fermipipeline.png]


The above PolyMorph block diagram shows the limits of its workload scope.

NVIDIA has a Geometry Processor (PolyMorph) at the CU level (aka SM). AMD places its Geometry Processor at the GPC-equivalent level, aka the Shader Engine container.
Yep... NVIDIA's solution seems more efficient.
I don't think AMD will change that with RDNA 2, except maybe making the GE bigger/stronger... it will be one big engine for the whole GPU.
 

rnlval

Member
Yep... NVIDIA's solution seems more efficient.
I don't think AMD will change that with RDNA 2, except maybe making the GE bigger/stronger... it will be one big engine for the whole GPU.
Lisa Su promises disruptive, Ryzen-style competition with RDNA 2. The NVIDIA-leaning Gears 5 shows promising results for RDNA 2 (via XSX), and it's good MS didn't demo Forza Motorsport 7 on XSX, since it's friendly to AMD GPUs, like Battlefield V.
 

pawel86ck

Banned

It looks like the PS5, thanks to its higher clocks (36 CUs at 2.2 GHz), should get around 7% higher performance than the raw TF number suggests, although 12 TF (52 CUs at 1.8 GHz) should still beat the PS5's GPU a little (10-13%). Probably no one besides DF will notice any difference between PS5 and XSX, and that's a good thing. I don't know about you guys, but right now I would like to see some more next-gen games in action, because hardware is not the reason why people will buy next-gen consoles.
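For reference, the raw peak numbers behind that comparison, back-of-envelope only (peak FP32 = CUs × 64 lanes × 2 FLOPs per clock × frequency, using the rounded clocks quoted above):

```cpp
#include <cstdio>

int main()
{
    // Peak FP32 TFLOPS = CUs * 64 lanes * 2 FLOPs (FMA) per clock * GHz / 1000.
    auto tflops = [](int cus, double ghz) { return cus * 64 * 2 * ghz / 1000.0; };

    double ps5 = tflops(36, 2.2);   // ~10.1 TF with the rounded clock above
    double xsx = tflops(52, 1.8);   // ~12.0 TF with the rounded clock above
    std::printf("PS5 ~%.1f TF, XSX ~%.1f TF, raw gap ~%.0f%%\n",
                ps5, xsx, (xsx / ps5 - 1.0) * 100.0);
    return 0;
}
```

The raw gap works out to roughly 18%, so a ~7% clock-related benefit on the smaller GPU would net out at roughly the 10-13% mentioned.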
 

It looks like higher clocks on Navi GPUs will result in 7% higher performance, thanks to the higher clock (36 CUs at 2.2 GHz), although 12 TF (52 CUs at 1.8 GHz) should still beat the PS5 GPU a little (10-13%). Probably no one besides DF will notice any difference between PS5 and XSX, and that's a good thing. I don't know about you guys, but right now I would like to see some more next-gen games in action, because hardware is not the reason why people will buy consoles.

The more I read takes on the next-gen consoles, the closer they seem in terms of performance. That's great!

I agree with you though. I'm getting tired of talking specs already. Give me games so we can see the hardware in action!
 

Renozokii

Member
Look at how easily my words get "fucking" misinterpreted. IDK where you got me saying that Series X will be hard to develop for, you pull that from the thoughts in your head. You need to chill man and stop nitpicking comments.

I said the numbers never really mattered, at the end both systems had great looking exclusives. The only thing you took from my comment is that Series X is going to be hard to develop for like the PS3... SMH. Chill bro, chill.

I mean I can break down the comment I replied to word for word if that appeases you.

>Nah dawg, who cares what this guys says, clearly he is bias some how... 12 is higher than 10!

So what exactly is the implication here in this obviously sarcastic comment? Is it that you genuinely believe Sony has magic sauce they pour on every PS5 to make it run better than a decently more powerful console? Is it that teraflops are completely irrelevant when discussing how well a console can run games? That the specs of said consoles are irrelevant as well? That the SSD is magically going to bring a whole new evolution to gaming, despite fast SSDs being available on PC for years?

>Yall need to stop falling in love with the numbers. Having the "most powerful console" didn't work in the PS3 era, it didn't work in this one either.

This is what I responded to because, frankly, it's fucking stupid. The PS3 was more powerful in sheer hardware, but by all accounts was a nightmare to develop for. The Series X doesn't have that issue. It, like the PS5, is a glorified PC.

>In the end the numbers do no matter.

How many millions of PS4 Pros did Sony sell? If numbers don't matter, why did people rush out to buy a console that could barely do its promised 4K? The PS4 and Xbox One had a much smaller difference, but it was all this forum could talk about... for years.

>It didn't matter in the PS3 days. Game after game came out and the PS3 fell behind. The exclusives were beautiful, but so were the 360's! The numbers don't mean squat, it's always been ease of getting that power and.... shockingly, the games!

Absolute nonsense. The PS3's Cell caused so many multiplatform games to run significantly worse than on the 360. It's easy to say numbers don't mean anything until you watch people playing the same game on a similarly priced console at double the framerate.
 
I mean I can break down the comment I replied to word for word if that appeases you.

>Nah dawg, who cares what this guys says, clearly he is bias some how... 12 is higher than 10!

So what exactly is the implication here in this obviously sarcastic comment? Is it that you genuinely believe Sony has magic sauce they pour on every PS5 to make it run better than a decently more powerful console? Is it that teraflops are completely irrelevant when discussing how well a console can run games? That the specs of said consoles are irrelevant as well? That the SSD is magically going to bring a whole new evolution to gaming, despite fast SSDs being available on PC for years?

>Yall need to stop falling in love with the numbers. Having the "most powerful console" didn't work in the PS3 era, it didn't work in this one either.

This is what I responded to because, frankly, it's fucking stupid. The PS3 was more powerful in sheer hardware, but by all accounts was a nightmare to develop for. The Series X doesn't have that issue. It, like the PS5, is a glorified PC.

>In the end the numbers do no matter.

How many millions of PS4 Pros did Sony sell? If numbers don't matter, why did people rush out to buy a console that could barely do its promised 4K? The PS4 and Xbox One had a much smaller difference, but it was all this forum could talk about... for years.

>It didn't matter in the PS3 days. Game after game came out and the PS3 fell behind. The exclusives were beautiful, but so were the 360's! The numbers don't mean squat, it's always been ease of getting that power and.... shockingly, the games!

Absolute nonsense. The PS3's Cell caused so many multiplatform games to run significantly worse than on the 360. It's easy to say numbers don't mean anything until you watch people playing the same game on a similarly priced console at double the framerate.
TLDR you're too invested bro. I'll read this later when I have time, give me a day or two. Like I said, you need to chill.
 

Don't know if anybody saw this yet... Ali Salehi just retracted his statement about PS5 being easier to develop for.


Windows Central is commenting on this article.


"Yesterday, GamesRadar+ said that Salehi had retracted his statements as stated by the outlet that interviewed him, Vigiato. GamesRadar+ wrote the following."
 
But it does mean that 6GB of that 10GB is blocked. Because you're accessing the other half of these chips at that time.
Only the smaller 4x1GB chips are idle.

Potentially more than just the 4 x 1GB chips!

You're assuming that any CPU / IO / audio / GPU access to the slower 6GB needs the entire 192-bit bus. Chances are that at any one time only some of that is needed.

And because CPU access is likely to be prioritised (as taught by Sony with the PS4 at GDC), you can't wait until you have data across the entire 192 bits of "slower" memory ready to be accessed simultaneously before you trigger an access on that 6GB. And remember that the OS will be tapping into that 6GB every frame too....

To tie up the entire 192 bits of bus linked to the "slow" memory, and then also the entire 320-bit bus (as Lady Gaia expects), for even a partial access of the "slow" memory would be a fantastically dumb approach to controlling memory access.

It would create a potentially several-hundred-percent penalty for accessing "slower" memory. It would be a truly abhorrent approach to scheduling memory access!

If the XSX has multiple channels per memory controller (the X1X had two for each of its six), then the potential inefficiency only multiplies.

The only logical design is one where channels that are free to schedule an access can do so, no matter whether some other part of the system is using memory channels they don't even need in order to access the "slow" 6GB.

Let me ask you a question, if I may! If you were designing a system to control memory access across a system like the XSX's, how would you do it? I *think* I know what I'd specify....
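Something like the toy model below is the kind of design being argued for here: each channel keeps its own queue and arbitration, so a request that only needs the channels wired to the 2GB chips never stalls the rest. This is purely an illustrative assumption about how such a controller could work, not a description of the XSX's actual memory controller.

```cpp
#include <array>
#include <cstdint>
#include <deque>

// Toy model only: per-channel queues, so idle channels keep issuing no matter
// what the others are doing. The interleave stride and channel count are
// arbitrary choices for illustration.
struct Request { uint64_t address; bool highPriority; };

constexpr int kChannels = 10;   // e.g. one 32-bit channel per GDDR6 chip

struct ChannelScheduler {
    std::deque<Request> queue;

    void Submit(const Request& r) {
        // CPU/audio/IO traffic jumps the queue (prioritised access).
        if (r.highPriority) queue.push_front(r);
        else                queue.push_back(r);
    }
    bool Tick(Request& out) {   // issue at most one burst per tick
        if (queue.empty()) return false;
        out = queue.front();
        queue.pop_front();
        return true;
    }
};

// The controller fans requests out to whichever channel owns the address;
// a request to the "slow" region only occupies the channels it maps to.
struct Controller {
    std::array<ChannelScheduler, kChannels> channels;
    void Submit(const Request& r) { channels[(r.address / 256) % kChannels].Submit(r); }
};
```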
 

psorcerer

Banned
Potentially more than just the 4 x 1GB chips!

You're assuming that any CPU / IO / audio / GPU access to the slower 6GB needs the entire 192-bit bus. Chances are that at any one time only some of that is needed.

And because CPU access is likely to be prioritised (as taught by Sony with the PS4 at GDC), you can't wait until you have data across the entire 192 bits of "slower" memory ready to be accessed simultaneously before you trigger an access on that 6GB. And remember that the OS will be tapping into that 6GB every frame too....

To tie up the entire 192 bits of bus linked to the "slow" memory, and then also the entire 320-bit bus (as Lady Gaia expects), for even a partial access of the "slow" memory would be a fantastically dumb approach to controlling memory access.

It would create a potentially several-hundred-percent penalty for accessing "slower" memory. It would be a truly abhorrent approach to scheduling memory access!

If the XSX has multiple channels per memory controller (the X1X had two for each of its six), then the potential inefficiency only multiplies.

The only logical design is one where channels that are free to schedule an access can do so, no matter whether some other part of the system is using memory channels they don't even need in order to access the "slow" 6GB.

Let me ask you a question, if I may! If you were designing a system to control memory access across a system like the XSX's, how would you do it? I *think* I know what I'd specify....

I don't understand the question.
To service a request from a 2GB chip you occupy the whole bus of that chip. That's what "servicing a request" means.
You cannot do more than that chip was designed to service at any given time.
Therefore when all the "bigger" chips are servicing a request you cannot squeeze in any others.
And you cannot even use the remaining 4GB efficiently, because of striding into the bigger chips.
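For the numbers behind this argument: the publicly described XSX layout is ten chips (6×2GB + 4×1GB) on a 320-bit bus, with 10GB striped across all ten chips at 560 GB/s peak and the remaining 6GB living only in the upper half of the six 2GB chips at 336 GB/s peak. The little model below is a rough illustration of that split; the real address layout and interleave granularity aren't public.

```cpp
#include <cstdint>
#include <cstdio>

// Rough model of the publicly described split: which pool an address falls
// into, how many chips back it, and the quoted peak bandwidth of that pool.
struct Region { int chips; int peakGBs; };

Region Classify(uint64_t addrBytes)
{
    constexpr uint64_t kFastPool = 10ull << 30;    // 10 GiB "GPU optimal" pool
    if (addrBytes < kFastPool) return { 10, 560 }; // striped over all ten chips
    return { 6, 336 };                             // striped over the six 2GB chips
}

int main()
{
    Region a = Classify(1ull << 30);    // somewhere in the fast 10GB
    Region b = Classify(12ull << 30);   // somewhere in the slow 6GB
    std::printf("fast: %d chips @ %d GB/s, slow: %d chips @ %d GB/s\n",
                a.chips, a.peakGBs, b.chips, b.peakGBs);
    return 0;
}
```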
 
Why make it complicated?

Reminder
RX 5600 XT (NAVI 10) already has six GDDR6 chips with 336 GB/s memory bandwidth.
RX 5700 (NAVI 10) already has eight GDDR6 chips with 448 GB/s memory bandwidth.

XSX's six GDDR6 chips at 336 GB/s already mirror the RX 5600 XT's 336 GB/s across six GDDR6 chips.

336 / 560 = 60%, hence the potential memory bandwidth loss is 40%.

GDDR6 splits each chip into dual independent 16-bit channels, so two accesses (read/read, read/write, write/write or write/read) can be in flight per chip.

We don't know if the memory controller arbiters have semi-custom changes that reserve the odd or even 16-bit links on the six 2GB chips to give the GPU-optimal memory range higher priority.

I'm trying to explain why a complete shutdown of the "fast" 10GB for any access to the "slow" 6GB would be an illogical and unrealistic (IMO) idea. I have a problem with that idea; I don't think MS would engineer a system that did that.

Let me flip the tables, and ask you a question:

Let's say 1 memory controller is accessing data from the "slow" 6GB. Let's also say the other 4 memory controllers have accesses they are able to fulfil in the "fast" 10GB.

Do you think they'll sit there, doing nothing, or do you think they'll fulfil those requests?
 

Renozokii

Member
TLDR you're too invested bro. I'll read this later when I have time, give me a day or two. Like I said, you need to chill.


>Look at how easily my words get "fucking" misinterpreted. IDK where you got me saying that Series X will be hard to develop for, you pull that from the thoughts in your head. You need to chill man and stop nitpicking comments.

So first I nitpicked and misinterpreted what you said, then I wrote farrrr too much for you to read? Come on, which is it, bud? Am I nitpicking your comment or am I too invested?

>I said the numbers never really mattered, at the end both systems had great looking exclusives. The only thing you took from my comment is that Series X is going to be hard to develop for like the PS3... SMH. Chill bro, chill.

So I fixed what you deemed to be an issue, and responded to your entire comment.

I would also like to add that in that comment of mine, all my words put together add up to, like... one decent-sized paragraph? Maybe two really small ones? In other words, an amount of reading that shouldn't take you longer than, what, 2 minutes?
 
I don't understand the question.
To service a request from a 2GB chip you occupy the whole bus of that chip. That's what "servicing a request" means.
You cannot do more than that chip was designed to service at any given time.
Therefore when all the "bigger" chips are servicing a request you cannot squeeze in any others.
And you cannot even use the remaining 4GB efficiently, because of striding into the bigger chips.

So now assume that not all the 2GB chips are servicing a request from the "slow" 6 GB area (or assume that some data from the "small" 4GB can be accessed).

Assume it's only one 2GB chip. Or two. Or three.

Now assume that striding allows some accesses to occur, or that the queue depth is deep enough that some non-prioritised accesses can be re-ordered to allow access to some chips in the "fast" 10GB.

You don't think the system has planned for that? Do you think that any access to the slow 6GB kills all possible accesses to the fast 10GB?

Because these are some of the crazy, whack ass claims I'm seeing (not saying you've made them!).

If any access to the "slow" 6GB disables any possible access to the entire "optimal" 10GB .... I'll eat my own asshole. And I'm pretty sure that'll taste as bad as many of the opinions on NeoGaf.

I think accesses will be based on what's prioritised and what accesses can fit on the channels that aren't being used. There isn't a hard and fast boundary between accessing "slow" and "fast" memory where - across the entire bus width - it's one or the other.

That would be fucked up! [edit: IMO, of course]
 

Don't know if anybody saw this yet... Ali Salehi just retracted his statement about PS5 being easier to develop for.


What we are seeing right now is the main issue with the “only MS people care about specs” claims posted earlier.

Like I said before, everyone cares about specs. Some just more so, or MUCH more so, than others. If specs didn’t matter to both sides then this thread wouldn’t have over 2k responses, and we wouldn’t be living in this world of hyperspin.

The DualSense was revealed yesterday and it looks incredible, imo. No spin required. I hope these kinds of headlines become the norm, instead of trying to find a tweet from a guy whose cousin saw a PS5 dev kit once to use as some token of the PS5's secret power.
 

Jigsaah

Member
What we are seeing right now is the main issue with the “only MS people care about specs” claims posted earlier.

Like I said before, everyone cares about specs. Some just more so, or MUCH more so, than others. If specs didn’t matter to both sides then this thread wouldn’t have over 2k responses, and we wouldn’t be living in this world of hyperspin.

The DualSense was revealed yesterday and it looks incredible, imo. No spin required. I hope these kinds of headlines become the norm, instead of trying to find a tweet from a guy whose cousin saw a PS5 dev kit once to use as some token of the PS5's secret power.
I really like the Dodge Viper look the PS5 controller has. Blue white n black. Just sexy. The wings seem to be a bit fatter as well so it'll be more comfy in my hands.

I care about specs, but far less than a lot of the people who do all the calculations and minute comparisons. I'm glad the XSX will not be lacking in potential in any manner. I don't even think the PS5 will either when it comes to its exclusives.

I still think 9.2 or 10-point-whatever TFs is enough to do 4K 60 on most games. I mean, isn't that what we all want at a bare minimum? I don't get why it matters so much.
 

xHunter

Member

Don't know if anybody saw this yet... Ali Salehi just retracted his statement about PS5 being easier to develop for.

What do you mean by "just"? The article is 2 days old and doesn't add anything we don't already know. It was already posted on page 13.
Jesus Christ, people are so obsessed with this guy; it's like they got personally insulted by his comments.
 

Jigsaah

Member
What do you mean by "just"? The article is 2 days old and doesn't add anything we don't already know. It was already posted on page 13.
Jesus Christ, people are so obsessed with this guy; it's like they got personally insulted by his comments.
My bad bro. That was my first post in the thread. I work from home so I haven't been paying as much attention.
 

psorcerer

Banned
So now assume that not all the 2GB chips are servicing a request from the "slow" 6 GB area (or assume that some data from the "small" 4GB can be accessed).

Assume it's only one 2GB chip. Or two. Or three.

Now assume that striding allows some accesses to occur, or that the queue depth is deep enough that some non-prioritised accesses can be re-ordered to allow access to some chips in the "fast" 10GB.

You don't think the system has planned for that? Do you think that any access to the slow 6GB kills all possible accesses to the fast 10GB?

Because these are some of the crazy, whack ass claims I'm seeing (not saying you've made them!).

If any access to the "slow" 6GB disables any possible access to the entire "optimal" 10GB .... I'll eat my own asshole. And I'm pretty sure that'll taste as bad as many of the opinions on NeoGaf.

I think accesses will be based on what's prioritised and what accesses can fit on the channels that aren't being used. There isn't a hard and fast boundary between accessing "slow" and "fast" memory where - across the entire bus width - it's one or the other.

That would be fucked up! [edit: IMO, of course]

While servicing one request the chip cannot service another.
All that you described happens between requests but not within a request.
So, effectively, for the duration of one request (to the 6GB pool), the 10GB pool is "disabled".
It can be accessed on the next request, but not on this one.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
I really like the Dodge Viper look the PS5 controller has. Blue white n black. Just sexy. The wings seem to be a bit fatter as well so it'll be more comfy in my hands.

I care about specs, but far less than a lot of the people who do all the calculations and minute comparisons. I'm glad the XSX will not be lacking in potential in any manner. I don't even think the PS5 will either when it comes to its exclusives.

I still think 9.2 or 10-point-whatever TFs is enough to do 4K 60 on most games. I mean, isn't that what we all want at a bare minimum? I don't get why it matters so much.

It's 10.3 TFs.
 

B_Boss

Member
Which one exactly?

Yeah, o'dium (I don't know (by memory) how to tag other users lol) made a similar claim, and I asked if he could mention a few of Salehi's points he felt were inaccurate, essentially. The more we (who are genuinely interested, of course) know, the better.
 

plip.plop

Member
The more I read about both of these systems, the more I believe that they are going to cost the same. Sony and Microsoft both had a price tag in mind, and both took different paths through what they felt was important for delivering a next-gen experience. If Sony believes they have the premium system, then there is no reason they should or would sell it for less than the XSX. Both will come in at the same price.
 

Jigsaah

Member
Yeah, o'dium (I don't know (by memory) how to tag other users lol) made a similar claim, and I asked if he could mention a few of Salehi's points he felt were inaccurate, essentially. The more we (who are genuinely interested, of course) know, the better.
Your use of parentheses offends me deeply.

I kid.

Just so you know, just use the "@" symbol and the beginning of the user's name and it should pop up.

@B_Boss
 

Jigsaah

Member
Seems like we are back to days ago...
I'm not defending this any more. Just like you apparently didn't read my apology for re-mentioning this, I didn't go through the multitude of pages this thread has produced.

Ain't nobody got time for that. Got it? We on the same page now?
 

ethomaz

Banned
I'm trying to explain why a complete shutdown of the "fast" 10GB for any access to the "slow" 6GB would be an illogical and unrealistic (IMO) idea. I have a problem with that idea; I don't think MS would engineer a system that did that.

Let me flip the tables, and ask you a question:

Let's say 1 memory controller is accessing data from the "slow" 6GB. Let's also say the other 4 memory controllers have accesses they are able to fulfil in the "fast" 10GB.

Do you think they'll sit there, doing nothing, or do you think they'll fulfil those requests?
Do you really know how a memory controller works?

The full bus services one request at a time... to have two or more simultaneous accesses you need 2 or more memory controllers and buses.

So if the CPU is doing a request in the slow memory part, the GPU needs to wait until that request ends... and the same for the GPU: if it is doing a request on the fast part, the CPU needs to wait its turn.

If it helps, it can do parallel reads and writes to different parts of the memory within the same request.
 

ethomaz

Banned
I'm not defending this any more. Just like you apparently didn't read my apology for re-mentioning this, I didn't go through the multitude of pages this thread has produced.

Ain't nobody got time for that. Got it? We on the same page now?
It is fine... I was not criticizing you with my comment.
It was more about Windows Central posting something late... April 7.

Sorry if it looked like I was saying that to you... I apologize for that.
 

B_Boss

Member
I've been saying this all this time. Cerny put about 6 custom chips in the APU to remove all possible bottlenecks for SSD usage. Those are 6 hardware blocks. Meanwhile, MS has only a decompression block that is not even half as fast as the one in the PS5.

It's not just about the raw speed. It's the overall I/O throughput.

Now to be fair, it seems as though MS may have an ace up their sleeve with BCPack? There’s an article that covers a series of tweets between Richard Geldreich and James Stanard (Graphics Optimization R&D and Engine Architect at Microsoft)


Another interesting Geldreich tweet:



Of course I chose Geldreich's tweets because, as far as I'm aware, he's the only one I've come across who has spoken in great detail about BCPack. It'll be fascinating to understand the differences between the two as time moves forward and more is revealed.
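Putting the quoted I/O figures side by side: effective streaming rate is just the raw SSD rate times the average compression ratio. The raw rates below are the officially stated ones; the ratios are only the vendors' "typical" claims (roughly 2x for BCPack+zlib, roughly 1.5-1.6x for Kraken), so treat the outputs as ballpark.

```cpp
#include <cstdio>

// Effective streaming rate = raw SSD rate * average compression ratio.
// Raw rates are the officially quoted ones; the ratios are only the vendors'
// "typical" claims, not measurements.
int main()
{
    const double xsxRaw = 2.4, xsxRatio = 2.0;   // -> ~4.8 GB/s quoted
    const double ps5Raw = 5.5, ps5Ratio = 1.55;  // -> ~8-9 GB/s quoted
    std::printf("XSX ~%.1f GB/s effective, PS5 ~%.1f GB/s effective\n",
                xsxRaw * xsxRatio, ps5Raw * ps5Ratio);
    return 0;
}
```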
 
While servicing one request the chip cannot service another.

I wasn't talking about chips - the point I was making was about controllers and/or channels. And I made that abundantly clear. Repeatedly.

But given the awfulness of your newest dodge, I must point out that a GDDR6 chip can actually service two requests at once, given the right controller.

All that you described happens between requests but not within a request.

With multiple channels (between 5 and 20 for XSX depending on controller logic) there are independent requests.

I cannot fucking believe I have to explain this.

So, effectively, for the duration of one request (to the 6GB pool), the 10GB pool is "disabled".
It can be accessed on the next request, but not on this one.

No, this is not how computers and memory channels work.

No, this is wrong. Where did you learn this? In what tests using integrated graphics, or ganged / unganged memory did any evidence convince you this was the case ...?
 