
Xbox Velocity Architecture - 100 GB is instantly accessible by the developer through a custom hardware decompression block

TBiddy

Member
So Rich, who interviewed Cerny, is no longer credible? OK. What was your interpretation?
Next time I'll find a way to get the devkits to answer your questions.

Of course he's a credible source. Unlike most people on this forum (including me), he knows his stuff, has access to devs and talked directly with Cerny.

Claiming that he's "not credible" is just a desperate attempt to shift focus away from the unpleasant truth that he was reporting.
 
Last edited:

ToadMan

Member
The power budget is per workload, and in order for the GPU to hit its max clock it has to draw power from the CPU. We have concrete proof of this in the dev kits and it's what some devs have been doing.

Why are devs throttling the CPU to consistently get 2.23GHz on the GPU?

It’s not clear what your meaning is from your text because you’re using terms loosely.

For example you say “why are devs throttling the cpu”. What do you mean by the term “throttling” in this context?
 

rntongo

Banned
It’s not clear what your meaning is from your text because you’re using terms loosely.

For example you say “why are devs throttling the cpu”. What do you mean by the term “throttling” in this context?

Watch between 4:40-4:50. Devs are lowering the CPU clocks in order to consistently get 2.23GHz. This will be automated on the retail units (Rich explains at 5:40-6:00).

PlayStation 5 New Details From Mark Cerny: Boost Mode, Tempest Engine, Back Compat + More
 

Ar¢tos

Member
The power budget is per workload, and in order for the GPU to hit its max clock it has to draw power from the CPU. We have concrete proof of this in the dev kits and it's what some devs have been doing.

Why are devs throttling the CPU to consistently get 2.23GHz on the GPU?
Dev kits don't have Smartshift yet, and there is context missing.
Are devs throttling the CPU to provide more Power to GPU because the dev kits can't keep both at max?
Are devs throttling the CPU to provide more Power to GPU to simulate scenarios where the CPU doesn't need full power?
Since the dev kit has fixed clocks, devs are most likely still exploring the hardware. Only when the final dev kits are out will we know for sure (if anyone leaks the info).
 

rntongo

Banned
Dev kits don't have Smartshift yet, and there is context missing.
Are devs throttling the CPU to provide more Power to GPU because the dev kits can't keep both at max?
Are devs throttling the CPU to provide more Power to GPU to simulate scenarios where the CPU doesn't need full power?
Since the dev kit has fixed clocks, devs are most likely still exploring the hardware. Only when the final dev kits are out will we know for sure (if anyone leaks the info).

How do you know the devkits don’t have smartshift? And in any case, the only difference mentioned was that retail machines have multiple power profiles per workload.
 

rntongo

Banned
Cerny said it.

I’d like to see the source, but it would explain why, in the video, Rich explains that devs have to throttle the CPU, while in the retail units the algorithm will automatically kick in to adjust the clocks per workload. It’s a seesaw between the CPU/GPU clocks using AMD SmartShift.
 

Dodkrake

Banned
For some of the hardcore fans it is about making the narrative the norm, changing the topic, and/or just getting the other side's fans pissed off.

Of course people will cycle through talking points without caring much about being right or wrong; once they have pushed one far enough they will just move to the next, and then come back to it at some other point as if nothing happened / they got no pushback. Hopefully people landing on these posts and seeing this said over and over will go from "what the heck are they saying?" to mostly believing there must be something behind it after all... the beauty of repetition :/...

Yeah. I've been in forums for a decade and a half and this is always the strategy. But since I'm not a parrot I won't be repeating myself over and over. Let them normalize lies and spread misinformation, we'll see soon enough who's right and who's wrong
 

Ar¢tos

Member
I’d like to see the source, but it would explain why, in the video, Rich explains that devs have to throttle the CPU, while in the retail units the algorithm will automatically kick in to adjust the clocks per workload. It’s a seesaw between the CPU/GPU clocks using AMD SmartShift.
He said it either in the roadmap video or in one of the interviews, can't remember which, but I think it was in the video.
 

rntongo

Banned

He said it either in the roadmap video or in one of the interviews, can't remember which, but I think it was in the video.

But yeah, the fundamental points in the video still stand: the PS5 uses AMD SmartShift to constantly adjust the clocks like a seesaw. Max clocks for each processor depend on the workload.
 

jimbojim

Banned
Do you have links to all the concrete and I suspect voluminous proofs?
That is not concrete proof from devkit or devs themselves. This is your interpretation of Leadbetter’s interpretation of what he said he read and heard on top of that.

Yeah.

So Rich, who interviewed Cerny, is no longer credible? OK. What was your interpretation?
Next time I'll find a way to get the devkits to answer your questions.

Rich just asked the questions. But you ignored the answers given by Cerny.

"There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower."

Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.
 

jimbojim

Banned
I replied to these statements earlier, why repeat them?

Because you're trying to spread FUD. The same stuff has been explained to you on ERA. For your own sanity, don't do this. Now I'm really beginning to wonder whether you're the one from the XboxERA Discord doing this deliberately, or whether you're just trying to present yourself as a "Sony fan" like you said you are.
 
Last edited:

rntongo

Banned
Because you're trying to spread FUD. The same stuff has been explained to you on ERA. For your own sanity, don't do this. Now I'm really beginning to wonder whether you're the one from the XboxERA Discord doing this deliberately, or whether you're just trying to present yourself as a "Sony fan" like you said you are.

Ad hominem.
 

Redlight

Member
Refrain? I did not state anything beyond what he stated in the presentation or the follow-up DF article/interview with Leadbetter, but sure, let's play semantics on majority, vast majority, and most of the time. I would ask you to show me where I said that it was an actual quote, word for word, and where in the Road to PS5 talk or the follow-up DF interview anything is said about clockspeed to indicate otherwise... sue me for paraphrasing, I guess :).

You posted this, alone, as a quote without any suggestion that you were paraphrasing...
“full clock speed for the vast majority of the time”

I simply asked for a link because I had only read 'most of the time'. The difference between 'most' and 'vast majority' could be significant, so an actual quote from Cerny saying 'vast majority' would be enlightening.

You obviously thought so. :)

The Eurogamer quote is...

"So, when I made the statement that the GPU will spend most of its time at or near its top frequency, that is with 'race to idle' taken out of the equation - we were looking at PlayStation 5 games in situations where the whole frame was being used productively. The same is true for the CPU, based on examination of situations where it has high utilisation throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency."

Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.
You could've just said you didn't have a link when I originally asked. Would've been easier all round. :)
 
I watched the entire video. I want to know what you're referring to. I mentioned which section of the paper to read.

How is the feedback different? It has to be rendered separately? Look at the DX12 spec on SF. In fact, watch your own video at 11:13. Then start reading the section in the paper I linked.

This isn't a terminology mixup. I want to know what hardware you think is now there for sampler feedback. You said something that is not hardware related and I gave you an example of it already being done.

Edit: I decided to make life easier for you and just post the section of the paper here:

In the video the key portion is 5:17 through 6:40.
 

ToadMan

Member
But yeah, the fundamental points in the video still stand: the PS5 uses AMD SmartShift to constantly adjust the clocks like a seesaw. Max clocks for each processor depend on the workload.

Those aren’t the “fundamental points”.

It’s difficult for me to understand how you can remain so misinformed when several people here have pointed out your error.

SmartShift is not shifting clock speeds, it is shifting power. I don’t know how to state it any more clearly than that.

Until you can grasp that simple point, you will be unable to discuss this topic rationally. I’m afraid quoting YouTube videos is not a replacement for using your own brain and gathering knowledge.

Here’s a simple test for you - go to this page and search for “clock” or “frequency” or whatever you like that implies SmartShift is juggling clock rates

https://www.amd.com/en/technologies/smartshift

AMD have Cool'n'Quiet and... I can't remember the name, power...something for frequency fiddling. SmartShift is all about power.
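
To make the distinction concrete, here is a toy Python sketch (the numbers and the power/frequency curve are invented, not the PS5's actual behaviour): what gets shifted is watts, and each unit's clock then follows from the power it was allocated.

```python
# Toy model of power shifting (not AMD's algorithm): a fixed SoC power
# budget is split between CPU and GPU, and each part's clock follows
# from the power it receives via an assumed cubic power/frequency curve
# (dynamic power roughly scales with f * V^2, and V rises with f).

TOTAL_BUDGET_W = 200.0          # hypothetical SoC power budget

def clock_from_power(power_w, max_clock_ghz, max_power_w):
    """Invert the assumed P ~ f^3 relationship, capped at the max clock."""
    f = max_clock_ghz * (power_w / max_power_w) ** (1.0 / 3.0)
    return min(f, max_clock_ghz)

def shift_power(cpu_demand_w, gpu_demand_w):
    """Give each unit what it asks for if possible; otherwise trim the
    CPU first and hand the freed watts to the GPU (the 'seesaw')."""
    total = cpu_demand_w + gpu_demand_w
    if total <= TOTAL_BUDGET_W:
        return cpu_demand_w, gpu_demand_w
    overshoot = total - TOTAL_BUDGET_W
    cpu_w = max(cpu_demand_w - overshoot, 0.0)
    gpu_w = TOTAL_BUDGET_W - cpu_w
    return cpu_w, gpu_w

if __name__ == "__main__":
    # GPU-heavy frame: the CPU gives up watts, and only then do clocks move.
    cpu_w, gpu_w = shift_power(cpu_demand_w=60.0, gpu_demand_w=160.0)
    print("CPU", clock_from_power(cpu_w, 3.5, 60.0), "GHz @", cpu_w, "W")
    print("GPU", clock_from_power(gpu_w, 2.23, 160.0), "GHz @", gpu_w, "W")
```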
 

Three

Member
In the video the key portion is 5:17 through 6:40.
Again it didn't mention any new hardware for this.

This is why cards from 2018 got support for this DX feature.
You say "but that feedback is different". It isn't, as far as the memory/streaming bandwidth saving goes.
Let me explain the virtual textures paper and its predecessor, the software-based virtual textures in RAGE.

"feedback is necessary for determining which parts of the texture need to be resident."

This is the main part of the saving. You use this to find what textures need to be loaded. What does SF do differently here?

"feedback needs to be rendered to a separate buffer to store the virtual page coordinates
(x,y), desired mip level, and virtual texture ID (to allow multiple virtual textures)."

This is the same as the section you reference in your sampler feedback video. You say "but it needs to be rendered". Yes, as does sampler feedback. That is what the lower-res "sampler texture" is in your video. What feedback do you get from "sampler feedback" that is different?


"information is then used to pull in the texture pages needed to render the scene."

this is where you get your textures and are now using only what you need from the feedback

"feedback can be rendered in a separate rendering pass or to an additional render target during
an existing rendering pass. An advantage of rendering the feedback is that the feedback is
properly depth tested, so the virtual texture pipeline is not stressed with requests for texture
pages that are ultimately invisible."

Savings! Now again what feedback is SF getting to make a saving of more than 2x on this? This is the crux of the conversation we are having.

" When a separate rendering pass is used it is fine for the
feedback to be rendered at a significantly lower resolution (say 10x smaller)."

It even mentions the lower res mentioned in the SF video, so it's clearly not that.

In fact it makes it pretty clear at 6:28 in the video under the slide called "worth calling out" that it is not an overhaul to it at all. This feature was a black box to their API because the driver and DirectX did not expose it to the application. That's all.

Nothing at the actual hardware level prevents it on older GPUs capable of PRT. You have virtual memory, you have a SW driver stack. What do you need in hardware? What can't older GPUs do at a purely hardware level? Nothing.

The "hardware feature" is a driver update for support on their DX API.

This is why cards from 2018 got it. You said this is new hardware to AMD. It isn't; in OpenGL you can do everything SF does. The reason old AMD cards may not get this feature in DirectX while a 2018 Nvidia card will is that the 2018 Nvidia cards met DX12U's raytracing support requirement. An older card can support these other features just fine, it may just not get them as part of DX12U.
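
For what it's worth, here's a minimal Python sketch of the feedback loop all of these approaches share (page size and texture dimensions are made-up values, purely illustrative): a low-res feedback pass records which (page, mip) entries were touched, and only those pages get streamed.

```python
# Minimal sketch of feedback-driven virtual texturing (the part shared by
# software virtual textures, PRT, and sampler feedback): a low-res feedback
# pass records which (page_x, page_y, mip) entries the frame touched, and
# only those pages are streamed in. Page size and resolutions are made up.

PAGE_SIZE = 128          # texels per page side (assumed)
TEXTURE_SIZE = 8192      # full-res texture dimension (assumed)

def write_feedback(samples):
    """samples: iterable of (u, v, mip) texture fetches from the feedback
    pass. Returns the set of unique pages that need to be resident."""
    requested = set()
    for u, v, mip in samples:
        size = TEXTURE_SIZE >> mip                 # texture size at this mip
        page_x = int(u * size) // PAGE_SIZE
        page_y = int(v * size) // PAGE_SIZE
        requested.add((page_x, page_y, mip))
    return requested

def stream_pages(requested, resident):
    """Load only the pages the feedback asked for that aren't loaded yet."""
    to_load = requested - resident
    resident |= to_load
    return to_load

if __name__ == "__main__":
    resident = set()
    frame_samples = [(0.10, 0.20, 2), (0.11, 0.21, 2), (0.80, 0.75, 4)]
    needed = write_feedback(frame_samples)
    print("pages streamed this frame:", stream_pages(needed, resident))
```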
 
Last edited:


This thread is embarrassing.
 

rnlval

Member
From https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

"but there was one startling takeaway - we were shown benchmark results that, on this two-week-old, unoptimised port, already deliver very, very similar performance to an RTX 2080." - DF




Your RTX 2080 Super argument is not valid.
 

rnlval

Member
God, it's always the same people in a circular argument. Not even gonna try anymore, you want other posters to just lose their shit and get banned, I won't. Good day
I didn't post the "Its a 44% difference in CU.. thats nothing to sneeze at" statement and I don't endorse it.
 

rnlval

Member
Those aren’t the “fundamental points”.

It’s difficult for me to understand how you can remain so misinformed when several people here have pointed out your error.

SmartShift is not shifting clock speeds, it is shifting power. I don’t know how to state it any more clearly than that.

Until you can grasp that simple point, you will be unable to discuss this topic rationally. I’m afraid quoting YouTube videos is not a replacement for using your own brain and gathering knowledge.

Here’s a simple test for you - go to this page and search for “clock” or “frequency” or whatever you like that implies SmartShift is juggling clock rates

https://www.amd.com/en/technologies/smartshift

AMD have Cool'n'Quiet and... I can't remember the name, power...something for frequency fiddling. SmartShift is all about power.

Showing a diagram is better.

[SmartShift power-sharing diagram]


Smartshift can alter clock speed depending on CPU and GPU load.

I have a Ryzen APU 2500U 25W mobile and I can limit CPU usage by 50% via Windows power management settings to enable the GPU to reach boost mode at paper spec. I have the RyzenAdj tool to modify the power design limits beyond 25 watts, e.g. 30 to 45 watts, which enables increased clock speeds in both CPU and GPU boost modes. The PSU is rated at 65 watts.
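
As a back-of-the-envelope sketch of why capping the CPU frees up the iGPU on a shared package budget (the watt splits below are assumptions for illustration, not measured 2500U figures):

```python
# Back-of-the-envelope sketch of why capping the CPU helps the iGPU boost
# on a shared package budget. The split numbers are assumptions for
# illustration, not measured 2500U figures.

def gpu_headroom(package_limit_w, cpu_draw_w, uncore_w=3.0):
    """Watts left for the GPU once CPU cores and uncore are paid for."""
    return max(package_limit_w - cpu_draw_w - uncore_w, 0.0)

# Stock 25 W limit with the CPU left unconstrained vs. capped at ~50%:
print(gpu_headroom(25.0, cpu_draw_w=18.0))   # ~4 W left -> GPU can't boost
print(gpu_headroom(25.0, cpu_draw_w=9.0))    # ~13 W left -> GPU boost viable
# Raising the package limit (e.g. via a tool like RyzenAdj) to 35 W:
print(gpu_headroom(35.0, cpu_draw_w=18.0))   # ~14 W left even at full CPU
```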
 

Panajev2001a

GAF's Pleasant Genius
You posted this, alone, as a quote without any suggestion that you were paraphrasing...


I simply asked for a link because I had only read 'most of the time'. The difference between 'most' and 'vast majority' could be significant, so an actual quote from Cerny saying 'vast majority' would be enlightening.

You obviously thought so. :)

The Eurogamer quote is...

"So, when I made the statement that the GPU will spend most of its time at or near its top frequency, that is with 'race to idle' taken out of the equation - we were looking at PlayStation 5 games in situations where the whole frame was being used productively. The same is true for the CPU, based on examination of situations where it has high utilisation throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency."

Put simply, with race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.
You could've just said you didn't have a link when I originally asked. Would've been easier all round. :)

Yes, but what was, or is, the point? I was not pretending to give an actual quote, you knew that, and of course there was no link for a quote that was not there... but the point of that and of this post still eludes me, unless it is playing semantics a bit (most of the time vs. majority vs. ...).

Fair enough though, I can be clearer and more to the point, specifying what I think is an educated guess, how likely it is and why, and what is an actual quote.
 
Last edited:


Panajev2001a

GAF's Pleasant Genius
Yeah. I've been in forums for a decade and a half and this is always the strategy. But since I'm not a parrot I won't be repeating myself over and over. Let them normalize lies and spread misinformation, we'll see soon enough who's right and who's wrong

I get how you feel, but I cannot stand them normalising misinformation and lies and I still enjoy arguing with those in good faith even if it can get a bit intense hehe.
 

Ascend

Member
If you are making the point that SFS can make efficient virtual texturing easier to achieve than SF and SF does it over PRT and all of those do it over manual virtual texturing approaches... it was known, understood, and accepted.

Anything else on top of that lacks factual evidence IMHO.
That's fair I guess. We have to wait and see how it will play out in practice.
My real point was that there's hardware there, because unless I misunderstood Three, he was arguing that it's still software.
 
Again it didn't mention any new hardware for this.

...

In fact it makes it pretty clear at 6:28 in the video under the slide called "worth calling out" that it is not an overhaul to it at all. This feature was a black box to their API because the driver and DirectX did not expose it to the application. That's all.

It seems like you're being pretty selective with your hearing at 6:28.

"So one thing to know is that sampler feedback is not a complete overhaul of sampling hardware..."

Ok, seems like you got that so far. Did you close the tab right at this moment?

"...but it's an extension to it. It's a GPU hardware feature that extends existing hardware designs and gets you something new out of what used to be that closed black box."
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
This shit is still going on?!?! You know what this thread really shows me... how much better Xbox is at messaging than PS is. Both these guys knew what the other was doing. Xbox got ahead of PS and talked about virtual memory, to "match" the SSD advantages of PS5, but it's just not the same. But in the end, this little advantage here and there will basically make them equal. So really, all the fighting is just a waste.

100% truth!
 

longdi

Banned
From the AT 10900K review, an interesting quote for the console fanboys. :messenger_grinning_sweat:

Note, 254 W is quite a lot, and we get 10 cores at 4.9 GHz out of it. By comparison, AMD's 3990X gives 64 cores at 3.2 GHz for 280 W, which goes to show the trade-offs between going wide and going deep. Which one would you rather have?

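
A crude way to put those figures side by side (aggregate core-GHz per watt, deliberately ignoring IPC, memory and scaling):

```python
# Crude "width vs. depth" comparison from the quoted figures: aggregate
# core-GHz per watt, ignoring IPC, memory, and scaling losses entirely.

def core_ghz_per_watt(cores, ghz, watts):
    return cores * ghz / watts

print(core_ghz_per_watt(10, 4.9, 254))   # 10900K: ~0.19 core-GHz/W
print(core_ghz_per_watt(64, 3.2, 280))   # 3990X:  ~0.73 core-GHz/W
```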
 

Deto

Banned
Because you trying to spread FUD. Same crap has been told to you on ERA. For your own sanity, don't do this. Now i'm really begin to doubt that you're the one from XboxERA Discord who are doing this deliberately. Or you trying to present yourself as a "Sony fan" like you said you are.

rntongo (joined Feb 25, 2020)
 
Last edited:

Three

Member
It seems like you're being pretty selective with your hearing at 6:28.

"So one thing to know is that sampler feedback is not a complete overhaul of sampling hardware..."

Ok, seems like you got that so far. Did you close the tab right at this moment?

"...but it's an extension to it. It's a GPU hardware feature that extends existing hardware designs and gets you something new out of what used to be that closed black box."
I know you're trying to be condescending but what is a closed black box in this context, what is a 'hardware feature', and what is an 'existing hardware design' to you?

A closed black box is the application not having that information because the driver and DX API do not expose it to the developer. A 'hardware feature' in this context just means the driver isn't the one doing it. What on the silicon is needed for this? Why do you keep dodging that question?



And that has all already been discussed.

Read the information:
"It allows us to elegantly stream individual texture pages (rather than the whole mips) based on GPU texture fetches." This is not what is new but is what offers the savings on the bandwidth and memory even in the days before SFS. It's what PRT does. It's what megatextures does, it's what virtual memory and virtual textures allow.
Hence why he mentions them afterwards with 'that being said...games have been streaming virtual memory pages for a while blah blah blah'

Now the custom hardware mentioned is something different. The graceful fallback to the resident mip levels isn't something that gives you 2x the bandwidth and memory space. It's a fallback because that texture WAS NOT ABLE TO LOAD into memory in the frame time. It is already resident, so it's completely irrelevant to the 2x-3x claim. You're not gaining streaming bandwidth there, you're falling back to something already in memory. Hence why I've been arguing about people trying to link the 2x-3x figure to the custom hardware tweet just to claim 'secret sauce' to close a gap, but the only response to that logic has been 'we don't know', 'we'll have to see'.

MS never once claimed 2x-3x efficiency in both bandwidth and memory use compared to other currently available hardware. They never claimed 2x or 3x efficiency over other modern virtual texture streaming methods. Only that there were textures on Xbox One that were in memory but unused in the scene, and the reason why is games with 4K textures and the slow HDD, which meant that those textures were intentionally there because of the game/engine design and hardware performance. Not every game was a Doom 4 or RAGE.

2x-3x would be a fucking big deal. I'm sure MS may have intentionally implied it because it makes them look good, but they never actually make that claim, and when asked directly to elaborate on the figure and its link to custom hardware they avoided the question.
That's fair I guess. We have to wait and see how it will play out in practice.
My real point was that there's hardware there, because unless I misunderstood Three, he was arguing that it's still software.

Notice I'm talking about SF and DX12U cards, and the crux being that the 2x is not from custom hardware. No custom hardware is required to stream only the required partial textures. This does not mean there cannot be hardware changes in whatever MS uses on the XSX. It just means there is no custom hardware for SF.

Whatever MS decides to include under the 'SFS' terminology is up to them, but no matter what it is, it's not custom hardware getting you the 2x memory and bandwidth savings. The savings comparison is to something else entirely.
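
To illustrate the fallback point with a toy Python model (not the actual XSX hardware path): when the tile at the requested mip isn't resident yet, you sample the best mip that already is in memory, which hides the miss but doesn't reduce what eventually has to be streamed.

```python
# Toy model of falling back to an already-resident mip when the requested
# tile hasn't streamed in yet. Note that the fallback reads data that is
# already in memory, so it hides latency rather than saving bandwidth;
# the missing tile still gets requested and streamed for later frames.

MAX_MIP = 10  # coarsest mip assumed always resident (tiny, packed mip tail)

def sample(tile_cache, pending, page, requested_mip):
    """Return the mip level actually sampled for this page."""
    for mip in range(requested_mip, MAX_MIP + 1):
        if (page, mip) in tile_cache:
            if mip != requested_mip:
                pending.add((page, requested_mip))  # still ask for the real one
            return mip
    pending.add((page, requested_mip))
    return MAX_MIP  # packed mip tail as last resort

if __name__ == "__main__":
    cache = {((3, 7), 5)}          # only mip 5 of this page is resident
    pending = set()
    print(sample(cache, pending, (3, 7), requested_mip=2))  # falls back to 5
    print(pending)                 # ((3, 7), 2) queued for streaming
```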
 
Last edited:

jimbojim

Banned
This shit is still going on?!?! You know what this thread really shows me... how much better Xbox is at messaging than PS is. Both these guys knew what the other was doing. Xbox got ahead of PS and talked about virtual memory, to "match" the SSD advantages of PS5, but it's just not the same. But in the end, this little advantage here and there will basically make them equal. So really, all the fighting is just a waste.

Or maybe because the XSX must not be weaker in any segment. :D Also, surely the UE5 showcase created a huge buzz.
 
At this stage it’s not realistic to expect any boost. Or maybe “realistic” is the wrong way to say it - it’s more that there doesn’t seem to be anything to gain by increasing clock speed.

The problem is power. The Xsex gpu is rated to max 200W.

A similar PC card - the RTX 2080 - is rated at 250W. The Xsex is already running its GPU at significantly lower power than the equivalent PC card (this is usual for consoles).

The xb1 gpu had a power rating of 95W. That gpu was roughly equivalent to a GTX750 which had a power rating of 55W.

The XB1 had a power surfeit for its gpu and the cooling was good enough, so MS could afford to increase the clock without risking system stability.

Xsex isn’t enjoying this scenario and that’s why it won’t get a clock boost.

Your power ratings for XSX might be off. Keep in mind PC GPU cards also have to take the GDDR6 memory's TDP into account. Depending on the number of chips, that can add upwards of 20 watts if it's eight chips.

There's a chance they could upclock the XSX's GPU, but it's not a large probability. That said, they are definitely in the lower range of RDNA 2's sweet spot on enhanced DUV, so there is some room for them based simply on that.

That is not concrete proof from devkit or devs themselves. This is your interpretation of Leadbetter’s interpretation of what he said he read and heard on top of that.

The thing with the CPU throttling is in relation to the devkits, which use power profiles, meaning components are hard-set to given power budgets, which affects their frequency. Cerny was asked about this and said that the final retail units won't operate this way, i.e. the power-load shifting will be autonomous.

But that does mean the devkits do scale back power (and thus frequency) on given components to provide sufficient power to another component, via the profiles.
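
A hypothetical sketch of that difference (profile names and wattages are invented): a devkit profile pins the power split up front, while the retail behaviour described would rebalance continuously within the same total.

```python
# Hypothetical sketch of the devkit-vs-retail difference described above.
# Profile names and wattages are invented for illustration only.

DEVKIT_PROFILES = {
    # hard-set splits a developer picks up front
    "cpu_heavy": {"cpu_w": 80, "gpu_w": 120},
    "gpu_heavy": {"cpu_w": 50, "gpu_w": 150},
}

TOTAL_W = 200

def devkit_split(profile_name):
    """Devkit: the split is whatever the selected profile says, all frame."""
    p = DEVKIT_PROFILES[profile_name]
    return p["cpu_w"], p["gpu_w"]

def retail_split(cpu_activity, gpu_activity):
    """Retail (as described): power is rebalanced automatically from the
    observed activity of each unit, within the same fixed total."""
    total_activity = cpu_activity + gpu_activity
    cpu_w = TOTAL_W * cpu_activity / total_activity
    return cpu_w, TOTAL_W - cpu_w

print(devkit_split("gpu_heavy"))        # (50, 150) regardless of workload
print(retail_split(0.3, 0.9))           # ~ (50, 150) when the GPU is busier
```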
 
Last edited:
I know you're trying to be condescending but what is a closed black box in this context, what is a 'hardware feature', and what is an 'existing hardware design' to you?

A closed black box is the application not having that information because the driver and DX API do not expose it to the developer. A 'hardware feature' in this context just means the driver isn't the one doing it. What on the silicon is needed for this? Why do you keep dodging that question?

Sorry to be condescending. You may very well have more knowledge and experience than me in this area, so I don't mean to speak down to you. It's just that it seems like you're being pretty obtuse: first you gloss over her clear statement that this is a hardware feature, and now you're quibbling over the definition of "hardware feature".

She's talking about the texture sampling process, so my guess is this is some extra hardware in the TMU. They're not exactly showing a block diagram of where the hardware is, probably because it sounds like there is more than one implementation in hardware. Here is someone talking about this on the Nvidia hardware side (in relation to the texture space shading feature that Nvidia promoted in 2018):

The use of texture filtering (trilinear, anisotropic, etc) determines what texels actually need to be shaded and the texture sampling hardware already identifies these as part of the sampling routine. Turing has a new hardware feature that returns a list of texels touched by a texture sampling function. They call this the texture footprint. This removes the shader calculations required to manually compute the footprint and makes implementing this technique less complex.
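
As a rough toy example of what "a list of texels touched by a texture sampling function" means in the simplest case, a single bilinear fetch (just an illustration, nothing like the actual Turing or DirectX interface):

```python
import math

# Toy version of a "texture footprint" query for a single bilinear sample:
# which texels would filtering touch at a given mip? Real hardware answers
# this for trilinear/anisotropic kernels too; this is only the simplest case.

def bilinear_footprint(u, v, tex_size, mip):
    """Return the set of (x, y) texels a bilinear fetch at (u, v) reads."""
    size = max(tex_size >> mip, 1)
    # Texel-space coordinates, offset by half a texel (standard convention).
    x = u * size - 0.5
    y = v * size - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    return {((x0 + dx) % size, (y0 + dy) % size)   # wrap addressing assumed
            for dx in (0, 1) for dy in (0, 1)}

print(bilinear_footprint(0.26, 0.51, tex_size=256, mip=2))
```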

 
Last edited:
Showing a diagram is better.

[SmartShift power-sharing diagram]


Smartshift can alter clock speed depending on CPU and GPU load.

I have a Ryzen APU 2500U 25W mobile and I can limit CPU usage by 50% via Windows power management settings to enable the GPU to reach boost mode at paper spec. I have the RyzenAdj tool to modify the power design limits beyond 25 watts, e.g. 30 to 45 watts, which enables increased clock speeds in both CPU and GPU boost modes. The PSU is rated at 65 watts.

This slide is always great for showing that the PS5's max CPU and GPU clocks are mutually exclusive.
But Sony is at fault for the misunderstanding. Cerny was always talking about clock speeds and not power consumption. But at the end of the day, the increased power target will lead to higher clocks for one or the other.
 
Last edited:

rnlval

Member
He was talking about the early initial strategy, not about what they had until two seconds before they decided to flick the SmartShift switch (part of their variable/unlocked clocks strategy, not the only bit, as per the video).
Then they decided to focus on a fixed power consumption target and optimise around that. You are being very sensationalistic there. What they have now is a system that allows you to trade off workload and workload complexity for frequency in some cases, and to trade power across units when one needs some help and the other can spare cycles; on top of that, if needed, a slight downclock (likely with a voltage reduction too) allows a minimal performance impact on that unit with a sizeable power consumption reduction.
All with tooling which makes it easy to optimise for it.

Take those statements in the context of when console development started and the box and cooling they thought about giving the console, and take the CPU and GPU together, not split as if they were different chips with their own cooling (the SSD I/O is also a considerable source of heat they need to account for).
The big focus on the GPU was always very high frequency, designing for that regardless of the workload, and not setting a low baseline for the CPU. Thoughts you need to have if you want to prevent a preventable RRoD-like scenario.

Nothing that hidden or mysterious, but sure, it does not play well with the idea that 10.23 TFLOPS is fake and 9.2 TFLOPS is the real number now, does it :)?
Mark Cerny revealed the base clock speeds for CPU and GPU before applying Smartshift.
 
Mark Cerny revealed the base clock speeds for CPU and GPU before applying Smartshift.
Yes, but what matters here is that in a normal game, even a AAA one, workloads where the CPU and GPU both use 100% of their processing power are very rare.

The thing is, the XSX follows the normal approach of having fixed clocks in order to keep control of the temperature. Cerny follows another approach: he just tracks the power consumption of the SoC to do that.

That is why, when DF asked what would happen if a workload that used 100% of the CPU and GPU happened on PS4, he answered that the console would shut down.

Many people (I'm not saying you guys) confuse clocks with workload. Yes, if the PS5 were a kind of server that had to sit at 100% all the time, that would be a disadvantage, but a normal scenario doesn't work like that anyway.
 

Dory16

Banned
This slide is always great for showing that the PS5's max CPU and GPU clocks are mutually exclusive.
But Sony is at fault for the misunderstanding. Cerny was always talking about clock speeds and not power consumption. But at the end of the day, the increased power target will lead to higher clocks for one or the other.
I've been explaining this multiple times on the forum. Increasing the power budget for one decreases what's available for the other and the clock speed it can reach.
 
