[Digital Foundry] PS5 uncovered

Am I right in understanding that the increased frequency not only pushed up the (blagh) teraflops, but also increases overall system performance in a way that extra CUs don't? I thought that's what he meant by increased gains through overclocking rather than widening.
I figured that was the whole point, to run in tandem with the IO and system buses as a unified system rather than brute forcing it?

Or am I singing sweet lullabies from my aged sphincter?
Extra CUs still close a bigger gap than an overclock, and they also handle rendering load better under stress. It's true that an overclock can increase fill rates and performance, but it's not a competent way of reaching a stated performance target.

Decoupled from costs; more hardware is better.
 
That defeats the purpose, because I originally stated that the PS4 Pro beats the Xbox One X version in terms of performance. Again, you can have all the resolution you want, but I'm sure a lot of people would rather have checkerboard rendering and a consistent frame rate than have the most powerful current console dip into the 40s. Since the new Xbox is more powerful as well, I can see this being a theme for next gen too, as well as an edge for Sony. Performance matters more than resolution.
The PS4 Pro doesn't actually beat the Xbox One X in performance, because you are not testing equivalent things. You are using subjectivity to hand-wave that it's all about the game's performance, based on two different resolutions.

While I appreciate the developers giving FPS priority for Pro gamers, when it comes down to it the X is better hardware. It's not really even debatable, is it? A game may choose to run at a lower resolution, but technically the X could run that version (if offered) better than the Pro could.

I game at 1080p and I'd rather turn off 4K altogether, so I understand your sentiment. I just don't agree with your methodology.
 
Last edited:
Agreed. Because - what else are you going to do? Not like anything can change the specs at this point. And besides, the PS5 will be a great machine too. Just not as good as the Series X.
Just like the PS1, PS2, and Pro weren't the most powerful. Sony fans go more for the games, and those all sold just fine.
 
Spreewaldgurke Some choice quotes you can use for the OP Summary:
  • PS5 design is easy for PlayStation 4 developers to get to grips with, but digging deeper into the new system's capabilities, there are many aspects of the PS5 design that PCs will be hard-pressed to match
PlayStation 5 Variable frequency system
  • There is a set power level for the SoC, power budget based on the thermal dissipation of the cooling assembly
  • PS5 uses an algorithm in which the frequency depends on CPU and GPU activity information (load). Inside the SoC a power control unit is constantly measuring the activity of the CPU, the GPU and the memory interface, assessing the nature of the tasks they are undertaking.
  • A 'model SoC' is used, a simulation of how the processor is likely to behave, and that same simulation is used at the heart of the power monitor within every PlayStation 5, ensuring consistency in every unit.
  • The time constant, the amount of time the CPU and GPU take to reach a frequency that matches their activity, is quite short: if the game is doing power-intensive processing for a few frames, it gets throttled right away. There isn't a lag where extra performance is available for several seconds or several minutes before the system gets throttled. PS5 is very responsive to power consumed.
  • Developers have feedback on exactly how much power is being used by the CPU and GPU.
  • Devs need to optimize their game engines in a different way - to achieve optimal performance for the given power level.
  • The CPU and GPU each have a power budget. If the CPU doesn't use its power budget - for example, if it is capped at 3.5GHz - then the unused portion of the budget goes to the GPU.
  • There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower.
  • GPU will spend most of its time at or near its top frequency in situations where the whole frame is being used productively in PS5 games. The same is true for the CPU, based on examination of situations where it has high utilization throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency.
  • With race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.
  • Dropping frequency by 10 per cent reduces power consumption by around 27 per cent; in general, a 10 per cent power reduction costs just a few per cent of frequency (see the quick sanity check after this list).
  • One of our breakthroughs was finding a set of frequencies where the hotspot - meaning the thermal density of the CPU and the GPU - is the same
  • Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core
  • Dev kits support locked profiles, it can be helpful not to have variable clocks when optimising. Released PS5 games always get boosted frequencies so that they can take advantage of the additional power
  • Developers don't need to optimize in any way; if necessary, the frequency will adjust to whatever actions the CPU and GPU are performing
SSD
  • There's low level and high level access - it's the new I/O API that allows developers to tap into the extreme speed of the new hardware.
  • The concept of filenames and paths is gone in favour of an ID-based system which tells the system exactly where to find the data they need as quickly as possible.
  • With latency of just a few milliseconds, data can be requested and delivered within the processing time of a single frame, or at worst for the next frame. 'Just in time' approach to data delivery with less caching (RAM usage).
  • Developers can easily tap into the speed of the SSD without requiring bespoke code to get the best out of the solid-state solution.
  • A significant silicon investment in the flash controller ensures top performance: the developer simply needs to use the new API. Technology should deliver instant benefits, and won't require extensive developer buy-in to utilise it.
Audio
  • Pushing surround significantly beyond anything we've seen in the gaming space before
  • Thanks to the extra power of Tempest engine, the algorithms deliver more precision, enabling cleaner, more realistic, more believable sound
  • Indefinite amount of power available, didn't want the cost of a particular algorithm to be the reason for choosing that algorithm, we wanted to be able to focus simply on the quality of the resulting effect
  • The Tempest engine is a revamped AMD compute unit that runs at the GPU's frequency and delivers 64 flops per cycle, for peak performance of around 100 gigaflops (roughly the power of PS4's eight Jaguar cores).
  • The Tempest engine supports two wavefronts: one for 3D audio and other system functionality, and one for the game.
  • Tempest engine can use over 20GB/s
  • At launch, users with standard headphones should get the complete experience as intended
  • With TV speakers and stereo speakers, the user can choose to enable or disable 'TV Virtual Surround'. Virtual surround sound works within a sweet spot. There is a basic implementation for TV and stereo speakers up and running, and the PlayStation 5 hardware team continues to optimise it.
  • For now, 5.1 and 7.1 channel systems get a solution that approximates the PS4 solution; the locations of the sound objects determine to what degree their sounds come out of each speaker.
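One way to sanity-check the 'drop frequency by 10 per cent, save about 27 per cent power' bullet above is the standard dynamic-power approximation for CMOS logic. That model is a reading on my part, not something Cerny spells out; a minimal sketch, assuming voltage scales roughly with frequency near the top of the curve:

```python
# Quick sanity check of the "drop frequency 10%, save ~27% power" figure.
# Assumes dynamic power P ~ C * V^2 * f, with voltage scaling roughly
# linearly with frequency near the top of the curve, so P ~ f^3.
# (The cubic model is my assumption; Cerny only gives the end figures.)

def relative_power(freq_scale: float) -> float:
    """Power relative to baseline when the clock is scaled by freq_scale."""
    return freq_scale ** 3

print(f"{1 - relative_power(0.90):.0%} power saved at -10% clock")     # ~27%
print(f"{1 - 0.90 ** (1 / 3):.1%} clock drop buys ~10% power saved")   # ~3.5%
```

Both figures in the bullet fall out of the same cubic relation, which is why a small clock drop buys a disproportionately large power saving.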
 
Last edited:
So when talking about the audio chip on PS5, they said it was a compute unit that was repurposed for audio? So is it using one of the 4 CUs that are disabled? If so, maybe that's why they didn't just enable all the CUs to increase performance.
And pretty weird that it can use so much bandwidth. If unchecked it can consume 20% of total memory bandwidth?
That's kinda crazy.
 
Of course Cerny is going to say blasting a GPU at 2.23GHz is better vs. going with more CUs at lower clocks.

He's supposedly the "lead architect" and he saddled the PS5 with 36 CUs (the same amount as the PS4 Pro in 2016).

What he should have done is go the MS route of more CUs and a lower GPU clock. Find the right combo and the PS5 could be 12TF too, along with the super-fast SSD.
Yeah, but do that and you can say goodbye to that $400 price point for the crybabies.
 
The longer they play this game, the higher people's expectations are going to be. I hope they have this under control. OK, truthfully I really don't care.
 
Am I right in understanding that the increased frequency not only pushed up the (blagh) teraflops, but also increases overall system performance in a way that extra CUs don't? I thought that's what he meant by increased gains through overclocking rather than widening.
I figured that was the whole point, to run in tandem with the IO and system buses as a unified system rather than brute forcing it?

Or am I singing sweet lullabies from my aged sphincter?
Increasing clocks increases the performance of other parts of the GPU, like rasterization, cache speeds, etc.
So at the same TFLOPS the GPU will perform better than it would with more CUs at lower clocks.

But of course if you have way more CUs it will be better.

Extra CUs still close a bigger gap than an overclock, and they also handle rendering load better under stress. It's true that an overclock can increase fill rates and performance, but it's not a competent way of reaching a stated performance target.

Decoupled from costs; more hardware is better.
That is not true at all.

A chip running at 1500MHz with 10 CUs will perform better than one at 1000MHz with 15 CUs.
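For context on that comparison: with the usual GPU compute formula (64 ALUs per CU, 2 ops per ALU per clock), 10 CUs at 1500MHz and 15 CUs at 1000MHz land on exactly the same paper TFLOPS, so the disagreement is only about what else the higher clock speeds up. A quick sketch using the hypothetical numbers from the post above (the formula is standard; the configurations are just the ones being argued about):

```python
# Paper compute for the two hypothetical configurations discussed above.
# TFLOPS = CUs * 64 ALUs per CU * 2 ops per ALU per clock * clock (GHz) / 1000

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(10, 1.5))  # 1.92 TFLOPS - 10 CUs at 1500MHz
print(tflops(15, 1.0))  # 1.92 TFLOPS - 15 CUs at 1000MHz
# Same theoretical FLOPS; the higher clock additionally speeds up
# rasterisation, caches and other fixed-function parts of the GPU.
```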
 
Last edited:
The amount of FUD you guys will believe is incredible. That guy was called out, then deleted those tweets that Jason Schreier pointed out. Must be getting worried by how next gen is going to turn out, huh?
Most likely due to the attacks he received from people for posting it. He probably didn't realize how many crazy people would attack him over it. Some people don't need that in their lives.
 
So I tried getting caught up on the thread but I'm confused.

Half the people use the quote from a dev saying they have to throttle the CPU back to obtain max GPU clock

Other half use Cerny quote that both can run maxed. So which is it?

Both. But actually, the first one.

Cerny is doing PR to the fullest, but he chooses his words very carefully. That way, nobody can say in a few months, when everything is clear, that he was lying. He is just trying to sell his console.

The CPU and the GPU can work at maximum clocks at the same time, but they can't have maximum workload at the same time. That's why devs have to sacrifice the performance of one to achieve the maximum performance of the other.

So yeah, despite what Sony fans want so desperately to believe in order to avoid being put on suicide watch, the teraflop count and/or the CPU performance are going to be less than what Sony advertises. And the advertised performance was already weak to begin with.
 
That is not true at all.

A chip running at 1500MHz with 10 CUs will perform better than one at 1000MHz with 15 CUs.
It's completely true, and in practice it shows itself to be true. I'm not playing your hypothetical fake-news games; with teraflops matched like for like across CUs and frequencies, the advantage will always go to the higher CU count.

You're completely uneducated on this matter. I told you guys this last week and you didn't listen; Richard just told you guys and showed you this in action today. Those are the facts, deal with them and stop talking, period.
 
Last edited:
Both. But actually, the first one.

Cerny is doing PR to the fullest, but he chooses his words very carefully. That way, nobody can say in a few months, when everything is clear, that he was lying. He is just trying to sell his console.

The CPU and the GPU can work at maximum clocks at the same time, but they can't have maximum workload at the same time. That's why devs have to sacrifice the performance of one to achieve the maximum performance of the other.

So yeah, despite what Sony fans want so desperately to believe in order to avoid being put on suicide watch, the teraflop count and/or the CPU performance are going to be less than what Sony advertises. And the advertised performance was already weak to begin with.
The first quote is not saying what people are using it to say.
The full quote explains that the devs are using a profile where the CPU clock is fixed lower.
 
Because he posted a made-up lie.
Did he say he posted a made-up lie? Just because he deleted it doesn't mean it's an admission of a lie. It most likely means he doesn't want the attention. There's a bit going on in the world, and dealing with angry Sony fanboys probably isn't what he wants to be doing.
 
It's completely true, and in practice it shows itself to be true. I'm not playing your hypothetical fake-news games; with teraflops matched like for like across CUs and frequencies, the advantage will always go to the higher CU count.

You're completely uneducated on this matter. I told you guys this last week and you didn't listen; Richard just told you guys today. Those are the facts, deal with them and stop talking, period.
False, as usual.
Your pics are flawed, as I showed.

Show me an RX 5700 sustaining that 2100MHz and then we can start to talk.
 
Did he say he posted a made-up lie? Just because he deleted it doesn't mean it's an admission of a lie. It most likely means he doesn't want the attention. There's a bit going on in the world, and dealing with angry Sony fanboys probably isn't what he wants to be doing.
No, others exposed him lol, and after that he deleted the tweets and locked the account.
Before that he was spreading bullshit about it.
 
So I tried getting caught up on the thread but I'm confused.

Half the people use the quote from a dev saying they have to throttle the CPU back to obtain max GPU clock

Other half use Cerny quote that both can run maxed. So which is it?
The first quote is cut to make you believe the CPU is throttling... the full quote explains that the dev chose a profile that fixes the CPU at a lower clock, and it is a fixed profile... it is not the automatic behaviour Cerny described... so you don't know if the CPU is really throttling, because it is fixed at a lower clock.
The second quote is indeed true.
 
Last edited:
Blocked and forgotten again, I'll see you in two weeks. Waste of time.
You post fake info and run away with your tail between your legs because you know the RX 5700 can't sustain 2100MHz, and that is why it shows less performance in the tests lol

You need to understand once and for all: FACTS > YOU!!!
 
Last edited:
Assuming all you just said is true... I have the following questions for you;

  • Where is what Cerny talked about, regarding the PS5 doing things differently? Because what you just described is how basically every phone, laptop and PC out there works, and he was specific that the PS5 works differently.
  • Where does the power limit come in? Cerny was quite clear about workloads and the power limit. You didn't mention that anywhere.
  • If the console could handle both the CPU and GPU at max load, why would developers have to choose a profile to throttle the CPU to ensure the GPU runs at 2.23 GHz?
  • If developers prefer non-variable clocks for optimization, why have variable clocks if the console can reach the max clocks at max workloads at all times anyway?

Good luck.

  • Where is what Cerny talked about, regarding the PS5 doing things differently? Because what you just described is how basically every phone, laptop and PC out there works, and he was specific that the PS5 works differently.
PS5 works on a constant POWER budget (watts). The point is that it gets the maximum value from its components at every instant. It has nothing to do with "boost mode" in the sense of boosting frequency from a base value.

The GPU and CPU are usually at max frequency, but the required WORKLOAD is almost never at its maximum on both the CPU and GPU at the same time, which means you can very frequently spare power from one or the other without affecting the game at all. Every PS5 will behave the same, because it depends on workload, not temperature or anything else.
  • Where does the power limit come in? Cerny was quite clear about workloads and the power limit. You didn't mention that anywhere.
The power limit is the maximum the cooling system can handle; it was calculated beforehand. It's pretty well explained by Cerny.

  • If the console could handle both the CPU and GPU at max load, why would developers have to choose a profile to throttle the CPU to ensure the GPU runs at 2.23 GHz?
- You need profiles because you need to decide what to do in the rare cases where both the CPU and GPU are at max workload. Usually on console games you'll simply slightly throttle the CPU and it will have zero impact. Once again, it rarely happens anyway.

  • If developers prefer non-variable clocks for optimization, why have variable clocks if the console can reach the max clocks at max workloads at all times anyway?
- Because the console can't reach max CPU and GPU frequency with max workloads on both; but you're clearly not a dev, because such situations rarely occur: the CPU and GPU are almost never at max workload together. And when they are, throttling the CPU by 1 or 2 per cent is enough to drop the power draw significantly, so it's not a problem anyway.

The whole point here is getting more for your money while being sure the console never makes too much noise (since power draw is constant and known beforehand, unlike on PS4 or other consoles). That's actually very smart.
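A minimal sketch of what the workload-driven arbitration described above could look like, purely as an illustration; the power model, wattage figures and function names here are invented for the example and are not Sony's actual algorithm:

```python
# Illustrative only: trim clocks to keep total SoC power inside a fixed budget.
# All numbers and the cubic power model are made up for the example.

CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23
POWER_BUDGET_W = 225.0  # hypothetical fixed SoC budget

def part_power(load: float, freq: float, freq_max: float, p_max: float) -> float:
    """Hypothetical power draw: scales with workload and ~cubically with clock."""
    return p_max * load * (freq / freq_max) ** 3

def pick_clocks(cpu_load: float, gpu_load: float,
                cpu_p_max: float = 60.0, gpu_p_max: float = 180.0):
    cpu, gpu = CPU_MAX_GHZ, GPU_MAX_GHZ
    # If combined demand exceeds the budget, shave the GPU clock slightly
    # (a real system could also shave the CPU, or follow a dev-chosen profile).
    while (part_power(cpu_load, cpu, CPU_MAX_GHZ, cpu_p_max) +
           part_power(gpu_load, gpu, GPU_MAX_GHZ, gpu_p_max)) > POWER_BUDGET_W:
        gpu *= 0.99  # a small clock drop buys a disproportionate power saving
    return cpu, gpu

print(pick_clocks(cpu_load=0.6, gpu_load=0.9))  # both stay at max clocks
print(pick_clocks(cpu_load=1.0, gpu_load=1.0))  # GPU trimmed by a few per cent
```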

Had XSX done the same, the XSX would be more powerful, not less.

You're welcome.
 
Agnostic2020 and BlackBuzzard

I am perma-banned from that thread.


No, I don't make stuff out of thin air, what's the point? I tried to warn everyone about Tommy Fisher and OsirisBlack.

With a 17-21% gap in GPU performance and roughly the same amount of bandwidth relative to each GPU's computational power, I'm curious what makes you think Sony will have to resort to parity policies?
Games will just use dynamic resolution; at worst the PS5 CPU throttles to 3GHz, but I can't see that being an issue for cross-gen games designed around Jaguar cores, as Richard pointed out.
 
With a 17-21% gap in GPU performance and roughly the same amount of bandwidth relative to each GPU's computational power, I'm curious what makes you think Sony will have to resort to parity policies?
Games will just use dynamic resolution; at worst the PS5 CPU throttles to 3GHz, but I can't see that being an issue for cross-gen games designed around Jaguar cores, as Richard pointed out.

If it wasn't for the DF, VGTech, and NXGamer comparison articles, the Xb1 would arguably be in a better position.

In the end, the Xb1 and PS4 delivered comparable experiences power-wise. As did the PS4 Pro and Xbox One X.
 
Last edited:
lol what?

The 3D audio can consume that much bandwidth?

Oh my God 😂

In theory it can, but in practice it never would. Devs will decide how much it should take and plan accordingly. Devs not wanting to budget too much for it and the Tempest hardware not being fully utilized could be a reality. As for the 20GB/s, even that is a ton and I doubt it will hit that number often.
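For scale: 20GB/s sounds enormous for audio, but against total memory bandwidth it's a small slice. A quick back-of-the-envelope calc, assuming the commonly reported 448GB/s total GDDR6 bandwidth for PS5 (that figure isn't in the DF quotes above, so treat it as an assumption):

```python
# Rough share of total memory bandwidth the Tempest engine could claim.
tempest_gbs = 20.0   # the "over 20GB/s" figure quoted above
total_gbs = 448.0    # commonly reported PS5 GDDR6 bandwidth (assumption here)
print(f"{tempest_gbs / total_gbs:.1%} of total bandwidth")  # ~4.5%
```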
 
Last edited:
No, others exposed him lol, and after that he deleted the tweets and locked the account.
Before that he was spreading bullshit about it.
How did they expose him? Just saying he is full of shit, and then pointing to him making his Twitter private and deleting the tweet as the proof, isn't proof.
The only thing that has been proved is that Jason Schreier is a full-on Sony shill who has taken it upon himself to defend Sony at all costs, including the cost to his credibility.
 
That's right, folks. Overclocking is a better console solution than more on-chip computational power!

God, I am loving this generation already.
 
How did they expose him? Just saying he is full of shit, and then pointing to him making his Twitter private and deleting the tweet as the proof, isn't proof.
The only thing that has been proved is that Jason Schreier is a full-on Sony shill who has taken it upon himself to defend Sony at all costs, including the cost to his credibility.
Actually, Jason exposed him.

BTW he already unlocked his Twitter after deleting all the fake posts.
 
Spreewaldgurke Some choice quotes you can use for the OP Summary:
  • PS5 design is easy for PlayStation 4 developers to get to grips with, but digging deeper into the new system's capabilities, there are many aspects of the PS5 design that PCs will be hard-pressed to match
PlayStation 5 Variable frequency system
  • There is a set power level for the SoC, power budget based on the thermal dissipation of the cooling assembly
  • PS5 uses an algorithm in which the frequency depends on CPU and GPU activity information (load). Inside the SoC a power control unit is constantly measuring the activity of the CPU, the GPU and the memory interface, assessing the nature of the tasks they are undertaking.
  • A 'model SoC' is used, a simulation of how the processor is likely to behave, and that same simulation is used at the heart of the power monitor within every PlayStation 5, ensuring consistency in every unit.
  • The time constant, the amount of time the CPU and GPU take to reach a frequency that matches their activity, is quite short: if the game is doing power-intensive processing for a few frames, it gets throttled right away. There isn't a lag where extra performance is available for several seconds or several minutes before the system gets throttled. PS5 is very responsive to power consumed.
  • Developers have feedback on exactly how much power is being used by the CPU and GPU.
  • Devs need to optimize their game engines in a different way - to achieve optimal performance for the given power level.
  • The CPU and GPU each have a power budget. If the CPU doesn't use its power budget - for example, if it is capped at 3.5GHz - then the unused portion of the budget goes to the GPU.
  • There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower.
  • GPU will spend most of its time at or near its top frequency in situations where the whole frame is being used productively in PS5 games. The same is true for the CPU, based on examination of situations where it has high utilization throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency.
  • With race to idle out of the equation and both CPU and GPU fully used, the boost clock system should still see both components running near to or at peak frequency most of the time.
  • Dropping frequency by 10 per cent reduces power consumption by around 27 per cent; in general, a 10 per cent power reduction costs just a few per cent of frequency.
  • One of our breakthroughs was finding a set of frequencies where the hotspot - meaning the thermal density of the CPU and the GPU - is the same
  • Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core
  • Dev kits support locked profiles, it can be helpful not to have variable clocks when optimising. Released PS5 games always get boosted frequencies so that they can take advantage of the additional power
  • Developers don't need to optimize in any way; if necessary, the frequency will adjust to whatever actions the CPU and GPU are performing
SSD
  • There's low level and high level access - it's the new I/O API that allows developers to tap into the extreme speed of the new hardware.
  • The concept of filenames and paths is gone in favour of an ID-based system which tells the system exactly where to find the data they need as quickly as possible.
  • With latency of just a few milliseconds, data can be requested and delivered within the processing time of a single frame, or at worst for the next frame. 'Just in time' approach to data delivery with less caching (RAM usage).
  • Developers can easily tap into the speed of the SSD without requiring bespoke code to get the best out of the solid-state solution.
  • A significant silicon investment in the flash controller ensures top performance: the developer simply needs to use the new API. Technology should deliver instant benefits, and won't require extensive developer buy-in to utilise it.
Audio
  • Pushing surround significantly beyond anything we've seen in the gaming space before
  • Thanks to the extra power of Tempest engine, the algorithms deliver more precision, enabling cleaner, more realistic, more believable sound
  • Indefinite amount of power available, didn't want the cost of a particular algorithm to be the reason for choosing that algorithm, we wanted to be able to focus simply on the quality of the resulting effect
  • The Tempest engine is a revamped AMD compute unit that runs at the GPU's frequency and delivers 64 flops per cycle, for peak performance of around 100 gigaflops (roughly the power of PS4's eight Jaguar cores).
  • The Tempest engine supports two wavefronts: one for 3D audio and other system functionality, and one for the game.
  • Tempest engine can use over 20GB/s
  • At launch, users with standard headphones should get the complete experience as intended
  • With TV speakers and stereo speakers, the user can choose to enable or disable 'TV Virtual Surround'. Virtual surround sound works within a sweet spot. There is a basic implementation for TV and stereo speakers up and running, and the PlayStation 5 hardware team continues to optimise it.
  • For now, 5.1 and 7.1 channel systems get a solution that approximates the PS4 solution; the locations of the sound objects determine to what degree their sounds come out of each speaker.

Nice summary.
 
That's right, folks. Overclocking is a better console solution than more on-chip computational power!

God, I am loving this generation already.
If the clock scales without any loss of performance then yes, it is better for the same power level.
Like the example I showed: 10 CUs at 1500MHz give better performance than 15 CUs at 1000MHz.
But the more the clock increases towards the limit of the silicon, the less the performance scales... so RDNA at 2100MHz can't both sustain that clock and keep scaling performance at such high clocks.

We will have an idea of how RDNA 2 scales with clock when it launches in July.
That is when we will be able to run benchmarks to simulate PS5 and Xbox.
 
Last edited:
Actually, Jason exposed him.

BTW he already unlocked his Twitter after deleting all the fake posts.
How did Jason expose him? Did he find out who the guy's source was and then have that person say that's not what he said?
Just calling out someone as a liar doesn't make them one.
I'm not saying what he and Windows Central have said is true; I have no idea. I don't know any devs who are working on the PS5.
 
Last edited:
How did Jason expose him? Did he find out who the guy's source was and then have that person say that's not what he said?
Just calling out someone as a liar doesn't make them one.
I'm not saying what he and Windows Central have said is true; I have no idea. I don't know any devs who are working on the PS5.
Whatever fits your agenda, wrongiswrong.
I'm just telling you what happened.
 
If the clock scales without any loss of performance then yes, it is better for the same power level.
Like the example I showed: 10 CUs at 1500MHz give better performance than 15 CUs at 1000MHz.
But the more the clock increases towards the limit of the silicon, the less the performance scales... so RDNA at 2100MHz can't both sustain that clock and keep scaling performance at such high clocks.

I eagerly look forward to multi-platform titles. So much nonsense is being thrown around right now it's now past the point of absurdist parody.
 
Yeah, the way things are unfolding is getting pretty comical to say the least.
Can you show that RX 5700 sustaining that 2100MHz, even with the shit RDNA perf-per-clock scaling at high clocks?

I eagerly look forward to multi-platform titles. So much nonsense is being thrown around right now it's now past the point of absurdist parody.
There isn't any nonsense lol
It is public and anybody can try it with both Nvidia and AMD cards.

If you have a GPU you can try it yourself... just make sure to test clocks that aren't harmed by the performance scaling and that won't throttle.
 
Last edited:
That's right, folks. Overclocking is a better console solution than more on-chip computational power!

God, I am loving this generation already.

What Sony does on PS5 isn't traditional overclocking, it's keeping maximum acceptable power draw at any time. It's using their components the best way, so getting maximum value for the price.
 
What Sony does on PS5 isn't traditional overclocking, it's keeping maximum acceptable power draw at any time. It's using their components the best way, so getting maximum value for the price.
They make those claims looking at RDNA.

2200MHz or even more is probably the default clock for RDNA 2 cards, with that 40% perf-per-watt increase.
 
Whatever fits your agenda

*Checks username*

Larry-David-Laughter-on-Couch.gif
 
If it wasn't for the DF, VGTech, and NXGamer comparison articles, the Xb1 would arguably be in a better position.

In the end, the Xb1 and PS4 delivered comparable experiences power-wise. As did the PS4 Pro and Xbox One X.
Precisely and back then the gap was much bigger (40%) compared to now (17-21%). Why would it be an issue now?
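For anyone wondering where those percentages come from, here is the arithmetic with the usual paper-spec TFLOPS figures (PS5 taken at its 10.28TF maximum boost; the exact numbers aren't quoted in this thread, so treat them as assumptions):

```python
# Relative GPU compute gaps, last gen vs next gen, on paper specs.
ps4, xbox_one = 1.84, 1.31        # TFLOPS
ps5, series_x = 10.28, 12.155     # TFLOPS (PS5 at max boost clock)

print(f"Last gen: {ps4 / xbox_one - 1:.0%} in PS4's favour")       # ~40%
print(f"Next gen: {series_x / ps5 - 1:.0%} in Series X's favour")  # ~18%
```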
 
Precisely and back then the gap was much bigger (40%) compared to now (17-21%). Why would it be an issue now?

Ah, more funny numbers that don't take into account situational performance.

God, I wish I could skip right to the fall. It's going to be glorious. So many mouths writing checks their respective asses can't cover.

Anyway, I just hope Microsoft has the common sense to lower their CPU speeds, downgrade their RAM speed, and remove those extra CUs so their console can compete!
 
The PS4 Pro doesn't actually beat the Xbox One X in performance, because you are not testing equivalent things. You are using subjectivity to hand-wave that it's all about the game's performance, based on two different resolutions.

While I appreciate the developers giving FPS priority for Pro gamers, when it comes down to it the X is better hardware. It's not really even debatable, is it? A game may choose to run at a lower resolution, but technically the X could run that version (if offered) better than the Pro could.

I game at 1080p and I'd rather turn off 4K altogether, so I understand your sentiment. I just don't agree with your methodology.
Performance-wise, RE3 runs better on the Pro than the X... I mean, this is a fact given the numbers shown; performance as in frame rate. Why is this hard to understand, people?
The X has better resolution.
The Pro has better frame-rate performance, hence that version performs better.
 
Last edited: