Does performance scale linearly with frequency with that 35% improvement?
True. Also please kill me.
Arguing and dick measuring is what u can expect for the next 7 years
You've only seen a picture of it
We don't know anything about the launch lineup, let alone the lineup for the rest of the generation
Based on the specs, 3rd party games will definitely look better on XSX; for 1st party we'll have to see, since Sony has amazing studios.
No need to wait and see. This is said every gen, but we know what sequels we are getting from Sony and what studios are working on them, and we know they'll look better than competitors' first parties. It always happens that way, and has since the PS3 days. God of War will be coming and it'll blow shit out of the water, another Naughty Dog game will do the same no matter what it's called, the Horizon Zero Dawn sequel will melt retinas, etc. This isn't blind faith but faith built on games we've already experienced from the same developers across various generations, who've proven they are just on a different level. Sony first parties will look the best.
This isn't how it works though. Things might change. We don't know yet; just look at The Last of Us Part II, for example: indefinitely delayed. Who knows what happens?!
Also, did you see the PS4 launch lineup back then?
This is the PS4 exclusive games from Sony back then:
Sony Computer Entertainment Titles:
Killzone Shadow Fall: 73% at Metacritic
Knack: 54% at Metacritic
at launch.
Later on, so not even at launch, we had more exclusive games like:
Driveclub: 71% at Metacritic
The Order: 1886: 63% at Metacritic
So, it will take years until we see all those games you are talking about! I don't think we will see Horizon 2 or God of War at launch! At least, Sony did not announce anything like that for launch.
So, let's wait and see.
Whoever told you God of War on the PS4 is a blurry mess is an idiot. And you're an idiot for believing it.
I have been told that God of War PS4 looks like a blurry mess, so maybe the next one will be a bit sharper because of the extra TFs.
Time for true cinematic frame rate: 24 FPS locked!
Seriously, I expect most games to run at 60fps on these machines, the graphical benefits of going 30 will not be as obvious this time around.
It's interesting that nobody talks about how the RTX 2080 Ti is not really a 13 TFLOP card, because it rarely ever runs at its max frequency. In fact, most PC GPUs never come close to reaching their theoretical max performance when running workloads. That's one of the fundamental differences between the PC platform and a console. Developers can control every aspect of execution on a console, so you can maximize the efficiency of the hardware. It's all about efficiency, guys. That 13 TFLOP 2080 Ti may only be reaching a throughput of 6 TFLOPs even in the best case (i.e. the most demanding games). This is especially true when you realize that the games it is running are largely designed for MUCH lower hardware specs (the 1.8 TFLOP PS4, for example).
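For anyone who wants to see where those paper numbers come from, the nominal figure is just shader cores times two FLOPs per clock times clock speed. A quick sketch (the 4352-core count and 1545 MHz boost clock are the 2080 Ti's published specs, and 1824 MHz is the average game clock someone cites further down the thread):

```python
# Peak FP32 throughput = shader cores x 2 FLOPs per core per clock x clock speed.
# This is a theoretical ceiling, not what a real workload actually sustains.
def peak_tflops(shader_cores: int, clock_mhz: float) -> float:
    return shader_cores * 2 * clock_mhz * 1e6 / 1e12

CORES_2080TI = 4352
print(f"{peak_tflops(CORES_2080TI, 1545):.2f} TFLOPs at the 1545 MHz published boost clock")
print(f"{peak_tflops(CORES_2080TI, 1824):.2f} TFLOPs at a 1824 MHz average game clock")
```

The point stands either way: these are ceilings, and how close a real game gets to them is a separate question.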
- I think it's a foregone conclusion that the RDNA 2 PC GPUs coming this year will indeed be clocked above the PS5 clocks (rumblings suggest well above 2.5 GHz). AMD has suggested as much, and the PS5 just shows what's possible even in a closed box with limited power.
- People still talk about the PS5 "variable clocks" like it's something so foreign. In the PC space, both the CPU and GPU operate with varying clock speeds during execution. This is just how it works, but it is a foreign concept in a console. However, based on what Cerny has said, the PS5 clocks will likely be less variable than their PC counterparts. The typical operating frequency will be at the caps of 3.5 GHz and 2.23 GHz for the CPU and GPU. This has the benefit of saving a ton of power when the workload does not demand full frequency, while keeping the consistent experience expected from a console.
- I pointed out that the diminishing returns on clock speed increases on PC are largely due to external factors like power gating and board design. Since a console is designed from the ground up, they can design around such limitations when building the box. There is of course still a curve when it comes to clock speed gains that will hit a wall at some point. But if Sony designs the PS5 around the 2.23 GHz value, it can ensure that the box has everything around it to maximize performance at that frequency.
This is also why we consistently see console games that seem to punch above their weight and do things thought not possible on such low-end hardware. We are implicitly comparing that to PC standards (i.e. what a 1.8 TFLOP GPU can do on PC), which is wrong since it's entirely dependent on the software it runs, and software designed for a console is constructed differently than software on a PC in many ways. That God of War game, for example, is able to maximize those 8 Jaguar cores and 1.8 TFLOPs of GPU performance to a degree that PC software generally doesn't come close to.
The Order only 63% at Metacritic? Why? Superb graphics, great atmosphere and fun shooting mechanics. I hope we will see a sequel to The Order on PS5.
The difference between the two consoles:
- Lower resolution
- Lower FPS
- Less ray tracing
- Less detail
That's about it, most won't even care, just buy what you like.
Let's not forget pricing: more RAM means a higher price. I never believed that 24GB RAM speculation.
The difference will be negligible
PS4 and Xbox One - 40%
PS5 and XSX - 15-20%
RT on AMD hardware is tied to CU count.
Not sure if serious? The only difference I see is some 20% fewer pixels for PS5.
While that is all true, we don't know yet if PS5 will have VRS and ML or not.
I don't understand where the 15% is coming from when the XSX teraflop number is constant; even in the best-case scenario, the PS5 will never shrink this difference down to just 15%, because the XSX will never have to be clocked down to 11.8 teraflops (10.28 + 15% = 11.8).
Also
- Xbox Series X has 25% more VRAM bandwidth (bandwidth is a huge factor)
- Xbox Series X has 45% more CUs (RDNA 1 tests showed that adding just 4 CUs beats a frequency bump; imagine 16 more)
- Xbox Series X has Variable Rate Shading, which has shown significant performance improvements on PC; on console it will be much more.
- Xbox Series X has DirectML (Direct Machine Learning), which will improve visual fidelity like DLSS (Deep Learning Super Sampling) does.
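Just to put arithmetic behind the percentages being thrown around: the "15%" and the raw TFLOP gap are the same difference read from opposite directions, and the bandwidth and CU claims follow straight from the published specs. A quick sketch, spec numbers only, nothing about real-world performance:

```python
# Published paper specs: XSX 52 CUs @ 1.825 GHz (~12.15 TF), PS5 36 CUs @ up to 2.23 GHz (~10.28 TF)
xsx_tf, ps5_tf = 12.15, 10.28
xsx_bw, ps5_bw = 560, 448          # GB/s; the XSX figure is for its 10 GB "GPU optimal" pool
xsx_cu, ps5_cu = 52, 36

print(f"XSX has {xsx_tf / ps5_tf - 1:.0%} more TFLOPs than PS5")   # ~18%
print(f"PS5 has {1 - ps5_tf / xsx_tf:.0%} fewer TFLOPs than XSX")  # ~15%, which is where the 15% comes from
print(f"XSX has {xsx_bw / ps5_bw - 1:.0%} more peak bandwidth")    # 25%
print(f"XSX has {xsx_cu / ps5_cu - 1:.0%} more CUs")               # ~44%

# For scale, last generation: PS4 1.84 TF vs Xbox One 1.31 TF
print(f"PS4 had {1.84 / 1.31 - 1:.0%} more TFLOPs than Xbox One")  # ~40%
```

So nothing needs to be "clocked down to 11.8"; the 15% is simply the PS5 figure expressed relative to the XSX figure.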
XSX has a split RAM pool, and PS5's SSD cache streaming is doing the same thing VRS is doing, but likely better.
How many people here believe that Sony under-delivered on the PS5's graphical capabilities based on the 10.3 TFLOP metric? Do you subscribe to the notion that the PS5's GPU is nothing more than an RX 5700/XT, or that it's only on par with today's midrange GPUs?
This post is about providing real data and educated estimates to dispel the notion that the PS5 GPU is only a "midrange" GPU that is not on par with today's top-tier commercial GPUs. Looking at the TFLOP number in isolation is indeed very misleading, and the truth about the actual performance of the GPU paints a very different picture. I know many of you don't know me, but I can say that I am not just pulling this info from nowhere. I have over 15 years of experience working in gaming and have spent nearly 5 years of my career doing critical analysis of GPU performance. Take it for what it is.
Before I begin, a full disclaimer: this post is not a comparison to or commentary on the Xbox Series X. No console fanboyism here please. The fact is, the Xbox Series X has a bigger GPU with more theoretical horsepower. Period. Nobody is refuting that, so no Xbox defense force please.
Like many, I too was initially somewhat disappointed when I first heard the PS5 specs, mainly because there was so much information beforehand that pointed to more performance being a possibility. We had all been hearing about the 12-14 TFLOP monster that Sony was building, though honestly it's not the raw numbers that matter. I was more happy about the idea that both consoles would come out really close in power, which benefits gamers by establishing a high baseline where neither machine will have subpar 3rd party releases. But after taking some time to process the specs and information Sony released, as well as doing some in-depth analysis, I am pretty happy with what Sony ended up with from a GPU standpoint.
Let me be clear: the goal of what I'm presenting here is not to define an absolute performance metric for the PS5 with a given piece of content. In other words, I am not trying to predict that the PS5 can run game X at Y fps specifically. That is impossible, since there are so many variables affecting overall performance that we do not know about: CPU, memory, drivers, other console-specific optimizations, etc. Instead, what I am doing is establishing a realistic baseline expectation for the GPU specifically, by looking at known real-world performance data from comparable hardware.
How am I doing this? Let me break it down:
1. Let's establish a comparison mapping to other known GPUs, based on their architectures and theoretical compute power:
- We know that AMD's RDNA architecture delivers a general 25% increase in performance per clock compared to GCN -> 1 TFLOP (RDNA) = 1.25 TFLOP (GCN)
- We know that RDNA 2 will be even more efficient than RDNA (i.e. perf per clock and per watt will be better). Now we can guess how much more efficient based on some actual hints from Sony and AMD:
- Mark Cerny himself, during the PS5 tech dive, revealed that each CU in the PS5 GPU is roughly 62% larger than a PS4 CU. Thus, there is the equivalent of 58 PS4 CUs in the PS5, i.e. 36 CUs (PS5) = 58 CUs (PS4). Now, 58 CUs running at the PS5's 2.23 GHz frequency => ~16.55 TFLOPs (GCN). So what is the conversion factor to get from 10.28 TFLOPs (RDNA 2) to 16.55 TFLOPs (GCN)? Well, it turns out that the additional perf per clock to reach that ratio is precisely 17%. So by this data: 1 TFLOP (RDNA 2) = 1.17 TFLOP (RDNA 1)
- AMD has already said that they are pushing to deliver a similar improvement with RDNA 2 over RDNA 1 as we saw from GCN to RDNA 1. They have also confirmed that RDNA 2 will see a 50% improvement in perf/watt over RDNA 1. GCN to RDNA 1 saw a 50% perf/watt and 25% perf/clock increase. A further 25% increase in perf/clock for RDNA 2 sounds pretty ambitious, so I will be more conservative, but we can use this as an upper bound.
- AMD has talked about mirroring their GPU progression to that of their CPU. They have specifically talked about increasing CPU IPC by roughly 15% every 12-18 months. The 10-15% range is typical of GPU generational transitions in the past
- Using the 25% ratio of RDNA to GCN and a 15% ratio of RDNA 2 to RDNA 1, we can calculate the equivalent amount of theoretical performance (i.e. TFLOPs) for the PS5 GPU in terms of both RDNA and GCN performance:
PS5 TFLOP (RDNA 2) = 10.28 (the actual spec)
PS5 TFLOP (RDNA 1) = 12.09 (used to compare against the RX 5700 and RX 5700 XT)
PS5 TFLOP (GCN) = 16.13 (used to compare against the Radeon VII and PS4)
2. We can also note that it is actually easier to guesstimate the PS5 GPU performance because there is a GPU on the market very similar to it: the RX 5700. The GPU config in terms of CU count, number of shader cores, memory bus size, memory bandwidth, etc. is an exact match for the PS5. At a high level, the PS5 is simply an extremely overclocked RX 5700 in hardware terms. Now, typically on PC, overclocking a GPU gives limited returns due to power issues and system design limitations that will not exist in a console. So, noting that the PS5's typical GPU clock of 2.23 GHz is ~34% higher than the typical RX 5700 clock of 1.670 GHz, we can extrapolate PS5 performance as roughly 35% higher than that of an RX 5700. However, that raw translation does not account for RDNA 2's additional efficiencies, so if we add the 15% efficiency uplift we get a pretty good idea of the PS5 GPU performance. It turns out that this projected value is pretty much identical to the TFLOP conversion factors computed above.
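If it helps to see that conversion spelled out, here's a rough sketch of the arithmetic described above. The 1.25x GCN-to-RDNA factor, the ~1.17x RDNA-to-RDNA-2 factor and the 62% CU-size figure are this post's own assumptions, so treat the outputs as illustrations of the method rather than measurements:

```python
# Rough sketch of the equivalence math above (the factors are the post's assumptions).
PS5_TF_RDNA2 = 10.28      # 36 CUs x 64 shaders x 2 FLOPs x 2.23 GHz
PS5_CLOCK_GHZ = 2.23

# RDNA 1 equivalent, used to compare against the RX 5700 / 5700 XT
RDNA2_OVER_RDNA1 = 1.17
print(PS5_TF_RDNA2 * RDNA2_OVER_RDNA1)                 # ~12.0 "RDNA 1 TFLOPs"

# GCN equivalent via the CU-size route, used to compare against the PS4 / Radeon VII
gcn_equiv_cus = round(36 * 1.62)                       # ~58 PS4-style CUs
print(gcn_equiv_cus * 64 * 2 * PS5_CLOCK_GHZ / 1000)   # ~16.5 "GCN TFLOPs"

# Cross-check from point 2: overclocked RX 5700 plus the assumed 15% RDNA 2 uplift
RX5700_TYPICAL_CLOCK = 1.670
print(PS5_CLOCK_GHZ / RX5700_TYPICAL_CLOCK * 1.15)     # ~1.53, i.e. ~53% above an RX 5700
```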
3. Now that we have a quantitative comparison point, we can calculate a projected PS5 performance target based on the theoretical performance of comparable GPUs. For example, RX 5700 XT = 9.7 TFLOPs (RDNA 1) and PS5 = 12.09 TFLOPs (RDNA 1), which puts the projected PS5 performance at ~25% higher than an RX 5700 XT. Using these calculations for other GPUs as reference points, we get the following:
PS5 vs RX 5700 = 153% (PS5 is 53% faster)
PS5 vs RX 5700 XT = 125% (PS5 is 25% faster)
PS5 vs Radeon VII = 120% (PS5 is 20% faster)
PS5 vs PS4 = 8.76x (PS5 is nearly 9x faster)
PS5 vs Xbox Series X = -15% (PS5 is 15% slower)
4. Finally, now that we have a performance factor for some common GPUs across various AMD architectures, we can see where the projected PS5 performance ranks against the fastest cards on the market, including Nvidia cards. I've looked at several industry aggregate sources such as Eurogamer, TechPowerUp, and GPUCheck (numerous games tested), as well as a couple of high-profile games such as DOOM Eternal, Call of Duty: Modern Warfare, and Red Dead Redemption 2, to see where the PS5 performance would fall. I've done this analysis across numerous performance metrics, resolutions, and the different GPU references defined above to check that the data was consistent. The goal here was to identify which GPU currently on the market has the closest performance to the projected PS5 performance. I've highlighted the 4K rows since 4K is the target resolution for the PS5. The summary table shows which GPUs came closest to the projected PS5 performance at different resolutions. The raw results are below:
**Note: Game performance was captured from TechPowerUp benchmark analysis using max settings at all resolutions.
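For anyone who wants to recompute the relative figures above, here's a small sketch. The PS5 equivalent TFLOPs are the post's numbers; the reference-card TFLOPs (RX 5700 ~7.95, RX 5700 XT ~9.7, Radeon VII ~13.44, PS4 1.84, XSX ~12.15) are the commonly quoted boost-clock values filled in by me, so expect small rounding differences:

```python
# Relative performance = PS5 equivalent TFLOPs / reference card TFLOPs (same architecture family).
PS5_RDNA1_EQUIV = 12.09
PS5_GCN_EQUIV = 16.13
PS5_RDNA2 = 10.28

references = {
    "RX 5700 (RDNA 1, ~7.95 TF)":        PS5_RDNA1_EQUIV / 7.95,
    "RX 5700 XT (RDNA 1, ~9.7 TF)":      PS5_RDNA1_EQUIV / 9.7,
    "Radeon VII (GCN, ~13.44 TF)":       PS5_GCN_EQUIV / 13.44,
    "PS4 (GCN, 1.84 TF)":                PS5_GCN_EQUIV / 1.84,
    "Xbox Series X (RDNA 2, ~12.15 TF)": PS5_RDNA2 / 12.15,
}
for name, ratio in references.items():
    print(f"PS5 vs {name}: {ratio:.2f}x")
```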
Key Takeaways:
So is the PS5 GPU underpowered? The data shows that actual game performance is roughly around an RTX 2080 Super at a minimum in most cases, which is currently the 2nd fastest commercially available GPU on the market! Anyone who can call that underpowered or "midrange" is...not very knowledgeable on this topic. Yes, by the same analysis the Xbox Series X would be matching or exceeding an RTX 2080 Ti, which is amazing! The point here is that both consoles will have plenty of graphical horsepower, and the PS5 in general is still a significant step up from anything AMD has released to date and a generational leap over the PS4!
- General takeaway is that in most cases at higher resolutions, the PS5 performance is actually slightly higher than that of the RTX 2080 Super.
Note that the 1080p values are a bit misleading, since some games are CPU-bound at that resolution. Most GPUs exhibit lower perf there, which is why the RTX 2080 Ti was the closest at 1080p.
These numbers do not take into account other factors that can improve PS5 GPU performance even further, such as: GPU-specific optimizations, console-specific optimizations, a lower-level driver compared to PC, I/O throughput improvements in the PS5, the memory subsystem, etc.
This analysis is just a rough estimate and, again, is not to be taken literally in terms of actual performance in games. There are still a ton of variables and unknown factors. But it does account for known information to give a good relative performance baseline and set expectations for how much performance the PS5 GPU may possess. The answer is that it is definitely not "just an RX 5700 XT" and will likely have more performance than a 2070 Super.
My analysis went well beyond these websites, game titles, and reference GPUs. I presented the highlights, but the overall takeaway from the additional data is the same: performance is most in line with an RTX 2080 Super.
Everyone should be excited, but please stop spreading FUD about PS5 performance.
The 560 GB/s is only in ideal situations, and if developers are already complaining about its setup then it's not unified.
10GB dedicated for VRAM at 560GB/s bandwidth.
Why would it matter if it has split memory? It only matters if 10GB is not enough for VRAM and it has to dig in to the slower pool, but that slower pool will be for OS (2.5GB reserved) and 3.5GB for sound and CPU functions.
The most demanding games running at 4K on high end PCs today (2080 Ti) are taking up around 3-4GB of VRAM with all settings set to ULTRA, so you can see that even years down the road where this number triples, there's still room before they fill these 10GB of full speed bandwidth. PS5 is limited to 448GB/s from day ONE.
And NO, SSD ram is for fast loading and instant level loading, it will NOT replace GDDR6 functions, that's just crazy talk.
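On the "ideal situations" point: whether the split pool matters comes down to how much GPU traffic ever has to touch the slower 6GB. Here's a toy weighted-average view; the traffic split is a made-up knob, not a measured figure, and real contention (the CPU sharing the bus, etc.) is messier than this:

```python
# XSX memory: 10 GB at 560 GB/s ("GPU optimal") + 6 GB at 336 GB/s. PS5: 16 GB at a flat 448 GB/s.
def blended_bandwidth(fast_fraction: float, fast_bw: float = 560.0, slow_bw: float = 336.0) -> float:
    """Naive weighted average if some fraction of GPU traffic lands in the slow pool."""
    return fast_fraction * fast_bw + (1.0 - fast_fraction) * slow_bw

for fast_fraction in (1.0, 0.9, 0.8, 0.7):
    print(f"{fast_fraction:.0%} of traffic in the fast pool -> ~{blended_bandwidth(fast_fraction):.0f} GB/s")
# 100% -> 560, 90% -> ~538, 80% -> ~515, 70% -> ~493 (PS5 sits at a flat 448 for comparison)
```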
Only Idiots believe PS5 is "underpowered"
Also, like I said, it's the same thing VRS is doing.
I don't know why Richard did that. I know we don't have RDNA 2 cards, but RDNA 1 card testing is completely irrelevant, in addition to the customizations in both consoles.
Well, DF did a test with RDNA 1.0 cards, where they showed that 10 TF from 36 compute units leads to way less performance than 10 TF from 40 compute units, so overclocking doesn't scale very well, at least for RDNA 1.0 cards.
We do not know how it works with RDNA 2.0 cards. We will see at the end of the year when the new cards launch; then DF will make a new test.
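For context on what that DF test was comparing: the same paper TFLOP number can be hit with fewer CUs at a higher clock or more CUs at a lower clock, and the test pitted those two shapes against each other on RDNA 1. A quick sketch of the arithmetic (the clocks here are just solved to hit ~10 TF, not DF's exact settings):

```python
# Paper TFLOPs for an RDNA GPU = CUs x 64 shaders x 2 FLOPs per clock x clock (GHz) / 1000
def rdna_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

TARGET_TF = 10.0
for cus in (36, 40):
    clock = TARGET_TF * 1000 / (cus * 64 * 2)
    print(f"{cus} CUs need ~{clock:.2f} GHz to reach {rdna_tflops(cus, clock):.1f} TF")
# 36 CUs -> ~2.17 GHz, 40 CUs -> ~1.95 GHz: same paper TFLOPs, different shape.
# DF's RDNA 1 numbers had the wider, lower-clocked config coming out ahead in practice.
```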
Dumb fuks will be dumb fuks.
The PlayStation 5 GPU is not underpowered; the PlayStation 5 GPU is extremely overclocked. Sony should have sold the PlayStation 5 as an 8-9.2 TF console, which is probably what the PlayStation 5 was meant to be in the first place and which would still be great. But now they probably have heat issues and a variable GPU clock instead, which can't be optimal.
The same idiots that think the xsx ssd is "underpowered" compared to the PS5.
Consoles are different and shouldn't be compared.
A console is by nature an ideal situation. When would a situation NOT be ideal? When VRAM exceeds 10GB? With The Witcher 3 running at 4K with everything set to Ultra not using more than 3GB of VRAM, it's clear that 10GB at 560GB/s is enough even IF they decide to just use that pool and nothing else.
XB1's real bandwidth was 102 GB/s. They tried to say it was over 200 at one point, but everyone figured out that you couldn't reach those speeds for read and write.
Depends how much the RAM bandwidth of the XSX comes into play. While the PS4 had a sizable GPU advantage and it did benefit from having a single fast pool of RAM, the effective speed of the X1's RAM was about 150GB/s.
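The headline numbers being argued about here all come from the same formula, bus width times per-pin data rate. A quick sketch with the published figures; note the Xbox One line is the DDR3 bus only and doesn't count the small 32MB ESRAM pool that the "over 200" marketing figure leaned on:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 x per-pin data rate (Gbps)
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 2.133))  # Xbox One DDR3-2133:    ~68 GB/s (plus the 32MB ESRAM)
print(bandwidth_gb_s(256, 5.5))    # PS4 GDDR5:              176 GB/s
print(bandwidth_gb_s(256, 14.0))   # PS5 GDDR6:              448 GB/s
print(bandwidth_gb_s(320, 14.0))   # XSX GDDR6 (10GB pool):  560 GB/s
```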
Provided there is bandwidth, then it should.
Does performance scale linearly with frequency with that 35% improvement?
RTX 2080 Ti has a 1824 MHz average clock speed, hence 15.876 TFLOPS average.
Microsoft has the same design intention of having the right LOD at the right time and only when it's needed. That's the Xbox Velocity Architecture (fancy marketing name). The idea is to have 100GB of textures available just in time when they're needed. No more wasting of RAM resources. You can have a vast landscape with the highest quality LOD you want, but the GPU still needs enough TF and/or BW to render it.
We'll see whose 1st party gets the better results.
It's still irrelevant with that much more efficiency, in addition to the customizations for the PS5 and XSX.
Right, they're closed platforms
Completely irrelevant? Hyperbole much? This isn't GCN vs RDNA, which have completely different setups. This is RDNA vs RDNA; it's the same architecture, with version 2 adding a few more features to its set.
If you're expecting the outcome to do a complete 180 in terms of whether it scales positively or negatively with CU count, then you'll be sorely disappointed. It might actually scale even more with CUs, given how each CU will be doing more in RDNA 2.
AMD has a patent for VRS; it isn't exclusive to MS. Only the DX12 VRS extension is.
The point is that if RDNA 2 scales even more towards CUs, then it stands to reason that XSX's larger CU count will favor it. The customization is not related to the way the CUs perform, but rather to whether it has features like Variable Rate Shading, something that's exclusive to and patented by MS.