OP sounds like he needs to excuse the PS5's lack of raw power.
Theoretically, you might have found the most efficient answer as to why this thread exists.
Question though, what is slightly better than 4K?
As the graphical improvements come along, resolution on both consoles will dip below 4K on a dynamic basis.
I could easily see a lot of games being 1800p on XSX and 1600p on PS5, and I don't see anyone being able to tell the difference just by looking at them.
They will leave it as is like they did in the PS2/OG Xbox gen, or they will find ways to utilize the extra headroom like some games did in that gen (SC: Chaos Theory).

That is theoretically if the game is developed first on Series X specs.....
What if a game is developed on PS5 first?
Do you know what theoretically will happen?
I love this bottleneck theory. Tell me about the bottlenecks the Series X has.

I see bigger number therefore better.
Seriously, cache scrubbers, greater I/O efficiency, large and fast data decompression and CU occupancy will make a demonstrable difference.
Series X will have some advantages, no doubt, but I mostly see them coming down to output res and ray tracing. For sheer fidelity, though? I see Sony having the edge there; the amount of data it can stream with the major bottlenecks gone - on tap and on the fly - is crazy.
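To put rough numbers on that streaming argument, here's a quick sketch of effective I/O once hardware decompression is in play. The raw speeds are the publicly quoted specs; the "typical" compressed figures are the vendors' own claims, so treat the multipliers as ballpark, not gospel:

```python
# Effective I/O sketch: hardware decompression multiplies what the raw
# SSD link delivers. Raw speeds are the publicly quoted specs; the
# "typical" compressed figures are vendor claims, not measurements.
ps5_raw, ps5_typical = 5.5, 8.5   # GB/s (Sony quotes 8-9 GB/s typical with Kraken)
xsx_raw, xsx_typical = 2.4, 4.8   # GB/s (Microsoft quotes 4.8 GB/s with BCPack)

for name, raw, typ in [("PS5", ps5_raw, ps5_typical), ("XSX", xsx_raw, xsx_typical)]:
    print(f"{name}: {raw} GB/s raw -> ~{typ} GB/s effective ({typ / raw:.1f}x)")
```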
Please show evidence that Xbox “brute forces” while PS5 “doesn’t follow the standard”. Xbox having more TFLOPs doesn’t mean “brute force”. Does the high PS5 clock indicate brute force?

On paper, I feel MS created a pseudo-PC called Xbox Series X; it's just a console brute-forcing its way to graphical and gaming performance.
Sony, meanwhile, is aiming to eliminate bottlenecks and latency with revolutionary hardware that doesn't follow the PC standard, but is instead built to be a true console gaming experience.
That is my judgment on this..... but I feel the games will be what shows the "result" of this approach.
PS4 Pro was the lead console this gen, yet all devs gave the game a bit extra to take advantage of the Xbox One X's extra HP.

That is theoretically if the game is developed first on Series X specs.....
What if a game is developed on PS5 first?
Do you know what theoretically will happen?
For those that want the TLDR:
"Look, I'm not saying that either system is more powerful but the PS5 is more powerful because Sony devs praised it and also TFLOPS are just like, theoretical and stuff, it is all about efficiency and removing bottlenecks not TFLOPS. Again, not claiming that one is more powerful but the PS5 will be more powerful."
I'm not sure why some people don't want to accept it, but the XSX is the more powerful of the two.
More powerful GPU, faster CPU, faster RAM bandwidth.
The real question is will anyone really be able to tell the difference between the two.
If the XSX is 18% more powerful, has a slightly better resolution, and manages a couple more frames per second, is anyone other than DF, with their pixel counting and FPS monitoring, going to be able to notice?
I personally don't think so.
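For what it's worth, here's a back-of-the-envelope sketch of what an ~18% compute gap buys in resolution, assuming (and it is just an assumption) that GPU cost scales linearly with pixel count:

```python
# Resolution scaling sketch: if per-pixel cost is equal, pixel count
# scales with peak TFLOPs (a simplification, but fine for a ballpark).
xsx_tf, ps5_tf = 12.155, 10.28        # publicly quoted peak TFLOPs
native_4k = 3840 * 2160               # 8,294,400 pixels

ps5_pixels = native_4k * (ps5_tf / xsx_tf)   # same per-pixel workload
ps5_height = (ps5_pixels * 9 / 16) ** 0.5    # back out a 16:9 height

print(f"If XSX holds native 2160p, PS5 lands around {ps5_height:.0f}p")  # ~1986p
```

Roughly 2160p vs ~1986p: exactly the kind of gap only DF's pixel counting would catch.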
I mean, since when can an SSD draw more pixels, and draw them faster? I hope you do realise that you still need a GPU/CPU to draw something on the screen. Yes, you can probably do cool tricks with a faster SSD (like we've seen in Ratchet and Clank), but otherwise a powerful GPU is still needed.

Really it comes down to...
PS5 will only ever be more performant than Series X if developers totally change the way they make games in a way that means performance scales linearly with SSD speeds.
It's only going to be Sony first party that take advantage of the PS5's SSD 100% (because they don't need to worry about a slower SSD target from another system), but then on the flipside, Xbox first party can do things with the raw GPU power that Sony first party needs to make up for with the SSD.
In other words, if we were approaching it from "game design won't really change thaaat much in that short a space of time" then in a vacuum where all talent and budgets are equal, Microsoft should have the better looking/visually impressive (IQ or framerate or a mix of the two).
But yeah, all we have right now is a battle of "My dad's TV is bigger than your dad's TV", so it's kinda pointless to keep banging on about it.
I love this bottleneck theory. Tell me about the bottlenecks the Series X has.
It's the same architecture. It's there for the developers to take advantage of. If they choose not to, then we have lazy devs again lol.

That's kinda the point of his post.
Sure, the Series X is theoretically more powerful. That's undeniable. And you're right, the difference in that processing power may actually not amount to much: native 4K vs 82% of 4K with checkerboarding, or maybe 1 billion ray bounces vs 800 million. You're right, we'll probably need DF to tell us.
But the question the OP poses is that you need to consider, in addition to the theoretical maximum performance, how easily you can extract that performance. Because no computer system runs at 100% performance continuously. So for the sake of argument, say that on average Series X gets 75%; then that's 9.1 TFLOPs. But if PS5 gets 90%, that's 9.2 (numbers plucked out my arse to make the point).
The OP's point is that efficiency is just as important as max power and can equalise things. This exact scenario played out in real life with both the PS3 and the Saturn, which were outperformed by theoretically weaker systems.
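Spelling out that arse-pulled arithmetic (only the peak TFLOPs are real quoted specs; the utilisation percentages are the made-up illustration from the post above):

```python
# Effective throughput = peak TFLOPs x average utilisation.
# Utilisation figures are illustrative, per the post above.
xsx_peak, xsx_util = 12.155, 0.75
ps5_peak, ps5_util = 10.28, 0.90

print(f"Series X effective: {xsx_peak * xsx_util:.2f} TFLOPs")  # the ~9.1 above
print(f"PS5 effective:      {ps5_peak * ps5_util:.2f} TFLOPs")  # the ~9.2 above
```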
It's been quite a while since I read a wall of text. And even more time since I enjoyed one so much.

The discussion on which next gen console is more "powerful" has been heating up lately, with most believing the Xbox Series X to be more powerful solely on higher spec counts in certain categories. Yet some folks counter that the custom hardware in the PS5 will alleviate some of its relative performance deficit and the difference will be minimal.
Before I proceed, let's really think about what we mean by "powerful" in this context, because it could mean several different things. People tend to just toss numbers around and say "system X has more TFLOPs so it's more powerful" or "system Y can run at higher framerates so it's more powerful". It is an important distinction in the context of the next generation consoles since both systems have advantages in different areas.
For this discussion, I want to focus on actual game performance as the goal, meaning which system can actually process the most data in the shortest amount of time. This will yield richer worlds at higher framerates. Thus, I am getting away from the theoretical, the TFLOPs, and the high level specs, and focusing on which system ultimately runs games with the same or higher details and higher framerates.
Now of course let me state the obvious: at this point, nobody really knows which system is more powerful between the Xbox Series X and PS5. Why? Because nobody has seen both running in final hardware form up close with the same game side by side to do a comparison. So I'm not here to declare either one as more "powerful" but just to check some folks on claiming one as superior solely based on numbers on paper or video streams.
Now many people in the know, including developers, have said this, but let me reiterate: virtually no real world game running on any system does so in a manner which utilizes 100% of that system's capability at all times. As beautiful as TLOU2 or God of War looks on PS4, it is completely incorrect to think that either of those games is extracting the maximum 1.8 TFLOPs of GPU power for any sustained period. Yes, even if the clock speeds are fixed, the actual utilization is based on the software running on it.

For example, I can have a 5 GHz CPU and a 2 GHz top of the line GPU running a simple 'for' loop or a simple binary search algorithm. Does that mean the system is running at its theoretical 14 TFLOPs while executing those few lines of code, simply because its frequencies are locked? Theoretically, I could build a 15 PetaFLOP machine (15,000 TFLOPs) that is several orders of magnitude more powerful than anything on the market today. But if all it could play were Wii games by design, would that be a system which is utilizing its full potential? Would that be next gen?
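To see how far apart "theoretical peak" and "what the code actually does" can be, here is a tiny sketch of that 'for' loop point; the CPU figures are made-up illustrative numbers, not either console's specs:

```python
# Gap between a machine's theoretical peak and what naive code extracts.
import time

cores = 8
clock_hz = 3.5e9         # assumed 3.5 GHz, 8-core CPU (illustrative)
flops_per_cycle = 16     # assumed 16 FLOPs/core/cycle via SIMD+FMA (illustrative)
peak = cores * clock_hz * flops_per_cycle

n = 5_000_000
start = time.perf_counter()
total = 0.0
for _ in range(n):       # one floating-point add per iteration
    total += 1.0
elapsed = time.perf_counter() - start

achieved = n / elapsed   # FLOP/s this loop actually performed
print(f"Theoretical peak: {peak / 1e9:,.0f} GFLOP/s")
print(f"This loop:        {achieved / 1e9:.3f} GFLOP/s "
      f"({achieved / peak:.5%} of peak)")
```

Run it anywhere and the loop lands at a microscopic fraction of the assumed peak, which is the whole point: locked clocks do not mean saturated hardware.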
The point here is something that I've mentioned several times in this forum and I think a lot of people miss. When we really think about "next gen" gaming and transitioning to a new generation, it really isn't the hardware that achieves those milestones. It's the actual software/games that truly define a new generation. We don't remember the specs of the N64 and how much more horsepower it had over the PS1, but we remember how seeing Super Mario 64 for the first time took our breath away. Try as we might, few people could look at Mario 64 in motion and translate that to exactly what hardware specs made it possible, or to how any theoretical advantages over competing hardware were showing up in the images being rendered in front of them. The same could be said of the move to PS2: it was seeing GT3, Metal Gear Solid 2, and GTA III that defined what "next gen" really meant for that generation. It was not a GFLOP count or marketing buzzwords like "Emotion Engine". We could go on with seeing Halo for the first time, Gears of War, Uncharted 2, and Killzone Shadow Fall in later generations, but you get my point. But here is the question: if you didn't know the hardware specs of the system running those games, would that change how you looked at that system? In other words, if Kojima today mentioned that MGS2 on PS2 only used <1 GFLOP of performance, would you now look at the PS2 as being "weaker" than the Dreamcast (capable of a theoretical 1.4 GFLOPs) even though it clearly looked better than anything on the Dreamcast at that time?
Thinking along those lines, we should realize that all of this talk about TFLOPs and theoretical numbers is really moot at the end of the day and misses the point. If we understand that maximum theoretical numbers are quite meaningless in determining actual real game performance, and we agree that real world performance, or demonstrative power, is actually more meaningful to evaluate, then we should be focusing on which system will actually be able to deliver its theoretical performance best to the screen. There are indeed a tremendous number of system components and variables that all have to play nice and align perfectly for a system to operate at its maximum capacity. In truth, it almost never happens with real workloads, but the systems that are perceived to be the most "powerful" are generally the ones that have come closest to their theoretical maximums… meaning the ones that are most efficient. That truly is the name of the game… removing bottlenecks and creating a balanced system whose parts work together effectively is really the art of designing a game console (or any system).
I recently got into a back and forth with someone who shouted at me: Xbox Series X is clearly more powerful because "The numbers don't lie". I literally LMAO'd and shouted back "LOL. YES THEY DO!" There are countless examples of this, and many on this forum have posted PC GPU comparisons demonstrating a lower TFLOP GPU outperforming (in real games) a higher TFLOP GPU, etc. But there are 2 examples I want to remind people of in particular:
- The first and more recent example of "numbers telling lies" is the PS3 and Xbox 360 comparison. Now on paper, there is no denying that the PS3 had a much higher theoretical performance ceiling when you factored in the Cell, its SPUs, and the RSX GPU. Yet most multiplatform games ran better on the Xbox 360. Why? Because the X360 was a much more balanced system that allowed developers to extract more performance with less effort than the PS3. In other words, its "power" was much more accessible and the system more efficient. Its unified memory, symmetrical CPU design, and larger GPU with more parallel pipelines meant there was more power on tap in the X360. This was evident in many third party games throughout the generation, but it was especially evident in the first few years (anyone remember Madden 07 running at 60fps on X360 vs only 30fps on PS3?). Other big titles such as Red Dead Redemption, Skyrim, Assassin's Creed, and many more ran at lower resolution and/or lower framerates on the PS3. One way to categorize this at a high level of abstraction (not literal figures, just an example to illustrate the point) is that 70% of the Xbox 360 was better than 40% of the PS3.
- For those old enough to remember, the second major example of this was the original PS1 vs the Sega Saturn. People may not remember, but on paper the Sega Saturn was superior to the PS1 in almost every respect! More polygon pushing power, higher pixel fillrate, more RAM and VRAM, better sprite processing, higher maximum resolution and more! Yet and still, the vast majority of 3rd party multiplatform games looked and ran better on the PS1. Games like Tomb Raider, Resident Evil, and Wipeout are just some examples where the Saturn version had poorer performance or was missing visual elements altogether. Why was this? Again, the Saturn was notoriously difficult to develop on, and it was particularly hard to harness its max potential. It featured dual CPUs that were very tricky to code for, and in fact most developers literally ignored the 2nd processor altogether, reducing the theoretical performance of the system by a tremendous amount. The PS1, on the other hand, was well balanced and easy to get the desired level of performance out of. For developers, you got much more out of it with less effort. Again, a high level abstraction: 60% of the PS1 was a lot better than 30% of the Saturn.
So how does this relate to the current discussions around PS5 and Xbox Series X? Again let me reiterate: I'm not saying that one is more powerful than the other. In fact, by my own comments in this thread, I cannot say that until I've seen games running side by side on both. I believe, like many, that both will have their advantages in different areas. But we've been hearing and talking a lot recently about how so many developers seem to be singing the praises of the PS5, using big hyperbolic words like "masterpiece", "best ever", "dream machine" etc. The general excitement from the development community around the PS5 seems tangible, and there isn't that same vibe at this time around the Series X (despite the higher spec numbers). Why is that?
We've heard things mentioned about the PS5 such as: it's one of the easiest systems ever to develop on, it's very easy to get the power out of it, it removes many of the bottlenecks that have existed for many years, it frees developers from design constraints that they have been working around for decades, etc. These kinds of statements all point to a system that will be extremely efficient and allow developers to harness more power in less time and with less effort. The fact that we haven't heard the same sorts of statements around Series X leads me to believe that the PS5 is in fact the more efficient of the two.
This means that you can get much closer (still not likely 100%) to that 10.28 TFLOPs of GPU power more consistently in actual workloads. This means that you can devote much more of those 8 Zen 2 cores to meaningful work that the player will see, as opposed to "under the hood" tasks around data management, audio processing, etc. This means that you can actually achieve near 100% of the theoretical SSD read/write speeds without the traditional bottlenecks that have existed with HDDs in games for years. This means that you can get much more efficient use out of the physical RAM allotment because fewer wasteful or unnecessary assets are taking up space.
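To make the SSD point concrete, here is what those read speeds translate to per rendered frame. The 60fps target and the HDD baseline are assumptions for illustration, not quoted specs:

```python
# Per-frame streaming budget at a given frame rate.
fps = 60
rates = {
    "PS5 SSD (raw)":        5.5e9,   # bytes/sec, quoted raw throughput
    "PS5 SSD (compressed)": 8.5e9,   # Sony's "typical" figure with Kraken
    "Last-gen HDD":         100e6,   # ballpark mechanical drive (assumption)
}
for name, rate in rates.items():
    print(f"{name:>22}: ~{rate / fps / 1e6:5.1f} MB of fresh data per frame")
```

Roughly 90-140 MB of fresh assets every frame versus a couple of MB from a hard drive; that is the bottleneck being removed.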
The people that truly follow what I'm saying in this thread will realize that these things are much more exciting to both a developer and end user than some higher numbers on a spec sheet. These are the things that can make a meaningful difference in the quality of games we play in the next few years. These are things that will directly improve the quality of the software, which is really what delivers the next gen experience. This is absolutely cause to sing the praises of the PS5 as many developers have done.
Unfortunately for Cerny and the team at Sony, most of the real work and genius in the PS5 design is not easy to communicate to end users. It's also not something that end users can really appreciate since it's not something they can truly understand until they see the results. And that of course will not happen right away at launch in 2020. But ultimately, there is much to be excited about with the innovations Sony is bringing in the PS5 and the level of efficiency they could have possibly achieved.
So while I am not saying the PS5 is definitely more powerful (meaning more performance) than the Series X, I am saying that it is absolutely inaccurate to say that the Series X is more powerful solely based on TFLOPs ratings and other theoretical specs. In other words, despite what the numbers say, it is entirely possible that we may see many cases where games perform better (i.e. more complex scenes and/or higher framerates) on PS5. To use my analogy above: 85% of the PS5 may be better than 60% of the Series X (for example). It wouldn't be the first time that the numbers did not tell the whole truth!
You really should. Many of the people spreading misinformation about what a teraflop actually is and what it can and cannot do should read that "wall of text." After reading that, perhaps some of you should look into this.

I am not reading that wall of text!
It's the same architecture. It's there for the developers to take advantage of. If they choose not to, then we have lazy devs again lol.
That's all very well
But 30fps doesn't sound very efficient to me
4K with higher frame rates? Or 4K with more detail and effects?
It's both RDNA 2. We aren't comparing Nvidia to AMD.

No. It's not.
As the graphical improvements come along, resolution on both consoles will dip below 4K on a dynamic basis.
I could easily see a lot of games being 1800p on XSX and 1600p on PS5, and I don't see anyone being able to tell the difference just by looking at them.
From what's been said, PS5 will display greater detail and effects on screen due to the faster SSD.
Frame rates are more CPU oriented.

Who has said the PS5 will display greater detail than the XSX? It's the GPU that puts the effects and details on the screen, not the SSD.

Nah, as time goes on more power is unlocked as greater efficiencies are developed, allowing for more of everything, not less.

Not really. The PC space will drive graphics beyond what the consoles can do over time, and for the consoles to have those games, resolutions and effects will be reduced from the top PC cards.
It always happens.
Assassin's Creed Valhalla says hi.
The one thing the PS5 might revel in (if done correctly) is moving your direction of view in an open world landscape.
A still image of that view will probably look better on the XSEX, though.
That's how I imagine it currently.
I am not reading that wall of text!
But it's really not like that, is it?

Realistically?
It's like arguing that:
- A PC with an i9-9900k running at a faster clock speed, with faster RAM, a 2080TI, and a good SSD...
...is going to be outperformed by:
- A PC with a slower i9-9900k running in boost mode, with slower RAM, a 2070 Super also running in boost mode, but equipped with a great SSD.
I don't buy it. Not for a second.
Now, will Sony's first party developers have better looking games than anything on Series X? That I do buy, and that's because DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS.