Next-Gen PS5 & XSX |OT| Console tEch threaD

If the difference is native 4k vs upscaled, it is going to make people want native.

A lot of people apparently just brush over the XB1's pathetic marketing and the fact that it was significantly weaker AND more expensive.



Oh look, it's irrelevant 1.3 vs 1.3 comparisons.

Maybe try it with 1.3 vs 1.8 next time.

They were just parroting Cerny's talking points.



I guess we're at the "selectively choosing the direction of percentage calculations to downplay the difference" stage.



So again, the XBone was the superior, more powerful console. It had fewer CUs and higher clocks. Apparently developers just decided to render their multiplats at a significantly lower resolution because... Reasons... I guess?
My problem with your arguments is this:
While stating the differences in resolution/frames/graphics between the One and the PS4, you only take the GPU into account. As someone already told you, the PS4 had roughly 3x faster RAM to feed that GPU much better. Even if the One had an identical GPU to the PS4, you would still see differences. This time around, the RAM solutions are more complex and much closer overall; it's certainly not a 2x or 3x difference in speed. However, my wild take is that the PS4 would still have been able to achieve ~40% higher resolution even without the RAM factor (though not the other advantages), so in that regard I think you are correct. The problem comes with the next point:
The Xbox One GPU had higher clocks by about 6%, roughly a quarter of the clock difference between the PS5 and the XSX, and that had to compensate for a much bigger gap between the GPUs. Even if the PS5's clocks are variable, we simply don't know how effective SmartShift is, only that it is clearly there to keep the GPU at max clock without increasing power consumption. Without SmartShift, the CPU could waste the power budget running at unnecessarily high clocks and force the GPU to downclock because of the overall power budget. Lowering clocks on one part reduces the power it consumes and lets the GPU's share of the power increase.
Also, yes, sorry, but we do need to take SSDs into account. Tell me "ESRAM LULZ" if you want, but we simply don't have examples of games designed around SSDs yet. Now, aside from the roughly 2x difference in raw speed (around 4x in ideal conditions), we don't have much idea of how the XSX handles the SSD's integration with all the other parts.
Cerny designed the PS5's SSD to be close to 100% efficient, meaning 100x faster on paper is 100x faster in actual usage, while SSDs on PCs have bottlenecks that prevent them from reaching their true speed. I'm sure the XSX mitigates this, but by how much? We don't know. What if its SSD is 40x faster on paper, but data loading can only reach 20x in practice? You would see a 4x to 8x difference there. Mind you, both Sony and MS have stated that SSDs CAN increase RAM efficiency, and in fact they are using them for exactly that. If the PS5's SSD has such a big advantage, you can already see how the advantage of those 10 GB could be largely mitigated. Also, like any other storage, SSDs can be used to load parts of a game in and out; if you can do that very fast you can save GPU power, because the GPU doesn't need to consider all the data at once but only some of it at any given time (same for the RAM, that's the point). If you have a far faster SSD, you can do that better.
And there is also the audio offloading that the XSX does, but it seems to help only the CPU and RAM, while the PS5's Tempest engine also takes care of the GPU side. I'm not well informed about this.
On the other hand, we have no idea of the state of the PS5's APIs. Different APIs give different performance on the same hardware. Software can compensate for hardware, and we have no clue about the PS5's software.
I honestly have little doubt that the XSX will perform better, because third parties aren't going to put in all that effort, but if they want to, they have the tools to mitigate part of the difference. That's why I don't find '20% more power = 20% more of this and that' entirely correct. However, my understanding of all of this is VERY BASIC; I'm not trying to correct you, I'm trying to consider all the factors.
 
The PS5 is not going to sustain 10 TF; that's just a marketing number, which is theoretical anyway and has little to do with the real world. The power difference is still small.

All I care about is the noise. Sony should be called out if they can't make a quiet console. This is how many generations of noiseboxes now? Especially since MS clearly has the cooling tech sorted out.

It almost seems clear you did not watch the GDC Cerny video.
 
So again, the XBone was the superior, more powerful console. It had fewer CUs and higher clocks. Apparently developers just decided to render their multiplats at a significantly lower resolution because... Reasons... I guess?

Your argument makes no sense.

Fewer CUs don't give you better performance over a 1.84 TF GPU.

1.3 TF with fewer CUs and a higher clock gets you better performance than 1.3 TF with more CUs and a lower clock.

That better performance doesn't make it more powerful than a 1.8 TF GPU.

In this case, the gap between 10 TF and 12 TF is smaller than it looks, considering the higher clock and fewer CUs.
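For reference, a rough sketch of where those TF figures come from, assuming the usual RDNA numbers of 64 shaders per CU and 2 FP32 operations per shader per clock:

```python
def peak_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput = CUs x 64 shaders x 2 ops/clock x clock (GHz) / 1000."""
    return cus * 64 * 2 * clock_ghz / 1000

print(peak_tflops(36, 2.230))  # PS5:  ~10.28 TF (fewer CUs, higher clock)
print(peak_tflops(52, 1.825))  # XSX:  ~12.15 TF (more CUs, lower clock)
print(peak_tflops(44, 1.825))  # also ~10.28 TF, from more CUs at a lower clock
```

The headline figure is the same product either way; the argument in the thread is about which shape of that product performs better in practice.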


They were just parroting Cerny's talking points.

So your excuse is that they didn't know this until Cerny said it?

You're getting desperate now. You really don't understand how these things work.


I guess we're at the "selectively choosing the direction of percentage calculations to downplay the difference" stage.

There's nothing selective. You're just denying clear facts.
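For concreteness, the two percentages being thrown around come from dividing the same gap in opposite directions (using the headline peak figures of ~12.15 TF and ~10.28 TF):

```python
xsx_tf, ps5_tf = 12.15, 10.28  # headline peak compute figures

print(f"XSX over PS5: +{(xsx_tf / ps5_tf - 1) * 100:.1f}%")   # ~18% more
print(f"PS5 under XSX: -{(1 - ps5_tf / xsx_tf) * 100:.1f}%")  # ~15% less
```

Both numbers describe the same difference; only the direction of the division changes.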
 
But yeah, if Sony can somehow hit that $399.99 price point, then Microsoft had better have Lockhart at the ready, because we already saw what happened when Microsoft came out with a console $100 more expensive than its direct competition.
 
This is correct. But if all the XSX is doing is pushing 18% more pixels, then outside of the console warriors most people just aren't going to care.

40% last gen was, sometimes, the difference between HD and sub-HD in multi-platform games, and tbh it wasn't really that big a deal for most people. Assuming that 10.3 TFLOPS is enough to do 4K at a stable frame rate, the extra 18% is going to be even less of a deal.

Price and games will be far more important factors as we go into the next gen.

The on-paper difference this time is 16%, so too small to notice. BUT real-world performance between the two consoles is almost certainly even closer, because of the PS5's more customised design and absolute focus on removing bottlenecks for maximum performance, versus the Series X's brute-force approach and theoretical TFLOPS figure (TFLOPS is theoretical because so many factors can get in the way of actually reaching it, and that's ignoring the fact that TFLOPS covers only one part of a GPU's performance and doesn't factor in all the other metrics).

I think the difference in performance is about 5% either way, and this has been alluded to by multiple developers working on both consoles.
 
I'll simplify this for you.

If both consoles are pushing the exact same graphics, the XSX will be doing so at a nearly 20% higher resolution. If both consoles are pushing the same resolution, the XSX will be capable of doing so with more graphical bells and whistles.

I'll simply this for YOU. Both machines will be able to achieve a baseline of quality and performance that will be satisfying. So the additional quality and performance of the XSX won't be of any consequence, especially considering that the PS5 will be cheaper and will feature better games.
 
My problem with your arguments is this:
While stating the differences in resolution/frames/graphics between the One and the PS4, you only take the GPU into account. As someone already told you, the PS4 had roughly 3x faster RAM to feed that GPU much better. Even if the One had an identical GPU to the PS4, you would still see differences. This time around, the RAM solutions are more complex and much closer overall; it's certainly not a 2x or 3x difference in speed. However, my wild take is that the PS4 would still have been able to achieve ~40% higher resolution even without the RAM factor (though not the other advantages), so in that regard I think you are correct. The problem comes with the next point:
The Xbox One GPU had higher clocks by about 6%, roughly a quarter of the clock difference between the PS5 and the XSX, and that had to compensate for a much bigger gap between the GPUs. Even if the PS5's clocks are variable, we simply don't know how effective SmartShift is, only that it is clearly there to keep the GPU at max clock without increasing power consumption. Without SmartShift, the CPU could waste the power budget running at unnecessarily high clocks and force the GPU to downclock because of the overall power budget. Lowering clocks on one part reduces the power it consumes and lets the GPU's share of the power increase.
Also, yes, sorry, but we do need to take SSDs into account. Tell me "ESRAM LULZ" if you want, but we simply don't have examples of games designed around SSDs yet. Now, aside from the roughly 2x difference in raw speed (around 4x in ideal conditions), we don't have much idea of how the XSX handles the SSD's integration with all the other parts.
Cerny designed the PS5's SSD to be close to 100% efficient, meaning 100x faster on paper is 100x faster in actual usage, while SSDs on PCs have bottlenecks that prevent them from reaching their true speed. I'm sure the XSX mitigates this, but by how much? We don't know. What if its SSD is 40x faster on paper, but data loading can only reach 20x in practice? You would see a 4x to 8x difference there. Mind you, both Sony and MS have stated that SSDs CAN increase RAM efficiency, and in fact they are using them for exactly that. If the PS5's SSD has such a big advantage, you can already see how the advantage of those 10 GB could be largely mitigated. Also, like any other storage, SSDs can be used to load parts of a game in and out; if you can do that very fast you can save GPU power, because the GPU doesn't need to consider all the data at once but only some of it at any given time (same for the RAM, that's the point). If you have a far faster SSD, you can do that better.
And there is also the audio offloading that the XSX does, but it seems to help only the CPU and RAM, while the PS5's Tempest engine also takes care of the GPU side. I'm not well informed about this.
I honestly have little doubt that the XSX will perform better, because third parties aren't going to put in all that effort, but if they want to, they have the tools to mitigate part of the difference. That's why I don't find '20% more power = 20% more of this and that' entirely correct. However, my understanding of all of this is VERY BASIC; I'm not trying to correct you, I'm trying to consider all the factors.

That's a whole lot of word salad to completely misplace your own arguments about SSDs and ESRAM.

Accept it or not, but Microsoft knew what they were doing with ESRAM on the XBone. That's why the console wasn't rendered entirely useless despite using extremely slow DDR RAM for the GPU, although it was obviously a workaround and the machine would have been much better off with proper GDDR memory. At the end of the day, the console performed as expected for its computational abilities.

Then you follow up by trying to use an even slower SSD to make up for memory speed differences. An SSD is not RAM, an SSD is not a GPU, and scenes aren't rendered on an SSD. Is it going to be better than a standard HDD? Obviously, but the PS5 isn't the only next-gen console getting an SSD.

I've already commented on this pipe dream of streaming in assets as the player pans the camera. That might work for the super-slow-aim, super-slow-brain types that take 2 seconds to turn around, but for everyone else who can whip 180 degrees in 1/10th of a second it just isn't going to be viable, unless you like watching the scene render in front of you every single time you turn around.

I'll simply

And that's where I stopped.
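A back-of-envelope check on the 180-degree-turn scenario above, using the publicly quoted raw SSD speeds (compressed throughput and real engine behaviour will differ, so treat these as rough upper bounds rather than either console's actual capability):

```python
turn_time_s = 0.1          # the "whip 180 degrees in 1/10th of a second" case
raw_gb_per_s = {"PS5": 5.5, "XSX": 2.4}  # publicly quoted raw sequential speeds

for console, speed in raw_gb_per_s.items():
    mb = speed * turn_time_s * 1000
    print(f"{console}: ~{mb:.0f} MB streamable during the turn")
# PS5: ~550 MB, XSX: ~240 MB. Whether either is enough to refill a scene
# depends entirely on how the engine stages and prefetches its assets.
```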
Your argument makes no sense.

Fewer CUs don't give you better performance over a 1.84 TF GPU.

you wot???

You do realise that the XBone had fewer CUs and higher clocks than the PS4, right? You also realise the PS4 had more TFLOPS, yes?

So unless the XBone is all of a sudden better than the PS4 because of clocks, your entire argument is moot.
 
you wot???

You do realise that the XBone had fewer CUs and higher clocks than the PS4, right? You also realise the PS4 had more TFLOPS, yes?

So unless the XBone is all of a sudden better than the PS4 because of clocks, your entire argument is moot.

It did have fewer CUs.

So, what's your point?
 
That's a whole lot of word salad to completely misplace your own arguments about SSDs and ESRAM.

Accept it or not, but Microsoft knew what they were doing with ESRAM on the XBone. That's why the console wasn't rendered entirely useless despite using extremely slow DDR RAM for the GPU, although it was obviously a workaround and the machine would have been much better off with proper GDDR memory. At the end of the day, the console performed as expected for its computational abilities.

Then you follow up by trying to use an even slower SSD to make up for memory speed differences. An SSD is not RAM, an SSD is not a GPU, and scenes aren't rendered on an SSD. Is it going to be better than a standard HDD? Obviously, but the PS5 isn't the only next-gen console getting an SSD.

I've already commented on this pipe dream of streaming in assets as the player pans the camera. That might work for the super-slow-aim, super-slow-brain types that take 2 seconds to turn around, but for everyone else who can whip 180 degrees in 1/10th of a second it just isn't going to be viable, unless you like watching the scene render in front of you every single time you turn around.

And that's where I stopped.
It performed as expected compared to the PS4, which also had much, much better RAM on top of the better GPU.
Yes, I know scenes aren't rendered on an SSD and an SSD isn't RAM, but increasing RAM efficiency with SSDs is NOT my wording: both MS and Sony have stated this. It's their plan, instead of putting in 24 GB of RAM while also sacrificing storage speed.
I'm not talking about loading the entire game when you turn around lol. I'm talking about selectively loading various elements in and out to make the GPU more efficient, which is what RAM already does by default. Any storage with enough speed can do this. In theory, the PS5 should be able to focus more on what's in the FOV than the XSX can without stressing the RAM, because the SSD is much better. Meaning the GPU, even if less powerful, will be able to focus more on the relevant parts compared to the XSX GPU. The PS5 is built for this, starting from the SSD, which has something like 3 times the priority levels of a normal one, so it can prioritize what to load in and out much more precisely.
Again, theory, of course.
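For illustration, a minimal sketch of the kind of FOV-driven residency being described; every name and threshold here is hypothetical and not taken from either console's SDK:

```python
import math

def select_resident_assets(assets, cam_pos, cam_dir, fov_deg=90.0, margin_deg=30.0):
    """Keep assets inside the camera's view cone (plus a safety margin) resident;
    everything else becomes a candidate for eviction or lower-detail residency."""
    half_angle = math.radians(fov_deg / 2 + margin_deg)
    resident, evict = [], []
    for name, pos in assets:  # assets: list of (name, (x, y)) in world space
        to_asset = (pos[0] - cam_pos[0], pos[1] - cam_pos[1])
        dist = math.hypot(*to_asset) or 1e-6
        cos_angle = (to_asset[0] * cam_dir[0] + to_asset[1] * cam_dir[1]) / dist
        angle = math.acos(max(-1.0, min(1.0, cos_angle)))
        (resident if angle <= half_angle else evict).append(name)
    return resident, evict

# Usage: camera at the origin looking down +x; the tree behind the player is
# evicted and would only be re-streamed if the camera swings back towards it.
assets = [("rock", (10, 1)), ("tree", (-8, 0)), ("hut", (5, 4))]
print(select_resident_assets(assets, (0, 0), (1, 0)))
# (['rock', 'hut'], ['tree'])
```

The faster and more finely prioritised the storage, the tighter that margin can be before a quick turn catches the streamer out, which is the trade-off the two posts above are arguing about.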
 
It did have fewer CUs.

So, what's your point?

Guess what else also has fewer CUs.

It performed as expected compared to the PS4, which also had much, much better RAM on top of the better GPU.

Exactly?

ESRAM did what it was designed to do: make up for having bad main memory. It was still an objectively worse way to design a processor.

Yes, I know scenes aren't rendered on an SSD and an SSD isn't RAM, but increasing RAM efficiency with SSDs is NOT my wording: both MS and Sony have stated this. It's their plan, instead of putting in 24 GB of RAM while also sacrificing storage speed.
I'm not talking about loading the entire game when you turn around lol. I'm talking about selectively loading various elements in and out to make the GPU more efficient, which is what RAM already does by default. Any storage with enough speed can do this. In theory, the PS5 should be able to focus more on what's in the FOV than the XSX can without stressing the RAM.

Being in RAM =/= currently being rendered. At best, the PS5 will be able to use its available but slower memory slightly more efficiently.
 
It's becoming less amusing reading this thread... most of the posts are kinda pathetic.

Somehow numbers are no longer numbers...

I can see why America ended up with a president who can't accept reality... or even accept what he said yesterday.

The day the GitHub leak became real... suddenly CU counts don't matter, unstable clock speed is a benefit, and unknown, unexplainable hidden technologies will easily overcome and outshine cold hard facts.
 
Long-time lurker and occasional poster here... I'm trying to follow some of the technical speak, and probably failing miserably!

From what I'm reading, the PS5 is a more customized system. Does this mean that the Xbox is not, and could the PS5 therefore be more difficult to produce games for? I'm thinking of a potential scenario similar to the Xbox 360/PS3 days where, generally, games ran better on the Xbox 360 because the PS3 wasn't as developer-friendly...
 
Guess what else also has fewer CUs.



Exactly?

ESRAM did what it was designed to do: make up for having bad main memory. It was still an objectively worse way to design a processor.



Being in RAM =/= currently being rendered. At best, the PS5 will be able to use its available but slower memory slightly more efficiently.
Yes, being in RAM isn't the same as being rendered. That's my point.
If you can keep more data in storage (SSD or RAM) than another system, instead of rendering it all the time, you can put more detail into the smaller rendering load.
And again, yes, this is also what the XSX does; the idea is to make RAM management effectively 2-3x better, because 16 GB isn't a great deal by itself. My take is just that the PS5's SSD can do that much better, not only because it's faster by default, but because of its far more advanced data prioritization and absence of bottlenecks, while we don't know much about the XSX in this regard.
 
It's becoming less amusing reading this thread... most of the posts are kinda pathetic.

Somehow numbers are no longer numbers...

I can see why America ended up with a president who can't accept reality... or even accept what he said yesterday.

The day the GitHub leak became real... suddenly CU counts don't matter, unstable clock speed is a benefit, and unknown, unexplainable hidden technologies will easily overcome and outshine cold hard facts.

Reminds me of the "hidden chip" inside the Xbox One... funny shit.
 
It's becoming less amusing reading this thread... most of the posts are kinda pathetic.

Somehow numbers are no longer numbers...

I can see why America ended up with a president who can't accept reality... or even accept what he said yesterday.

The day the GitHub leak became real... suddenly CU counts don't matter, unstable clock speed is a benefit, and unknown, unexplainable hidden technologies will easily overcome and outshine cold hard facts.
Weren't you banned a while ago? I see you are back.
 
It's becoming less amusing reading this thread... most of the posts are kinda pathetic.

Somehow numbers are no longer numbers...

I can see why America ended up with a president who can't accept reality... or even accept what he said yesterday.

The day the GitHub leak became real... suddenly CU counts don't matter, unstable clock speed is a benefit, and unknown, unexplainable hidden technologies will easily overcome and outshine cold hard facts.

These things have been stated by Digital Foundry.

You must have other information they don't know.

I'm sure you can't show it because you're not speaking based on facts.
 
These things have been stated by Digital Foundry.

You must have other information they don't know.

I'm sure you can't show it because you're not speaking based on facts.

DF seems fairly confident that the console with the better specs will be faster...

News at 11.
 
Who said 10.2TF > 12TF?

At least take the time to read.

You have been constantly trying to claim that the power difference doesn't exist because of clocks.

How is it his logic when it's a fact?

That is you trying to rush to the defence of:
But it doesn't show the Xbox SX to be the most powerful overall when the PS5 has the rendering advantage from having a higher clock rate.

So again, just stop already.
 
I have never lowered myself to the shame of having to ask a mod to step in on my behalf... I am the hunter, never the prey.
Lol, the hunter? Do you think you're cool saying this? This is a forum, buddy, not the Hunger Games. Mods are here to mod; they can't be everywhere, so - get ready for this - people call them. Learn how to properly discuss something; that's the only thing that matters here. You're no hunter if mods can fuck you up with a ban. Get real.
 
You certainly are.

Your arguments of 1.3 vs 1.3 are entirely irrelevant.

Just stop already.


Read

His

Post

Again


10.2 TF from a 36 CU GPU clocked at 2.23 GHz wouldn't get you the same result as 10.2 TF from a 44 CU GPU clocked at 1.825 GHz.
You replied to this post and I responded

He is comparing the SAME number of teraflops.

One has 44CUs and lower clock
Other has 36CUs and Higher clock

They both equal 10.2 TF.

He did not compare 10TF to 12TF in this post.


He says 10.2TF 36CU 2.23GHz is better than 10.2TF 44CU 1.825GHz.

You clearly did not read what was going on.
 
If the difference is native 4k vs upscaled, it is going to make people want native.
Both will have "native 4K". Next gen is different from last, as next gen will have technology for optimizing resolution and image quality.
It's just that the PS5 will use VRS/CBR over a larger area, but in practical image quality I doubt we will see a significant difference.

Then you follow up with trying to use an even slower SSD to try and make up for memory speed differences. An SSD is not ram, an SSD is not a GPU, scenes aren't rendered on an SSD. Is it going to be better than a standard HDD? Obviously, but it's not the only next gen console to be getting an SSD.
An SSD allows you to improve image quality. And the faster the SSD, the more sophisticated things devs can do.
Games are not bound only by "GPU TFLOPS"; current-gen games are heavily bound by memory size, which prevents them from having "ultra-high quality textures on every tree", and the SSD helps solve this issue.

10.2 is not going to be > 12.1 because clocks. Just stop already.
The relevance of this difference might be much less than you imagine. This means that other parts (like audio or the SSD) might actually bring more to graphics fidelity and image quality than the pure paper specs suggest.
 
You just continue to rant with no facts. It's better to be informed than completely ignorant.

Good to know the XSX was downgraded to 44 CUs recently and that the XBone was always the superior, more powerful console.

Those are the "facts" you're trying to share.

Just stop already.
 
If you're talking about this...


They make sure to say "Sony's pitch" or "Cerny's claim"... there is almost no editorial or fact-checking.

They certainly in no way claim it's faster than the XSX.

I don't know what you're trying to achieve with that lol.

Now you're taking words out of context in an attempt to prove your case.

They made a video about SSDs and how they relate to gaming. The video was made after the XSX reveal and covered how the fidelity and scope in the HB2 trailer could be possible with the use of the SSD.

You're trying hard to discredit any information that doesn't fit your narrative.
 
It's becoming less amusing reading this thread... most of the posts are kinda pathetic.

Somehow numbers are no longer numbers...

I can see why America ended up with a president who can't accept reality... or even accept what he said yesterday.

The day the GitHub leak became real... suddenly CU counts don't matter, unstable clock speed is a benefit, and unknown, unexplainable hidden technologies will easily overcome and outshine cold hard facts.

No need to bring up politics in this thread.
 
You guys should watch the following video from 14:59 to 20:42. It explains why Sony chose to sacrifice a higher number of CUs for I/O optimization.

1. Accessing and storing information is the biggest bottleneck in modern computer systems.

2. Performing arithmetic operations doesn't consume nearly as much energy and bandwidth as accessing and storing the information needed to perform the operations - and accessing and storing the information that results from the operations.

3. Hence, CU count and frequency, which together determine the number of operations a GPU can perform per second, are not as important as conveying and accessing the information associated with the computations the CUs perform each cycle.

4. This is why Sony opted to allocate more of the budget for the PS5's design to its I/O system rather than the number of CUs in its GPU.

5. Hence, the XSX GPU will be hindered by the massive bandwidth consumption of the larger number of computations that its larger number of CUs perform per cycle, and it will consume much more power.

6. Hence, the greater computational capacity of the XSX's GPU will counteract itself due to its greater consumption of bandwidth and energy - and it will also be counteracted by the PS5's significantly faster I/O system.
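A purely illustrative sketch of the trade-off points 2-6 are gesturing at; the bytes-per-operation figure below is a made-up placeholder, while the bandwidth and TF numbers are the publicly quoted ones:

```python
peak_tflops = 12.15            # headline FP32 compute figure
fast_pool_bw_gbs = 560         # GB/s, publicly quoted fast-pool bandwidth
bytes_per_op_uncached = 8      # hypothetical: two fresh 4-byte operands per op

ops_memory_can_feed = fast_pool_bw_gbs * 1e9 / bytes_per_op_uncached
print(f"ops/s memory could feed if nothing were cached: {ops_memory_can_feed:.2e}")
print(f"ops/s the CUs can execute at peak:              {peak_tflops * 1e12:.2e}")
# The ~170x gap is why data movement, caches and I/O design dominate:
# peak TFLOPS only materialises when operands are already close to the CUs.
```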

 
You guys should watch the following video from 14:59 to 20:42. It explains why Sony chose to sacrifice a higher number of CUs for I/O optimization.

1. Accessing and storing information is the biggest bottleneck in modern computer systems.

2. Performing arithmetic operations doesn't consume nearly as much energy and bandwidth as accessing and storing the information needed to perform the operations - and accessing and storing the information that results from the operations.

3. Hence, CU count and frequency, which together determine the number of operations a GPU can perform per second, are not as important as conveying and accessing the information associated with the computations the CUs perform each cycle.

4. This is why Sony opted to allocate more of the budget for the PS5's design to its I/O system rather than the number of CUs in its GPU.

5. Hence, the XSX GPU will be hindered by the massive bandwidth consumption of the larger number of computations that its larger number of CUs perform per cycle, and it will consume much more power.

6. Hence, the greater computational capacity of the XSX's GPU will counteract itself due to its greater consumption of bandwidth and energy - and it will also be counteracted by the PS5's significantly faster I/O system.



Yeah, I watched this yesterday; it's a great video if you can get past the calming voice-over.
 
Hey you lot, good discussions at the other place, with some technical expertise on the subject of bandwidth.

Seems like both consoles will be bandwidth-limited. Get your head around this, screen-grabbed from the other place for discussion as it's interesting: if the PS5 punches above its weight there will be meltdowns on GAF... but muh terrafloppies...

If the game is below 10 GB and the CPU code/audio etc. is in the fast pool, the XSX will clearly be better hands down, but that's maybe unlikely for 4K with high-quality assets?

We wait... Also, Sony and MS could in theory up the spec of the memory to 16 if they ate the cost. Interesting...

[attached screenshots: memory bandwidth comparison tables]
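A naive sketch of the split-pool comparison being discussed, using the publicly quoted figures; note that real effective bandwidth depends on access patterns and CPU contention, not on a simple weighted average:

```python
xsx_pools = [(10, 560), (6, 336)]  # (GB, GB/s): fast GPU-optimal pool + slower pool
ps5_pool  = [(16, 448)]            # single uniform pool

def naive_average_bw(pools):
    total_gb = sum(gb for gb, _ in pools)
    return sum(gb * bw for gb, bw in pools) / total_gb

print(f"XSX naive blended figure: {naive_average_bw(xsx_pools):.0f} GB/s")  # ~476
print(f"PS5 uniform pool:         {naive_average_bw(ps5_pool):.0f} GB/s")   # 448
```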
 