Next-Gen PS5 & XSX |OT| Console tEch threaD

Just imagine a game where you can be in a huge open-world town, and the moment you open a door and walk in, the game shifts all of its rendering and memory prowess onto the inside of the building.

Right now interiors are sorely lacking in detail. But open-world games will now be able to transform every interior into the same level of detail as something more confined and linear like The Order.

It's going to be incredible not seeing bland interiors.
Just imagine playing Bloodborne 2 without loading screens.
 
No, that would be the same. The internal CU structure is the same across the two models and, architecture-wise, the RBs and rasterisers use the L2 cache shared in the Shader Array they belong to, not the internal CU storage.
You have more CUs, so you have more per-CU cache, but more mouths to feed too (the vector ALUs inside each CU), so memory-bus fetches on cache-miss scenarios would be unaffected compared to a solution with fewer CUs.

If you were increasing per-Shader-Array cache as you added CUs to the Shader Array, you would mostly be keeping the cache hit ratio from getting worse than a solution with fewer CUs, not making it better, unless you went really overboard with it.
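A toy back-of-envelope of that point (the hit rates and request rates below are made up for illustration, nothing RDNA-specific): more CUs behind the same shared cache means more miss traffic on the memory bus, while growing the cache alongside them only holds the line.

```python
# Illustrative numbers only, not real RDNA figures:
# miss traffic hitting the memory bus from one shader array.

def miss_traffic_gbps(num_cus, reqs_per_cu_gbps, hit_rate):
    """Total cache-miss traffic a shader array sends to the memory bus."""
    return num_cus * reqs_per_cu_gbps * (1.0 - hit_rate)

# 10 CUs sharing a cache with a 70% hit rate, each generating 50 GB/s of requests.
print(miss_traffic_gbps(10, 50, 0.70))   # 150 GB/s of misses

# Add 4 more CUs but keep the shared cache the same size: the hit rate tends to
# drop (more mouths to feed), so miss traffic grows faster than the CU count.
print(miss_traffic_gbps(14, 50, 0.65))   # 245 GB/s

# Grow the shared cache along with the CU count and the hit rate roughly holds,
# so miss traffic only scales with the extra CUs, not worse.
print(miss_traffic_gbps(14, 50, 0.70))   # 210 GB/s
```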
Are you claiming AMD's NAVi scaling is bullshit then?

Roadmap.jpg



How come the XSX GPU's results scaled to RTX 2080 level in the pro-NVIDIA Gears 5 benchmark at PC Ultra settings?

This is Navi 10's 20 WGP / 40 CU design

2019-07-21-image-4.jpg


28 WGP version has to scale IO
 
The 4.8GB/s figure given by MS already accounts for BCPack (texture streaming only), so whatever he reveals, it's not going to affect that figure.

I know the basics which you haven't refuted:
Data is spread across all chips and to access even a 1MB asset all chips must be read simultaneously
You don't know the basics: a 32-bit word can't be striped across the whole 320-bit bus. It would take ten 32-bit words to fill a 320-bit bus, and each data element has its own memory address target. Look up combined scatter and combined gather.
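For anyone following along, a rough sketch of the point being made: a 320-bit bus is really ten 32-bit GDDR6 channels, and addresses interleave across them. The 256-byte stripe granularity below is just an assumed figure for illustration, not a confirmed XSX spec.

```python
# Sketch of address interleaving across a 320-bit bus built from ten 32-bit chips.

NUM_CHANNELS = 10          # 10 x 32-bit chips = 320-bit bus
STRIPE_BYTES = 256         # assumed interleave granularity per channel

def channel_for_address(addr):
    """Which chip/channel a given byte address lands on."""
    return (addr // STRIPE_BYTES) % NUM_CHANNELS

# A 1 MB asset touches every channel many times over...
asset_start, asset_size = 0x4000_0000, 1 << 20
touched = {channel_for_address(asset_start + off)
           for off in range(0, asset_size, STRIPE_BYTES)}
print(sorted(touched))     # all 10 channels

# ...but a single 32-bit word lives on exactly one chip; it is not striped
# across the whole 320-bit bus.
print(channel_for_address(asset_start))        # one channel
print(channel_for_address(asset_start + 4))    # same channel (same 256 B stripe)
```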
 
All of what you said is true. But there is an additional point that is true and you didn't list, and that is:
  • The GPU AND CPU can't both reach the max frequency when the workload of both is at 100%

In reality that doesn't happen often, but it does happen from time to time.
It can... it is not about 100% workload but the type of workload.

For example, I believe an AVX workload running at 100% will probably make the CPU drop clocks, while other workloads, even at 100%, probably won't.

So there are cases where, even at 100% workload, the CPU and GPU will maintain the base clock.
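A minimal sketch of the power-budget idea being described here (the budget, the per-frame wattages and the policy itself are made up for illustration; this is not Sony's actual algorithm):

```python
# Clocks are only scaled down when the combined power demand exceeds the budget,
# so "100% load" alone doesn't force a downclock -- the workload type does.

CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23   # PS5 max clocks
TOTAL_POWER_BUDGET_W = 200             # assumed SoC budget, illustrative only

def clocks_for_frame(cpu_demand_w, gpu_demand_w):
    """Return (cpu_ghz, gpu_ghz) for a frame given its power demand."""
    total = cpu_demand_w + gpu_demand_w
    if total <= TOTAL_POWER_BUDGET_W:
        return CPU_MAX_GHZ, GPU_MAX_GHZ           # both hold max clocks
    scale = TOTAL_POWER_BUDGET_W / total          # shave both by a few percent
    return CPU_MAX_GHZ * scale, GPU_MAX_GHZ * scale

# Typical frame: GPU-heavy, CPU far from its worst-case power -> max clocks hold.
print(clocks_for_frame(cpu_demand_w=55, gpu_demand_w=140))
# Pathological frame (e.g. heavy AVX on all cores + full GPU load) -> small drop.
print(clocks_for_frame(cpu_demand_w=90, gpu_demand_w=160))
```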
 
I have the exact same feelings about the PS5. Hope it doesn't hold back the XSX too much, and that they get destroyed in sales so developers can focus on the more powerful system, instead of that weak PS5.
That is so wrong on so many levels, and a straw man at that point, and you know it: the PS5 has roughly a 16% weaker GPU than the Xbox Series X, which isn't anywhere near the GPU gulf between the PS4 & XB1 (that was around a 40% difference), where everything was built from the ground up for PS4 and then scaled back in resolution so the Xbox One's 1.31 TF GPU could handle the games properly.

BUT the GPU difference between the Xbox Series X and Xbox Series S is HUGE, like a generational difference "HUGE"!

And the fact that you are sarcastically saying "I have the exact same feelings about the PS5. Hope it doesn't hold back the XSX too much" (which is really stupid and I have dismantled it completely) only highlights how valid the point I'm making is. You are actually agreeing with my point in the first place, but it seems you felt like making it fanboyish and personal because what I wrote hurt your feelings in some way, shape or form, which is something you should deal with on your own. Facts are facts, whether you like them or not, OK? 🥰🥰
 
Let me piggyback on your post with an actual illustration of what SonGoku is talking about. Some people are under the impression that when a console or PC is running a game, 100% of the processing capability is used. This couldn't be further from the truth. Take, for example, a Ryzen 3700X and an RTX 2080S, which is roughly the equivalent of what the next-gen consoles are competing against.

Battlefield V - Look at the CPU utilization: it hardly reaches 50%, while GPU utilization is at the upper bound of 90%. This is just an AGGREGATE of CPU and GPU ACTIVITY, and if you go even lower, some parts of the GPU don't even see 60% utilization. This is what Sony is exploiting with their variable frequency strategy: they monitor each frame, look at the workload the CPU and GPU are doing, and dynamically scale clock speed as well as power balance based on what the GPU and CPU are actually doing, instead of using the old strategy of locking the clocks lower. This means they can push the GPU harder than usual because the CPU can be downclocked to maintain stable operation. No game or application ever uses 100% of the CPU and GPU.
yZf1jif.gif


Fortnite - The utilization is even lower. The CPU hardly touches 40% and the GPU stays below 70%.

WD6bmmo.gif
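For anyone curious, this is roughly how you'd sample the same kind of numbers yourself on PC (assumes the psutil and nvidia-ml-py packages and an NVIDIA GPU); as the next reply points out, these counters report "busy" time, not true occupancy.

```python
# Rough sketch of how an overlay samples CPU/GPU "utilization" figures.
import time
import psutil
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(10):
    cpu_pct = psutil.cpu_percent(interval=1.0)               # averaged over 1 s
    gpu_pct = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu  # % of time the GPU was busy
    print(f"CPU {cpu_pct:5.1f}%  GPU {gpu_pct:3d}%")

pynvml.nvmlShutdown()
```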
Just a detail.

These apps only show whether the GPU units are active, not whether they are really processing something.

The real % utilization of the GPU is way lower than what these apps show.
Render tasks have a lot of waiting time.
 
Don't understand the question.
You mentioned those percentages shown on screen aren't an indicator of utilization but "busy" numbers.
My question is: how do you quantify how busy a CPU/GPU is in percentages?
Decompression rate is different than decompression ratio, and I think we are confusing the two.
Apologies for using them interchangeably. With your example of a 1:4 compression ratio, that would translate to a 9.8GB/s output from the decompression block.
This 9.8GB/s rate surpasses what I believe to be the decompression block's max throughput (~6.5GB/s).
If 6GB/s+ is the highest speed the decompressor can hit (stage 3), it means the XSX is limited by its decompressor, and at the same time the PS5 decompressor is 4x faster than the XSX decompressor, which sounds pretty ludicrous to me.
Sony did customize their SSD & I/O extensively, and they need a more powerful decompression block to handle the 5.5GB/s SSD input and decompress a bigger pool of data. Not sure if these things scale linearly?
If the XSX decompressor never goes over 6GB/s, I'm having a really hard time believing MS will ever get even close to 4.8GB/s on average.
That's why I asked earlier if the 4.8GB/s figure could be interpreted as BCPack only, but you made a good point against it.
However, considering textures are the majority of streamed data, couldn't they theoretically reach the 4.8GB/s figure using both Zlib/BCPack? Like in the quote below:
So the BCPack compressed data bandwidth is "over 6GB/s", the Zlib compressed data is lower, and the weighted average is 4.8GB/s.
This checks out for the 4.8GB/s average; I'm just not convinced the decompression block can handle 7GB/s+, because they were specifically talking about the hardware decompression block's capabilities (not unlike Sony) and didn't say anything that could be interpreted as an average; 4.8GB/s wasn't even brought up. For this question I'd use Occam's razor.
I also don't think it's a coincidence both gave a "typical" compression output and a max throughput for the decompression block. But this is just context for my current stance.
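To put rough numbers on the ratio-vs-rate distinction being argued here (using the 2.4GB/s raw XSX figure, and treating the 1:4 ratio and the ~6GB/s decompressor ceiling as the assumptions under debate, not confirmed specs):

```python
# Decompressed output is raw SSD speed times compression ratio, but it can never
# exceed whatever the decompression block itself can push out.

RAW_SSD_GBPS = 2.4            # XSX raw SSD bandwidth
DECOMP_CEILING_GBPS = 6.0     # assumed max output of the decompression block

def effective_output_gbps(compression_ratio):
    """Decompressed output rate, capped by the decompressor's own throughput."""
    uncapped = RAW_SSD_GBPS * compression_ratio
    return min(uncapped, DECOMP_CEILING_GBPS)

print(effective_output_gbps(2.0))   # ~2:1 ratio -> 4.8 GB/s, comfortably under the cap
print(effective_output_gbps(4.0))   # 1:4 ratio -> 9.6 GB/s uncapped, but limited to 6.0
```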
I know all that, mate. I was just using your comment as a springboard to further elaborate on what was talked about in the Road to PS5 video.
I know, it wasn't meant to correct you, just to add further clarification so others don't get confused.
 
Just imagine what you could do with the next Mass Effect (if the IP is not dead); it will be awesome to see.

The gif, in case someone doesn't know, is from Star Citizen, which is maybe the first AAA PC game optimized for SSD-only storage.
Yep, that is exciting.

But imagine a game based on time travel. Worlds that dynamically change around you as you play (in real time). This could be such a huge advantage for games like Control or Quantum Break.

Look at the gif below from the movie The Time Machine. Devs could possibly create worlds that are able to do this in real time. I'm incredibly excited for the NEW TYPES of games these SSDs will bring.

tmachine-.gif


Sorry for the tiny gif
 
You don't know the basics: a 32-bit word can't be striped across the whole 320-bit bus. It would take ten 32-bit words to fill a 320-bit bus, and each data element has its own memory address target. Look up combined scatter and combined gather.
You know how many bits are in a 4kb file?
32,000 bits
and that is less than the billions of rays/s that Mark Cerny's words implied the PS5's full RT can do.
Technically he didn't imply that, he just said what it would take for full RT. Wouldn't billions of rays/s enter Pixar farm territory?
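For scale, a quick back-of-envelope on what "full RT" at 4K implies (the rays-per-pixel counts are just assumptions to show where "billions of rays/s" comes from):

```python
# How many rays per second a 4K60 target implies at various rays-per-pixel budgets.

PIXELS_4K = 3840 * 2160          # ~8.3 million pixels
FPS = 60

for rays_per_pixel in (1, 4, 10):
    rays_per_second = PIXELS_4K * FPS * rays_per_pixel
    print(f"{rays_per_pixel:2d} ray(s)/pixel -> {rays_per_second / 1e9:4.1f} billion rays/s")

# Even 1 ray/pixel is already ~0.5 billion rays/s; anything resembling full
# path tracing pushes into multiple billions per second.
```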
 
Just a detail.

These apps only show whether the GPU units are active, not whether they are really processing something.

The real % utilization of the GPU is way lower than what these apps show.
Render tasks have a lot of waiting time.
Big bold red letters, mate. And right after, it's noted that actual utilization is lower than that. But noted.
 
Yep, that is exciting.

But imagine a game based on time travel. Worlds that dynamically change around you as you play (in real time). This could be such a huge advantage for games like Control or Quantum Break.

Look at the gif below from the movie The Time Machine. Devs could possibly create worlds that are able to do this in real time. I'm incredibly excited for the NEW TYPES of games these SSDs will bring.

tmachine-.gif


Sorry for the tiny gif

Or a Dragonball game with accurate movement speeds.

dfab09693ba6757e0a293e5debeae970.gif
 
Furthermore, I doubt CPU and GPU utilization will see a drastic jump from current gen.
That's my point though: Jaguars are heavily utilized on consoles, and I don't see this trend changing. I'm sure devs will come up with loads to push those cores, granted the ceiling is much higher this time around.
 
That's my point though: Jaguars are heavily utilized on consoles, and I don't see this trend changing. I'm sure devs will come up with loads to push those cores, granted the ceiling is much higher this time around.
It will be fun to see how end-of-gen games look when devs master these consoles.
 
Where are people getting the idea that the PlayStation 5 will run games at a lower resolution than the Xbox Series X? They will both run games at 4K, but the Xbox Series X will run games at 4K with slightly higher-quality textures and/or with models that have higher polygon counts and/or with more individual rays of light traced (i.e. ray tracing). As for frame rates, both consoles will be able to run games at 60 frames per second at their respective appropriate graphics settings.

Here's an analogy: both the RTX 2080 and the RTX 2080 Ti can run games at 4K, but the latter can do so with higher settings and/or higher framerates.
 
Was thinking about this fast SSD, what kind of memory speed would be required for OS functions? Maybe it has been mentioned already. Is it possible that Sony could use a portion of the SSD as memory for the OS and keep the entire 16 GB ram available for games?

Also wondering if they may have big plans for game design around this SSD, and it is actually a 1 TB drive in the system but only 825 GB is available to users, with the left-over 175 GB dedicated to developers. I guess my question is: is an 825 GB SSD a common storage size? It just seems like a weird, random size to me.
 
I don't think a mid-gen refresh is a reliable indicator for a new-generation purchase.
I found all the mid-gen refreshes useless for me, so I avoided buying them; my day-one PS4 Amateur works flawlessly even today without my missing anything from the PS4 Pro.

That said, I'm again day one for the PS5 Amateur and never for a PS5 Pro ;)

I would never buy Lockhart over Scarlett for a new generation, but that won't happen anyway.
Really, resolution is overrated compared to framerate.
I play Nioh 2 at 720p60 at my parents' place (base PS4) and at 1080p60 at my place (PS4 Pro with an LG OLED). Both of them are perfectly fine, very beautiful even, with that glorious vibrant HDR and art style.
Not once, literally not even a single time during my awesome fun with it, did I think to myself "jeez, that GPU should push some extra dots".
4K with advanced reconstruction is the way to go, I think.

To both of you: my experience going from PS4 to PS4 Pro is massive, especially having started Watch Dogs 2 on the base console and then getting the PS4 Pro day one when it released (both day one, actually); the difference is way, way bigger than what you guys seem to estimate. And yes, the X1X has noticeably sharper and cleaner resolution than the PS4 Pro on multiplats.

Anyway, native 4K can really backfire on you if you're using crappy assets; the grass in Halo Infinite is cartoonish to say the least, and Far Cry 2 from about 12-13 years back is even comparable:

3840_HaloInfinite_E318_AncientRings-2060x1159.jpg


cn09rhfsf5o21.jpg


I think what happened here is that the devs, just like Hollywood directors, think the audience isn't educated enough, and enlarged small assets like the grass there until they look odd; the same funny thing actually happens in Far Cry games, where leopards sound like cougars instead of roaring.

Overall, you notice a native 4K game quickly compared to scaled ones; they tend to grab your attention with a very crisp image. Again, that doesn't mean the graphics will look good when the assets are of mediocre quality. The indoors are great in Halo Infinite, and some assets look insanely good, photorealistic to be precise; it's the shitty quality thrown into the mix that gets exposed pretty fast by the fact of using native 4K.

An example of the photorealistic assets, though it remains to be seen whether the quality gets downgraded.

156057510.jpg


Still, from all I've seen, Halo Infinite is the closest thing to next-gen graphics, assuming no massive downgrades happen like in The Witcher 3, Watch Dogs and The Division. Godfall is not worth mentioning graphically, but it might be a good game.
 
Where are people getting the idea that the PlayStation 5 will run games at a lower resolution than the Xbox Series X? They will both run games at 4K, but the Xbox Series X will run games at 4K with slightly higher-quality textures and/or with models that have higher polygon counts and/or with more individual rays of light traced (i.e. ray tracing). As for frame rates, both consoles will be able to run games at 60 frames per second at their respective appropriate graphics settings.
Depending of the sacrifices required it might be preferable to run at 10-18% lower resolution (barely noticeable if at all) and mantain graphics/fps parity
Of course there can be games where the difference in settings is unoticeable so they run bot at same resolution

Lastly don't go expecting all games to hit native 4k next gen, reconstruction techniques and dynamic resolution will be more common on both consoles as devs start to push visuals even further
 
To both of you: my experience going from PS4 to PS4 Pro is massive, especially having started Watch Dogs 2 on the base console and then getting the PS4 Pro day one when it released (both day one, actually); the difference is way, way bigger than what you guys seem to estimate. And yes, the X1X has noticeably sharper and cleaner resolution than the PS4 Pro on multiplats.

Anyway, native 4K can really backfire on you if you're using crappy assets; the grass in Halo Infinite is cartoonish to say the least, and Far Cry 2 from about 12-13 years back is even comparable:

3840_HaloInfinite_E318_AncientRings-2060x1159.jpg


cn09rhfsf5o21.jpg


I think what happened here is that the devs, just like Hollywood directors, think the audience isn't educated enough, and enlarged small assets like the grass there until they look odd; the same funny thing actually happens in Far Cry games, where leopards sound like cougars instead of roaring.

Overall, you notice a native 4K game quickly compared to scaled ones; they tend to grab your attention with a very crisp image. Again, that doesn't mean the graphics will look good when the assets are of mediocre quality. The indoors are great in Halo Infinite, and some assets look insanely good, photorealistic to be precise; it's the shitty quality thrown into the mix that gets exposed pretty fast by the fact of using native 4K.

An example of the photorealistic assets, though it remains to be seen whether the quality gets downgraded.

156057510.jpg


Still, from all I've seen, Halo Infinite is the closest thing to next-gen graphics, assuming no massive downgrades happen like in The Witcher 3, Watch Dogs and The Division. Godfall is not worth mentioning graphically, but it might be a good game.
I understand that, but for me a new generation makes for the wow factor...
But that's for somebody who already has the base console... if I didn't have any console and chose to buy one in 2016, then I would buy the Pro over the PS4.

I just don't see mid-gen upgrades as upgrades... just as a new purchase for those who don't have the base model.
 
Quite the amusing doomsday view. Until we see PS5 running that Minecraft demo, we really can't make assumptions off of that. I think there would be some advantages to casting more simultaneous rays, something that a wider approach leans into.
I wonder if MS would allow Minecraft to shine on PS5 or would instead fall into shenanigans.
 
Hmmm… but what about the share option? How much RAM does recording 4K @ 60fps consume? You can't keep writing that to the SSD without destroying it after a couple of years of use (and consuming those much-needed GB/s). Thus my argument again about this generation being scarce on RAM.

I don't understand why you'd think this. SSD drives have no moving parts and will last longer on average than a "regular" hard drive. There used to be more sensitivity in SSDs, but look at Ars Technica, who did an extensive benchmark in 2014 or 2015 I believe and found that even back then, SSD drives lasted WAY longer than rated. This is like saying that you'd better be careful because your RAM is going to wear out! Will it? Sure! But probably not for the next 20 years or so!

Bottom line, SSD storage is way faster and it is more reliable. No moving parts and, these days, less heat. Also, do you think regular hard drives don't wear out from repeatedly writing data to them? Of course they do!

It's fruitless to worry about the SSDs in these new consoles. Sure, some will die. A small percentage of electronics give up the ghost early and are "lemons." Big deal. On average, the SSDs will last longer than we will have the consoles.
 
Was thinking about this fast SSD, what kind of memory speed would be required for OS functions? Maybe it has been mentioned already. Is it possible that Sony could use a portion of the SSD as memory for the OS and keep the entire 16 GB ram available for games?

Also wondering if they may have big plans for game design around this SSD, and it is actually a 1 TB drive in the system but only 825 GB is available to users, with the left-over 175 GB dedicated to developers. I guess my question is: is an 825 GB SSD a common storage size? It just seems like a weird, random size to me.
You can't swap out services, i.e. the things that keep the OS ticking. But you certainly can swap out all the user-interface components. The user interface is undoubtedly the biggest RAM component when it comes to the OS, so certainly savings can be made there.

However, what's often missed is the RAM buffer needed for the constant game DVR recording of gameplay. This is likely the biggest part of the reserve, and it will need to remain so as not to wear down the SSD prematurely.

The above should be applicable to both consoles.
 
I don't understand why you'd think this. SSD drives have no moving parts and will last longer on average than a "regular" hard drive. There used to be more sensitivity in SSDs, but look at Ars Technica, who did an extensive benchmark in 2014 or 2015 I believe and found that even back then, SSD drives lasted WAY longer than rated. This is like saying that you'd better be careful because your RAM is going to wear out! Will it? Sure! But probably not for the next 20 years or so!

Bottom line, SSD storage is way faster and it is more reliable. No moving parts and, these days, less heat. Also, do you think regular hard drives don't wear out from repeatedly writing data to them? Of course they do!

It's fruitless to worry about the SSDs in these new consoles. Sure, some will die. A small percentage of electronics give up the ghost early and are "lemons." Big deal. On average, the SSDs will last longer than we will have the consoles.
Maybe, but it's certainly not conclusive whether it is or isn't a good idea to constantly write to the SSD. Of course, a hybrid approach could be employed where the RAM buffer is smallish and dumped to the SSD every few minutes as opposed to a constant stream, etc.
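For a rough sense of scale, here's a back-of-envelope of what constant DVR recording would write to the SSD if it weren't buffered in RAM (the capture bitrate, play time and endurance rating below are all assumptions picked for illustration, not console specs):

```python
# Estimate daily write volume from continuous gameplay capture and how long it
# would take to exhaust an assumed endurance rating.

RECORD_MBPS = 80                    # assumed 4K60 HEVC capture bitrate (Mbit/s)
HOURS_PER_DAY = 4                   # assumed daily play time
DRIVE_TBW = 600                     # assumed endurance rating, terabytes written

bytes_per_hour = RECORD_MBPS / 8 * 3600 * 1e6      # Mbit/s -> bytes per hour
gb_per_day = bytes_per_hour * HOURS_PER_DAY / 1e9
years_to_tbw = DRIVE_TBW * 1e3 / gb_per_day / 365

print(f"{gb_per_day:.0f} GB written per day")                      # ~144 GB/day
print(f"~{years_to_tbw:.0f} years to hit {DRIVE_TBW} TBW from recording alone")
```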
 
Where are people getting the idea that the PlayStation 5 will run games at a lower resolution than the Xbox Series X? They will both run games at 4K, but the Xbox Series X will run games at 4K with slightly higher-quality textures and/or with models that have higher polygon counts and/or with more individual rays of light traced (i.e. ray tracing). As for frame rates, both consoles will be able to run games at 60 frames per second at their respective appropriate graphics settings.

Here's an analogy: both the RTX 2080 and the RTX 2080 Ti can run games at 4K, but the latter can do so with higher settings and/or higher framerates.

I think both will do 4K for racing, fighting, sports, indies and probably many other games, with some demanding games scaling, just like this gen. The advantage will show up in resolution, but the difference is going to be much smaller this time: maybe some 20% fewer pixels on PS5, or they will use a checkerboard solution. Xbox's 560 GB/s will also get negated in a few years' time when games use more than 10 GB of RAM. The PS5, thanks to its SSD, will also have a good advantage in texture streaming, so that will help too.
 
The last few pages have been one big mess. People are confusing decompression speed with compression rates and missing a lot of info regarding how these consoles really work.

Both Sony and MS have a decompression block. Sony has a single-purpose decompression block which specializes in Kraken, while MS has two different decompression methods in their block: one is Zlib and the other is BCPack. There are two different metrics when we talk about these blocks, and they are confusing because both translate to GB/s. The first is how small the compression method can make a piece of data. For instance, take a 10MB image and compress it with Kraken: it should come out at around 6MB on average, while if you compress it using BC (not BCPack, because we don't have BCPack numbers yet) it would be around 5MB. Because the data becomes smaller, it can transfer faster. If we compress a 10MB file on one end to 5MB, transmit it, and decompress it again on the other side, we can transfer it in half the time, and that's where numbers like 8GB/s or 4.8GB/s are coming from. The other metric which also translates to GB/s is how fast the decompressor can decompress the data. If we look at our 5MB compressed file, decompressing it will take time. So for instance, if it takes 5 seconds to decompress the 5MB back to its original 10MB, our decompressor has a 2MB/s decompression rate. The decompression rate depends on four things: the compression algorithm (BCPack, Zlib, maybe Kraken?), the data type (image, sound, text?), the specifics of the data within its type (is the image full of black blobs or is it very noisy?) and how powerful the hardware that decompresses the data is (are we using a 2700K or a 9700K?).

So as you can see, there isn't really a fixed speed we can latch onto and say "the PS5 SSD is 129.2525% faster"; every single frame, the data that flows off the SSD will be compressed with different efficiency, at a different speed. Even before I get into numbers, I think it's pretty obvious that the PS5 solution is faster: they just have too much raw bandwidth for MS to keep up with, so it doesn't really matter how anyone manipulates the numbers, even if we are talking about a very ambiguous area that's hard to slap a single number on.

So after my extremely long opening, I want to refer to what MS and Sony have said and how it probably translates to the real world. Sony is using Kraken; developers can use whatever compression they like, but if they want the PS5's decompression block to decompress the data instead of the CPU, they will use Kraken. Every compression method has edge cases, so a figure like "22GB/s" shouldn't be taken as anything other than a figure that Sony and Oodle got here and there when Kraken hit some perfect piece of data that got compressed by 75%, which quadruples the bandwidth as a result. Cerny told us typical compression will result in around 8-9 GB/s. It means that some data will run through the bus at 6GB/s, other data at 14GB/s, but in the end the average is around 8-9 GB/s.

MS, on the other hand, is a bit more all over the place; we have a few figures to talk about. The first one is "over 6GB/s", which is how fast the decompressor block can decompress data. It means that, under average usage, the decompressor block will decompress data at over 6GB/s. It doesn't mean the data was compressed 2.5x so the SSD bandwidth will hit 6GB/s+. It's like saying "my 9700K can decompress this file at 6GB per second": it doesn't tell us anything about the size or how well compressed the file is (was the 10MB file compressed to 9MB? To 3MB? Who knows?), just how fast the hardware can decompress it. It also means that sometimes the block will decompress at 4GB/s and at other times 10GB/s; it depends on the data type and the other variables we talked about three paragraphs ago.

The second number MS gave us was 4.8GB/s; that's the equivalent of Cerny's 8-9 GB/s figure. That number is a culmination of the speed of the decompression block, which acts as a ceiling on the whole decompression process; Zlib, which is the compression method all the non-texture data will be compressed with; and BCPack, which is the compression method all the textures will be compressed with. If we flatten the discussion to "BCPack can do 50%", then 4.8GB/s is basically a combination of the raw transfer rate of the SSD, the X% of the data which will use Zlib and the Y% of the data which will use BCPack. The reason MS's number is 2x the raw bandwidth isn't because BC has a 50% average compression ratio, but because some data will use Zlib, most data will use BCPack, and BCPack compresses better on average than BC's 50%.

So to sum it up, MS is expecting that on average data will be compressed by around 50%, yielding 4.8GB/s, while Sony is expecting that on average data will be compressed by 31-39%. It doesn't mean BCPack has 50% compression; it actually means it does better than 50%, because after you average it out with Zlib, which is sub-30% compression on average, you get 50% compression. The 6GB/s figure MS has thrown around should be discarded from this discussion, because it doesn't tell us much and we don't even have a number for the PS5's decompression block. BC can have edge cases that hit over 75% compression and BCPack is more efficient than BC, which means that some rare data on XSX will transfer at well over 10GB/s, but using that number would be bullshit, just like the 22GB/s number from an edge case on PS5.
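As a sanity check on how a weighted average like that could land on 4.8GB/s, here's a toy calculation (the 70/30 texture split and the per-path compression ratios are assumptions I picked so the numbers work out, not MS's figures):

```python
# Weighted effective bandwidth from two compression paths over a 2.4 GB/s raw SSD.

RAW_SSD_GBPS = 2.4

mix = {
    # name: (fraction of streamed data, assumed compression ratio)
    "BCPack (textures)": (0.70, 2.2),        # better than 2:1 on textures
    "Zlib (everything else)": (0.30, 1.55),  # sub-2:1 on general data
}

effective = sum(frac * RAW_SSD_GBPS * ratio for frac, ratio in mix.values())
print(f"weighted effective bandwidth ≈ {effective:.2f} GB/s")   # ≈ 4.81 GB/s
```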

Such a long post which can be summed up in one simple line: XSX has 4.8GB/s typical bandwidth while PS5 has 8-9 GB/s typical bandwidth. Everything else is just noise.

Very informative post dude, and much appreciated. That clears a lot of things up, tbh 👍
 
Are you claiming AMD's NAVi scaling is bullshit then?

Roadmap.jpg



How come the XSX GPU's results scaled to RTX 2080 level in the pro-NVIDIA Gears 5 benchmark at PC Ultra settings?

This is Navi 10's 20 WGP / 40 CU design

2019-07-21-image-4.jpg


28 WGP version has to scale IO

Not claiming the scaling is bullshit; it clearly allows you to add CUs to the Shader Array, add Shader Arrays to the Shader Engine, and add Shader Engines to the GPU, and each is a replicable, composable block.
Whether you also need to scale the L2 size to compensate or not depends on the design, but it is possible.

Again, performance is increased: you have more CUs and the resources not to starve them. It depends on how you scale and tweak them (do you have 8 smaller shader arrays and 4 shader engines, or do you add DCUs to the existing arrays and keep the number of engines the same? That's up to the semi-custom partner to decide, foot the bill for, and accept that in some cases sustained performance may sit closer to, or further from, the theoretical peak more or less often).
 
Yeah, from what I can gather and understand, Xbox Lockhart seems like it will be a cash-grab console, a console that Microsoft made just to get more sales and make Xbox gain relevancy again.

But this made me think: doesn't that FLY IN THE FACE OF EVERY XBOX FANBOY WHO ARGUED THAT SALES DO NOT MATTER & THAT XCLOUD IS THE FUTURE?!

There's their answer right there. Just a thought that came to my mind, nothing serious. What about you guys? Bo_Hazem and Rusco Da Vino 😂😂😂
 
I think both will do 4K for racing, fighting, sports, indies and probably many other games, with some demanding games scaling, just like this gen. The advantage will show up in resolution, but the difference is going to be much smaller this time: maybe some 20% fewer pixels on PS5, or they will use a checkerboard solution. Xbox's 560 GB/s will also get negated in a few years' time when games use more than 10 GB of RAM. The PS5, thanks to its SSD, will also have a good advantage in texture streaming, so that will help too.

Very reasonable observation. The differences in graphics will be minor when both consoles utilize their specific advantages.

I guess the casual gamer won't see any difference at all. This is merely a hot topic for geeks. The geeks, however, being such a minority, won't impact sales even in the slightest.
 
Very reasonable observation. The differences in graphics will be minor when both consoles utilize their specific advantages.

I guess the casual gamer won't see any difference at all. This is merely a hot topic for geeks. The geeks, however, being such a minority, won't impact sales even in the slightest.
Resolution and effects precision are difficult to spot with the naked eye at these levels. But the PS5's advantage in feeding assets to the GPU should be obvious in scene detail if devs choose to exploit it. Even setting common sense aside, Sony obviously agrees, otherwise they wouldn't have spent their BOM on pushing their solution to such levels.

I appreciate this is not a conversation Xbox fans enjoy, but the reasonable person can see the forest for the trees.
 

This is very similar to the type of "questions/concerns" people were raising at the start of this gen. I believe this gen's version was "why didn't Sony say the PS4 supported Tiled Resources?" The talking point that time was megatextures. It's a repeat; now it's BCPack and VRS and Mesh Shaders. This is why I find engaging with some people tiring and irritating: "but why didn't he mention this thing and that thing?"
Both Sony and MS have a decompression block. Sony has a single-purpose decompression block which specializes in Kraken, while MS has two different decompression methods in their block: one is Zlib and the other is BCPack.
Minor correction: the PS5 decompression block supports Kraken and Zlib.
 
But the PS5 advantage in feeding assets to the GPU should be obvious in scene detail if devs chooses to exploit it.

Are we talking third party/multiplats?

As I understand it, they would have to create a new adaptable/dynamic kind of engine to utilize this feature.
I can't see this happening in the near future.

And sure, we may see exclusive games on PS5 which will not be possible/portable on XSX or PC. But it will be difficult to compare notes, as it were. We may compare engines, but that's very much it.

Can you elaborate?
 
I don't understand why you'd think this. SSD drives have no moving parts and will last longer on average than a "regular" hard drive. There used to be more sensitivity in SSDs, but look at Ars Technica, who did an extensive benchmark in 2014 or 2015 I believe and found that even back then, SSD drives lasted WAY longer than rated. This is like saying that you'd better be careful because your RAM is going to wear out! Will it? Sure! But probably not for the next 20 years or so!

Bottom line, SSD storage is way faster and it is more reliable. No moving parts and, these days, less heat. Also, do you think regular hard drives don't wear out from repeatedly writing data to them? Of course they do!

It's fruitless to worry about the SSDs in these new consoles. Sure, some will die. A small percentage of electronics give up the ghost early and are "lemons." Big deal. On average, the SSDs will last longer than we will have the consoles.
I've seen some of those endurance tests on server-class SSDs. In the early days, consumer-class drives used to last less; maybe that has changed since then. Even so, you are consuming bandwidth that's needed to make up for the lack of RAM. How much of the 4.8GB/s would recording take? And of the 9GB/s? Although with a footprint of 3GB in the Series X's RAM for the OS, I'd say Xbox is still using RAM to record.
 
PC master race probably laughing at us video game peasants talkin about SSD's lol

PC games will never achieve:
  • 1-second boot time
  • multiple games in suspend and resume in less than a second
  • no loading screens
  • jumping straight to a section in a game or online lobby in a second


That immediacy cannot be duplicated on a PC even if its SSD reaches 10GB/s, because of the inherent bottlenecks in the PC architecture.

Although PC can chase the PS5 when it comes to streaming high-resolution textures and details by using an excessive amount of RAM as a large cache to compensate.
 
PC games will never achieve:
  • 1-second boot time
  • multiple games in suspend and resume in less than a second
  • no loading screens
  • jumping straight to a section in a game or online lobby in a second


That immediacy cannot be duplicated on a PC even if its SSD reaches 10GB/s, because of the inherent bottlenecks in the PC architecture.

Although PC can chase the PS5 when it comes to streaming high-resolution textures and details by using an excessive amount of RAM as a large cache to compensate.
3xmlb2.jpg
 
How do you quantify how busy a CPU/GPU is in percentages?

There is no way to quantify utilization from busy numbers.
You can only say that utilization is not zero.
To know how much above zero it is, you need to open the game in a frame debugger.
And most of the time that's hard to do on a production version (there are some tricks though, and some games just allow it).
 
I'll describe two scenarios to help you understand:
  1. GPU & CPU running at full load, maxing their respective power budgets: downclocking will prevent the GPU from exceeding its power budget
  2. GPU using all of its power budget and CPU using only 80%: in the event a GPU load needs extra power, SmartShift can divert unused CPU power to the GPU to prevent downclocking
Thank you, this explains it properly to me. The only thing I don't understand yet is why the GPU would downclock if the game running needs its full power.

That is so wrong on so many levels, and a straw man at that point, and you know it: the PS5 has roughly a 16% weaker GPU than the Xbox Series X, which isn't anywhere near the GPU gulf between the PS4 & XB1 (that was around a 40% difference), where everything was built from the ground up for PS4 and then scaled back in resolution so the Xbox One's 1.31 TF GPU could handle the games properly.

BUT the GPU difference between the Xbox Series X and Xbox Series S is HUGE, like a generational difference "HUGE"!
If you'd read the next 5-10 messages, you would've seen that I was exaggerating and not being serious.

Although, looking at what you've just written, I have to say you are pretty wrong. Obviously the power difference between PS5 and XSX isn't that big, and no one should care, but Lockhart isn't holding back next gen either. If it's purely there to output 1080p, there isn't really an issue.
 
Just imagine a game where you can be in a huge open-world town, and the moment you open a door and walk in, the game shifts all of its rendering and memory prowess onto the inside of the building.

Right now interiors are sorely lacking in detail. But open-world games will now be able to transform every interior into the same level of detail as something more confined and linear like The Order.

It's going to be incredible not seeing bland interiors.

I think at some point we will reach diminishing returns in how big and detailed open-world games can be. It won't make any sense to make even a GTA5-sized game where you can enter every building and every room if there won't be anything for the player to do there, just because you can. Of course you have the exploration type of players, but that is a really small part of the player base.
Yes, there will be Star Citizen, where you can go to hundreds of different planets, but the developers have to give you something to do there. Just sightseeing won't be enough, and that would be wasted development cost.
 
To both of you: my experience going from PS4 to PS4 Pro is massive, especially having started Watch Dogs 2 on the base console and then getting the PS4 Pro day one when it released (both day one, actually); the difference is way, way bigger than what you guys seem to estimate. And yes, the X1X has noticeably sharper and cleaner resolution than the PS4 Pro on multiplats.

Anyway, native 4K can really backfire on you if you're using crappy assets; the grass in Halo Infinite is cartoonish to say the least, and Far Cry 2 from about 12-13 years back is even comparable:

3840_HaloInfinite_E318_AncientRings-2060x1159.jpg


cn09rhfsf5o21.jpg


I think what happened here is that the devs, just like Hollywood directors, think the audience isn't educated enough, and enlarged small assets like the grass there until they look odd; the same funny thing actually happens in Far Cry games, where leopards sound like cougars instead of roaring.

Overall, you notice a native 4K game quickly compared to scaled ones; they tend to grab your attention with a very crisp image. Again, that doesn't mean the graphics will look good when the assets are of mediocre quality. The indoors are great in Halo Infinite, and some assets look insanely good, photorealistic to be precise; it's the shitty quality thrown into the mix that gets exposed pretty fast by the fact of using native 4K.

An example of the photorealistic assets, though it remains to be seen whether the quality gets downgraded.

156057510.jpg


Still, from all I've seen, Halo Infinite is the closest thing to next-gen graphics, assuming no massive downgrades happen like in The Witcher 3, Watch Dogs and The Division. Godfall is not worth mentioning graphically, but it might be a good game.

Halo Infinite will be a launch game, and the issue with launch games for most gens is that they are not that impressive (not taking full advantage of the hardware) compared to end-of-gen titles. For me, if next-gen games can look like the cutscenes in Ghost of Tsushima, with reduced load times and RT, I will be a happy man.
 
The reason Sony is so quiet about OS features this time around is because of what happened early this gen, after they mentioned built-in game streaming at their reveal.
 
I think at some point we will reach diminishing returns in how big and detailed open-world games can be. It won't make any sense to make even a GTA5-sized game where you can enter every building and every room if there won't be anything for the player to do there, just because you can. Of course you have the exploration type of players, but that is a really small part of the player base.
Yes, there will be Star Citizen, where you can go to hundreds of different planets, but the developers have to give you something to do there. Just sightseeing won't be enough, and that would be wasted development cost.
Are randomised quests not a thing in gaming? Are all quests hand-crafted? Genuine question, because I'm sure there are smart people out there who could figure out how to procedurally generate side quests, with a handful of hand-crafted ones to keep things interesting.
 