
Matt weighs in on PS5 I/O, PS5 vs XSX and what it means for PC.

Bryank75

Banned
Interested to see if they can give weight or feeling to what the character is holding. For example, feeling a sword move as you walk, feeling the reverberations of loading a clip/mag, feeling the magic spell in your hand. Someone on YouTube,
I'm not 100% sure where I heard it now, was mentioning that Sony has already shared a tiny piece of info that's just as huge as the SSD, but either no one noticed or no one is talking about it. Of course, it could all be hyperbole too. Like there's something in the console that is industry-changing, but no one can leak it because they'd know exactly who did it.
Yes, I heard that... I was wondering if they were talking about the real-time UI that updates all the time and tells you what you can join, what your friends are doing, etc. But I don't really know... I even re-read the Wired articles.
 

Quantum253

Gold Member
What makes you think that devs will use this for third-party games? For a single controller on a single platform?
Did they use the touchpad?
Don't be disappointed when - after the launch window period - almost ZERO games from third-party devs make use of any of this. ZERO.
Nintendo has had the craziest controllers for the last 3 gen cycles and third parties supported them.
 

Neo_game

Member
Even then it represents a 2 TF difference, which is 4x bigger than the 0.5 TF difference between PS4 and Xbox One.

That is the wrong way of looking at the numbers. The % increase is what matters. This time the difference is even less than half of the previous gen's. I think Xbox will have at best 20% more pixels, something like 1440p vs 1652p. PS5 will have a big streaming advantage though.
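As a rough sanity check on that pixel claim (a sketch assuming pixel throughput scales linearly with TFLOPS, which is a simplification):

```python
import math

# If XSX's ~18% TFLOP advantage translated directly into pixels pushed,
# the per-axis resolution bump is the square root of that ratio.
ps5_tf, xsx_tf = 10.28, 12.15
pixel_ratio = xsx_tf / ps5_tf                    # ~1.18x the pixels
print(f"{pixel_ratio:.2f}x pixels")
print(f"1440p -> ~{1440 * math.sqrt(pixel_ratio):.0f}p at equal settings")
```

which lands in the same rough ballpark as the figures above.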
 
Last edited:

killatopak

Gold Member
If Matt knows so much, then why, as a moderator, did he let KLee run rampant with all the false and bullshit information? Why didn't he put him in check instead of allowing everyone to run with KLee's narrative that the PS5 was more powerful, when in fact it isn't?
He was a moderator but hasn't been one for a long time. In fact, he was very clear that XSX was stronger before the specs were officially revealed.
 

Bryank75

Banned
That is the wrong way of looking at the numbers. The % increase is what matters. This time the difference is even less than half of the previous gen's. I think Xbox will have at best 20% more pixels, something like 1440p vs 1652p. PS5 will have a big streaming advantage though.
There was already almost a 2 TFLOP gap between PS4 Pro and One X, and the difference is barely noticeable. Hilarious... they picked the wrong battle...
 

Utherellus

Member
wrap up

[image]
 
Last edited:

Psykodad

Banned
Where did you hear that PS5 was going to be the target platform for development? Wouldn't it be an issue to develop a game for an SSD solution that doesn't exist anywhere else? It would make more sense to develop on a platform that doesn't have that system and scale it up for the PS5.
It scales down for all platforms. Epic literally talked about it during the tech demo showing; it's the entire point of the technology.
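As a toy illustration of that kind of scaling (every name and constant below is invented for illustration, not Epic's actual system):

```python
# Toy model: if the world costs a fixed number of MB per meter traveled,
# faster storage supports faster traversal (or a denser world) before
# pop-in sets in. Constants are invented for illustration.
def max_traversal_speed(drive_mb_s, mb_per_meter=2.0):
    return drive_mb_s / mb_per_meter  # meters per second before starving

for name, mb_s in [("HDD", 100), ("SATA SSD", 550), ("XSX raw", 2400), ("PS5 raw", 5500)]:
    print(f"{name}: ~{max_traversal_speed(mb_s):.0f} m/s before pop-in")
```

Same engine, same code path; only the budget changes per platform.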
 
Last edited:

Utherellus

Member
Yes, Series X has the better GPU. PS5 has faster storage.
Both will have benefits; currently devs say the benefits of the faster storage are a bigger deal, but we will have to wait and see.

Welcome to the last 2 months.

In Forza Horizon 4 you can literally traverse a highly detailed open world at 400 km/h from an HDD, on a 1.3 TF machine, without streaming/pop-in issues that anyone playing from the couch would care about or notice.

I/O optimizations are good for development in the first place. They save time and resources. That's why devs are excited.

4,800 MB/s and 9,000 MB/s speeds for consumers will be BARELY noticeable, except for loading times.
 

Utherellus

Member
I'm just trying to put the old 100 MB/s standard vs the new standard into perspective. You can install even an ancient SATA SSD in a PC, without any I/O architecture optimizations, and pop-in will 98% go away.

An NVMe SSD with 4.8 GB/s performance and custom chips? Performance will be ASTRAL. More astral on PS5, sure, but it's already so stupidly fast that we will have to look for streaming pop-in differences under a microscope.
 
Last edited:

ToadMan

Member
I love how you use minimization to prove your point. The GPU has a 44% difference in CUs and an 18% difference in TFLOP output, but the magnitude of having nearly 2 TFLOPs more horsepower locked is substantial.

You forgot to mention that PS5 has a 20% faster clock and unified VRAM in your summary... I'm sure it was just an oversight and not the "minimization" that is so grievous to you.

A 2 TFLOP magnitude difference is irrelevant - unless you want to keep playing last-gen games, that is.

Games will be developed and scaled to take advantage of what power is available - there won't be anything "left on the table". So it's only relative performance - not absolute magnitude difference - that is significant.
 

ToadMan

Member
What's theoretical at this point is how much of a tangible real-world difference, besides loading times, the PS5 SSD will provide over XSX's SSD.

There's no revolution in game design enabled by a 5 GB/s SSD that isn't possible on a 2.4 GB/s one too.

Well, you said you're looking for graphics enhancements - 8K textures vs 4K textures should fit your bill. That's what a 2x SSD speed will get you.

For gameplay - well, perhaps not. Then again, I can play Civ 6 on Switch. It works, so should I say the Switch is comparable to XB1 performance because I can play the same game with the same game design? I'll leave that up to you to decide.
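For scale, some napkin math on the storage cost of that texture bump (a sketch assuming roughly 1 byte per texel for block-compressed formats, plus about a third extra for mip chains; real formats vary):

```python
# Approximate size of a single texture at ~1 byte/texel (BC7-class),
# with +33% for the mip chain. Real compression ratios vary by format.
for side in (4096, 8192):
    mb = side * side * 1.33 / 2**20
    print(f"{side // 1024}K texture: ~{mb:.0f} MB")
# ~21 MB vs ~85 MB: an 8K texture is 4x the data of a 4K one,
# which is why the texture bump leans so heavily on streaming speed.
```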
 

killatopak

Gold Member
Some additional stuff: the context is someone asking whether PC, specifically the absence of an SSD in every PC, will bottleneck games made for PS5 and XSX.

There is 'it's really fucking fast'

And there is 'it's doing stuff that no one else can do or match'

The latter is the marketing and PR. No one is actually saying "the PS5 SSD is not fast"
The PS5 IO setup is literally unmatched in the consumer space.
So that means everyone else is going to be unable to do what the PS5 can do?

So you have worked with all the new technologies and products coming out and can definitively say everyone else is going to hold the PS5 back?

Because that is the level of discourse on this forum and elsewhere.
The PS5 can, again, literally do things no other consumer hardware can do.

But nothing is going to “hold the PS5 back,” that’s just stupid. The PS5 will have plenty of chances to shine.

From what I interpreted here, it seems the next gen games will still shine on the new consoles even if PC games are made with less I/O speed in mind. Basically, engines made for next gen would scale really well with all the new features the new consoles have.

I mean even if the PC space is the bottleneck, with new DX12 features on the way and new SSDs being cheaper and faster, it won’t be for long.

As for Matt’s comments about the PS5 speed not being held back, I would assume that just means engines scale really well as I’ve said earlier.

It can probably scale from PC HDD->PC SSD->XSX->PS5 if not being held back is true.

As for the inclusion of HDDs, we have to remember that engines like UE5 still support Switch and mobile, which have less-than-ideal speeds.
 

ToadMan

Member
You're just picking metrics that favor Sony. The Xbox One vs. PS4 was just a 0.5 TFLOP difference. The gap this generation is 2 TFLOPS, a much bigger difference than last gen. Percentage means nothing. It's just funny math at this point.

:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: Funny math. You know, I've asked about 3 different people to show me a simple percentage calculation or the formula for processor power consumption. What's not funny is how few people seem to be able to do the simple mathematics of a ten-year-old.

PS4 Pro - 4.2

XB1X - 6.0

Exactly the same magnitude difference as PS5 vs XSX. But PS5 vs XSX is half the relative difference.

There's a quality bump to XB1X for sure - is it night and day? Nope, and the same gap will be indistinguishable this coming gen.

Magnitude is irrelevant - devs scale their code and assets to the power available. If it wasn't for their efforts, we'd still be playing Super Mario Bros at 3000 FPS.
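Since the thread keeps asking for it, here is that percentage calculation worked through with the public spec-sheet TFLOP figures:

```python
# Absolute vs relative GPU gaps, using the public spec-sheet numbers.
pairs = [
    ("PS4 vs XB1",      1.84,  1.31),
    ("XB1X vs PS4 Pro", 6.0,   4.2),
    ("XSX vs PS5",      12.15, 10.28),
]
for name, big, small in pairs:
    print(f"{name}: +{big - small:.2f} TF absolute, "
          f"+{(big - small) / small:.0%} relative")
# Last two rows: roughly the same ~1.8-1.9 TF magnitude,
# but ~43% relative last gen vs ~18% relative this gen.
```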
 
Last edited:

ToadMan

Member
You didn't speculate, you made a definitive statement with nothing but your assumption to back you up.

I said "should be 64 ROPs". That is speculation based on what AMD had available mid last year.

If you have alternative suggestions, please present them - together with how they fit the stated TFLOPs.

Cheers.
 

ToadMan

Member
Google it my friend. PS5 is 50% than XB1. Do you know more than the devs? Do I have to post links?
Do you know more than everybody else on here, who all say it's at least an 18% difference between XSX and PS5?

I'm talking with you - not Google.

Surely you can present a simple calculation of your 50% number? You used it to say I am wrong. Even a 10-year-old could manage it (sorry if you're under 10 - perhaps I'm assuming too much).
 

ToadMan

Member
Yep, and 2 TF is a worst/best case for MS/Sony.
MS knows something about the 'sustained-ness' of it all.

Funny how some forgot that the Xbox OG, 360 and One X were superbly designed and the best in their class.
Somehow MS doesn't know how to design hardware, doesn't know what ultimate power is. :messenger_pensive:

RROD and ESRAM say "hi".
 

Elog

Member
In Forza Horizon 4 you can literally traverse a highly detailed open world at 400 km/h from an HDD, on a 1.3 TF machine, without streaming/pop-in issues that anyone playing from the couch would care about or notice.

I/O optimizations are good for development in the first place. They save time and resources. That's why devs are excited.

4,800 MB/s and 9,000 MB/s speeds for consumers will be BARELY noticeable, except for loading times.

I really thought we had moved on from this narrative now, i.e. that all SSDs are good for is loading times in the sense of loading screens and executables. We now know this is incorrect.

There are obvious things we do not know, but it is clear that the bandwidth and latency the PS5 delivers allow high-resolution textures to be loaded on demand for the gamer, i.e. more or less live as you move around in a scene.

Increasing texture resolution has a huge impact on how you perceive the graphics as a viewer, regardless of whether the base resolution of the game is 1080p, 1440p or 2160p. The latency and bandwidth of other platforms (including PC) do not allow for this live streaming of assets. How large will this texture resolution advantage be for the PS5 over other platforms? We have to wait and see for ourselves, but not even acknowledging it is wrong.
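To make the on-demand point concrete, a rough per-frame streaming budget at the headline raw bandwidths (a sketch that ignores compression, seek overhead and contention):

```python
# MB of fresh asset data each drive can deliver per frame at 60 fps (raw figures).
FPS = 60
for name, gb_s in [("PS5", 5.5), ("XSX", 2.4), ("SATA SSD", 0.55), ("HDD", 0.1)]:
    print(f"{name}: ~{gb_s * 1024 / FPS:.0f} MB of fresh assets per frame")
# ~94 MB vs ~41 MB vs ~9 MB vs ~2 MB per frame
```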
 

Lethal01

Member
In Forza Horizon 4 you can literally traverse a highly detailed open world at 400 km/h from an HDD, on a 1.3 TF machine, without streaming/pop-in issues that anyone playing from the couch would care about or notice.

Yes, because it was made to be played from an HDD; it was totally built around its limitations and hindered by them. A game built around an SSD would be a different story. You think it's highly detailed, but it could be 100x more detailed with improvements to the GPU, CPU and SSD. Instead of looking at a game and saying "that looks nice, so we don't need better storage", look at a Pixar movie and understand that, in comparison, games still look like a bunch of nicely folded paper with good art scribbled on it.

For devs that don't take advantage of it, it will just mean less time spent optimizing. But there are tons of ways to improve the visuals and overall design of a game when you increase the amount of data you can fetch from storage, and the possibilities increase as you increase that amount relative to the size of the game. You mention that pop-in goes away, but do you really think we are going to stop there? Now that we have power to spare, we are going to use that power until it's got nothing left to give.

I'm sure in the first year of release there will be tons of games that don't make full use of it, but a vast improvement in storage (SSD) is an absolute revolution, not just in gaming but in computing in general, that will cause differences as noticeable as ray tracing.

And I say this as an artist that has been frothing at the mouth over the idea of getting real-time ray tracing in games for the last two decades.

Just sit back and enjoy the ride; no need to try to explain what you have no real experience with.
 
Last edited:

jimbojim

Banned
And lastly, XSX's 12.1 is SUSTAINED performance at all times. PS5's spec sheet says variable for a reason. If something has variable performance, then that means performance will fluctuate.

12.1 TF is a theoretical number and much power is left on the table. 10.28 is also a theoretical number, but Cerny said that with variable clocks you can squeeze out more performance.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
Interesting. I really wonder what the SSD means for exclusive titles. It is the most significant change with respect to last generation in terms of raw KPI increase.

Also, I doubt multiplatform games will take full advantage of the PS5's architecture in the first few years, but once PCs can match the PS5's bandwidth, devs could explore their options.
 
Last edited:

longdi

Banned
RROD and ESRAM say "hi".

My launch PS3 also suffered YLOD.
PS3 and 360 reliability was hampered by the laws of that time, don't be sneaky!

Xbox One was the only misstep. Under Phil, we're seeing Xbox return to its power roots.

Like c'mon guys, MS has a history of putting ultimate power in its consoles.
All this talk about SSD custom design compensation, as if Sony is the only great one around... :messenger_grinning_sweat:
 
12.1 TF is a theoretical number and much power is left on the table. 10.28 is also a theoretical number, but Cerny said that with variable clocks you can squeeze out more performance.

How is 12.1 theoretical? It's not aspirational. It's math based on fixed HW and a fixed clock.

The XSX could have used boost clocks as well, but MS instead chose to lock its power and performance profiles.
 

FranXico

Member
My launch PS3 also suffered YLOD.
PS3 and 360 reliability was hampered by the laws of that time, don't be sneaky!
My PS3 also suffered a YLOD... after 8 years, not a few months. The RROD was an issue because roughly one third of launch 360s were dying within 6 months of use or less. "Laws of that time" my arse.
Pretty hypocritical to call others out for being sneaky while trying to deceive others yourself.
 
Last edited:

jimbojim

Banned
Google it my friend. PS5 is 50% than XB1. Do you know more than the devs? Do I have to post links?

Yeah, google it. PS5's SSD is 129% faster than XSX's SSD. Isn't that striking?

The GPU has a 44% difference in CUs and an 18% difference in TFLOP output, but the magnitude of having nearly 2 TFLOPs more horsepower locked is substantial.

Yes, it has 44% more CUs, if you calculate it without clock speed. Rays per second is around 18% more on XSX. Not so much, is it?

XSX:
52 TMUs x 4 x 1.825 GHz ~ 379.6 billion ray ops per second

PS5:
36 TMUs x 4 x 2.23 GHz ~ 321.12 billion ray ops per second
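Reproducing that arithmetic (a sketch assuming the "units x 4 intersections per clock x clock" formula the post uses; AMD quotes 4 box tests per ray accelerator per clock):

```python
# Peak ray-intersection rate: units x 4 tests/clock x clock (GHz)
# = billions of ray ops per second.
def gray_ops(units, ghz):
    return units * 4 * ghz

xsx, ps5 = gray_ops(52, 1.825), gray_ops(36, 2.23)
print(f"XSX {xsx:.1f} vs PS5 {ps5:.2f} -> {xsx / ps5 - 1:.0%} in XSX's favour")
# 379.6 vs 321.12, an ~18% gap, mirroring the TFLOP delta
```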

It was ported in a few weeks by one person, and I estimate that with DirectML it could get to 4K at near 60 fps as well.

At 4K? LOL.
XSX isn't as powerful as you think it is.
 
Last edited:

killatopak

Gold Member
All this talk about SSD custom design compensation, that Sony is the only great one around.... :messenger_grinning_sweat:
I don't think anyone is saying Xbox Series X's I/O isn't great. I'm sure Phil and his team did their best to come up with the best price/performance machine they could. It's just that Sony and MS have diverged in their priorities.

I even have a sneaking suspicion about why they settled for lower SSD speeds instead of something blazing fast. Of course this is just conjecture, since the existence of a separate SKU still hasn't been proven, but what if Lockhart was the deciding factor here?
 

geordiemp

Member
How is 12.1 theoretical? It's not aspirational. It's math based on fixed HW and a fixed clock.

The XSX could have used boost clocks as well, but MS instead chose to lock its power and performance profiles.

Oh dear, TF on all computers is the maximum calculated number, assuming all the CUs are operating at the same time at 100% efficiency. That never happens on anything.

It's like saying my car makes 180 BHP and can go 150 mph... A proper benchmark is 0-60 mph in 8 seconds on a flat road, no wind or rain.

So the actual real TF applied to any task will depend on how the power is used by the system for the tasks presented, and on any efficiencies in applying that power.
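For reference, those paper numbers fall straight out of a peak formula (a sketch assuming RDNA's 64 shaders per CU and 2 FLOPs per clock):

```python
# Peak FP32 TFLOPS = CUs x 64 shaders x 2 FLOPs/clock x clock (GHz) / 1000.
# A ceiling that assumes every CU is busy every cycle, never a benchmark.
def paper_tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000

print(f"XSX: {paper_tflops(52, 1.825):.2f} TF")  # 12.15
print(f"PS5: {paper_tflops(36, 2.23):.2f} TF")   # 10.28
```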

Maybe XSX will use more of its potential TF than PS5, maybe the other way around in benchmarks... What will the delta be? Will it grow beyond the paper 15/18% (up/down), or will the difference be the smallest we have ever seen in a generation?

What have third-party devs indicated? They would be the ONLY people who would know this.
 
Last edited:

longdi

Banned
My PS3 also suffered a YLOD... after 8 years, not a few months. The RROD was an issue because roughly one third of launch 360s were dying within 6 months of use or less. "Laws of that time" my arse.
Pretty hypocritical to call others out for being sneaky while trying to deceive others yourself.

Mine died around mid-2011. I still remember the last game I played was Nier, which I got from the Amazon Christmas sales. I barely managed to complete it before the console died.

It was a lightly used 60GB unit with PS2 hardware, purchased in early 2007. I guess I could reflow it and eBay it for riches.🤙

So that makes it around 3 years. Fuck Sony!

I went PCMR from then on.
 
Last edited:

longdi

Banned
Oh dear, TF on all computers is the maximum calculated number, assuming all the CUs are operating at the same time at 100% efficiency. That never happens on anything. [...]

Coming from PCMR, the difference between PS5 and Series X is akin to 2070S vs 2080 Ti. Yes, I'm suspicious of Mark's 'variable' frequency claims.
Only in console warring are such silicon differences hand-waved as insignificant, or claimed to be compensated for by custom I/O storage 🤷‍♀️
 

geordiemp

Member
Coming from PCMR, the difference between PS5 and Series X is akin to 2070S vs 2080 Ti. Yes, I'm suspicious of Mark's 'variable' frequency claims.
Only in console warring are such silicon differences hand-waved as insignificant, or claimed to be compensated for by custom I/O storage 🤷‍♀️

You be suspicious and concerned, young warrior. Mark Cerny did try to explain the power control as simply as possible, but it's difficult to explain technical concepts to the less informed.

What are you concerned about? Give me a laugh. Do you think predictive PID control on workload is worse than waiting to react to a slower thermal measurement? Do you understand the granularity of such control and why it would be better in the time domain?

Can you even spell out what PID stands for? Whatever, I'm bored.
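For anyone who does want it spelled out: PID is proportional-integral-derivative control, and the contrast being drawn is between steering clocks from a deterministic workload/power estimate versus reacting to a lagging temperature sensor. A toy sketch of the former (every constant here is invented; this is not Sony's actual controller):

```python
# Toy PI(D) loop nudging the GPU clock toward a power budget computed from
# the current workload, instead of throttling after heat has built up.
class PID:
    def __init__(self, kp, ki, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, 0.0

    def step(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

BUDGET, MAX_GHZ = 2.3, 2.23          # arbitrary power units / rated clock cap
pid, clock = PID(kp=0.2, ki=0.05), MAX_GHZ
for tick in range(5):
    power = clock * 1.1              # stand-in model: power tracks clock x load
    clock = min(MAX_GHZ, clock + pid.step(BUDGET - power))
    print(f"tick {tick}: {clock:.3f} GHz")
```

Because the input is the workload itself rather than a temperature reading, the clock for a given scene is deterministic across consoles, which is the point Cerny made about the design.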
 
Last edited:

Frederic

Banned
There was already almost a 2 TFLOP gap between PS4 Pro and One X, and the difference is barely noticeable. Hilarious... they picked the wrong battle...

This is something completely different, though. That was a mid-gen refresh; you can't compare it to a completely new gen.
Also, you can still do a lot more work with 2 TF of RDNA 2 than you can with 500 GF of GCN, since RDNA 2 is much more efficient.
Anyway, we will see as soon as the games arrive; then we can finally stop the bullshit SSD talk.
 

Utherellus

Member
I really thought we had moved on from this narrative now, i.e. that all SSDs are good for is loading times in the sense of loading screens and executables. We now know this is incorrect. [...]

Yes, because it was made to be played from an HDD; it was totally built around its limitations and hindered by them. A game built around an SSD would be a different story. [...]

To answer you both: I am talking about Series X specifically, not PC. I just mentioned PC to make the point that even an ancient, unoptimized SATA SSD can give you a vast difference.

So, back to Series X then.

Elog, it has a custom, optimized I/O architecture too. With BCPack and Sampler Feedback Streaming it can stream textures very efficiently. The 4.8 GB/s figure is only with BCPack+Zlib compression. Sampler Feedback Streaming is what makes a HUGE difference (which Cerny didn't confirm as a PS5 feature).


Take a look at this:

[image]




Lethal01, hello colleague, I'm a CG artist too.
So what are you trying to prove here? Your words were a general overview of the industry situation. And? How does that relate to my point that the jump from 100 MB/s to 4,800 MB/s is already colossal, and that the difference between the streaming capabilities of these two consoles will be microscopic?

"taking advantage of SSD" "dude, asset sizes will grow"

How much will they grow, though? 3 times? 4 times? Don't forget: there is 13-14 GB of RAM available (and it's shared).

Then you will say: "But devs will stream out unnecessary assets when you aren't looking at them, thus freeing up memory. Assets can be up to 10 times more detailed and fidelity-rich."

So what we've got here is 10 times more fidelity, compensated for by a fast SSD to free up the RAM. Amazing engineering, right? <3

My friend, the thing is... CPU and GPU bottlenecks say fucking hello.

By your logic, devs will stream astonishingly high-detailed assets, yeah? Enjoy 30 fps 1080p gaming on next-gen consoles then.

A 10 TF machine streaming more detailed worlds than a 12 TF machine. Wow, Sony outplayed MS. Truth is, the game will run so poorly that you will lose all the hype.

Do you need your game to be sabotaged by high-quality assets, with its fps/resolution smashed down? Why are you ignoring the CPU/GPU bottleneck aspect?
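For what it's worth, the replacement-rate math both sides are circling (a sketch using the headline figures; the usable-RAM number is an assumption):

```python
# How fast each drive could, in principle, refill the usable game RAM pool.
USABLE_GB = 13.5  # rough usable game memory, assumed similar on both consoles
drives = [("PS5 raw", 5.5), ("PS5 compressed", 9.0),
          ("XSX raw", 2.4), ("XSX compressed", 4.8)]
for name, gb_s in drives:
    print(f"{name}: full pool in ~{USABLE_GB / gb_s:.1f} s")
# ~2.5 s / ~1.5 s vs ~5.6 s / ~2.8 s
```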
 
Last edited:

THE:MILKMAN

Member
Coming from PCMR, the difference between PS5 and Series X is akin to 2070S vs 2080Ti. Yes, im suspicious of Mark 'variable' frequency claims.
Only in console warring where such silicon differences are hand waved as insignificant or can be compensated by custom I/O storage 🤷‍♀️

Go and read Liabe Brave's post about the variable clocks over at Era in the PS5 architecture thread. It is very informative.
 

Lethal01

Member
To answer you both: I am talking about Series X specifically, not PC. I just mentioned PC to make the point that even an ancient, unoptimized SATA SSD can give you a vast difference. [...]

Just dropping a tweet from the lead graphics dev of Godot.

[tweet screenshot]

I'm just tired at this point. You can push the PS5 SSD to its limits without being bottlenecked by the GPU, and I'm not talking about just asset streaming.
Nanite is expensive, but Lumen is even more so.

Matt claims that the Series X performance is very impressive, but that going to the PS5 is a big jump.
 
Last edited:

Degree

Banned


Makes sense. Did anyone say something different? Of course XSX is much more powerful (better graphics and performance) and PS5 is faster (shorter loading times). I think it's clear now, right?

Just dropping a tweet from the lead graphics dev of Godot. [...] Matt claims that the Series X performance is very impressive, but that going to the PS5 is a big jump.

He also says:

As I have said before, I expect the difference in third party titles to be modest, as they can’t be designed around a faster solution.

You simply cannot build your game around the fastest SSD and then scale down. That just isn't possible and never will be.
The only scaling that's possible is based on the GPU.

That's why on PC you have different settings like low, medium, ultra, etc.
But you can NOT have different settings based on the SSD. It doesn't work like that.
 
Last edited:

Utherellus

Member
I'm just tired at this point. You can push the PS5 SSD to its limits without being bottlenecked by the GPU, and I'm not talking about just asset streaming.
Nanite is expensive, but Lumen is even more so. [...]

Well, UE5 is another story. Its rendering methods are different from the traditional ones.

But UE is a tiny part of the whole industry. PS5 will have the lead in every UE game? Okay, no problem. But that's like 2-3% of games.
 

Eliciel

Member

I am only going to buy a PS5 and play Xbox games on PC, but even I am voting for closing the thread based on that.
Okay, I heard you, I agree, and it's all said and done: raw-numbers-wise the XSX is a beast.
Let's all enjoy the games coming as much as possible on either of the consoles and call it a day. I don't see value in "quote wars".
At the end of the day a console is nothing but a vessel for good games. Either good games appear or they don't. That's all that matters.
 

longdi

Banned
Go and read Liabe Brave's post about the variable clocks over at Era in the PS5 architecture thread. It is very informative.

It is AMD SmartShift, that's all there is to it. Look at where AMD deploys SmartShift and you get the same answer.

The hardware is still limited by the smaller die and the choice to clock up to 2.23 GHz.
It is this 2.23 GHz, which gives 10.3 TF, that we are concerned and suspicious about.
 

Marlenus

Member
I said "should be 64 ROPS". That is speculation based on AMD's available mid last year.

If you have alternative suggestions please present them - together with how they fit the stated Tflops.

Cheers.

I don't have alternative suggestions because I simply don't know. 64 ROPs seems like the minimum it will have, and I would expect the maximum to be 96, but this is a gut feeling based on what I expect to be needed for 4K rendering over the course of a generation.

Given that Series X has 56 CUs with 52 active, I'm led to believe it is using 2 shader engines, although that would mean more shaders per engine than anything seen so far; another option is 4 shader engines, which would mean 14 CUs per engine with 13 active in each.

The problem is that neither design matches the number of memory controllers per shader engine seen so far, since Navi 10 has a 256-bit bus and Navi 14 is 128-bit. We do see that you can disable a memory controller and only lose bandwidth, since the 5600 XT has the same shader and ROP count as the 5700 but uses a 192-bit bus with 6GB of RAM instead of a 256-bit bus with 8GB.

So, putting all this together, the Series X GPU is wholly unique compared to what we have seen so far. That makes using existing products to guess at the unknowns in the design a bit pointless. Maybe there will be more to go on when AMD releases more information about RDNA 2.
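The arithmetic behind those two candidate layouts, as a quick check:

```python
# Candidate shader-engine splits for a 56-CU die with 52 CUs active.
TOTAL, ACTIVE = 56, 52
for engines in (2, 4):
    print(f"{engines} shader engines: {TOTAL // engines} CUs each, "
          f"{ACTIVE // engines} active per engine")
# 2 SEs -> 28 CUs (26 active) each; 4 SEs -> 14 CUs (13 active) each.
```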

I really thought we had moved on from this narrative now, i.e. that all SSDs are good for is loading times in the sense of loading screens and executables. We now know this is incorrect. [...]

Those textures need to be processed by the GPU, and PS5 has a texture fill-rate deficit to Series X.
 

geordiemp

Member
To answer you both: I am talking about Series X specifically, not PC. I just mentioned PC to make the point that even an ancient, unoptimized SATA SSD can give you a vast difference. [...]

A new warrior appears - what's your original NeoGAF user name?

You would think by now the Xbox Discord would send over people who don't dribble.

You sound like MisterX.
 
Last edited:

Soulsdark

Member
Meanwhile here I am, and I couldn't give less of a fuck about who can render the most pores on a character's face.

I just want good art direction and design; this obsession with realistic graphics just seems to hold that back.
Everyone is arguing about who can render photorealism the best, but I just don't understand what's so exciting about it. Super-ultra graphical fidelity just doesn't excite me anymore.

Especially when devs are now even talking about 8K, but we're still stuck at 30 fps gaming. I am just so tired of this dick-measuring contest about who can make games that look the closest to real life and movies.
Who really cares which console is going to be 1% faster?
You're probably honestly not even going to notice the difference.

I can understand if you're some kind of tech guy who's just really interested in this stuff, but let's be real here: not even 1% of the people arguing about this are.
 
Last edited:

THE:MILKMAN

Member
It is AMD SmartShift, that's all there is to it. Look at where AMD deploys SmartShift and you get the same answer.

The hardware is still limited by the smaller die and the choice to clock up to 2.23 GHz.
It is this 2.23 GHz, which gives 10.3 TF, that we are concerned and suspicious about.

Sorry, but this is wilful ignorance. The post is very detailed but easy to understand. It even has a useful animated gif.

I'm not sure why you even bring up SmartShift, or talk about the hardware being limited by a smaller die...?
 