Next-Gen PS5 & XSX |OT| Console tEch threaD

Is he serious?? Lmaooo

I am really worried about the mental condition of that guy
 
-I don't get this: why are we expecting GPUs in general to get more CUs over time, since more CUs contribute to more TFLOPS, assuming there's sufficient bandwidth over time as well?
-Does CPU frequency have less of an impact on graphics than GPU frequency? You could say the PS5's GPU frequency increase over the XSX's GPU frequency (~400MHz) is negligible.

- We are not. It's easier to clock things lower and have more CUs, but there are edge cases to both. We still don't know how RDNA2 is impacted by higher clocks and how efficient it is at them.
- Yes, of course. You could get Uncharted-level graphics with potato CPUs like the Jaguars. They are not the bottleneck.
- The difference between 3.8 and 3.5GHz is not 400MHz, nor is the difference between 3.6 with SMT and 3.5 (with or without a similar mode enabled).
 
Hahaha. Maybe if it had a first-gen Zen 1 processor you could get Uncharted at 60fps!! :messenger_grinning_smiling::messenger_winking_tongue:

Maybe, but unlikely. Just look at similarly set up machines with different CPUs and you'll notice the FPS difference is not massively affected by them, unless you are trying to run Crysis on a 10-year-old Celeron. You're more likely to be bottlenecked by the GPU than the CPU.

Edit: that said, it depends on the game. Kerbal Space Program is CPU-hungry because of its physics calculations, so there you'll see bigger improvements.
 
Quantum Error Dev: Cerny Is A Genius, PS5 Feels Designed with Developers In Mind; Zen 2 Is Exciting

"We feel the man is an absolute genius! It felt like the system is designed with developers in mind. We are really excited about the Zen 2 CPU, which will make things possible on PS5 that were not on PS4. Also, the Tempest audio engine explanation made us squeal like little excited kids! The HRTF and sound experience that we will be able to create for our players is truly groundbreaking with the PS5."

 
They were testing 536GB/s last year but stuck with 448GB/s. Probably diminishing returns and cost-effectiveness. 536 would have been better, but why pay, let's say, 15% more when your real-world advantage is 5%?

People still have their mindsets stuck in the HDD era. With 5.5-22GB/s SSD speeds you have 4,621-54,355% more data transferred per second (minimum-maximum, compared to the 40.4-116.5MB/s of the PS4), assuming zero bottlenecks on both sides. So you would need fewer cycles to transfer that data, not to mention the 12 channels, 6 priority levels, and GPU cache scrubbers (less offloading/reloading = more efficiency).

If 448GB/s was good for an RTX 2080 with loads of bottlenecks, then it should be overkill for a PS5 that's optimized like no other.
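A quick back-of-envelope check of those ratios (a minimal sketch, using the HDD and SSD figures quoted above):

```python
# Rough sanity check of the quoted percentages (back-of-envelope only).
hdd_min, hdd_max = 40.4e6, 116.5e6   # PS4 HDD throughput, bytes/s (40.4-116.5 MB/s)
ssd_min, ssd_max = 5.5e9, 22e9       # PS5 SSD raw / best-case compressed, bytes/s

print(f"worst case: {ssd_min / hdd_max:.0f}x (~{ssd_min / hdd_max * 100:,.0f}%)")
print(f"best case:  {ssd_max / hdd_min:.0f}x (~{ssd_max / hdd_min * 100:,.0f}%)")
# -> roughly 47x to 545x, i.e. about 4,700% to 54,500%, in the same ballpark as the post.
```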

 
Cerny said this himself: if you replace the HDD in a PS4 Pro with a 10x faster SSD, it only amounts to a 2x decrease in loading times.


I think that's what he means by eliminating every possible bottleneck. From my understanding there is more custom hardware in the PS5's I/O solution than there is in the XSX's. That is how they managed to achieve those incredibly high speeds compared to the XSX's I/O system.

That's how I understand it.
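A toy model of that 10x-drive/2x-load point (illustrative numbers only, not anything Cerny quoted): if only part of the load time is raw disk reads, the rest caps the speedup.

```python
# Toy loading-time model: only the disk-read portion gets faster.
def load_time(io_seconds, other_seconds, drive_speedup):
    """Total load time when only the raw reads benefit from a faster drive."""
    return io_seconds / drive_speedup + other_seconds

hdd = load_time(io_seconds=12.0, other_seconds=8.0, drive_speedup=1)   # 20 s total
ssd = load_time(io_seconds=12.0, other_seconds=8.0, drive_speedup=10)  # 9.2 s total
print(f"{hdd / ssd:.1f}x faster")  # ~2.2x - the non-I/O work becomes the new bottleneck
```

Which is presumably why Cerny spent so much time on the other bottlenecks (decompression hardware, cache scrubbers, etc.) rather than just the raw drive speed.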
 
3.8GHz is for SMT Off only (I highly doubt any competent studio will choose to utilise this mode from the 2nd wave of games onwards).

With SMT on it's 3.6GHz (XSX) compared to 3.5GHz (PS5). Let's give the PS5 a less-than-fair shake: take Cerny's "couple percent" clock reduction that provides a 10% power-draw reduction, and jack that clock reduction up to more than double, at 5%...

The difference would still be less than the difference between the OG X1 (1.75GHz) and the base PS4 (1.6GHz), where the X1 would occasionally command an extra frame or two in CPU-limited scenarios.

Postulating a scenario where you're running heavy workloads/instructions and are CPU-limited, GPU-limited and BW-limited all at the same time (very unlikely), I think the absolute worst-case scenario will be ~30% fewer native pixels and an extra dropped frame here and there. We're talking >1800p vs ~2160p.

In general, however, I expect the CPU difference to be negligible, the GPU difference (~18%) to be mitigated somewhat by the higher clocks, and the end result to be mitigated further by dynamic resolutions and/or reconstruction techniques.

The dynamic clocks are a design win for the PS5, not a negative. The same device without them would be less powerful. Think of it this way: if MS implemented the same system on the XSX, it would be more powerful than it currently is, because as it stands, the XSX will not be maximising the available power draw across all workloads. If, for example, you have a power/thermal budget of 200W but in common workloads you're only using 170W because you're protecting for edge-case scenarios at the same clocks, you're not utilising all the power available to you; there's 30W left on the table during those workloads.

I wouldn't be surprised if this becomes a new design paradigm going forward.
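On the "couple percent of clock for ~10% of power" point: a crude sketch, assuming dynamic power scales roughly with f·V² and that voltage tracks frequency near the top of the curve (so power goes roughly as the cube of clock). This is my own simplification, not anything from AMD or Cerny:

```python
# Crude clock-vs-power model: P ~ f^3 near the top of the voltage/frequency curve.
def relative_power(clock_fraction, exponent=3.0):
    return clock_fraction ** exponent

for cut in (0.02, 0.03, 0.05):
    p = relative_power(1.0 - cut)
    print(f"{cut * 100:.0f}% lower clock -> ~{(1 - p) * 100:.0f}% lower power")
# 2% -> ~6%, 3% -> ~9%, 5% -> ~14% on this model, which is at least the right shape
# for the "couple percent of frequency buys ~10% of power" claim.
```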
 
Bo_Hazem, throwing more data at a GPU increases raw bandwidth needs.

Does that translate into making a single game 5x its original size? If the whole game is like 100GB, you have enough bandwidth to throw it all back and forth 4.48 times in one second, if there were an SSD/CPU/GPU that could handle it. Correct me if I'm wrong.
 
Does that translate into making a single game 5x its original size? If the whole game is like 100GB, you have enough bandwidth to throw it all back and forth 4.48 times in one second, if there were an SSD/CPU/GPU that could handle it. Correct me if I'm wrong.

The game is 100GB in raw data; the game as it is generated in 3D space would be equivalent to TBs of data. The GPU doesn't just replay frames like a movie: if you use the SSD to facilitate larger worlds with extreme LOD and high-poly models, your GPU has to handle that workload. The SSD is a delivery mechanism, that is all.
 
Quantum Error Dev: Cerny Is A Genius, PS5 Feels Designed with Developers In Mind; Zen 2 Is Exciting

"We feel the man is an absolute genius! It felt like the system is designed with developers in mind. We are really excited about the Zen 2 CPU, which will make things possible on PS5 that were not on PS4. Also, the Tempest audio engine explanation made us squeal like little excited kids! The HRTF and sound experience that we will be able to create for our players is truly groundbreaking with the PS5."



Are they trying hard for that dev kit?
 
Quantum Error Dev: Cerny Is A Genius, PS5 Feels Designed with Developers In Mind; Zen 2 Is Exciting

"We feel the man is an absolute genius! It felt like the system is designed with developers in mind. We are really excited about the Zen 2 CPU, which will make things possible on PS5 that were not on PS4. Also, the Tempest audio engine explanation made us squeal like little excited kids! The HRTF and sound experience that we will be able to create for our players is truly groundbreaking with the PS5."


I think the real bottleneck for the PS5 and XSX is not the limitations of their new hardware: it's still designing games with the previous gen in mind.

Will they make Gran Turismo Sport 2/Gran Turismo 7 with the PS4 Pro and PS4 in mind?

Is there any game that is going to be built from the ground up with next-gen specs ONLY in mind? And on top of that, customizing and optimizing its game engine to target the custom features of the PS5 and XSX?
 
More power, more bandwidth.

Pretty much. Sony probably doesn't need more bandwidth in their system. Someone explained earlier how you need higher bandwidth to feed more CUs, which is why the XSX has higher bandwidth than the PS5.

With that said, the two are still very close. I'm definitely not predicting a huge difference between the two where power is concerned. My mind could change with a good comparison though.
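For what it's worth, the bandwidth-per-TFLOP ratio from the publicly quoted specs (the XSX figure uses its 560GB/s GPU-optimal pool):

```python
# Bandwidth available per teraflop of compute, from the published console specs.
specs = {
    "PS5": {"bandwidth_gbps": 448, "tflops": 10.28},
    "XSX": {"bandwidth_gbps": 560, "tflops": 12.15},
}
for name, s in specs.items():
    print(f"{name}: {s['bandwidth_gbps'] / s['tflops']:.1f} GB/s per TFLOP")
# PS5 ~43.6, XSX ~46.1 - the wider GPU does get proportionally more bandwidth to feed its CUs.
```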
 
People still have their mindsets stuck in the HDD era. With 5.5-22GB/s SSD speeds you have 4,621-54,355% more data transferred per second (minimum-maximum, compared to the 40.4-116.5MB/s of the PS4), assuming zero bottlenecks on both sides. So you would need fewer cycles to transfer that data, not to mention the 12 channels, 6 priority levels, and GPU cache scrubbers (less offloading/reloading = more efficiency).

If 448GB/s was good for an RTX 2080 with loads of bottlenecks, then it should be overkill for a PS5 that's optimized like no other.

Neither console has overkill in that area. The PS5's 448GB/s has to be shared between the GPU, CPU and Tempest engine. We need time to say whether this will be enough, but by the end of this year some devs should be able to give their opinions.

For the XSX it's more of the same, because even though most of its memory is faster, the other portion is "very" slow, so the real bandwidth sits somewhere in between; and having more TF also means needing more bandwidth.

I'm a little disappointed by that in both cases, but we need time to see if it becomes a problem. Maybe it's enough for all the wavefronts the devs want to use, because AMD has improved bandwidth utilisation a lot; maybe not.

But just look at recent history: the Xbox One had a garbage memory setup and we still saw Gears 5 and Forza Horizon 4 running on that console.
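To put rough numbers on the XSX split pool (10GB at 560GB/s and 6GB at 336GB/s per Microsoft's published specs; the blend below is just my illustration of what happens if GPU traffic spills into the slower region):

```python
# Back-of-envelope for the XSX's asymmetric memory pools.
def effective_bandwidth(fast_fraction, fast=560, slow=336):
    """Crude weighted average if GPU traffic is split across the two pools (GB/s)."""
    return fast_fraction * fast + (1 - fast_fraction) * slow

for frac in (1.0, 0.9, 0.8):
    print(f"{frac:.0%} of traffic in the fast pool -> ~{effective_bandwidth(frac):.0f} GB/s")
# 100% -> 560, 90% -> ~538, 80% -> ~515 GB/s on this simple model.
```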
 
Maybe, but unlikely. Just look at similarly set up machines with different CPUs and you'll notice the FPS difference is not massively affected by them, unless you are trying to run Crysis on a 10-year-old Celeron. You're more likely to be bottlenecked by the GPU than the CPU.

Edit: that said, it depends on the game. Kerbal Space Program is CPU-hungry because of its physics calculations, so there you'll see bigger improvements.

Can't the Zen 2 cores contribute to graphics the same way the CELL CPU contributed to graphics processing when developers were running into walls/limits trying to utilize the PS3's NVIDIA GPU? The CPU cores aren't just for general-purpose tasks like Microsoft Word, Photoshop, etc., are they? We typically think of the CPU contributing to physics calculations and frames per second, but is there anything else in terms of graphics that is 'CPU-bound'?

Your explanations are very helpful.
 
If I were MS I would pump the XSX's RAM to 20GB just to have a consistent 560GB/s of bandwidth across the board and remove the weird bottlenecked setup they currently have.
 
Quantum Error Dev: Cerny Is A Genius, PS5 Feels Designed with Developers In Mind; Zen 2 Is Exciting

"We feel the man is an absolute genius! It felt like the system is designed with developers in mind. We are really excited about the Zen 2 CPU, which will make things possible on PS5 that were not on PS4. Also, the Tempest audio engine explanation made us squeal like little excited kids! The HRTF and sound experience that we will be able to create for our players is truly groundbreaking with the PS5."


Is there a way to measure your HRTF somewhere and get some kind of code/number you could then feed into the PS5 so it can use it, assuming HRTF is a universal measurement? Audiophile, help.
 
Can't the Zen 2 cores contribute to graphics the same way the CELL CPU contributed to graphics processing when developers were running into walls/limits trying to utilize the PS3's NVIDIA GPU? The CPU cores aren't just for general-purpose tasks like Microsoft Word, Photoshop, etc., are they? We typically think of the CPU contributing to physics calculations and frames per second, but is there anything else in terms of graphics that is 'CPU-bound'?

Your explanations are very helpful.

Thank you

To address your question, I don't see that as even remotely possible due to the technology used by the current platforms. The PS3 was very exotic in that it had both a graphics "card" and the CELL's SPEs, which could somewhat be leveraged to assist in graphical computations.

The PS4, PS5 and Xbox consoles (One, One X and Series X) feature x86 tech, very similar to your desktop computer. This means there is a much more defined and clear separation between CPU tasks and GPU tasks.

So let's take that and apply it to an extreme case like Kerbal Space Program:

The CPU handles physics calculations, which you cannot downgrade. This includes not just wind resistance, gravity, etc., but also getting the results out to your screen. Now, applying the above logic, you can run KSP on a dual-core i3, but once you reach its computational limit for physics, you will see frame drops. This depends on how many objects you're calculating physics for.

On the opposite side, if you run it on an 8-core/16-thread machine, and assuming the game is coded to leverage that many cores and threads, you will be able to simulate all those objects and more without incurring frame drops. Remember, once your CPU cannot output calculations for X frames per second, frames will drop.

Now, as for graphics processing, this is used for whatever you see on screen. With the same dual core, you could have an RTX 2080 and still experience heavy frame drops due to physics. The GPU only handles graphics. Similarly, two potato GPUs with the processors above will output similarly specced graphics, but at varying FPS.

TL;DR: there's a much bigger separation between CPU and GPU in x86 systems than there was with the CELL.
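A toy frame-budget version of the Kerbal example (made-up numbers, just to show the shape of the problem): a frame only ships once both the CPU and GPU work for it is done, so the slower side sets your frame rate.

```python
# Toy frame-budget model: the slower of CPU (physics) and GPU (rendering) sets the FPS.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

print(fps(cpu_ms_per_frame=8, gpu_ms_per_frame=14))    # ~71 fps, GPU-bound
print(fps(cpu_ms_per_frame=25, gpu_ms_per_frame=14))   # 40 fps, CPU (physics) bound
# In the second case a better GPU changes nothing until the physics cost comes down.
```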
 
If I were MS I would pump the XSX's RAM to 20GB just to have a consistent 560GB/s of bandwidth across the board and remove the weird bottlenecked setup they currently have.
The problem here is the money. GDDR6 is really expensive, so even an extra 4GB could mean dozens of millions more in cost, and an even bigger loss if you plan to sell the console without a profit in the first months.
 
Quantum Error Dev: Cerny Is A Genius, PS5 Feels Designed with Developers In Mind; Zen 2 Is Exciting

"We feel the man is an absolute genius! It felt like the system is designed with developers in mind. We are really excited about the Zen 2 CPU, which will make things possible on PS5 that were not on PS4. Also, the Tempest audio engine explanation made us squeal like little excited kids! The HRTF and sound experience that we will be able to create for our players is truly groundbreaking with the PS5."

Wow, this is surprising. Things are possible which weren't on PS4... 😎
 
Can't the Zen 2 cores contribute to graphics the same way the CELL CPU contributed to graphics processing when developers were running into walls/limits trying to utilize the PS3's NVIDIA GPU? The CPU cores aren't just for general-purpose tasks like Microsoft Word, Photoshop, etc., are they? We typically think of the CPU contributing to physics calculations and frames per second, but is there anything else in terms of graphics that is 'CPU-bound'?

Your explanations are very helpful.

Cell was unlike any other CPU in that it supported CPU-like instructions through its PPE and GPU-like parallelism through its SPUs.

No other CPU has been able to do such a thing; that's why Cell was said to be even more powerful than current top-end CPUs. It was a very exotic piece of technology.

Dodkrake has done a better job than me explaining the difference 👍
 
1) Unlike previous consoles, the PS5 and XSX stream data for the next frame. That's how RAM is saved: only the data for the next frame is in RAM, so we save all the buffer space. It means that both consoles save exactly the same amount of RAM and keep just one frame's worth of data in memory (plus the lowest-quality LODs). If one console can stream 2x more data for the next frame, it means the next frame will need 2x more data in memory. So higher-quality assets == more memory needed.
2) Higher-quality textures don't tax the GPU much; higher-quality models, animations, shaders, alpha, etc. do.

1. Okay, but that's kind of tautological then. Better assets are bigger; that's why they are better. And there is no way to know which assets are needed in the exact next frame (otherwise you wouldn't need a GPU at all), so it will be more like the next 10 frames or so, or even 100 frames (which is still fast).
So the end result will be that in a given streaming timeframe (for example 1 second) the PS5 will have 2x more assets, but the resident part will be the same.
2. Shaders are small. Animations barely tax anything. The taxing things are bigger output maps (render targets, shadow buffers, etc.) and higher-poly models. Shaders may be more taxing if they use completely new assets, for example wider deferred buffers or more texture layers.
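For scale, here's how much new data each drive can actually deliver per rendered frame, i.e. the budget that "stream for the next frame(s)" has to live within. Raw figures are the publicly quoted ones; the PS5 compressed number is the middle of the quoted 8-9GB/s range, and the 60fps framing is mine:

```python
# New data the SSD can deliver per rendered frame at 60 fps.
def per_frame_budget_mb(throughput_gb_per_s, fps=60):
    return throughput_gb_per_s * 1000.0 / fps

for label, gbps in (("PS5 raw", 5.5), ("PS5 typical compressed", 8.5), ("XSX raw", 2.4)):
    print(f"{label}: ~{per_frame_budget_mb(gbps):.0f} MB per frame")
# ~92 MB, ~142 MB and ~40 MB respectively - plenty for texture/LOD churn spread over a
# few frames, but nowhere near a full working set, which still has to be resident in RAM.
```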
 
Can't the Zen 2 cores contribute to graphics the same way the CELL CPU contributed to graphics processing when developers were running into walls/limits trying to utilize the PS3's NVIDIA GPU? The CPU cores aren't just for general-purpose tasks like Microsoft Word, Photoshop, etc., are they? We typically think of the CPU contributing to physics calculations and frames per second, but is there anything else in terms of graphics that is 'CPU-bound'?

Your explanations are very helpful.

It's not like I'm an expert, but it'll happen indirectly on the PS5 because of SmartShift: the CPU could run at a slightly lower frequency so that, if/where necessary, the GPU can sustain its max frequency.

One thing to remember is that CPUs have been underutilised for a few years, arguably because of the current gen of consoles, so I think the new CPUs should be a lot better at doing the things they were designed for, i.e. physics, AI, etc.
 
Thank you

To address your question, I don't see that as even remotely possible due to the technology used by the current platforms. The PS3 was very exotic in that it had both a graphics "card" and the CELL's SPEs, which could somewhat be leveraged to assist in graphical computations.

The PS4, PS5 and Xbox consoles (One, One X and Series X) feature x86 tech, very similar to your desktop computer. This means there is a much more defined and clear separation between CPU tasks and GPU tasks.

So let's take that and apply it to an extreme case like Kerbal Space Program:

The CPU handles physics calculations, which you cannot downgrade. This includes not just wind resistance, gravity, etc., but also getting the results out to your screen. Now, applying the above logic, you can run KSP on a dual-core i3, but once you reach its computational limit for physics, you will see frame drops. This depends on how many objects you're calculating physics for.

On the opposite side, if you run it on an 8-core/16-thread machine, and assuming the game is coded to leverage that many cores and threads, you will be able to simulate all those objects and more without incurring frame drops. Remember, once your CPU cannot output calculations for X frames per second, frames will drop.

Now, as for graphics processing, this is used for whatever you see on screen. With the same dual core, you could have an RTX 2080 and still experience heavy frame drops due to physics. The GPU only handles graphics. Similarly, two potato GPUs with the processors above will output similarly specced graphics, but at varying FPS.

TL;DR: there's a much bigger separation between CPU and GPU in x86 systems than there was with the CELL.

- The CELL CPU and the PS3's NVIDIA GPU had a kind of resonance in GPU output, you could say, because as I recall, Ken Kutaragi was trying to make two CELLs work together but couldn't, so they went with NVIDIA. The CELL CPU was leveraged to assist in graphical computations.
- Whereas Zen and RDNA have clearly defined, distinct roles (and also work well together, since they are both AMD parts and part of one APU).
- I wish the Zen cores could 'assist' the GPU or cover for anything the GPU is having trouble with, the same way the SPUs did with the CELL. I am not sure how the Jaguar cores were utilized. I guess GPUs have gotten so powerful that they even do GPGPU tasks to help the CPU out.
- The Zen 2 cores are so much more powerful and faster than the potato Jaguar cores; I wonder how they will be utilized?

I think it would be really cool to establish some sort of resonance/leveraging/mitigation between the CPU and GPU instead of clearly distinct and defined roles:

The GPU does GPGPU work for the CPU.
The CPU does Cell/SPE-like stuff for the GPU.

All in some sort of harmonious/parallel fashion, without making game development too complicated. It's like improving communication/compatibility between a married couple or something.
 
Well, we got excited about the PS5 logo, it's only fair Xbox fans should have their field day :messenger_tears_of_joy:
Both sides are stupid. Just buy both consoles; if you can't, then maybe you need to redirect your attention to how to increase your income instead of being a cheerleader for a billion-dollar company.
 
This doesn't make sense. OF COURSE there were other optimizations. The point is, this was a very early dev kit and the likelihood is that things have improved since then. Not sure what point there is to be made here. In the end, what matters is how the retail unit performs. Is that where you're going with this? That the retail unit may perform worse?
My point was that you can't compare the two, and the State of Decay 2 demo isn't representative of XSX SSD performance in next-gen games.
 
My point was that you can't compare the two, and the State of Decay 2 demo isn't representative of XSX SSD performance in next-gen games.

Can't we compare the Spider-Man demo with the State of Decay demo?

They are both last-gen games being demoed on these SSDs, so the comparison seems fair in my opinion.

The only thing is that the Spider-Man demo is quite old and it was running on a lower-speed SSD, so the final result could be even better.
 
People still have their mindsets stuck in the HDD era. With 5.5-22GB/s SSD speeds you have 4,621-54,355% more data transferred per second (minimum-maximum, compared to the 40.4-116.5MB/s of the PS4), assuming zero bottlenecks on both sides. So you would need fewer cycles to transfer that data, not to mention the 12 channels, 6 priority levels, and GPU cache scrubbers (less offloading/reloading = more efficiency).

If 448GB/s was good for an RTX 2080 with loads of bottlenecks, then it should be overkill for a PS5 that's optimized like no other.

The PS5's GPU will be sharing that bandwidth with the CPU and the audio chip, which can take up to 20GB/s.

The CPU in your PC has the system RAM all to itself. Mine has 16GB of slow DDR4 RAM plus 8GB of GDDR6 for the RTX 2080. The PS5 is effectively going to have maybe 400GB/s of bandwidth in best-case scenarios, only a little more than 2x the PS4.
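The unified-pool point above, in rough numbers (the CPU share is an assumption of mine; the Tempest figure is the one quoted in the post):

```python
# Whatever the CPU and audio pull from the shared pool is bandwidth the GPU doesn't get.
total_gbps = 448     # PS5 unified GDDR6 pool
cpu_gbps = 30        # assumed CPU traffic under load (illustrative, not official)
tempest_gbps = 20    # audio figure quoted above
print(f"left for the GPU: ~{total_gbps - cpu_gbps - tempest_gbps} GB/s")
# ~398 GB/s on these assumptions, close to the ~400 GB/s estimate in the post;
# contention on a shared bus can shave a little more in practice.
```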
 
Can't we compare the Spider-Man demo with the State of Decay demo?

They are both last-gen games being demoed on these SSDs, so the comparison seems fair in my opinion.

The only thing is that the Spider-Man demo is quite old and it was running on a lower-speed SSD, so the final result could be even better.
Yeah, and I don't believe that for one second. I could be wrong, but I don't expect that when the PS5 and XSX are loading Valhalla, the PS5's loading time will be six times as fast. We'll see once the games release.
 
Yeah, and I don't believe that for one second. I could be wrong, but I don't expect that when the PS5 and XSX are loading Valhalla, the PS5's loading time will be six times as fast. We'll see once the games release.

If Valhalla is primarily designed as a last-gen title and wasn't modified to take advantage of these SSDs, the results should still be much better on the PS5 than the XSX. That's just down to the way the PS5's SSD is designed.

I'm not expecting it to load six times faster unless the XSX has some severe bottlenecks in its I/O system. Realistically I'm expecting around twice as fast, which is about the difference between the I/O specs on paper. But with customizations that difference can be greater or smaller, depending on what they are.

Anyway, it should be interesting to see a comparison of Valhalla on multiple platforms. That should give us an idea of what the real differences between the systems are.
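The "on paper" ratio, from the publicly quoted throughput figures:

```python
# Paper ratio of the two I/O systems (publicly quoted figures).
ps5 = {"raw": 5.5, "compressed_typical": 9.0}   # GB/s (Sony quotes 8-9 typical compressed)
xsx = {"raw": 2.4, "compressed_typical": 4.8}   # GB/s
for key in ps5:
    print(f"{key}: {ps5[key] / xsx[key]:.1f}x in the PS5's favour")
# ~2.3x raw and ~1.9x compressed, consistent with "around twice as fast" on paper -
# before either console's custom decompression/I/O hardware changes the real-world picture.
```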
 
Can't we compare the Spider-Man demo with the State of Decay demo?

They are both last-gen games being demoed on these SSDs, so the comparison seems fair in my opinion.

The only thing is that the Spider-Man demo is quite old and it was running on a lower-speed SSD, so the final result could be even better.

I don't think we can, and this is something that has bothered me since the comparison first came up.

SoD 2 is loading the actual game world from scratch, including all assets in said world. Spider-Man is loading a fast-travel section, which implies that some assets and geometry will already be stored in RAM.

The PS5's SSD will be twice as fast, but the comparison is inherently flawed.
 