"No comments about the L3 cache. What a."

no? I'm curious to hear what BS he'll invent to save his ass, but I don't really want to give it a click.
"Hitman 3?"

You want the simplest answer?
Series X:
10GB of GPU-optimal memory at 560GB/s (the Series X GPU uses this; its performance is great)
6GB of standard memory at 336GB/s (if the GPU touches any of this RAM, performance WILL suffer)
Meanwhile on PS5:
16GB at 448GB/s
That is the primary reason we see PS5 winning ANY performance battles so far. Developers have games whose memory access patterns are organized in a certain way, and it's just plain simpler to work with the PS5's fully symmetric memory design. There's less to plan for or think about. Series X's design requires more careful planning, or the use of a feature like Sampler Feedback Streaming, to mitigate having just 10GB of faster RAM for the GPU. If the Series X GPU touches the other 3.5GB reserved for games beyond that 10GB, its performance will dip, because now you have a 52-Compute-Unit 1825MHz GPU trying to survive on just 336GB/s of memory bandwidth. It becomes bandwidth starved.
Some developers handle this better than others. In cases where Series X and PS5 performance are identical, the dev has just opted for parity (I don't have an issue with that so long as the game is solid on both), or the additional headroom offered by Series X only lets it JUST match what PS5 is outputting, or brings it slightly below. Control is clear evidence of there being more headroom for higher performance on Series X. That additional performance headroom on Series X is how you get cases like Hitman 3.
That is the reason PS5 sometimes performs better. It comes down to the memory design causing the Series X GPU to be bandwidth starved in momentary blips. There are solutions to this.
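To put rough numbers on that bandwidth-starvation point, here is a minimal C sketch that blends the two Series X pool speeds by access mix. It assumes accesses to the two pools serialize rather than overlap, which is a simplification; only the pool sizes and speeds are published figures.

```c
/* Toy model: effective Series X memory bandwidth as a mix of its two
 * pools, versus PS5's uniform pool. Assumes accesses to the two pools
 * are serialized rather than overlapped (a simplification); only the
 * published pool sizes and speeds are real figures. */
#include <stdio.h>

int main(void) {
    const double fast_bw = 560.0; /* GB/s, 10 GB GPU-optimal pool */
    const double slow_bw = 336.0; /* GB/s,  6 GB standard pool    */
    const double ps5_bw  = 448.0; /* GB/s, uniform 16 GB pool     */

    for (int pct = 100; pct >= 50; pct -= 10) {
        double f = pct / 100.0; /* fraction of GPU traffic hitting the fast pool */
        /* time-weighted (harmonic) mean: seconds per byte is what adds up */
        double eff = 1.0 / (f / fast_bw + (1.0 - f) / slow_bw);
        printf("fast-pool share %3d%% -> effective %5.1f GB/s (%s PS5's 448)\n",
               pct, eff, eff >= ps5_bw ? "above" : "below");
    }
    return 0;
}
```

Under those assumptions the blended figure drops below PS5's 448GB/s at around a 60-65% fast-pool share, which is one way to read the "momentary blips" above.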
We really shouldn't give this guy clicks.
Timestamped
I think the lack of DRAM could cause issues with relying on fast streaming. Aren't both PS5 and Series X using TLC SSDs? TLC tends to have poor performance; sometimes a portion of a TLC SSD is run as SLC to serve as a cache and improve performance. Not sure if either console is doing that.
The PS5 SSD uses DRAM, but it's located close to the SSD itself; you could see it in the teardown.
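On that SLC-caching point, here is a toy C model of how a pseudo-SLC write cache behaves: writes run fast until the cache fills, then fall to native TLC speed. Every number below (cache size, speeds, payload) is a generic illustrative value, not a console spec, and note this models writes; reads, which dominate game streaming, are much less affected.

```c
/* Toy model of the pseudo-SLC write cache described above: writes are
 * fast until the cache fills, then drop to native TLC speed. All
 * numbers are generic illustrative values, not console specs. */
#include <stdio.h>

int main(void) {
    const double cache_gb   = 12.0;  /* pseudo-SLC cache size (assumed) */
    const double slc_gbps   = 3.0;   /* write speed while cache lasts   */
    const double tlc_gbps   = 0.8;   /* native TLC write speed          */
    const double payload_gb = 40.0;  /* amount of data being written    */

    double fast = payload_gb < cache_gb ? payload_gb : cache_gb;
    double slow = payload_gb - fast;
    double secs = fast / slc_gbps + slow / tlc_gbps;
    printf("%.0f GB written in %.1f s (avg %.2f GB/s)\n",
           payload_gb, secs, payload_gb / secs);
    return 0;
}
```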
"No comments about the L3 cache. What a."

If you're referring to the CPU, it's the same setup as on Xbox, right?
The only difference is the cut-down FPU.
The PlayStation 5's processor is a custom one based on AMD's Zen 2 architecture, though according to some of my own information, the primary difference between the cache of the desktop Zen 2 processors and the PS5's is the unified L3. I was told this cache is only 8MB on the PS5, but the unified design helps drastically reduce latency.
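A back-of-the-envelope way to see why a unified L3 could reduce effective latency: each core sees the full 8MB instead of a 4MB slice, so the L3 hit rate goes up even if the hit itself gets slightly slower. This C sketch plugs made-up illustrative numbers (not measured PS5 figures) into the standard average-memory-access-time formula.

```c
/* Back-of-the-envelope AMAT (average memory access time) comparison:
 * a 2x4MB split L3 where each core sees 4MB, vs a hypothetical unified
 * 8MB L3 visible to every core. All cycle counts and hit rates are
 * made-up illustrative numbers, not measured PS5 figures. */
#include <stdio.h>

/* AMAT = L3 hit latency + miss rate x DRAM penalty */
static double amat(double hit_rate, double l3_cycles, double dram_cycles) {
    return l3_cycles + (1.0 - hit_rate) * dram_cycles;
}

int main(void) {
    const double dram = 300.0;                  /* cycles to DRAM (assumed) */
    printf("split 2x4MB L3: %.0f cycles\n", amat(0.70, 40.0, dram)); /* 130 */
    printf("unified 8MB L3: %.0f cycles\n", amat(0.80, 45.0, dram)); /* 105 */
    return 0;
}
```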
"Hitman 3? You see that's a solid prove, an undisputable fact, the war is over, wider margin of improvement is possible without any doubt in every game on series X, who said otherwise denies the only true truth. The rest of multiplat? Meh, means nothing, early tools, bad optimization, rushed port, Sony easy peasy tools, nothing to discuss there."Hitman 3?
Series X ~ 9.7TF rdna1
![]()
"AMD do have RDNA 2 cards that are narrow and fast tho."

Absolutely true. Now list all the advantages of a wider GPU and you'll understand why the whole market and every GPU maker are going that route.
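To put numbers on wide vs narrow, here are the lane counts from the public CU figures (everything else is plain arithmetic): a wider GPU does more per clock, but it needs correspondingly more parallel work in flight to stay fed.

```c
/* Quick look at the narrow-and-fast vs wide-and-slow trade mentioned
 * above. Lane counts follow from the public CU numbers (64 FP32 lanes
 * per RDNA CU); the rest is arithmetic. */
#include <stdio.h>

int main(void) {
    int lanes_ps5 = 36 * 64;   /* 2304 FP32 lanes */
    int lanes_xsx = 52 * 64;   /* 3328 FP32 lanes */
    printf("PS5 lanes: %d, XSX lanes: %d (%.0f%% more work needed to keep fed)\n",
           lanes_ps5, lanes_xsx, 100.0 * (lanes_xsx - lanes_ps5) / lanes_ps5);
    return 0;
}
```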
Wow, big props to MBG for owning up to his part in spreading some of the RDNA 3 "secret sauce" rumors; tbf he always said it was just stuff he was hearing from other people and warned viewers to take it with a grain of salt. He actually went over his part in it (intentionally or not) rather thoroughly, too.
Quite a bit better than RGT tbh, who just kind of skimmed over any specifics that were challenged/debunked by the die shots (skipping right over Infinity Cache and just saying "a lot of cache", for example). Tho I think he's recovering from being sick, so he might not feel like clarifying his "sources" and such right now.
Now just waiting on MLID
Proelite: Lol, I probably wouldn't go that far, but he definitely rushed right over any IC commentary with a quickness.
Hopefully the channels that were spreading that negativity about the PS5 also come clean and apologize. For a moment I thought the die shot was extremely bad news for the PS5, but there are actually positive things in it that I'd missed due to the FUD.
Lesson here is that youtube channels spread FUD in general, sooner or later.
After spending some time trying to understand the cache differences in Ryzen, I realized something about the PS5.
Notice how in Zen 2 the Global Memory Interconnect separates the two Zen 2 core clusters.
[image: Zen 2 die shot]
While in Zen 3, nothing is separating the core clusters.
[image: Zen 3 die shot]
The same can be said for PS5: nothing is separating the core clusters.
Pretty similar to Zen 3.
So my question, after seeing Core Complex 1 basically touching Core Complex 2:
Can Core Complex 1 access cache from Core Complex 2, making it unified cache?
[image: PS5 die shot]
The same can't be said for XBSX, as the core clusters are separate like in Zen 2.
[image: XBSX die shot]
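One way to probe this kind of question on real hardware (a Ryzen desktop, not a console devkit) is a core-to-core ping-pong: pin two threads to cores in different CCXs and time a round trip through a shared cache line. A rough Linux-only C sketch; core IDs 0 and 4 are an assumption about where the CCX boundary falls on a given part, so check your topology first.

```c
/* Rough Linux-only sketch: time a cache-line "ping-pong" between two
 * cores to compare same-CCX vs cross-CCX latency on a Zen machine.
 * Core IDs 0 and 4 are assumptions about where the CCX boundary falls;
 * check your topology (lscpu -e) first.
 * Build: gcc -O2 -pthread pingpong.c -o pingpong */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000
static atomic_int flag;                /* the shared line being bounced */

static void pin_to(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg) {
    pin_to(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                                               /* wait for ping */
        atomic_store_explicit(&flag, 0, memory_order_release);      /* pong */
    }
    return NULL;
}

int main(void) {
    int peer = 4;                      /* assumed to sit in the other CCX */
    pthread_t t;
    pthread_create(&t, NULL, ponger, &peer);
    pin_to(0);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);      /* ping */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                                               /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("avg round trip: %.1f ns over %d rounds\n", ns / ROUNDS, ROUNDS);
    return 0;
}
```

If cross-CCX traffic really does stay on-die with low latency, the cross-cluster round trip should land much closer to the same-CCX number than it does on desktop Zen 2.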
If it were a unified cache you would have all 8MB as one chunk with the cores surrounding it. We've known since summer 2019 that it was 2x4MB, when either AMD or Sony leaked benchmarks of the SoC onto the net; why do people keep going on about this?
No, it's not running on magic. We need to stop this now. There are a few very clear possibilities here, and all of them rest on info we've known for several months now:

[For PlayStation 5]
1: The PS5 GPU cache scrubbers are being leveraged
2: The PS5 GPU's faster clock speed benefits current-gen engines, or engines affected more by clock speed than by wider GPU designs
3: The PS5's I/O memory subsystem is more robust, both on paper and in practice, at keeping memory fed with data as needed

[Against Series X]
1: There is a larger bandwidth penalty on the segmented fast/slow memory pools than anticipated (either due to design, or due to software OS/kernel and/or GDK API settings that have to be optimized)
2: The I/O memory subsystem is suboptimal, either by design or due to lack of feature availability (DirectStorage is not readily available to PC developers until quite a bit later in 2021, so many 3P devs might not be leveraging that part of XvA; and if they aren't leveraging that, they also aren't leveraging SFS)
3: The CPU is not fast enough to keep up with draw calls for the GPU while also handling data transfer between the fast/slow memory pools, plus some overhead (1/10th of a core, according to MS) for the XvA I/O memory subsystem when SMT is enabled (this could be "solved" by simply clocking the CPU higher through a firmware patch, if the thermals allow for it)
4: The GPU clock is 408 MHz slower than PS5's, and it lacks hardware cache scrubbers, which can have a negative impact on game engines built with narrower GPU designs in mind and more reliant on faster GPU clocks (the majority of current-gen engines are this way)

...if we start to see consistent performance in favor of Series X later in the year, then those advantages can simply be explained as:

[For Series X]
1: The system is able to maintain data transfer between the fast/slow memory pools optimally
2: GPU-bound memory has a 112 GB/s bandwidth advantage over PS5's GDDR6 memory bandwidth
3: The GPU is wider, so more work can be issued and processed simultaneously per issue compared to PS5
4: The I/O memory subsystem has all components readily available and in use; even if I/O memory subsystem performance in isolation still favors PS5, the gulf could be small enough not to negate the other aforementioned advantages (higher bandwidth in the GPU-optimized pool, 100 MHz faster CPU clock, lower overhead of hardware resources reserved by the system in general, etc.)

Also worth adding that while Series X's audio might be somewhat less sophisticated than the Tempest Engine, it also most likely requires less memory bandwidth. That means less of a squeeze from the audio, leaving even more for the GPU under intense audio workloads. Of course, all of these points apply only IF the performance metrics swing in favor of Microsoft longer-term; currently that is not the case.
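For reference on the wider-vs-faster point in both lists, the paper throughput of the two GPUs works out as follows (a C one-off using the public CU counts and clocks; "peak" here means every lane doing an FMA every cycle, which no real workload sustains):

```c
/* Peak FP32 throughput for the two GPU configs discussed above, from
 * public specs: TFLOPS = CUs x 64 lanes x 2 ops/cycle (FMA) x clock. */
#include <stdio.h>

static double tflops(int cus, double mhz) {
    return cus * 64.0 * 2.0 * mhz * 1e6 / 1e12;
}

int main(void) {
    printf("Series X: 52 CU @ 1825 MHz = %.2f TFLOPS\n", tflops(52, 1825.0));
    printf("PS5:      36 CU @ 2233 MHz = %.2f TFLOPS\n", tflops(36, 2233.0));
    /* The paper gap is ~18%, but it only materializes if the wider GPU
     * can be kept fed, which is where the bandwidth and occupancy
     * caveats in the post above come in. */
    return 0;
}
```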
"Good old Colonel James, it's nice to meet you again after more than 10 years and see you still fighting for your team."

I'm not sure why you act like the guy is fighting. Do we need to act like the Series X is somehow in a league of its own in terms of performance compared to the PS5? Bow down to statues of Spencer?
Proven wrong by what? Just asking so I can educate myself.
That Hitman 3 comparison was done in a cutscene. Not gameplay.
Basically.
At least wait to see more games?
Maybe things could change, who knows, but jeez, all that evangelism over Hitman 3 is ridiculous. Furthermore, Hitman 3's required specs are ridiculously low for RAM and CPU, which could also explain why they can push the GPU so hard on Series X.
"Can Core Complex 1 access cache from Core Complex 2, making it unified cache?"

Of course it can (like the XSX CCXs). The only question is what the latency of doing such a thing is. It was already possible with Jaguar, albeit with much higher latency.
The 128-bit theory doesn't fit.
The removal of FADD doesn't fit.
They don't know what AMD/Sony did here.
The people who know are under NDA.
I think it's very possible that this can have some latency implications. Very nice post.
"The removal of FADD doesn't fit."

FMAC maybe?..
PS5 has a DRAM cache for the SSD.
I miss your underboobs... I mean, this is a surprisingly reasonable assessment.
I don't get why some dev doesn't just spill the beans. Not like they can do anything about it.
Boobs and booties help keep a man well grounded
They'd probably get blacklisted out of the industry if they're under NDA, and publishers would probably want to avoid them too. Not worth the risk unless you're already planning to leave the industry altogether or are retired.
"I think it's very possible that this can have some latency implications. Very nice post."

Indeed, that is one of the few things posted in this thread that is actually worth something lol.
Yes, maybe they reduced the latency between the two CCXs. On PS4 the CCXs could read each other's L2 cache, but with much higher latency (190 cycles).
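For scale, here is that 190-cycle figure converted into wall time at the PS4 Jaguar's public 1.6 GHz clock (a trivial C one-off):

```c
/* Quick sanity check: converting the 190-cycle cross-CCX figure above
 * into wall time at the PS4 Jaguar's 1.6 GHz clock (public spec). */
#include <stdio.h>

int main(void) {
    const double cycles = 190.0;
    const double hz     = 1.6e9;
    printf("%.0f cycles @ 1.6 GHz = %.1f ns\n", cycles, cycles / hz * 1e9);
    return 0;
}
```

That is roughly 119 ns, several times a typical same-CCX cache hit, which is why cross-cluster sharing on Jaguar was mostly avoided.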
But Hitman 3 is performing better on PS5. What the hell are you babbling about?
"Nah it doesn't, cut it out. Lower resolution and lower-quality shadows, whereas Series X is native 4K. A single section in the entire game dipping on Xbox, when the rest of the game runs at a flawless 60fps (with the exception of that benchmark cutscene DF used, which drops on all platforms), qualifies as the PS5 running the game better? Don't go making stuff up to make yourself feel better. Stick to the facts."

Yeah, but Hitman 3 is also only 30fps on One X vs 60fps on Pro. What does that indicate?
The game has a bizarre setup; hanging one's hat on it as the definitive benchmark is flawed. And even then, the performance delta is there: XSX is clearly aiming beyond its capability in parts.
This is something I've wondered too. Seems like it could help lower latency, since all 8 PS5 CPU cores are adjacent and uninterrupted.
It only took one game out of the sample not conforming to the trend for the hibernators to go berserk with the same old nonsense. It's like trying to flip the reality of the situation on its head: that is, the PS5 being the best console for multiplatform performance on average (without even resorting to experience differentiators like the controller). It's more desperation than straw-clutching, something these folks have become very familiar with over the past 7 years. They're working very hard to reverse the narrative reversal that occurred at the start of the gen (not even with DF's aid can they undo the damage, though some are oblivious to that fact, or to the fact that even DF is taking reputational damage in the process). It's funny to witness how hard they're working at it... see how far they take the rope with which to hang themselves. Then again, they clearly don't care how they're perceived, as it should be, btw. Life is too short, especially on the internet. Likewise for those that clown them into oblivion for their clownery: all is fair.
Hitman 3?
Series X ~ 9.7TF rdna1