Interesting tweet, a lot to translate... below is an extract from the full tweet, translated.
Infinity Cache is a method that further boosts the effective bandwidth available, especially at resolutions up to 1440p, and the cache scrubber is a method that minimizes cache misses; AMD also has Infinity Cache in RDNA2 and cache scrubbers in RDNA3. I plan to get into the concepts later.
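To put the "effective bandwidth" idea into rough numbers, here's a tiny back-of-the-envelope sketch. All the hit rates and bandwidth figures are made up for illustration (they're not from the tweet or from AMD); the point is just that the more requests a big on-die cache catches, the less you lean on GDDR6, and the hit rate drops as the resolution (working set) grows:

```c
/* Toy model of "effective bandwidth" with a large last-level cache
 * (the Infinity Cache idea). All numbers are invented illustrations,
 * not real RDNA2 figures. */
#include <stdio.h>

int main(void) {
    double dram_bw  = 448.0;   /* GB/s, hypothetical GDDR6 bandwidth        */
    double cache_bw = 1600.0;  /* GB/s, hypothetical on-die cache bandwidth */

    /* Hit rate shrinks as the working set grows with resolution; made up. */
    struct { const char *res; double hit_rate; } cases[] = {
        { "1080p", 0.70 }, { "1440p", 0.60 }, { "2160p", 0.45 },
    };

    for (int i = 0; i < 3; i++) {
        double eff = cases[i].hit_rate * cache_bw
                   + (1.0 - cases[i].hit_rate) * dram_bw;
        printf("%s: hit rate %.0f%% -> effective ~%.0f GB/s\n",
               cases[i].res, cases[i].hit_rate * 100.0, eff);
    }
    return 0;
}
```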
I don't quite understand everything he's saying exactly, but it sounds like:
1) Sony still kept RDNA1 as the primary skeleton,
2) MS engineers also claimed that the Xbox was 100% RDNA2, pointing to its RDNA2 CU unit as evidence,
3) The PS5 was based on RDNA2 because of VRS and mesh shaders,
4) Even if it does not support RDNA2 at the ISA level, there was nothing wrong with supporting RDNA2 as a feature level in the micro-architecture design.
This part is really interesting.
1) ESRAM, the cache scrubber and Infinity Cache serve the same purpose through different methods.
2) ESRAM is a kind of scratchpad ("manual cache") concept; unlike traditional eDRAM it is not automatically managed, which makes it harder to develop for, so early Xbox One titles skipped the ESRAM and used DDR3 only, ending up at terrible resolutions such as 720p (see the sketch after this list).
3) The cache scrubber is also a scratchpad-style method, but it still differs from the Xbox One ESRAM. It sits closer to the automated end, so it performs well from the start; and whereas ESRAM was a bandwidth answer to DDR3, the cache scrubber only exists to minimize cache misses while the GPU runs at a high clock, since bandwidth is already sufficient with GDDR6. Because they are not necessarily doing the same assisting job, the two could in principle coexist.
4) Infinity Cache is closer to an evolution of the GameCube/Wii's 1T-SRAM than to manual cache methods such as ESRAM or the cache scrubber, which Nintendo used for high bandwidth back in the day.
5) Infinity Cache on RDNA2 and Cache Scrubbers in RDNA3.
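For anyone unfamiliar with the scratchpad vs. automatic cache distinction he's drawing in points 2 and 3, here's a rough sketch of the idea in plain C. The scratch[] buffer is just a stand-in for a small fast on-chip memory; this isn't any real ESRAM or console API:

```c
/* Illustration of "manual cache" (scratchpad) vs. hardware-managed cache.
 * The scratch[] array only stands in for a small fast on-chip memory;
 * no real ESRAM or console API is being used here. */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define TILE 1024  /* elements that fit in the (pretend) fast memory */

/* Hardware-managed cache: just read memory; the cache hardware decides
 * what stays on-chip. Easy to program, but you don't control residency. */
float sum_hw_cached(const float *src, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += src[i];
    return sum;
}

/* Scratchpad: the programmer explicitly stages each tile into the fast
 * buffer and works on it there; this is the "manual" part that made
 * early Xbox One ESRAM use harder. */
float sum_scratchpad(const float *src, size_t n) {
    static float scratch[TILE];           /* stand-in for on-chip SRAM */
    float sum = 0.0f;
    for (size_t base = 0; base < n; base += TILE) {
        size_t chunk = (n - base < TILE) ? (n - base) : TILE;
        memcpy(scratch, src + base, chunk * sizeof(float)); /* explicit copy-in */
        for (size_t i = 0; i < chunk; i++)
            sum += scratch[i];
    }
    return sum;
}

int main(void) {
    float data[4000];
    for (int i = 0; i < 4000; i++) data[i] = 1.0f;
    printf("hw cache: %.0f, scratchpad: %.0f\n",
           sum_hw_cached(data, 4000), sum_scratchpad(data, 4000));
    return 0;
}
```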
Some shit I don't understand, maybe someone can help.
"On the other hand, Microsoft is going completely the opposite of Sony, but if Sony is caught in the sub-incompatibility and has to go to the lower end, Microsoft, on the contrary, has to maintain a similarity to the server side, so it can only be limited to wide. One of the most important factors on the server side is the concept of the same electricity ratio is never going to work. So the dependence on compute shaders is higher than the PS5."
And I'm going to end it with this, too many tweets to go through.
1) Cache scrubber = minimize cache misses
2) Built for ultra-fast I/O throughput = not only SSD performance, but the speed of data transfer into the GPU is also built to be fast
3) Taken in combination, it's as if Sony is prototyping its own custom RDNA3,
4) Microsoft's customization goes in the opposite direction, a CU customization that heads towards CDNA. It is customized on the INT4/INT8 side and is even better than PC RDNA2 in that direction, which is the direction of... AMD's current server-side GPU (see the sketch after this list).
5) The design is likewise a prototype, but of CDNA, the same way the PS5's is; so where AMD's future architectures diverge, the PS5 follows RDNA, the branch branded as the "gaming" architecture, while Microsoft follows CDNA, branded as the "server" architecture. CDNA is Vega's direct successor.
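To show what "customized on the INT4/INT8 side" buys you in general terms, here's a plain-C model of a dp4a-style packed INT8 dot product: four 8-bit multiply-adds accumulate into one 32-bit result, which is why low-precision paths multiply throughput at the same register width. This is a generic illustration of the technique, not Microsoft's or AMD's actual instruction:

```c
/* Generic sketch of why INT8/INT4 paths raise throughput: four 8-bit
 * values fit where one 32-bit value would, so one "dot4" style operation
 * does 4 multiplies and adds per lane. Plain-C model of the idea only. */
#include <stdint.h>
#include <stdio.h>

/* Dot product of four signed 8-bit pairs, accumulated into 32 bits,
 * the shape of a dp4a-style operation. */
int32_t dot4_i8(const int8_t a[4], const int8_t b[4], int32_t acc) {
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

int main(void) {
    int8_t a[4] = { 1, -2, 3, 4 };
    int8_t b[4] = { 5, 6, -7, 8 };
    /* One call = 4 multiply-accumulates; an FP32 lane of the same
     * register width would do only 1. */
    printf("dot4 = %d\n", dot4_i8(a, b, 0));
    return 0;
}
```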
P.S.
I don't know who the guy is and don't take any of this as fact.
I'm posting this as it's a very interesting piece of information.
Edit: adding his bit about the PS5's CPU.
1) Since it had to run at a high clock while keeping the area to a minimum, the first thing to go was on the CPU side: the AVX256 unit was knifed down to 128 bits, FADD included (see the sketch after this list).
2) At the native architecture level, the FPU is said to have been eaten down to a level that is only a "further-extended, re-spec'd MUL+ADD". The cut was made while keeping the minimum needed for backwards compatibility, but the Zen 2 base isn't going anywhere, so the CPU is generally on the weak side for this generation, so you'll want to lean on GPGPU 100%.
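To see what cutting the vector units from 256 to 128 bits would mean for peak FP32 math, here's a trivial calculation. The two-FMA-pipes-per-core figure is just a hypothetical, and the cut itself is only the rumor above, not a confirmed spec; the sketch only shows that halving the vector width halves peak flops per clock:

```c
/* Back-of-the-envelope FP32 throughput if the vector units are 128-bit
 * instead of 256-bit. Unit counts are hypothetical and the cut itself
 * is only the rumor above, not a confirmed spec. */
#include <stdio.h>

static double flops_per_clock(int vec_bits, int fma_units) {
    int lanes = vec_bits / 32;               /* FP32 lanes per unit          */
    return (double)lanes * 2.0 * fma_units;  /* FMA = mul + add, so 2 flops  */
}

int main(void) {
    /* Hypothetical: 2 FMA-capable pipes per core. */
    printf("256-bit units: %.0f FP32 flops/clock/core\n", flops_per_clock(256, 2));
    printf("128-bit units: %.0f FP32 flops/clock/core\n", flops_per_clock(128, 2));
    return 0;
}
```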