I found this gem on said Twitter post; the lack of self-awareness is astounding.
This is just a theory, one which (funnily) wasn't pieced together until AMD confirmed PS5 is based on RDNA2 at their Financial Analyst Day. It emanated from Beyond3D, from a poster named R600 who used to post here.
We have no idea when the testing on those Oberon revisions was done; zero, zip, nada. Also, the suggestion that variable frequency is a solution Sony slapped together in December after the reveal of Series X is laughable. Something so integral to the SoC, which also plays into the overall design and cooling of the box, is worked on from the very beginning. Matt on Era said the same: that thing was supposed to run fast all along.
Anyway, you and I have been over this. I can't be arsed to revisit Github all over again because somehow it magically fits different criteria over time. The context on Github can never be fully acquired because the data itself was lacking; all these suggestions about what the development timeline was are just guesswork, and it won't lead to any fruitful discussion.
I never said nor implied variable frequency was something Sony just came up with in December; all I mentioned was that at some point between then and the presentation they managed to increase the clock further. Whether that was their intention the whole time is unknown. It also doesn't necessarily mean they weren't already using variable frequency in the earlier revisions; for all we know, they could have been. And it isn't that I doubt the system was always meant to run fast: we already knew it was aiming for that once the first Oberon revision info leaked, as a lot of us looked at 2 GHz and thought "damn, that's fast!"
The question is more whether 2.23 GHz was planned from the very beginning, and taking Cerny's own words from Road to PS5, I'm inclined to say "no". Why? Because at the presentation he himself says they were having issues getting higher clocks on "the old strategy", which indicates that at some point they were using a fixed frequency strategy with PS5 (most likely on the Ariel chip), and at some point (most likely with the Oberon revisions) they were able to shift towards a variable frequency strategy. However, given that the first two Oberon revisions both show a 2 GHz clock, I'm actually inclined to believe they managed the push beyond 2 GHz with the third revision, since it's the one revision we didn't get a clock spec on, and the log date for that revision is December 2019.
So I'm inclined to believe that, yes, a fast clock was always intended, but 2.23 GHz was probably not decided at the outset of the system's development. Rather, that might have been set as a goal once they got more info on the Smartshift technology for RDNA2 and had a means to test it out, meaning they had to wait until the first Oberon silicon came in, since Ariel was RDNA1 and therefore did not have Smartshift support built in. Seeing the effectiveness of Smartshift, and knowing they couldn't pursue variable frequency as efficiently without it, the team probably began testing how far they could push the clock (combined with their cooling system) while keeping the chip's logic stable, an effort which resulted in the 2.23 GHz figure, likely arrived at around December of last year (again, if you take the log dates for the testing data at face value, and I see no reason not to).
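To make the strategy shift concrete, here's a toy model of the difference Cerny described at Road to PS5: a fixed strategy has to pick one clock that survives the worst-case workload, while a fixed-power-budget strategy lets the clock float with activity, with Smartshift handing unused CPU power to the GPU. Every number and function here is invented for illustration; this is a sketch of the concept, not Sony's actual power management.

```python
# Toy model of fixed-frequency vs. fixed-power-budget ("variable
# frequency") clocking. All numbers are invented; this illustrates the
# concept Cerny described, not Sony's firmware.

POWER_BUDGET_W = 200.0   # hypothetical total SoC power budget

def power_draw(clock_ghz, activity):
    """Crude model: power scales with workload activity and ~clock^3
    (frequency times the square of the voltage needed to hold it)."""
    return activity * 20.0 * clock_ghz ** 3

def fixed_strategy(worst_case_activity=1.0):
    """Pick one clock that never busts the budget, even on the
    worst-case workload, and run it all the time."""
    clock = 2.5
    while power_draw(clock, worst_case_activity) > POWER_BUDGET_W:
        clock -= 0.01
    return clock

def variable_strategy(current_activity, cpu_headroom_w=0.0):
    """Let the clock float with the actual workload; Smartshift-style
    behavior modeled as unused CPU power handed to the GPU budget."""
    budget = POWER_BUDGET_W + cpu_headroom_w
    clock = 2.5
    while power_draw(clock, current_activity) > budget:
        clock -= 0.01
    return min(clock, 2.23)   # capped at the advertised peak clock

print(f"fixed:    {fixed_strategy():.2f} GHz, all the time")       # ~2.15
print(f"variable: {variable_strategy(0.85, 10.0):.2f} GHz, right now")  # 2.23
```

The point of the toy: the fixed strategy is hostage to its worst-case workload, while the variable strategy only ever drops clocks when the actual workload demands it.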
The thing with Github is that yes, it was incomplete info, but it WAS accurate to some degree, especially when you look at the circumstantial information, speculation, etc. that was coming out over those months. Even speculation from other sources sometimes seemed to fit in line with it. As for the timing of R600's theory around AMD's Financial Analyst Day (I'm assuming you're insinuating he's Absolute Beginner, because that's where I got the iGPU stuff from)... what of it, honestly? I mean, we were all speculating and trying to fit a puzzle together. I don't particularly care if they insisted beforehand that one or both systems were RDNA1 and changed their tone when the AMD event happened. That doesn't necessarily make the theory a bad one, and again, it fits in line with much of the other speculation and with the Ariel and Oberon chip revision timeline as well.
I mean, what else is there really to say? Was Github perfect? Of course not. But was it arguably the closest to a full picture we had on both systems' chips in terms of data? Definitely. You can argue that it didn't mention things like shader counts, ROP totals, cache amounts, or anything about the CPUs, but we both know the ONLY thing most people were focusing on at the time was teraflops, so the lack of that info simply didn't matter, and it doesn't invalidate the Github leaks or testing data either. That stuff still provided us more info on the GPUs of both systems than most of the insiders, aside from Tommy Fisher's XSX "guess" (which was virtually spot-on) and HeisenbergFX4's somewhat rare 10.5 TF figure for PS5 (which is arguably closer to what we know of PS5 now than Github was, though again, we knew the CU count well before that, and the chip still ended up as a 36 CU GPU rather than a 48 CU one).
I agree that going on about it won't necessarily add anything to the discussion, but I think it's important to at least acknowledge the Github leaks and the testing data in their proper context. They had more relevance to the system specs than some dismissive comments from guys like Matt implied, even if, yes, that meant we needed to theorize ourselves and try piecing things together, which is normal for next-gen console speculation discourse anyway. Just gotta give it to Github, Komachi, Rogame, etc. on this one, and in hindsight to some of the insiders like Tommy Fisher and Heisenberg for either getting one right or getting very close on the other. That doesn't mean the insiders were wrong on everything else; they got things like the PS5 SSD right, for example, and that is still related to next-gen console specs. Both sides got things right or very close to right.
Truth. The 20% number was very... convenient. The primary GPU audio function will be RT audio, but the ray casting operates on a separate pipeline from the general shader work. I doubt there will be a game utilizing 20% of the GPU compute for audio.
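For a back-of-envelope sense of just how convenient 20% is, the script below only does arithmetic on the publicly stated PS5 figures; the PS4 comparison at the end is my own framing.

```python
# Back-of-envelope check on the "20% of the GPU for audio" idea, using
# only the publicly stated PS5 figures: 36 CUs at up to 2.23 GHz.
cus, lanes_per_cu, ops_per_lane, clock_ghz = 36, 64, 2, 2.23
tflops = cus * lanes_per_cu * ops_per_lane * clock_ghz / 1000
print(f"PS5 peak compute: {tflops:.2f} TF")        # ~10.28 TF
print(f"20% of that:      {0.2 * tflops:.2f} TF")  # ~2.06 TF
# For scale: the entire PS4 GPU was ~1.84 TF.
```

In other words, 20% for audio would mean burning more compute on sound than an entire PS4 had for everything, which is why I doubt any game will actually do it.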
I think some of the misconception comes from people thinking MS have divulged all the XSX specs when in truth that is not the case, and chances are we will not know ALL the specs for either console by the time they launch. We never do xD.
That distinction you pointed out is for sure important: PS5 could do thousands of simple PS4 VR-era sources, or hundreds of more complex/advanced sources. Dolby didn't make that distinction, so you might be onto something. But regardless of whether Dolby can handle hundreds of complex sources, there's an inherent physical limitation that makes 3D audio with hundreds of sources difficult (if not impossible) to implement on multi-speaker setups; instead, they use virtual surround to approximate it, though I assume it won't be as advanced (fewer sources?). Both Cerny and Dolby mentioned this limitation.
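Part of why headphones dodge that physical limitation is that each ear's signal can be synthesized directly, one pair of HRTF-style filters per source; the flip side is that cost scales linearly with the source count. A toy sketch of that per-source cost below (generic binaural mixing with invented numbers, not Tempest or Atmos internals):

```python
import numpy as np

# Toy binaural mixer: every 3D source needs its own pair of HRTF-style
# filter convolutions (one per ear), so the cost grows linearly with the
# source count. Purely illustrative; not Tempest or Atmos internals.

HRIR_TAPS = 256   # hypothetical head-related impulse response length

def render_binaural(sources, hrirs):
    """sources: list of mono signal blocks; hrirs: matching (left, right) filters."""
    n = len(sources[0]) + HRIR_TAPS - 1
    mix = np.zeros((2, n))
    for signal, (hrir_l, hrir_r) in zip(sources, hrirs):
        mix[0] += np.convolve(signal, hrir_l)   # one convolution per ear,
        mix[1] += np.convolve(signal, hrir_r)   # per source, per block
    return mix

# 100 sources -> 200 convolutions per block; 1000 sources -> 2000.
rng = np.random.default_rng(0)
sources = [rng.standard_normal(1024) for _ in range(100)]
hrirs = [(rng.standard_normal(HRIR_TAPS), rng.standard_normal(HRIR_TAPS))
         for _ in range(100)]
print(render_binaural(sources, hrirs).shape)   # (2, 1279): stereo out
```

With physical speakers there's no per-ear signal to synthesize; you only control a handful of channels in the room, which is why source counts that are feasible over headphones get approximated via virtual surround instead.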
For sure. I'm not up to speed with their audio block's capabilities, but the same principle applies if their solution can be used with non-proprietary headphones or TV speakers.
I was talking about Dolby in general, not meaning to take a dig at XSX.
DF is alluding to hardware acceleration; Nvidia calls this hardware capability TSS (Texture Space Shading).
Bespoke just means dedicated hardware, in contrast with an entirely software solution; this "bespoke hardware" capability will be present in every RDNA2 card. There are different tiers of support: software, hybrid, and hardware.
What's unique about XSX is the setup supporting it: SSD, I/O, CPU, GPU & APIs, or the Velocity Architecture as MS calls it.
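For anyone unfamiliar with what TSS actually does, here's a toy sketch of the core idea: shade into the object's texture space at its own rate, then let rasterization do cheap lookups. This is a generic illustration only, not Nvidia's, AMD's, or MS's implementation, and it says nothing about how the software/hybrid/hardware tiers divide this work.

```python
import numpy as np

# Toy texture-space shading: run the expensive shading into the object's
# texture atlas at its own rate, then let rasterization do cheap lookups.
# Purely illustrative; real TSS / sampler feedback is GPU-side machinery.

ATLAS = 64   # hypothetical atlas resolution

def expensive_shade(u, v, t):
    """Stand-in for a costly material/lighting evaluation."""
    return 0.5 + 0.5 * np.sin(10 * u + 7 * v + t)

# Shade the whole atlas once (or at a reduced cadence)...
u, v = np.meshgrid(np.linspace(0, 1, ATLAS), np.linspace(0, 1, ATLAS))
atlas = expensive_shade(u, v, t=0.0)

# ...then every screen pixel is just a texture fetch, reusable across
# frames, stereo eyes, or multiple samples of the same surface.
def sample(u, v):
    i = min(int(v * ATLAS), ATLAS - 1)
    j = min(int(u * ATLAS), ATLAS - 1)
    return atlas[i, j]

print(sample(0.25, 0.75))   # cheap lookup instead of re-shading
```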
You and Trueblakjedi could both be right on this, FWIW. Both systems are using the default RDNA2 templates for RT that will exist at the silicon level for all RDNA2 products, and both might also have made modifications to the CUs (since RT in RDNA2 is tied to the CUs), via alterations or other affixed silicon, that are more custom to their specific implementations.
That can play into the setup for implementing the hardware/software stack for things like RT. As well, both Sony and MS have already alluded to features in their systems that are custom to their platforms and may or may not see implementation on the PC side, depending on how things shake out. In MS's case, we already know the DX12U for XSX will be customized for the console specifically, so that doesn't rule out the chance of bespoke (as in, custom) hardware at the silicon level for handling some of those API features (same with PS5, though obviously it's using versions of GNM and GNMX).
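On "RT is tied to the CUs": publicly, RDNA2's approach is hybrid, with the BVH traversal loop running as shader code while the ray/box and ray/triangle intersection tests are handled by fixed-function units in the CUs. Here's a toy sketch of that division of labor; every name and structure is invented for illustration, not actual hardware or API code.

```python
# Sketch of the "hybrid" RT tier: traversal in (shader) software, with
# the per-node intersection tests offloaded to fixed-function hardware,
# modeled here as a plain function. Names and structures are invented.

def hw_ray_box_test(origin, direction, box_min, box_max):
    """Stand-in for a fixed-function ray/AABB intersection unit (slab test)."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:               # ray parallel to this slab pair
            if not (lo <= o <= hi):
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def trace(origin, direction, node):
    """The traversal loop itself runs as ordinary shader code
    ("software"); only the box tests are handed to "hardware"."""
    hits = []
    stack = [node]
    while stack:
        n = stack.pop()
        if not hw_ray_box_test(origin, direction, n["min"], n["max"]):
            continue                      # hardware culled this subtree
        if "children" in n:
            stack.extend(n["children"])   # software decides what's next
        else:
            hits.append(n["name"])        # leaf reached
    return hits

# Tiny two-leaf BVH straddling the x axis; this ray only reaches one leaf.
bvh = {"min": (-2, -1, -1), "max": (2, 1, 1), "children": [
    {"min": (-2, -1, -1), "max": (0, 1, 1), "name": "left"},
    {"min": (0, -1, -1), "max": (2, 1, 1), "name": "right"},
]}
print(trace(origin=(-1, 0, -3), direction=(0, 0, 1), node=bvh))  # ['left']
```

A pure software tier would run those intersection tests in shader code too, and a full hardware tier would run the whole traversal loop in silicon; any console-specific customization would presumably live somewhere around that split.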