Ultimate is not a revision. It takes the features that weren't part of the base feature level but were already supported by some DX12 GPUs and folds them into DX12 Ultimate, which includes everything in the spec.
There may be things in the spec that are refined, but again, what are you looking for? Why tie that refinement to the 2x figure?
If the spec clearly explains what SFS is and what it does, and MS claims "SFS gives you a 2x multiplier because you only need to load part of the texture", why assume that multiplier comes from some unknown method or something secret that other GPUs don't have?
Why not just take it at face value and realise SFS is the method already described in the spec, one that offers this multiplier by loading only part of the texture (like PRT+) instead of the whole thing? Why does this stupid idea that it's some super secret sauce offering 2x or 3x performance just for the XSX need to exist? It's a stretch based on wishful thinking.
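To make the face-value reading concrete, here's a minimal back-of-envelope sketch of where a ~2x figure can come from when you stream only the sampled tiles of a texture instead of the whole mip level. All tile counts below are hypothetical, purely for illustration, not from any spec or MS material:

```python
# Back-of-envelope math for why streaming only the sampled portion of a
# texture yields a multiplier over loading the whole thing.

def io_multiplier(total_tiles: int, sampled_tiles: int) -> float:
    """Ratio of bytes for a whole mip level vs. only the tiles that
    sampler feedback reported as actually touched."""
    return total_tiles / sampled_tiles

# e.g. a large texture mip where only ~40% of its tiles are visible
whole_mip = 1024     # tiles in the full mip level (hypothetical)
touched = 410        # tiles flagged by the feedback map (hypothetical)
print(f"effective multiplier: {io_multiplier(whole_mip, touched):.1f}x")
# -> 2.5x: the same on-screen result from ~40% of the bytes read
```

The multiplier simply scales with how much of the texture is actually sampled, which is exactly the mechanism the spec describes.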
Hey look, I didn't bring up any 2x multiplier or any of that stuff. I just said I thought DX12 Ultimate was a revision of DX12, adding more to it. The way you describe it makes it seem more like DX12 had a roadmap and basic DX12 was the first part of that roadmap. Ultimate is the next part, fulfilling more of the specification that the base version didn't, due to whatever combination of market and development factors.
In terms of what performance benefits it brings, the truth is we don't know yet. However, I'm willing to take MS's 2x-to-3x claim at face value because, again, I'm an optimist and will generally trust what the engineers at MS and Sony are claiming unless real-world results on their hardware form a pattern of performance that betrays those claims. But we're not at that point yet, for either, because the consoles aren't out yet.
There could be an implementation of these DX12U features in XSX's Velocity Architecture that provides the performance they claim; I don't see much reason to cast doubt on their claims this early on. If actual performance falls short, then it'll fall short and can be acknowledged as such. But for the time being, we should at least extend them the benefit of the doubt that seems to be afforded to Sony.
The chip in the GitHub leak wasn't RDNA2 though, since it lacked hardware-based ray tracing.
It very likely was RDNA2; the RT hardware would not have needed to be enabled for the Ariel iGPU profile testing, because Ariel was an RDNA1 chip. The references to Navi 10 were likely to the Ariel iGPU as well.
At this point it's odder to question whether Oberon is the PS5 chip than to accept that it is, because there is no proof of any other chip matching up to the PS5 specs as we know them, the same way Arden is very likely the XSX chip. The differences in active clocks and active CUs between PS5/XSX and the Oberon/Arden chips can be rationalized through historical precedent: the Morpheus APU had parts of its devkit chip disabled for running PS4 regression compatibility, and the Scorpio APU had all CUs active in the devkit with 4 disabled for the retail unit. Both of these match up almost exactly with the trends seen in the Oberon and Arden chips, respectively.
Ask yourself: from the 6th gen onward, have we ever been within 5-6 months of new system launches with ZERO info on the actual chips of said upcoming systems? It's never happened; we've always had some concrete info on next-gen system chips by then. There are no other options for PS5 and XSX; Oberon and Arden are their respective APUs.
Neither has to be downclocked for the other to reach maximum clock speeds. Developers do not need to choose between the two.
For the final/retail system, yes. But the devkits currently use "profiles", which hard-set one component at a lower power setting so the other can operate at a higher power setting, in both cases affecting the frequency/clocks on said components.
Again, have to stress this is only the case for the devkits, Cerny's said the retail system will effectively automate the power shifting on its own, in the background.
Was that quote about needing to throttle back the CPU to sustain max GPU clock a misprint? I forget which dev it was; I could search the thread. That quote seemed in line with the other "power profile" comments too, I thought?
It wasn't a misprint; the devkits use power profiles, as you mentioned. And the power profiles hard-set certain parameters in terms of power load settings for CPU and GPU (and maybe other components too like the audio processor).
In that context the comment about throttling the CPU to sustain max GPU clock wasn't really a misprint, since at least some devs are probably doing exactly that right now with devkits. But the final retail system should have the variable-frequency behavior fully implemented and automated, so devs won't need to set hard power profiles (though they'll need to manage their code to stay within power budget ranges, of course).
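As a rough illustration of how a hard-set profile differs from the automated shifting, here's a toy model of a fixed power budget split between CPU and GPU. Every wattage and the power-to-frequency curve below are invented for illustration; Sony hasn't published the real budget or curves:

```python
# Toy model of a devkit-style "power profile": a fixed total budget split
# between CPU and GPU, where hard-capping one component frees budget for
# the other to hold a higher clock.

TOTAL_BUDGET_W = 200.0   # hypothetical shared power budget
GPU_MAX_GHZ = 2.23       # PS5's stated max GPU clock

def gpu_clock_ghz(gpu_power_w: float) -> float:
    """Hypothetical monotonic power-to-frequency curve, capped at max."""
    return min(GPU_MAX_GHZ, 1.5 + gpu_power_w / 180.0)

def profile(cpu_cap_w: float) -> float:
    """Hard-set the CPU's power; whatever remains goes to the GPU."""
    return gpu_clock_ghz(TOTAL_BUDGET_W - cpu_cap_w)

print(profile(cpu_cap_w=60.0))   # CPU throttled back -> GPU holds 2.23 GHz
print(profile(cpu_cap_w=110.0))  # CPU running hot -> GPU clock dips to 2.0
```

The retail system, per Cerny's description, would effectively be doing this reallocation continuously in the background rather than via a fixed cap chosen by the developer.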
I don't get the damage control.
You can't excuse one fan base saying their console is RDNA 2 and the other is RDNA 1.5 or RDNA 1. Fact is, both are labeled RDNA 2 and there's no excuse for them to say this other than to make the PS5 look weaker.
I'm not excusing anything; I'm just saying not everyone who says the systems aren't full RDNA2 is trying to insinuate they're RDNA1. The fact of the matter is they're both custom GPUs that will use as much of the RDNA2 feature set as deemed necessary. And we're already hearing the systems may have some RDNA3 features; that doesn't mean they are RDNA3 (TBF I am a bit wary of the RDNA3 rumors, but we'll see).
And that's not the point. They're saying it's really a 9.2TF console that will only reach 10.2TF "sometimes", in certain situations. Based on the numbers you just provided, the 6.9TF figure would have to be even smaller for what they're saying to hold, and it's simply not true.
You're maybe taking the 6.9/8.1 numbers out of context. When Sony and MS give their TF numbers, they're speaking in theoretical terms. The higher the theoretical peak, the more headroom there is for actual real-world performance to reach. If the architectures are the same (as is the case here), then the ratio stays the same between the GPUs, which will generally be reflected in the numbers.
Again, I suggest watching that NXGamer video (and the latest one on the SSD I/O while you're at it); it brings up the somewhat poor throughput utilization, in real-world terms, of the PS4 and XBO GPUs. That number should be MUCH higher with the PS5 and XSX GPUs, but the point is that we'll likely never see a PS5 game "really" hit 10.275 TF even at max utilization, just like we'll never "really" see an XSX game hit 12.147 TF at max utilization in real-world game scenarios. But both systems should get very close to those theoretical maximums, even higher than the numbers NXGamer gave, IMHO.
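For reference, the theoretical peaks are simple arithmetic from the officially quoted CU counts and clocks; the ~67% utilization factor below is just the rough ratio implied by the 6.9/8.1 estimates, treated here as an assumption rather than a measurement:

```python
# Theoretical FP32 peak = CUs x 64 lanes x 2 ops/cycle (FMA) x clock.
# CU counts and clocks are the officially quoted figures.

def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5 = peak_tflops(36, 2.23)    # ~10.275 TF
xsx = peak_tflops(52, 1.825)   # ~12.147 TF

UTILIZATION = 0.67  # assumed; same architecture -> same factor for both
print(f"PS5: {ps5:.3f} TF peak -> ~{ps5 * UTILIZATION:.1f} TF utilized")
print(f"XSX: {xsx:.3f} TF peak -> ~{xsx * UTILIZATION:.1f} TF utilized")
# the relative delta is unchanged by any shared utilization factor:
print(f"{xsx / ps5:.3f}x either way")
```

That's the point about context: a shared utilization factor scales both numbers down equally, so the 6.9/8.1 split doesn't change the relative picture at all.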
The SSD was going to be talked about regardless of the TF count. It's a major factor next gen, and it's leaps faster than what's in the XSX. It's something that really separates the two consoles, while the TF and CPU figures are very close.
Again though, those are paper specs, and MS gave sustained numbers whereas Sony didn't clarify whether theirs are sustained or peak. And there are two components to the SSD I/O: hardware and software. PS5's solution has the hardware advantage, but we could end up in a situation where XSX has the software advantage, and while that wouldn't close the delta on that front, it would shrink it by a notable amount.
This is just speculation though, because we don't have enough info on the SSD I/O in these systems yet. Yes, I know that sounds ridiculous given what we already know, but in the scope of ALL the tech that goes into even just the SSD I/O component, there's a lot of crucial stuff we don't know yet. Officially, anyway.
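To make the hardware-vs-software split concrete, here's a quick sketch: the raw rate is fixed in silicon, but the effective rate scales with the compression stack. The raw and "typical compressed" figures are the publicly stated ones; the improved-compression case for XSX is purely hypothetical:

```python
# Effective SSD throughput = raw hardware rate x compression ratio.
# Publicly stated figures: Sony 5.5 GB/s raw, 8-9 GB/s typical with
# Kraken; MS 2.4 GB/s raw, 4.8 GB/s with BCPack.

def effective_gbps(raw_gbps: float, compression_ratio: float) -> float:
    return raw_gbps * compression_ratio

ps5 = effective_gbps(5.5, 1.55)        # ~8.5 GB/s, mid of Sony's 8-9 range
xsx = effective_gbps(2.4, 2.0)         # 4.8 GB/s, MS's stated figure
xsx_better = effective_gbps(2.4, 2.5)  # hypothetical smarter compression

print(f"delta as stated: {ps5 / xsx:.2f}x")          # ~1.78x
print(f"hypothetical:    {ps5 / xsx_better:.2f}x")   # ~1.42x: smaller, not closed
```

Better software multiplies the raw rate, which is exactly why it could shrink the delta without ever closing it.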
There's also the fact that you can't look at deltas without considering the context of what the numbers actually reference. I'll put it like this: let's say the SSDs are like Subaru sedans and the CPUs/GPUs are like supercars. PS5 has a higher-end Subaru sedan and XSX has a lower-end one. CPU-wise, let's say the PS5 has a Lamborghini Gallardo and the XSX has a Lamborghini Murciélago. And in terms of GPUs, the PS5 has a Ferrari F50 and the XSX has a Ferrari Enzo.
Those are very rough comparisons, but work with me here. We've got a three-part F1 circuit race: PS5 has its three cars and XSX has its three cars. Now, PS5's Subaru is generally going to beat XSX's Subaru, but we all know the Lambos and Ferraris are the stars of this race; they're simply performing on a completely different level, and both are absolutely demolishing the Subarus. PS5's and XSX's Lambos and Ferraris may be relatively closer in performance than their Subarus, but the side with the higher-performing Lambos and Ferraris is still going to win.
And we're using an F1 circuit race example here because we're talking about the overall performance of these machines, testing a multitude of their capabilities in combination under practical conditions, not the quick-burst performance of a few select features. That's the scale these things really sit at when it comes to the overall architectures and where the components fall in terms of priority.