It says "advanced node", not 7nm+. And why is 7nm+ listed for RDNA 3?
That explanation makes no sense.
Nothing is stopping the PS5 from being 9.2 TFLOPs RDNA2, though.
A 50% perf/watt improvement over RDNA 1, goddamn. That's how they managed to put 12 goddamn teraflops in the XSX. That's insane.
I must say I'm impressed.
So if Microsoft co-developed RDNA 2.0 technology, then how would that work out for the PS5? Forgive my ignorance lol
What happened to the post that @Ralf made about "Sony people" not being human?
sorry you have to deal with this ❤
It's not that. It's that people think Ariel = Oberon. That's not true, and never was. I talked about this for months, hence I am called a cult leader. Ariel has been known since December 2018. Ariel was known as GFX1000; it was a Navi 10 derivative, an RDNA1 chip.
Oberon is a later chip, the 2nd APU, and the entire Github repo for Oberon was made up of Native/BC1/BC2 tests, which are run against Ariel's iGPU test list. Therefore, since Ariel was Navi 10 Lite, you couldn't have ray tracing and variable rate shading running on Ariel's test list.
People are trying to discredit a leak without understanding the basics about it; it's frustrating.
The quote was: "The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture."
If Sony wasn't involved, it'll be interesting to see if that API is better than what Sony has developed for their console. Sony isn't a software developer, so we don't know if they have the same tools to replicate and match that performance.
Uh, Sony is a software developer, and has an extensive API network with XDev, ICE, and plenty of other teams that do fantastic work on toolsets.
What year is it?
Although that actually raises an interesting question about why XSX's clocks are "so low" if RDNA2 efficiency is so much better.
An API for hardware ray tracing costs money. This is not a graphics driver.
So a 225W RDNA1 GPU would only need about 150W as RDNA2 for the same performance (225 / 1.5, taking the 50% perf/watt claim at face value).
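(For anyone wanting to sanity-check that, here's a minimal sketch of the arithmetic; the 225W figure is just the example above, and the 1.5x factor is AMD's claimed perf/watt gain taken at face value:)

```python
# Same-performance power draw after a perf-per-watt improvement.
# perf = (perf/watt) * watts, so holding perf constant:
#   watts_new = watts_old / (1 + gain)

def power_for_same_perf(watts_old: float, perf_per_watt_gain: float) -> float:
    """Watts needed to match the old performance after an efficiency gain."""
    return watts_old / (1.0 + perf_per_watt_gain)

print(power_for_same_perf(225, 0.50))  # 150.0 -> ~150W, not ~112W
# 112W would require a 2x (i.e. +100%) perf/watt gain, not +50%.
```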
"Does this mean AMD 12TF > NVIDIA 12TF? Or, to put it simply, Next-Gen > 2080 Ti?"
No.
how does this make github dead?
to quote Colbert
"That's the API (software side). Sony probably will use a custom API."
I believe 50% on top of 50% compounds to +125% overall (1.5 × 1.5 = 2.25×), not 75%, but I can be wrong.
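(Napkin math for how those generational gains would stack, assuming each step really is +50% perf/watt; the GCN → RDNA1 → RDNA2 chain is just the two 50% figures being discussed:)

```python
# Successive efficiency gains compound multiplicatively, they don't add.
gains = [0.50, 0.50]  # e.g. GCN -> RDNA1 -> RDNA2, each claimed at +50% perf/watt

total = 1.0
for g in gains:
    total *= 1.0 + g

print(f"{total:.2f}x overall, i.e. +{(total - 1.0) * 100:.0f}%")  # 2.25x, i.e. +125%
```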
Hi guys, I'm back from the future once again!
13TF
7GB/s SSD
24GB RAM
$450
PS5 reveal on the 23rd of March
And one really insane feature you guys have no idea about.
Disclaimer: /s
Jesus tapdancin' Christ, that Colbert dude is insufferable.
How many times, and in how many ways, can Github be shot down as not being an accurate picture of anything, and yet you refuse to give up?
This all reminds me of The Last Samurai, wherein the Github clingers are being smacked down by Ujio with a wooden sword at every turn, but refuse to stay down.
"8TF baby, from inside sources."
I said it long ago: they are in M$'s pocket. Digital Foundry has once again shown their stripes, but you know, we need to put faith in them because they are the authority figure, the neutral party in all this.
Holy shit dude, where have you been the last 7+ years, especially with the design of the PS4? They have an entire team of competent architects working on the damn thing.
They have their own in-house APIs, game engines, toolsets, and everything under the sun as any other platform.
Are you being purposely obtuse, or?
No.
12 TF in RDNA = 12 TF in RDNA2.
But RDNA 2 consumes less power than RDNA.
That is what perf/watt means.
AMD says the PS5 is RDNA2 and Navi 2X; Colbert and Proven will disagree and say the Github leak, with RDNA 1 and Navi 10, is the PS5.
"I said it long ago: they are in M$'s pocket."
I don't think they're in Microsoft's pocket, but their analysis is severely lacking if it doesn't involve counting pixels.
The API was co-architected and co-developed by AMD and Microsoft! You think Sony will just magically generate an API on its own that matches? Microsoft worked with AMD to get the best ray tracing performance from the chip. Engineering was performed on the silicon pipeline, and new code has to be written against it to bring about FPS gains with ray tracing on. This is not an overnight job; the API was likely in development for a year or more.
As far as I know, we didn't have clock speeds in the leaks for the Series X. Maybe they are higher than we think? Maybe Microsoft chose 1.8GHz or 1.9GHz and fewer active CUs for better yields?
Nothing is stopping the PS5 from being 9.2 TFLOPs RDNA2, though. Why would having RDNA2 mean it has to be 12 TFLOPs?
"So if Microsoft co-developed RDNA 2.0 technology, then how would that work out for the PS5? Forgive my ignorance lol"
They co-developed the DXR RT API for PC and Xbox, not RDNA 2. Sony will use their own RT API.
SlimySnake
Does 12TF now seem kinda conservative/weak with the revelation of 50% better perf/watt???
Nobody was expecting anything close to those gains
Now that PS5 is shown to be RDNA2, it's still likely we won't know the official TF number until Sony and Cerny reveal it.
I mean, the 50% perf/watt figure isn't related to architectural IPC improvements.

Actually, I am not sure about this one. I think 12 RDNA2 TF are actually better than 12 RDNA TF in terms of gaming performance.
I would need to take a deeper look into this. I don't like comparing teraflops too much; I am more into gaming performance benchmarks.
But if I am not mistaken, those 12 TF of RDNA2 should be genuinely impressive, depending on what architectural changes actually happened going from RDNA to RDNA 2.
I don't like saying stuff like "12 TF RDNA2 is like 15 TF RDNA", but we will have to wait and see how the benchmarks for the RDNA2 AMD cards turn out.
Teraflops themselves aren't the important thing here; the architecture is. The 5700 XT (9.7 TF) is as fast as the Radeon VII (13.44 TF).
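(Taking the post's own numbers at face value, a quick sketch of what that comparison implies about performance per teraflop:)

```python
# If a 9.7 TF RDNA card matches a 13.44 TF GCN card in games,
# RDNA is extracting more real performance per theoretical teraflop.
rdna_tf = 9.7    # Radeon RX 5700 XT
gcn_tf = 13.44   # Radeon VII

ratio = gcn_tf / rdna_tf
print(f"RDNA delivers ~{ratio:.2f}x the gaming perf per TF of GCN here")  # ~1.39x
```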
Now that PS5 is shown to be RDNA2, it's still likely we won't know the official TF number until Sony and Cerny reveal it.
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. The post is from user AbsoluteBeginner:
This is basically the best working case for Oberon that can also fit what most insiders have been saying, too: that Oberon's tests were regression tests run against the Ariel iGPU profile. Since Ariel was an RDNA1-based chip, it did not have RT/VRS built in. Even if Oberon has RT/VRS (in fact it's pretty damn guaranteed now after today's AMD Financials thingy), they would not be enabled when running the Ariel iGPU regression; even users here like R600 mentioned this months ago.
It also would indicate that the Oberon tests datamined so far do not tell everything about the chip. They may or may not mention the chip's CU count (IIRC the first Oberon stepping listed "full chip" in its log), but we've already seen later steppings change the memory controller to increase the bandwidth to the chip. We don't know if Oberon has an extra cluster of CUs disabled on the chip with later steppings beyond the very first one, but I'm thinking if there were, they would have been from the 2nd stepping onward, and I would think something like that would call for a chip revision instead of just another stepping, but I dunno. Even so, we don't know how many additional CUs are present, if any.
And something else to consider: I saw some people mentioning that AMD said "multi-GHz GPUs" during a segment on GPU products and systems releasing this year? Did that happen? If so, I don't think they would use that phrase if they weren't talking 2GHz or greater, and we know Oberon has a clock at 2GHz. And now we practically know PS5 is RDNA2, which has upwards of 50% more efficiency versus RDNA1. That would obviously also shift the sweetspot northward, which makes an RDNA2 chip at those clocks a lot more feasible. It's still maybe something crazy, but not as crazy as a lot of people were thinking before today's news, eh?
Although that actually raises an interesting question about why XSX's clocks are "so low" if RDNA2 efficiency is so much better. Either the 50% claim over RDNA1 is AMD PR talk, or MS felt no need to push the clock higher and chose guaranteed stability at a cooler GPU clock. However, that also means they left themselves room in the design to up the clocks if Sony outperformed them on the GPU front regarding TFs. The fact they seemingly have gone with a 1.675GHz - 1.7GHz clock on an RDNA2 chip (with the sweetspot probably shifted a good bit northward from the 1.7GHz - 1.8GHz of RDNA1) might hint that they are fairly certain they have the stronger of the two machines, but the question now is by how much? (Also, I kinda shamelessly took the idea of XSX clocks and their indication of anything relative to PS5 from another post over there, but I thought it was worth thinking about.)
So yeah, there are still a lot of unknowns, but given Oberon E0 was tested into December of last year, I'm pretty much 100% sure Oberon is the PS5 chip. However, I'm also pretty much 100% sure we haven't really seen a benchmark testing for Oberon, just the Ariel iGPU profile regressed on Oberon, meaning we haven't seen the entirety of the chip (I think this is exactly why Matt also said "disregard it" in reference to Github, because it wasn't testing the full chip or even much anything of the chip outside of Ariel iGPU). And that's the fun part, because it can run a wide gamut. However, I think, knowing RDNA2 efficiency and XSX's pretty "tame" GPU clock, and the fact high-level MS and Sony people would know a lot more about each other's systems than any of us, that might signal MS is comfortable with the lower clock because they're fairly certain they at least have the bigger chip. Whether that means PS5 is 36/40 or (like a die estimate from a few months ago speculated) 48CUs, or maybe even to the very low 50s, is unknown.
That's why I've been rolling with 48CUs as Oberon's actual size, and they'll probably disable four for yields. At 2GHz that actually hits around 11.26TF, which is better than my earlier numbers, even. It does kinda depend on Oberon's full size being 48, however, and on whether they can actually keep the 2GHz clock stable, because that is probably still a tad north of RDNA2's upper sweetspot range.
Either way I think we can ALMOST certainly put the 9.2TF PS5 talk to rest now, but funnily enough today's news just reaffirms the datamines, the leak and even the insiders if there's more to Oberon in terms of CUs than the initial test that showed 40 as the "full chip" (which, to be perfectly fair, could have just been referencing the Ariel iGPU profile, since Ariel is a 40CU RDNA1 chip). And being 100% fair, while I do think MS clocking XSX as low as it is (1.675GHz - 1.7GHz) is both odd and maybe indicative they're comfortable they have a performance edge over PS5, Oberon could also be a 58 or 60 CU chip if we're being honest, because again there's the whole butterfly thing and 18x3 gives you 54. So it could be more a case MS knows they have an advantage right now but Sony could have upped performance and then you get MS responding by having headroom to push their clocks higher.
Or it could even be a case that maybe MS don't know as much about PS5 as some think but they might know Oberon is also a big chip, and they want to see for certain where PS5 actually lands by throwing 12TF out there. So if PS5 reveals their number and its the same or somewhat larger, MS can enable an upclock on the GPU to match or surpass that. And I would think they have already tested the GPU at higher clocks by now just in case that type of scenario plays out. That's the other way to see their announcement from last week, anyway.
But again, it all hinges on what Oberon actually fully is, and we'll only know for sure if another benchmark test gets datamined that isn't running the chip on an Ariel iGPU profile. That could maybe come this week, or within the next few weeks. Hopefully soon. If it does and we still see a max 40CU chip, then it's time for people to accept that. If it's a larger chip at around 48CUs, then they could either be running it with 4 CUs disabled or with all 48 on, and that would get them between 11.26TF and 12.28TF @ 2GHz, aka virtually identical to XSX. If it's even larger, like a 60CU chip, and they're still running at 2GHz even in that case, then it just means MS can upclock the XSX at a rate they've already internally tested as a contingency plan to close the performance gap, because anything beyond 2GHz in a console-like form factor is probably gonna melt silicon.
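(For anyone checking the numbers in these scenarios, this is the standard AMD FP32 teraflop formula, CUs × 64 shaders × 2 FLOPs per cycle × clock; the CU counts and the 2GHz clock are the speculative figures from the post, not confirmed specs:)

```python
def amd_fp32_tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical FP32 TFLOPs: CUs x 64 shaders/CU x 2 FLOPs/cycle x clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000.0

# Speculative PS5 scenarios from the post above:
print(amd_fp32_tflops(44, 2.0))  # 11.264 -> the ~11.26TF case (48CUs, 4 disabled)
print(amd_fp32_tflops(48, 2.0))  # 12.288 -> the ~12.28TF case (all 48CUs active)
```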
Thing is, all three of those scenarios have an even chance of playing out, and we're only going to get a better, fuller indication a few weeks from now. Don't throw away one of those possibilities even if you prefer another, because there honestly isn't a very strong reason to throw any of these scenarios out of the window just yet.
But we CAN throw out the idea that PS5 isn't using RDNA2; that much is essentially official.

Excellent analysis, thanks.
SlimySnake
Does 12TF now seem kinda conservative/weak with the revelation of 50% better perf/watt???
Nobody was expecting anything close to those gains
That is a possibility. Another possibility is that 56 CUs is the full chip and they have 52 active? Because if it's a 60CU chip and they have, say, 8 CUs disabled, that feels a bit like leaving performance on the table IMO. Might as well have gone with a smaller chip in that case.
In any case, they do have the headroom to upclock depending on where PS5 actually lands. But if, for example, Oberon is a fat chip similar to XSX's but at 2GHz, well, that would have seemed insane to me before seeing AMD double down on RDNA2 efficiency (I hope it's that efficient, for their sake, because Nvidia is NOT playing around xD).
Again, where does it mention it's above 9.2?
AMD Project Lead Reconfirms PS5 Will Support Hardware Raytracing - PlayStation Universe
AMD has reconfirmed that both the PS5 and Xbox Series X will support hardware raytracing. (www.psu.com)
Boom!
Not for an APU in a console box, no way. It's explained much more by the need to keep the temperature and TDP down.
This sounds oddly familiar.
Lawrence Julius Taylor
"Again, where does it mention it's above 9.2?"
The GPU was confirmed to be Navi 2X, which is Big Navi. The 9.2 TF figure is from Navi 10 in the Github leak, based on RDNA 1, which AMD confirmed is not the PS5.
Sony works closely with AMD as well; it's the same vendor. So I am more than positive they co-developed their own API too.
This is the DX12 crap all over again.
"Again, where does it mention it's above 9.2?"
My mistake, I didn't mean to reply to your post.
But the XSX APU is huge; unless they are severely underclocking it, 14TF seems possible.
"No. 12 TF is super impressive. 50% perf/watt for RDNA2 makes it even more impressive (and much more logical from a power consumption standpoint). Otherwise the Xbox Series X would be way too power hungry."
Yeah, that explains a lot. The SeX will not be as power hungry as I thought. RDNA 2.0 explains a lot, in fact.
PS5 and XSX will be based on Navi 2X?
"We done with the juvenile 'xbots' and the like in here?"
Off topic, so forgive me and show me mercy! But have you heard anything about a Stargate SG1 reboot?
"You betcha MoW!"
Thank you.
The GPU was confirmed to be Navi 2X, which is Big Navi. The 9.2 TF figure is from Navi 10 in the Github leak, based on RDNA 1.