You misunderstand 6nm vs 5nm.
6nm is a slightly optimised 7nm, while 5nm offers nearly twice the transistor density of 6nm. The MCDs come in at around 55 million transistors per mm²; the GCD is around 138 million transistors per mm².
Now obviously SRAM does not scale with new nodes as well as logic does, so you are sort of right that it wouldn't be that big of a shrink, but it would still be substantial.
But each of the MCDs and the GCD also has to waste die area on chip-to-chip interconnects, which take up a fair bit of space.
So in actuality, if N31 were monolithic, it would be around 450mm². That is rather small, especially considering the amount of space taken up by cache.
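If you want rough napkin math on that ~450mm² figure, here's a quick sketch. Every number in it (die sizes, SRAM/logic split, scaling factors) is my own assumption or a public approximation, not an official figure:

```python
# Napkin-math estimate of a hypothetical monolithic N31 on N5.
# All figures below are rough approximations/assumptions, not official numbers.

gcd_area_n5 = 300.0      # mm^2, N31 GCD on 5nm (approx.)
mcd_area_n6 = 37.0       # mm^2 per MCD on 6nm (approx.)
num_mcds = 6

# Assumed split of each MCD between SRAM-heavy cache and logic/PHY,
# with assumed N6 -> N5 area scaling for each (SRAM barely shrinks).
sram_fraction = 0.5
sram_scaling = 1.2       # assumed modest SRAM shrink
logic_scaling = 1.8      # assumed logic shrink, in line with the ~2x density claim

mcd_area_n5 = num_mcds * mcd_area_n6 * (
    sram_fraction / sram_scaling + (1 - sram_fraction) / logic_scaling
)

# A monolithic die also drops the chip-to-chip interconnect PHYs on both sides.
interconnect_savings = 20.0  # mm^2 total, assumed

estimate = gcd_area_n5 + mcd_area_n5 - interconnect_savings
print(f"Estimated monolithic N31 on N5: ~{estimate:.0f} mm^2")
```

That prints roughly ~430mm², which is in the same ballpark as the ~450mm² figure above; tweaking the assumptions shifts it by a few tens of mm² either way.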
The fact that the silicon is buggy and draws way too much current at a given voltage to maintain its clock speeds is a completely separate physical design issue. It has absolutely nothing to do with the architecture itself, which is perfectly fine.
This is evidenced by AIB card reviews: if you add more power, you can get an extra 15-20% performance just by lifting average clock frequencies from around 2600MHz to 3200MHz.
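Quick napkin math on how well that uplift tracks the clocks, using just the rough figures quoted above:

```python
# How well does the reported performance gain track the clock gain?
base_clock, oc_clock = 2600.0, 3200.0        # MHz, figures quoted above
clock_gain = oc_clock / base_clock - 1.0     # ~23% higher clocks

for perf_gain in (0.15, 0.20):               # reported 15-20% performance uplift
    scaling = perf_gain / clock_gain
    print(f"{perf_gain:.0%} perf from {clock_gain:.0%} clocks -> ~{scaling:.0%} scaling")
```

Even at the low end that's reasonable scaling with frequency, which is exactly why I say the design itself is fine and the limitation is the current/voltage behaviour.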
The potential is there. You can see what AMD was aiming for, but they fell short of their targets, which in simple terms means their silicon execution was not good enough.
I don't know about Nvidia schooling AMD engineers hard. AMD and Nvidia went for completely different strategies.
And to clear a few things up:
- AMD do have dedicated RT hardware; they just aren't spending as much of their transistor budget on it as Nvidia does.
- AMD don't have fixed-function ML accelerators because it doesn't matter that much for gaming. Yes, FSR2 isn't quite as good as DLSS in image quality, but it's damn close and is hardware agnostic. And if you think DLSS3's hit to image quality is an acceptable way to improve framerate, then you have absolutely no right to complain about FSR2's image quality.
Nvidia is indeed in a league of their own with the resources they have. Using ML to micro-optimise transistor layout to maximise performance and minimise power is exceptional stuff. However, you also need to understand that Lovelace is nothing remarkable from them as far as architecture is concerned. Every GPU architecture Nvidia has made since Volta is just a small incremental update on Volta: Turing is Volta + RT cores and a dedicated INT pipe, Ampere is Turing + double FP32, and Lovelace is a die-shrunk Ampere with a jacked-up RT core. If you actually look at the structure of the SM (streaming multiprocessor), it hasn't dramatically changed since the shift from Pascal to Volta. Small incremental updates. Not unlike what AMD was doing with GCN, just executed much more effectively and from a much better starting point.
AMD's RDNA2 was successful because it was basically RDNA1 on steroids: optimised in physical design, with some hardware bugs fixed, and clocked to insanity.
RDNA3, on the other hand, is effectively a new architecture in every possible way. The CUs have been completely redesigned. The individual vALUs have been completely redesigned. The front end and geometry have been redesigned. The command processor has been streamlined and shifted from a hardware scheduler to a software scheduler (iirc), like Nvidia's. On top of this, they have disaggregated the last-level cache and memory controllers. Very ambitious in a number of different ways. They aimed big and they failed.
If Nvidia schooled AMD at anything, it's execution, not necessarily architecture.
But more than their hardware, Nvidia's true strength is software. Software is the reason AMD doesn't make larger chips: they have no answer to Nvidia's software stack.
Let me be clear. AD102 is not designed for gamers. Nvidia absolutely do not give a shit about gamers. The fact that AD102 is blazing fast at gaming is a nice side bonus for them. AD102's true target is semi-professionals and professionals. Their RTX 6000 Lovelace is where Nvidia really make money, and that is all built on Nvidia's software: CUDA and everything that plugs into it, OptiX for rendering, etc.
AMD doesn't have Nvidia's market incumbency, so they can't dictate the way the market moves. DXR was built by Microsoft and Nvidia together, but RTX is essentially a black box; it's proprietary software. Nvidia is clever and sends software engineers to developers to help build RTX titles that use their hardware as efficiently as possible. Even now, AMD could dedicate a shitload of transistors to RT like Intel does, but that is no guarantee of the same level of performance. AMD at present does not have the resources to do what Nvidia does, which is why they open source a lot of their stuff. They make up for their lack of resources by providing a huge amount of detailed documentation and information so developers can do things easily themselves. However, at the end of the day, nothing is easier than having some guy from Nvidia come and do it for you.
And to be clear: I'm fairly sure AMD could make a huge 600mm² GPU and cram in a whole bunch of transistors for RT. They could borrow the Matrix/ML cores from their CDNA products. They could toss all of that into a huge GPU. But without the software stack it would be pointless. As I said before, Nvidia can justify that massive 600mm² AD102 GPU because they intend to sell most of it to pros in the form of the $1600+ 4090 and the $8000 RTX 6000. That helps them recover the money.
Now tell me, who the fuck would buy a $1600 AMD GPU if it doesn't have a fully functional CUDA alternative or an OptiX alternative?
Would you?
No, you would expect them to charge a lower price, even if it performed exactly the same as Nvidia in gaming, let alone in all the pro use cases. So how can they make back the money spent on such an expensive GPU? They can't spend all that money and then sell it at a loss or on thin margins. It's not sustainable, and it won't help them compete long-term.
AMD's software is shit, and that is where the real problem is. Their hardware engineers are doing just fine, barring this blip with N31. The problem with software is that it takes time to develop. ROCm is progressing, but it's still far behind CUDA. HIP exists, but it's still just shy of CUDA in rendering. HIP-RT is in development with Blender, but it's still far from release. Once that software stack is up and running and able to deliver something useful to professionals, then and only then will AMD start to dedicate valuable leading-edge silicon to stuff like AI and RT.
You can talk about Intel, but Intel is a massive company that's even bigger than Nvidia. In fact, they were working on real-time ray tracing back in the Larrabee days, before Nvidia shipped RT hardware. And they also have a ton of very talented software engineers on payroll building out oneAPI for their compute stack. Again, you're high if you think Arc is designed for gamers. Arc exists as a platform to build Intel's image in GPUs so they can enter the datacentre and pro space. See: Ponte Vecchio.
The market situation really is not that simple.
By the way, I'm not making excuses for AMD or the 7900 family. They're mediocre products that are only better value than the 4080 because that's a terrible product. You really should not be buying either. But if you go ahead and buy a 4080 because you're disappointed by the 7900XTX, then you really have no right to complain about prices or the state of competition. You're just feeding the beast, and nothing will ever change so long as you keep feeding it.
The market is what gamers made it. We are past the point of complaining now; we engineered these circumstances.
I know I'm guilty of being impressed with AMD/ATi GPUs in the past, congratulating them on a job well done, and then going ahead and buying an Nvidia GPU when it inevitably got discounted.
If, instead of offering empty platitudes about Radeon products being great value for money, the PC gaming community (myself included) had actually bought Radeon products, maybe AMD wouldn't have been so cash-starved during the whole of GCN and would have been able to compete better against Nvidia.
But it's too late now.
I made a mistake. Collectively as a community we made a mistake giving Nvidia 80% of the market. And now we have to pay.
AMD have realised that they too must make money to compete. So why would they slash prices, take less profit per unit sold, and still sell fewer units than Nvidia anyway? That's a great way to go out of business. Now they'll sit content being second best, ride Nvidia's coat-tails, and position themselves 10-20% below Nvidia's prices. Because why not?
The only way we can change anything is to stop being a bunch of lunatic magpies and just not buy any new GPUs.