Oh no, he had to drop the texture size.
That's not a big deal by itself... But if you spend more than $500 on a card, one has to ask whether having to optimize settings because you lack VRAM is really acceptable.
I don't think VRAM will be a huge issue: you'll run into performance issues before you run into VRAM bottlenecks, and dropping texture quality makes a negligible difference in visual quality.
I have heard this kind of argument for years, but I found it to be simply not true in my personal experience, as someone who keeps cards for quite a long time. The first few cards I had, 20 years ago, did have that issue: they were too slow to make use of their additional RAM. My latest three, however, all ran out of VRAM while the GPU itself could still have performed well enough.
I still think the consoles have something to do with it. During the X360/PS3 era, the consoles had abysmal RAM compared to the PC, and even then my graphics card ran out of VRAM before the GPU core was truly the limit. Right now, it's only going to get worse: it is not unreasonable to expect 10GB of RAM as the baseline console experience. If you want your PC experience to be superior, it's better to have more.
Better safe than sorry still applies.
That said, I do wish they had gotten these cards into 12GB territory. On the other hand, I think AMD went overboard, and I'd rather they had cut back and made the cards a bit cheaper.
I don't think 4GB less would have made that much difference to the pricing. The majority of the cost goes into the GPU die, if we focus on the actual product and ignore marketing, the supply chain, etc.
And I think their hardware configuration is what made them decide this. AMD most likely had to choose between 8GB and 16GB for their 6800 series card, so they went for 16.
nVidia had to choose between 10GB and 20GB for their 3080, and they went with 10GB (likely because GDDR6X is expensive and power-hungry).
Weird RAM configurations can be a pain to optimize for. The only hardware I know of with such a setup is the Xbox Series S/X. Both nVidia and AMD will avoid that kind of configuration on their own cards like the plague, so they will choose a multiple of whatever their memory bus can handle in the simplest way.
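For anyone curious about the arithmetic, it's just the chip count falling out of the bus width. A rough sketch, assuming the usual 32-bit channel per GDDR6/GDDR6X chip and the 1GB/2GB chip densities that were available:

```python
# Rough sketch of the VRAM arithmetic: each GDDR6/GDDR6X chip sits on a
# 32-bit channel, and chips come in 1GB and 2GB densities. Bus widths
# below are the published figures for the RX 6800 and RTX 3080.

CHIP_BUS_WIDTH = 32          # bits of the memory bus per chip
CHIP_DENSITIES_GB = (1, 2)   # common GDDR6/GDDR6X chip capacities in GB

def vram_options(bus_width_bits: int) -> list[int]:
    """Return the 'clean' VRAM capacities a given bus width allows."""
    chips = bus_width_bits // CHIP_BUS_WIDTH
    return [chips * density for density in CHIP_DENSITIES_GB]

print(vram_options(256))  # RX 6800  (256-bit bus) -> [8, 16]
print(vram_options(320))  # RTX 3080 (320-bit bus) -> [10, 20]
```

Anything in between those options means mixing chip densities or running part of the memory at a different width, which is exactly the kind of lopsided configuration they avoid.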