What is the source of this data? Also, what is the frame rate at which the RTX 3090 renders the game when using the settings chosen for the RTX 3080?
Multiple YouTube videos.
Are you referring to me as the bigot here, based on the few previous posts I made? Just @ me rather than side-eyeing me and smirking with your "friends".
I was not targeting you personally. But there are some who are constantly in here trying to shove their opinions down others' throats. You're not one of them, in my view. Note that that can always change.
I doubt this forum keeps archives going back that far, but yeah, from 2004 to 2016 you would probably find my name in many AMD threads. It's not a "plot" or a tactic like you seem to be alluding to. I simply grew out of it; AMD is probably the biggest hardware cult there is. I used to hang out on Rage3D all the time too, and here on this very forum, analysing the ATI Flipper to find that tiny advantage over other architectures. Always shitting on Nvidia for their proprietary tech. In the end, I realized all of it makes no sense. It's a moral high ground, but who the fuck cares.
That's the exact issue. People don't seem to care that GPU prices have gone up astronomically, precisely because of nVidia's price gouging and anti-competitive practices. But by all means... let people stop caring. Soon we'll be paying over $800 for mid-range cards.
As an electrical engineer with some basic knowledge of semiconductors, I wanted to ask a simple question: how did Nvidia even survive this battle in rasterization? If you can answer that, please go ahead.
I'm not exactly sure what you mean by this question. If you mean that they survived what you called 'the biggest hardware cult there is', the answer is simple:
AMD is not the biggest hardware cult there is. To many people, only nVidia exists in the graphics space.
The majority of people I know have never heard of the R9 290X, the HD 7970, the HD 5850, the X1950 Pro, the 5700 XT... When I told a friend (who is a programmer and a gamer) that I got a good deal on an R9 Fury, she looked at me confused and said, "What is that?" Confused myself, I told her it's an AMD graphics card. She replied with, "Oh... Is that what they're called?" I remember that conversation clearly because I expected her to know that they at least existed. And she was talking about getting the GTX 980 at the time.
nVidia has more mind share and is known by pretty much every gamer who isn't exclusively a mobile gamer. ATi/AMD/Radeon only gets traction when their CPUs are doing well, and even then, it remains obscure compared to nVidia.
How did nVidia get into this position? One reason is definitely better marketing. Another is anti-competitive practices. And I'd actually put good products in last place, because the other two have caused people to buy nVidia regardless of whether the products were good or not. Prime example: Fermi.
If AMD went with a hybrid RT solution and, as per their patent, wanted a simplified version to save on silicon area/complexity at the cost of RT performance? Fine. Like I said earlier, it's a legit decision.
If they accelerated integer ML math in the shader pipeline rather than adding dedicated ML cores, again to save on silicon area, fine. It can probably crunch enough for some upscaling, or AI texture upscaling like that Xbox developer wants to do.
But with those sacrifices, I'd then expect them to be godly in rasterization. Somehow, either that fizzled on AMD's side or Nvidia's doubled shading pipelines were a surprise to them.
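To make the integer-math point a bit more concrete, here's a minimal Python sketch of the packed INT8 dot-product-with-accumulate (the DP4a-style operation) that this kind of in-pipeline acceleration is built around. The packing, the values and the helper name are just illustrative assumptions, not AMD's actual implementation:

```python
import numpy as np

def dp4a(a_packed, b_packed, acc):
    """Dot product of two packed 4x int8 values, accumulated into an int.

    Models the kind of packed-integer instruction a shader pipeline can expose
    for ML inference without dedicated matrix cores. Purely illustrative.
    """
    # Unpack the four int8 lanes from each 32-bit word (assumes a little-endian host).
    a = np.frombuffer(np.int32(a_packed).tobytes(), dtype=np.int8).astype(np.int32)
    b = np.frombuffer(np.int32(b_packed).tobytes(), dtype=np.int8).astype(np.int32)
    return acc + int(a @ b)

# One 4-wide slice of a quantized dot product: lanes 4,3,2,1 against 1,1,1,1.
print(dp4a(0x01020304, 0x01010101, acc=0))  # prints 10
```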
If you compare AMD to AMD, you will notice the huge jump they actually made. And if they can keep doing those jumps, sort of like they are doing with Ryzen... Let's just say competition will be good.
My theory is that the SRAM solution ate too much silicon area that should have gone to more CUs. These cards should have been on HBM2. Otherwise, all these sacrifices serve only the SRAM implementation. Not sure it was the right move.
I wish a tech site would dive into these architectures and explain them better.
There's a balance between bandwidth and CUs. AMD went the Infinity Cache route, likely because it's cheaper than using HBM while achieving similar results.
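As a back-of-envelope illustration of why a big on-die cache can stand in for HBM, here's a rough effective-bandwidth sketch. Every number in it (hit rate, cache and memory bandwidths) is an assumed round figure for the sake of the example, not a measured spec:

```python
# Back-of-envelope effective-bandwidth model for a big last-level cache in
# front of GDDR6. All numbers are illustrative assumptions, not real specs.

def effective_bandwidth(hit_rate, cache_bw_gbs, vram_bw_gbs):
    """Traffic served at cache speed on a hit and at VRAM speed on a miss."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

GDDR6_BW = 512.0   # GB/s, assumed 256-bit GDDR6 board
HBM2_BW = 1024.0   # GB/s, assumed HBM2 board for comparison
CACHE_BW = 2000.0  # GB/s, assumed on-die SRAM bandwidth

for hit_rate in (0.3, 0.5, 0.7):
    eff = effective_bandwidth(hit_rate, CACHE_BW, GDDR6_BW)
    print(f"hit rate {hit_rate:.0%}: ~{eff:.0f} GB/s effective (HBM2 board: {HBM2_BW:.0f} GB/s)")
```

The exact numbers don't matter; the point is that a reasonably high hit rate lets a narrower, cheaper memory bus land in the same effective range as HBM2.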
I don't think the performance drop at 4K is a bandwidth limit either. It's more that nVidia's cards scale down poorly at low resolutions because of their massive number of parallel FP ALUs. AMD's cards used to scale better than nVidia's at higher resolutions because they had a lot of unused CUs at the lower resolutions.
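A crude way to see that scaling argument is to just divide pixels per frame by FP32 lane count. The lane counts below come from the public spec sheets for an RTX 3080 and an RX 6800 XT; everything else about the sketch is deliberately naive:

```python
# Naive pixels-per-FP32-lane comparison: a wider GPU needs more pixels in
# flight to keep all of its lanes busy. Lane counts are public spec-sheet
# figures; treating "pixels per lane" as a utilization proxy is a deliberate
# oversimplification.

resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}
fp32_lanes = {"RTX 3080 (Ampere)": 8704, "RX 6800 XT (RDNA 2)": 4608}

for res_name, pixels in resolutions.items():
    for gpu, lanes in fp32_lanes.items():
        print(f"{res_name:>5} on {gpu}: ~{pixels / lanes:,.0f} pixels per FP32 lane")
```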
I don't know if you've noticed, but the design philosophy of AMD and nVidia has kind of flipped.