I'd like to preface with this: TFLOPs as a measure of gaming performance is complete and utter bollocks.
Exhibit 1: NVIDIA rates their 3080 as having 30 TFLOPs of compute, yet it's only about twice as fast as the 2080, which at their official clocks has 10.77 TFLOPs.
The first problem is that NVIDIA massively undersells their clock speeds when it comes to Turing, rating the 2080 at only a 1710 MHz boost when most retail units have no issue reaching 2000 MHz.
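For reference, here's a quick sketch of the arithmetic behind these figures: FP32 TFLOPs is just shader cores × 2 ops per clock (FMA) × clock speed. The core counts and reference/Founders Edition clocks are NVIDIA's published specs; the 2000 MHz "retail boost" is the figure from above, not an official spec, and the exact 2080 number depends on which official clock you pick.

```python
# Sketch: FP32 TFLOPs = cores x 2 ops/clock (FMA) x clock (MHz) / 1e6.
# Core counts and reference/FE clocks are NVIDIA's published specs;
# the 2000 MHz clock is the typical retail boost claimed above.

def fp32_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1e6

print(fp32_tflops(2944, 1710))  # 2080, reference boost:       ~10.1 TFLOPs
print(fp32_tflops(2944, 1800))  # 2080, Founders Edition boost: ~10.6 TFLOPs
print(fp32_tflops(2944, 2000))  # 2080, typical retail boost:   ~11.8 TFLOPs
print(fp32_tflops(8704, 1710))  # 3080, reference boost:        ~29.8 TFLOPs
```

Note how the 2080's real-world number is closer to 12 than to 10, which is exactly why the spec-sheet TFLOPs gap between the two cards overstates the actual performance gap.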
To further complicate matters, per-TFLOP performance on RDNA isn't even 1:1 with Turing, which in turn isn't 1:1 with Ampere, making TFLOPs as a measurement of performance utterly pointless.
How do we solve this? Well, we can't, really, but the best we can do is use known performance differentials to rate the different cards against a common scale (worked out in the sketch after the list below). To make this easy, let's use RDNA2 as the baseline.
Series X: 12.2 TFLOPs, said to be roughly equivalent to the 2080.
2080: 12.2 TFLOPs (by that equivalence).
2080 Ti: 15.3 TFLOPs, or roughly 25% faster than the 2080.
3080: 20.74 TFLOPs, or 60 to 80% faster than the 2080; I used 70% as the midpoint.
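Here's that normalization made explicit in a minimal sketch: pin the 2080 to the Series X's 12.2 TFLOPs (since they're said to perform alike) and scale the other cards by the differentials above. The 1.25x and 1.70x multipliers are the estimates from this post, not measured values.

```python
# Normalize everything to RDNA2-equivalent TFLOPs. The baseline and the
# multipliers come from the performance differentials cited above.

BASELINE = 12.2  # Series X / RTX 2080, in RDNA2-equivalent TFLOPs

cards = {
    "RTX 2080":    BASELINE * 1.00,
    "RTX 2080 Ti": BASELINE * 1.25,  # ~25% faster than the 2080
    "RTX 3080":    BASELINE * 1.70,  # midpoint of the 60-80% claim
}

for name, tflops in cards.items():
    print(f"{name}: {tflops:.2f} RDNA2-equivalent TFLOPs")
# -> 12.20, 15.25, 20.74, matching the figures in the list above
```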
Moving on to RT now.
NVIDIA initially used gigarays as a metric, boasting about 10 gigarays/s for its 2080 Ti.
Microsoft came out later, obviously using different metrics, claiming 380 billion intersections per second for its Series X. We know by now the two metrics weren't the same at all.
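Just to show why the two numbers can't be lined up against each other: if you naively divide them, each "gigaray" would have to correspond to dozens of intersection tests, which is plausible for a ray traversing a BVH and tells you nothing about which GPU actually traces rays faster. Purely illustrative arithmetic:

```python
# Illustrative only: these two marketing metrics measure different things.
nvidia_rays_per_s = 10e9          # 2080 Ti: 10 gigarays/s
ms_intersections_per_s = 380e9    # Series X: 380 billion intersections/s

print(ms_intersections_per_s / nvidia_rays_per_s)  # 38.0 "tests per ray"
# A single ray performs many box/triangle tests while walking a BVH,
# so this ratio says nothing about relative RT performance.
```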
Recently, NVIDIA ostensibly used AMD-style metrics to brag about their RT performance, taking a similar approach to Microsoft, who claimed the Series X has the equivalent of 25 TFLOPs of combined raster (12) + RT (13) performance.
They said their 2080 Ti has 34 RT-TFLOPs.
Now it gets even more nebulous, because it's very unclear how NVIDIA arrived at those numbers; they also claimed "58 RT-TFLOPs" for the 3080.
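If, and this is a big if, NVIDIA's RT-TFLOPs follow the same additive accounting Microsoft used (raster plus an RT-equivalent figure summed together), we can back out the implied RT-only component by subtracting each card's raster TFLOPs. The subtraction is my assumption about how the math was done, nothing NVIDIA has confirmed; the raster figures use reference boost clocks.

```python
# Microsoft's accounting for the Series X: 12 raster + 13 RT = 25 "TFLOPs".
# ASSUMPTION: NVIDIA summed its numbers the same way, so the implied
# RT-only component is the claimed total minus the card's raster TFLOPs.

claims = {
    # name: (claimed combined "RT-TFLOPs", raster TFLOPs at reference boost)
    "RTX 2080 Ti": (34, 13.4),  # 4352 cores x 2 x 1545 MHz
    "RTX 3080":    (58, 29.8),  # 8704 cores x 2 x 1710 MHz
}

for name, (combined, raster) in claims.items():
    print(f"{name}: implied RT component ~{combined - raster:.1f} TFLOPs")
# -> ~20.6 and ~28.2, vs the Series X's 13 -- if, and only if,
#    the accounting really is a straight sum.
```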
John from Digital Foundry said Minecraft on the Series X with RT hovered between 30 and 60 fps at 1080p. The RTX 2060 with DLSS on gets 60 fps at 1080p, and it was said that 1440p with DLSS worked out OK.
With that said, it would seem the RT performance in the Minecraft demo on the Series X isn't too far away from an RTX 2060 without DLSS. We could be generous and put it in the ballpark of an RTX 2070.
What's the takeaway from all of this? If the Minecraft demo is even remotely representative of RDNA2's RT performance, I'm afraid it'll get stomped by Ampere, because it's already losing out to Turing. Do keep in mind this demo was put together in one month by a single engineer, so final performance might be better.
Though as it stands, I would be shocked if the RT performance of RDNA2 came anywhere near Ampere, because as far as we can tell right now, it's behind Turing.
Discuss.