
AMD RX 6000 'Big Navi' GPU Event | 10/28/2020 @ 12PM EST/9AM PST/4PM UK

But there is only a 10% difference between the 3080 and the 3090, and the latter is the full GA102.
How could a 3080 Ti fit into this? What chip would it be based on?

The 3080 Ti is rumoured to be based on GA102 with 4 SMs disabled and 12 GB of GDDR6X, probably at the same price as the 6900 XT.

Right now the 3090 looks completely stupid price/performance-wise for gamers, so Nvidia needs something to counter the 6900 XT.

Same reason they're using an even further cut-down version of GA102 as a 3070 Ti to combat the 6800: the 6800 makes the 3070 completely redundant for only $80 more.
 

notseqi

Gold Member
But there is only a 10% difference between the 3080 and the 3090, and the latter is the full GA102.
How could a 3080 Ti fit into this? What chip would it be based on?
Won't happen today. 3080 cards, in a way, aren't even out yet, apart from the enthusiast part enjoyed here.

3080 Ti... dunno. Upping VRAM/clocks/driver performance, or deactivated 3090s?
Why did Nvidia preemptively lower prices? Something will happen. Or it has already happened.
 

-COOLIO-

The Everyman
It's 10 GB of GDDR6X vs 16 GB of GDDR6. I wonder which will matter more... probably the 3080 will be too slow by the time games require that much VRAM anyway.
My gut tells me that 10 GB should be fine. Both consoles and enthusiast PC gamers are targeting the same 4K resolution, and the Xbox Series X is working with only 13.5 GB of memory for devs to use, only 10 GB of which runs at 560 GB/s. The other 3.5 GB will probably be used for non-graphics tasks.
 

nemiroff

Gold Member
Looks very promising so far, and enough so that I can see myself seriously considering one of their RDNA 3 cards (I need my RTX fix this round). It's great to see AMD as an agile player now, nipping at Nvidia's heels in many respects. Good stuff.
 
Last edited:

Rikkori

Member
Keep this in mind regarding RT performance: Ampere is faster than Turing by quite a bit BUT in actual games (that aren't path traced) the difference is almost non-existent (compared to just the straight power increase). So just because in a PURE RT scenario it's faster DOESN'T MEAN you will actually run games with RT faster - because they're still hybrid rendered!

Disclaimers out of the way, 6800 XT RT = 2080 Ti RT performance (Pure)


See tests here for Ampere vs Turing:

 
Last edited:

GHG

Gold Member
So what's this about ray tracing?

For existing games with only RTX support, will the developers need to go back in and add support that is compatible with AMD's RDNA 2 GPUs?

Or will you be able to play Metro Exodus with ray tracing on out of the box with an RDNA 2 card at launch?
 

FireFly

Member
Keep this in mind regarding RT performance: Ampere is faster than Turing by quite a bit BUT in actual games (that aren't path traced) the difference is almost non-existent (compared to just the straight power increase). So just because in a PURE RT scenario it's faster DOESN'T MEAN you will actually run games with RT faster - because they're still hybrid rendered!
Control and Metro Exodus see the 3080 widen the gap with Turing when RTX is enabled, and these are not path traced titles:


I suspect that in mixed titles the 6800 XT will drop to somewhere between the 3070 and the 3080.
 
So what's this about ray tracing?

For existing games with only RTX support, will the developers need to go back in and add support that is compatible with AMD's RDNA 2 GPUs?

Or will you be able to play Metro Exodus with ray tracing on out of the box with an RDNA 2 card at launch?

Initially we all just assumed they were using DirectX Raytracing (DXR), so the same code would work regardless of vendor.

But it seems that might not be the case. It's not 100% clear right now, but Nvidia may have done some customizations on top of DXR for their RTX technology/code.

Almost all of the RTX games on PC at the moment were sponsored by Nvidia in some way, normally with marketing support, engineering support (possibly even coding it for the developers) and other technical help.

That implementation may not be compatible with standard DXR API calls on AMD or Intel cards, so in the end the "RTX" games may not support AMD or Intel cards for RT right now.

The developers, if they are so inclined, may have to add DXR support in a future patch/update, essentially recoding it.

There may also be contract terms that prohibit a DXR implementation for a limited time (or indefinitely?) if they are RTX titles; a delay of six months to a year seems most likely.

There is also the possibility that smaller developers Nvidia helped implement RTX for free may not have the engineering resources/budget/manpower to implement DXR themselves, or may see it as not worth the time/money involved.

That appears to be where we sit right now, but we should know more in the coming week or two.
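For reference, the vendor-neutral path is just the standard D3D12 feature check. Below is a minimal sketch (assuming the Windows SDK headers, nothing game-specific) of how an engine would test whether DXR is available on whatever card is installed:

```cpp
// Minimal sketch: querying DXR (DirectX Raytracing) support through plain
// D3D12. This check behaves the same on Nvidia, AMD and Intel hardware.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")  // MSVC: link against the D3D12 runtime

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    // Create a device on the default adapter at the D3D12 baseline feature level.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available");
        return 1;
    }

    // OPTIONS5 reports the ray tracing tier; TIER_1_0 or higher means the
    // driver exposes DXR, regardless of which vendor made the GPU.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::puts("DXR supported on this adapter");
    } else {
        std::puts("DXR not supported on this adapter");
    }
    return 0;
}
```

If a shipped game gates its RT path on something Nvidia-specific instead of a check like this, that would explain why a patch is needed before RDNA 2 cards can turn it on.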
 

Rikkori

Member
Control and Metro Exodus see the 3080 widen the gap with Turing when RTX is enabled, and these are not path traced titles:


I suspect that in mixed titles the 6800 XT will drop to somewhere between the 3070 and the 3080.

Nah, look at the vid. Metro Exodus: you can see there's about a 30% difference between the 2080 Ti and the 3080, which is in line with the raw power difference between the two. Control gets a bit extra, but it's still 36% (non-RTX) vs 40% (RTX on).

The difference in RT power simply doesn't jump out unless you're path tracing.
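A quick way to see that point is to compare the "RT tax" (RT-on frame rate divided by RT-off frame rate) on each card rather than the raw numbers. The frame rates below are made-up placeholders chosen only to reproduce the 36%/40% figures above, not benchmark results:

```cpp
// Illustrative arithmetic only: the fps values are hypothetical placeholders.
// If the RT-on/RT-off ratio is roughly the same on both cards, the newer
// card's extra RT hardware isn't showing up in hybrid-rendered games; the
// gap comes from raw raster power.
#include <cstdio>

int main() {
    double fps_2080ti_off = 100.0, fps_2080ti_on = 60.0;  // hypothetical
    double fps_3080_off   = 136.0, fps_3080_on   = 84.0;  // hypothetical

    std::printf("RT cost ratio: 2080 Ti %.2f, 3080 %.2f\n",
                fps_2080ti_on / fps_2080ti_off,                  // 0.60
                fps_3080_on / fps_3080_off);                     // 0.62
    std::printf("3080 uplift, RT off: %.0f%%\n",
                (fps_3080_off / fps_2080ti_off - 1.0) * 100.0);  // 36%
    std::printf("3080 uplift, RT on:  %.0f%%\n",
                (fps_3080_on / fps_2080ti_on - 1.0) * 100.0);    // 40%
    return 0;
}
```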
 

notseqi

Gold Member
Ridiculous post. DLSS provides image quality that matches a target resolution while rendering at a lower one, and the resulting boost in frame rate is immense. Control at native 4K with ray tracing fully enabled runs at about 25 frames per second on an RTX 3080; with DLSS Quality Mode, the frame rate jumps to the mid-fifties and low sixties.
I think I found a better way to phrase it: I don't mind you playing RT early access but don't shit on me for not playing a game I don't care about.

Which a lot of Nvidia guys already have been doing here.
 
Ehhh, I think they've done an amazing job with the Infinity Cache to close the gap and cover the weaknesses of a 256-bit bus, but I'm calling BS on 1664 GB/s of "effective bandwidth".

Most likely it's a best-case scenario or marketing nonsense. Having said that, what they do have seems really impressive; I guess this was the best way for them to market it to non-engineers and the average gamer. I'm still calling BS though.
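The raw part of that claim is easy to check: a 256-bit bus with 16 Gbps GDDR6 is 512 GB/s, so the "effective" figure has to come from blending that with the on-die Infinity Cache at some assumed hit rate. A rough sketch of that kind of blended estimate (the cache bandwidth and hit rate here are illustrative assumptions, not AMD's published numbers):

```cpp
// Rough sketch of an "effective bandwidth" estimate. The DRAM figure matches
// the 6800 XT spec (256-bit * 16 Gbps); the cache bandwidth and hit rate are
// assumed placeholders, not AMD's published values.
#include <cstdio>

int main() {
    double bus_width_bits = 256.0;
    double gddr6_gbps     = 16.0;                               // per-pin data rate
    double dram_bw        = bus_width_bits * gddr6_gbps / 8.0;  // 512 GB/s

    double cache_bw = 2000.0;  // assumed Infinity Cache bandwidth, GB/s
    double hit_rate = 0.58;    // assumed fraction of traffic served from cache

    // Weighted average of where the traffic is actually served from.
    double effective_bw = hit_rate * cache_bw + (1.0 - hit_rate) * dram_bw;

    std::printf("Raw DRAM bandwidth:  %.0f GB/s\n", dram_bw);       // 512
    std::printf("Blended 'effective': %.0f GB/s\n", effective_bw);  // ~1375
    return 0;
}
```

Push the assumed hit rate and cache bandwidth high enough and you can reach headline numbers like 1664 GB/s, which is exactly why it reads as a best-case marketing figure rather than something you'd see sustained in every game.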
 
Where are the ray tracing performance benchmarks?
This is all we have for now

The Nvidia 3080 looks to be about 33% faster at ray tracing than the AMD 6800 XT (and the gap should get even worse for the 6800 XT when the 3080 uses DLSS).

 
Last edited:
So.... it seems like the rumored clock speeds were wrong. Very wrong.

With rumors of 2.5GHz, the reality isn't even close.

6800 XT: Game Clock 2015 MHz (Boost Clock 2250 MHz)
6800: Game Clock 1815 MHz (Boost Clock 2105 MHz)

And AFAIK the game clock is the REAL clock, as in that's the number that can actually be sustained in game. But it will be interesting to see where it lands when people get to test these cards for themselves.

Here's the way Steve from Gamers Nexus describes AMD's Boost Clock...

"The peak opportunistic clock. Boost clock in AMDs spec sheet could mean for a BLIP, for a couple milliseconds and under optimal conditions"

So AMD's boost clock is BS and barely worth mentioning, but it does look nice on a spec sheet.

And what might this say about the PS5's "real" clockspeed?

At this point it doesn't look very likely to me that the PS5 can actually spend much time at all at 2.23 GHz. It may only be able to "BLIP" up to that clock for a few milliseconds at a time. And when these new GPUs are out and tested, I'm gonna bet that NONE of them will be able to hold 2.23 GHz for any length of time worth mentioning. And if they can't hold that clock speed, there's NO reason to believe that the GPU in the PS5 can either.

It sure does look like Microsoft gave us the Game Clock while Sony gave us the Boost Clock.

AMD's new RDNA 2 GPUs seem to range between 1815 and 2015 MHz for actual sustained clocks.

Giving the PS5 the benefit of the doubt and giving it the upper end of that range, 2015 MHz, the PS5's actual sustained TFLOP count would be about 9.28 TFLOPS.

Still a bit too early to know this for a fact, but the facts that we have now are definitely pointing STRONGLY in this direction.
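The ~9.28 TFLOPS figure is just the standard FP32 formula (CUs × 64 shaders × 2 ops per clock × clock speed) with the assumed 2015 MHz sustained clock plugged in for the PS5's 36 CUs:

```cpp
// Worked arithmetic behind the figure above. The 2.015 GHz clock is this
// post's assumption (borrowed from the 6800 XT Game Clock), not Sony's spec;
// Sony quotes up to 2.23 GHz, which gives the official 10.28 TFLOPS.
#include <cstdio>

int main() {
    const int    cus            = 36;     // PS5 compute units
    const int    shaders_per_cu = 64;     // FP32 lanes per CU
    const int    flops_per_clk  = 2;      // fused multiply-add counts as 2 ops
    const double assumed_ghz    = 2.015;  // assumed sustained clock
    const double boost_ghz      = 2.23;   // Sony's quoted peak clock

    double sustained = cus * shaders_per_cu * flops_per_clk * assumed_ghz / 1000.0;
    double peak      = cus * shaders_per_cu * flops_per_clk * boost_ghz   / 1000.0;

    std::printf("Assumed sustained: %.2f TFLOPS\n", sustained);  // 9.29 (9.285, truncated above to 9.28)
    std::printf("Quoted peak:       %.2f TFLOPS\n", peak);       // 10.28
    return 0;
}
```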
 

Ascend

Member
Initially we all just assumed they were using DirectX Raytracing (DXR), so the same code would work regardless of vendor.

But it seems that might not be the case. It's not 100% clear right now, but Nvidia may have done some customizations on top of DXR for their RTX technology/code.

Almost all of the RTX games on PC at the moment were sponsored by Nvidia in some way, normally with marketing support, engineering support (possibly even coding it for the developers) and other technical help.

That implementation may not be compatible with standard DXR API calls on AMD or Intel cards, so in the end the "RTX" games may not support AMD or Intel cards for RT right now.

The developers, if they are so inclined, may have to add DXR support in a future patch/update, essentially recoding it.

There may also be contract terms that prohibit a DXR implementation for a limited time (or indefinitely?) if they are RTX titles; a delay of six months to a year seems most likely.

There is also the possibility that smaller developers Nvidia helped implement RTX for free may not have the engineering resources/budget/manpower to implement DXR themselves, or may see it as not worth the time/money involved.

That appears to be where we sit right now, but we should know more in the coming week or two.
And that is why I dislike nVidia. It's basically GameWorks all over again. RT is still too heavy in terms of performance to be really viable. You need a 4K-class card to run RT respectably at 1080p... So it's still a niche feature. But I really don't like the closed-off approach nVidia takes.
 

notseqi

Gold Member
"The peak opportunistic clock. Boost clock in AMDs spec sheet could mean for a BLIP, for a couple milliseconds and under optimal conditions"

So AMD's boost clock is BS and barely worth mentioning, but it does look nice on a spec sheet.

And what might this say about the PS5's "real" clockspeed?
Save peak clock output in InfinityFabric or store it driver side rolling hard on similar scenes being repeated, less rendering for hard and repeating details? I don't know shit but I'd integrate something like this for the next ~60frames
 
And that is why I dislike nVidia. It's basically GameWorks all over again. RT is still too heavy in terms of performance to be really viable. You need a 4K-class card to run RT respectably at 1080p... So it's still a niche feature. But I really don't like the closed-off approach nVidia takes.

I don't know if that's fair.

Nvidia literally pioneered this. They were the first so it's not exactly surprising that they tuned everything that they were doing toward THEIR GPUS.

Why would Nvidia spend any time trying to make sure that the way they did ray tracing would be compatible with unknown future AMD and Intel GPUs?
 
If Sony's RT is indeed the same RDNA 2 implementation then it's DOA. It's 100% dependent on CU count, as there is 1 Ray Accelerator per CU. That would give the XSX a 45% advantage, although it runs at slower clocks, making the overall advantage lower. Still, I'd guess 30% or more.
 

SantaC

Member
This is the liar who claimed that Big Navi would boost to 2500 MHz. I called him out but he won't answer.


What a shitty dude. Just made some numbers up for attention.
 
Last edited:
Save peak clock output in InfinityFabric or store it driver side rolling hard on similar scenes being repeated, less rendering for hard and repeating details? I don't know shit but I'd integrate something like this for the next ~60frames

What you said is complete nonsense. You just threw a bunch of technology buzzwords at me.

You could write a decent Star Trek episode: "We'll channel the tachyons through the deflector array to charge the Klingon time crystals." :messenger_winking:
 
This is the liar who claimed that Big Navi would boost to 2500 MHz. I called him out but he won't answer.


I'll believe this when I see it reported in MSI Afterburner, because all the other tweets about RDNA 2's clock speeds have proven to be BS at this point.
 
Last edited:
This is the liar who claimed that Big Navi would boost to 2500 MHz. I called him out but he won't answer.


He was discussing AIB partner models with an Overclock.

The boost limit in the BIOS for the AIB models he was talking about was set to 2500+ MHz, and he said game clocks were between 2.3 and 2.4 GHz.

It is heavily implied that he works for Sapphire (an AMD-exclusive partner) or at least has access to their cards (a tester? friends with someone? who knows).

We will see what AIB cards clock in at once they are revealed. I wouldn't count him out just yet; he seems to be quite credible in Twitter tech circles.
 

DonMigs85

Member
Was thinking of defecting to Ampere from my old RX 570 but I think I'm sticking with AMD. 6800 XT seems like the better buy since it's only $70 more than the 6800. And I'm just sticking with my 75Hz FreeSync monitor until it breaks lol
 

notseqi

Gold Member
I don't know if that's fair.

Nvidia literally pioneered this. They were the first so it's not exactly surprising that they tuned everything that they were doing toward THEIR GPUS.

Why would Nvidia spend any time trying to make sure that the way they did ray tracing would be compatible with unknown future AMD and Intel GPUs?
I'm not hating, but the 'why now' argument is weird to me: people are paying for incomplete work and therefore it's still early access. I know that DLSS kinda negates that, but we would not be here, in my mind, if Nvidia didn't get bonked hard that one time on 4K output. Just do good 4K on any architecture, but no, they're trying to get you on RT. And wouldn't you have expected it on the morning of the third day...? DLSS arrived.

Every anti-aliasing tech has been put on us because there's just stuff these cards can't do, which would be OK with me. But no, we need shit magic, all the time.
 

SantaC

Member
I'll believe this when I see it reported in MSI Afterburner, because all the other tweets about RDNA 2's clock speeds have proven to be BS at this point.
Well, it is BS, because AMD themselves said 2250 MHz max. So why would AIBs go 250 MHz over that? Not possible.

He was just making up numbers to get popularity. He is a fucking nobody.
 
Last edited:

GHG

Gold Member
If Sony's RT is indeed the same RDNA 2 implementation then it's DOA. It's 100% dependent on CU count, as there is 1 Ray Accelerator per CU. That would give the XSX a 45% advantage, although it runs at slower clocks, making the overall advantage lower. Still, I'd guess 30% or more.

Yeah, you mentioned it, but frequency plays a part.

The comparison is pretty much 1:1 on the 6000 series cards because the clock speeds are similar across the board; however, the PS5's GPU is clocked higher, which makes a direct comparison more difficult. With that said, the Series X should have an advantage here if Microsoft can get their SDK in order.

This is the liar who claimed that Big Navi would boost to 2500 MHz. I called him out but he won't answer.


What a shitty dude. Just made some numbers up for attention.


So.... it seems like the rumored clock speeds were wrong. Very wrong.

With rumors of 2.5GHz, the reality isn't even close.

Wait for AIB card announcements.

All the leaks we had in the last couple of weeks originated from AIBs or people who managed to get their hands on AIB samples.

AMD reference cards almost always clock in lower than what the AIBs manage to squeeze out.
 

notseqi

Gold Member
What you said is complete nonsense. You just threw a bunch of technology buzzwords at me.

You could write a decent Star Trek episode: "We'll channel the tachyons through the deflector array to charge the Klingon time crystals." :messenger_winking:
As I said, I don't know shit and I'm waiting for benchmarks.
 

GHG

Gold Member
Well, it is BS, because AMD themselves said 2250 MHz max. So why would AIBs go 250 MHz over that? Not possible.

He was just making up numbers to get popularity. He is a fucking nobody.

I don't think you're understanding how this works. Wait a couple of weeks before going on the offensive.

AMD only ever quotes guaranteed frequencies. I have a 3900X that boosts to 4.7 GHz on multiple cores (in a single CCX) and 4.8 GHz on a single core out of the box, no overclocking. According to you that shouldn't be possible, because AMD quotes a 4.6 GHz single-core boost for my CPU.

AIBs always have factory-overclocked cards. Depending on the quality of the silicon, 2.5 GHz doesn't sound like a massive stretch.
 

GHG

Gold Member
What benchmarks?!
You don't need benchmarks to state that.

True. What I mean though is whether it can actually be achieved in real world gaming scenarios or whether it's simply theoretical based on the solution they've come up with. It's quite a big claim.
 