
AMD RX 6000 'Big Navi' GPU Event | 10/28/2020 @ 12PM EST/9AM PST/4PM UK

duhmetree

Member
I think the Radeon launch success/failure will hinge on the reviewers.

Think back to Ryzen: it only seemed to get popular after a variety of YouTubers started to switch to Ryzen for their own actual builds. Up until that point most of them were like "yeah, Ryzen is great! (but I'm still using an i7/i9)" and it all felt a bit hypocritical - encouraging people to buy X while using Y because they want that sweet sweet affiliate cut. I get it, but it's obviously more reassuring if they use it themselves without being forced.

The real proof of the pudding will be whether influencers actually start using Radeon themselves.
This is the first iteration. A really nice foundation to build off of. People took notice with Zen. Zen 2 was when the 'switch' started. Zen 3 is domination.

Nvidia had better not take it lightly. AMD will only get better. Crazy to think we have Intel joining the fray as well.
 

llien

Banned
Pretty sure most games, at least higher-budget titles, are going to have RT going forward.
Yeah.
A checkbox of sorts.
Remind me how that works in World of Warcraft.
That amazing effect of dropping framerates for barely any visual benefit.

The amazing part of the DXR story is that Huang hasn't learned from the "how NVidia killed OpenGL with greed" story; most, if not all, "RT games" out there use NV's proprietary crap.
 

RaZoR No1

Member
Who would have thought that AMD would close the gap / overtake its competition in both CPU and GPU? What a comeback!

Btw: is there a page, or can someone explain to me in simple words, why the Infinity Cache is such a big deal? I understand the concept of faster memory = better performance, but what's so special about Infinity Cache if it is so small? Isn't the normal VRAM still slowing the GPU down? What can you fit in there, and why only start now with this kind of technique?

Additionally, it would be interesting to see the first benchmarks with a CPU other than a Ryzen 5000 / 500-series chipset, to know how good the GPUs really are.
 
That's fair. Everything I'm saying is still early and subject to change. We won't truly KNOW until the GPUs are out in the wild and tested.

But at this point, IMO, it looks like all of those sky high clocks are just fantasy.

Today, AMD announced their new GPUs with game clocks (sustained performance) at 1.8 to 2.0 GHz - this is NORMAL. GPUs for years have been able to hit these frequencies and hold them.

2.1 GHz has been the upper end, but achievable.

2.2 GHz - I don't think there has been ANY GPU made so far that can sustain this clock. Even a watercooled GPU can't quite do it.

So when we hear rumors of RDNA2 GPUs that can run at 2.2 to 2.5 GHz, that's a range that has so far only been possible with liquid nitrogen.

I'm not saying that these clocks are impossible to hit without LN2 (clocks get faster over time, and eventually we'll get there). But I am saying that if a GPU says 2.4 GHz on the spec sheet but in reality can only blip up to that frequency for a few milliseconds, that just doesn't count for anything IMO.
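For what it's worth, a crude way to check "sustained vs. blip" yourself is just to poll the clock the driver reports while a game or benchmark loops. This is only a rough sketch under assumptions (Linux with the amdgpu driver, GPU exposed as card0); monitoring tools like GPU-Z or the Radeon overlay do the same job more comfortably:

```python
# Rough sketch (assumes Linux + amdgpu driver, GPU exposed as card0):
# poll the driver-reported shader clock once per second to see whether an
# advertised boost clock is actually held or only hit for a moment.
# pp_dpm_sclk lists the DPM states and marks the active one with '*'.
import time

SCLK_PATH = "/sys/class/drm/card0/device/pp_dpm_sclk"  # adjust cardN if needed

for _ in range(120):  # two minutes of samples while the workload runs
    with open(SCLK_PATH) as f:
        active = [line.strip() for line in f if line.strip().endswith("*")]
    if active:
        print(active[0])  # e.g. "1: 1905Mhz *"
    time.sleep(1)
```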

They revised the hardware pipeline to hit higher clocks, like they said in all past presentations.

Hell, even with RDNA1 I could OC to a sustained 2.1 GHz on air (left plateau is graphics test 1, right plateau is GT 2, in between are loading times):

5700xtocmaxk9kt4.png


Why the hell shouldn't they be able to hit a somewhat sustained 2.2 GHz when even the PS5, in a power-limited envelope, can do it?
 
If Sony's RT is indeed the same RDNA2 implementation then it's DOA. It's 100% dependent on CU count, as there is 1 Ray Accelerator per CU. That would give the XSX a 45% advantage, although it runs at slower clocks, making the overall advantage smaller. Still, I'd guess 30% or more.

If those CUs run >20% more cycles in the same time budget, it works out to exactly the same difference as with TFLOPS.
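To make that arithmetic concrete, here is a back-of-the-envelope sketch (mine, not from the thread) that treats ray-accelerator throughput as simply CUs x clock, using the publicly announced console figures and ignoring everything else (memory, caches, scheduling):

```python
# Back-of-the-envelope only: treat RT throughput as CU count x clock,
# the same first-order math behind the TFLOPS comparison.
xsx_cus, xsx_clock_ghz = 52, 1.825   # Series X: fixed clock
ps5_cus, ps5_clock_ghz = 36, 2.23    # PS5: variable clock, 2.23 GHz cap

xsx_ra = xsx_cus * xsx_clock_ghz     # ~94.9 "RA-GHz"
ps5_ra = ps5_cus * ps5_clock_ghz     # ~80.3 "RA-GHz"

print(f"CU-count advantage (XSX):  {xsx_cus / ps5_cus - 1:.0%}")             # ~44%
print(f"Clock advantage (PS5):     {ps5_clock_ghz / xsx_clock_ghz - 1:.0%}")  # ~22%
print(f"Net RA throughput (XSX):   {xsx_ra / ps5_ra - 1:.0%}")               # ~18%
```

Which is why the net gap lands back at roughly the same ~18% as the TFLOPS figures, rather than the 30-45% suggested by CU count alone.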
 

longdi

Banned

AMD's own footnote:

Boost Clock Frequency is the maximum frequency achievable on the GPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads. GD-151

‘Game Frequency’ is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary. GD-147
 
AMD's own footnote:

Boost Clock Frequency is the maximum frequency achievable on the GPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads. GD-151

‘Game Frequency’ is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary. GD-147


AMD-Radeon-RX-5700XT-Series-Specs-E3.jpg



Yet:


5700xtocmaxk9kt4.png



Watch some Hardware Unboxed reviews and you will see that a lot of AIB 5700 XTs actually run fairly consistently around the promoted boost clock (yes, this is somewhat game dependent, +/- 75 MHz).

Not saying this must automatically be the case for RDNA 2, but I don't think it's all that unlikely.
 

Panajev2001a

GAF's Pleasant Genius
True, but for RT it's still a tangible difference, as RT is power hungry.

I am not sure how what he said can be true and yet not true. One console can shoot more parallel first-bounce rays, and the other can more quickly complete the ray intersection and shading steps and shoot secondary rays faster. Whenever there are dependent calculations, the higher clock speed (of more than just the CUs) will help.

We are still looking at an 18% overall TFLOPS difference, and on one side you might have the clock speed throttled down in some rare instances, while on the other you have the same amount of key shared HW (rasteriser, triangle/primitive setup, ACEs and HW scheduler, geometry engine, ROPs, etc.) all running at a significantly higher clock, even in the cases where there might be a couple percent reduction in clock speed.
 

00_Zer0

Member
Yeah, you mentioned it, but frequency plays a part.

The comparison is pretty much 1:1 on the 6000 series cards because the clock speeds are similar across the board; however, the PS5's GPU clock is higher, which makes a direct comparison more difficult. With that said, the Series X should have an advantage here if Microsoft can get their SDK in order.





Wait for AIB card announcements.

All the leaks we had in the last couple of weeks originated from AIBs or people who managed to get their hands on AIB samples.

AMD reference cards almost always clock in lower than what the AIBs manage to squeeze out.
Did AMD say AIB partner cards are launching on Nov. 18 or just AMD editions? Any indication when AIB partners are allowed to unveil their cards and specs?
 

rofif

Can’t Git Gud
My favorite part about this video is when he goes over the Infinity Cache and immediately wonders what bullshit graphical feature Nvidia might come up with that intentionally tanks performance on the AMD cards.
Since when is Nvidia developing games? It's up to devs to use these features, and as an Nvidia owner right now, I don't care how it runs on AMD. I just want good-looking games that run well.
 

ZywyPL

Banned
I think anyone still believing in/hoping for the 2.5 GHz clock is setting himself up for a huge disappointment. Maybe with a custom loop, or at least triple-slot cooling, the cards will be able to maintain that ~2.2 GHz clock, which is still pretty damn high, but that's about it.
 

notseqi

Gold Member
What you said is complete nonsense. You just threw a bunch of technology buzzwords at me.

You could write a decent Star Trek episode: "We'll channel the tachyons through the deflector array to charge the Klingon time crystals." :messenger_winking:
You're welcome to answer this at a later point:
If you render a scene at a much higher FoV than set/allowed by the game, I imagine a way to save that (somewhere) for use a few frames later. Kind of like cameras capturing a wider FoV and being able to crop out the sides to reduce shaky footage.
I threw one AMD-buzzword at you btw.
 

marquimvfs

Member
Since when is Nvidia developing games? It's up to devs to use these features, and as an Nvidia owner right now, I don't care how it runs on AMD. I just want good-looking games that run well.
Oh, c'mon. Nvidia has played dirty for as long as we can remember, like when they forced Ubisoft to remove DirectX 10.1 from Assassin's Creed 1 to make it compliant with some stupid "The Way It's Meant to Be Played" program, or something like that, just because it was running way better on AMD hardware. The list is immense: the tessellation fiasco, the PhysX launch and so on. They always do something to sucker punch AMD and everybody knows it, albeit it's heavily protected by NDAs.
 

GHG

Gold Member
You are just overestimating your 3900x.

No, I'm not overestimating anything; I know exactly how my chip behaves. Some people have managed to get their 3900Xs to 4.7 GHz 24/7 all-core overclocks with the right cooling setups and voltages.

If your chip is struggling then you can do the following:
  • Make sure your motherboard is up to scratch - a good X570 motherboard with strong VRMs is required. I have an X570 Unify. You also want a motherboard with two 8-pin CPU power connectors.
  • Make sure both of the above-mentioned power connectors have a cable connected; some people say "just one is enough". Yeah, one is enough to get the thing up and running, but the CPU will not perform optimally.
  • Self-explanatory, but make sure you're using a good cooling solution - a beefy air cooler or a good AIO with correct contact over the chiplet areas is required (not all coolers offer this, particularly on the AIO side, so it needs to be checked):
ryzen3k300.png

  • Power plans - if you are running the default Windows Ryzen power plan then you're on the road to nowhere. Try one of 1usmus's power plans and see how your chip behaves.
If you've tried all of the above and your chip is still not performing well frequency-wise then, sorry, you lost the silicon lottery. But just because your chip behaves that way doesn't mean everyone else's will, nor does it have any implications for how RDNA 2 GPUs will behave frequency-wise, especially when we are shooting in the dark at this stage. Let's revisit and discuss once the test results are out in the open.
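As a quick sanity check on whether an all-core clock is actually being held rather than just spiking, something like this rough sketch (my own, assuming Python with the psutil package) can log the reported frequency while a stress test runs; note that psutil's reading can be coarse on some platforms, so a hardware monitor like HWiNFO is the more precise tool:

```python
# Rough sketch (assumes the psutil package is installed): sample the reported
# average core frequency once per second during a stress run, so sustained
# clocks and momentary spikes are easy to tell apart.
import time
import psutil

samples = []
for _ in range(60):                 # one minute of samples
    freq = psutil.cpu_freq()        # current/min/max in MHz
    samples.append(freq.current)
    print(f"{freq.current:7.0f} MHz")
    time.sleep(1)

print(f"min {min(samples):.0f} / avg {sum(samples)/len(samples):.0f} "
      f"/ max {max(samples):.0f} MHz")
```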

Did AMD say AIB partner cards are launching on Nov. 18 or just AMD editions? Any indication when AIB partners are allowed to unveil their cards and specs?

I don't think AMD has said anything yet, but based on leaks from AIBs, both the 6800 and 6800 XT will launch with AIB card availability, whereas the 6900 XT will be AMD reference only for now.
 

supernova8

Banned
This is the first iteration. A really nice foundation to build off of. People took notice with Zen. Zen 2 was when the 'switch' started. Zen 3 is domination.

Nvidia had better not take it lightly. AMD will only get better. Crazy to think we have Intel joining the fray as well.

So I guess you would consider RDNA2 to really be more like a Zen Plus rather than a Zen 2 equivalent in GPUs? If so, that means RDNA3 will be the Zen 2 moment.
 

Md Ray

Member
If an 80 CU RDNA 2 can sustain over 2 GHz and boost up to 2.25 GHz, then I have no doubt in my mind that the PS5's 36 CU RDNA 2 (55% fewer CUs) can sustain 2.23 GHz, with the frequency only varying depending on the workload. Remember, Sony/AMD didn't design the chip to throttle the frequency based on silicon temperature, as that would create an inconsistent experience for players. E.g. a PS5 user playing in a hot room would otherwise experience worse frame rates/resolutions due to thermal throttling than someone playing the same game in a cold environment.

EDIT:
On the topic of this discussion, I would like to add that RDNA 1's sustained frequency is almost always higher than the advertised 'Game Frequency', according to Steve from Hardware Unboxed, IIRC. So expect the sustained frequency on RDNA 2 PC GPUs to also land between Game Frequency and Boost Frequency. That's very good news all around.
 

Md Ray

Member
So RDNA 2 is 7nm according to AMD, just like RDNA 1, and the RDNA 3 slide, which says it's in design, lists an "Advanced node". What does that mean? 7nm+ or 5nm?
 

Serianox

Member
Since when is Nvidia developing games? It's up to devs to use these features, and as an Nvidia owner right now, I don't care how it runs on AMD. I just want good-looking games that run well.
I've got zero issues with Nvidia partnering with different game devs so they implement either new graphical options developed in-house by Nvidia or features like Ansel, as long as it's just to make its cards more attractive to consumers by virtue of said additions. I don't like it when Nvidia does said partnerships and then uses them to harm its competitor, like locking GPU-accelerated PhysX to Nvidia cards for no non-bullshit reason for a long time. On that note, I'm really curious if the AMD cards that support ray tracing are going to be able to play the current crop of games that support it through RTX without it having to be patched in by the developer.
 
Now that the dust has settled, something to note about the 6900XT is that the benchmarks had SAM + Rage Mode enabled.

So Rage Mode is a better version of automatic overclocking; it can be enabled in the Radeon control panel without voiding the warranty.

This means it should be a conservative OC compared to an AIB factory OC or a manual OC of the reference design, but it should probably increase the TBP by some small amount. Still likely behind the 3090 in TBP; it's nice that everyday gamers can use this feature to gain a few extra percent of performance essentially for "free" without needing to know how to OC their card.

The SAM stuff seems like a really cool synergy feature. Granted, this will depend on things like RAM speed/quality, your mobo, and which tier of Ryzen chip you have, and as we know it's only supported on 5000-series Ryzen and seemingly 500-series mobos, but it seems to give a nice performance boost pretty much for free.

If you remove these two features then the 6900XT is slightly behind the 3090 compared to the official slides, so this will likely be reflected in 3rd-party benchmarking/reviews. Just to set expectations correctly.

Granted, the fact that it is able to nip at the heels of 3090-level performance for $500 less is pretty amazing. What is even crazier is that it is doing it with 50 fewer watts of power draw than the 3090.

These cards (6000 series) seem to have a lot of OC headroom, so you might ask, "Well, why didn't AMD just do what Nvidia did and push the clocks/TBP close to max for that extra performance?"

Well, it seems that AMD wanted to stay under a 300W power draw target. They wanted the efficiency crown, and they likely designed the reference models around the properties of their reference cooler, which might end up running hot and loud once you go above 300W.

So if you can push these cards up to 3090 levels of power draw with an OC, they would likely pretty much match it. It will be interesting to compare the top OC'd AIB 3090 to the top OC'd 6900XT (once AIB partners are allowed to make them). The 6900XT might end up pulling ahead if there is as much OC headroom as implied.

Plus if you do happen to have the latest Ryzen/MOBO in your rig too you can gain a nice bit of extra performance for free with this cool new SAM stuff.

I wonder how much extra perf Rage Mode would give on an already aggressively OC'd AIB model?

Anyway, really interesting times ahead of us! Just don't expect a non-Rage Mode + SAM 6900XT to match the 3090; without those enabled it will most likely be a tiny bit behind.
 
AMD has got good cards but failed in the most important aspect... the price.
The RX 6800 should be competing with the RTX 3070 in its price range; instead it's 80-100€ more.

Greedy AMD.

Well, look at it like this: you can buy a 3070 for $500 with 8 GB of VRAM, or for just $80 more you can get an extra 10-15% performance along with double the VRAM at 16 GB.

The 6800 makes the 3070 completely redundant; it can't compete on performance and has only half the VRAM for only an $80 saving. I mean, I think it would have been better if AMD had priced the 6800 at $550 to really bring the heat, but $580 is still reasonable.

Why would someone buy a 3070 knowing it has only 8GB VRAM in 2020/2021 when for only $80 more you get double VRAM and go up almost an entire tier in performance?
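Just to spell out the arithmetic behind that value argument (a quick sketch of mine using the US launch MSRPs and the rough uplift range quoted above; street prices will obviously vary):

```python
# Quick value comparison using launch MSRPs and the rough 10-15% uplift
# quoted above; real-world pricing and benchmarks will shift these numbers.
rtx3070 = {"price_usd": 500, "vram_gb": 8}
rx6800  = {"price_usd": 580, "vram_gb": 16}

price_premium = rx6800["price_usd"] / rtx3070["price_usd"] - 1   # ~16% more money
perf_uplift   = (0.10, 0.15)                                     # claimed range
vram_ratio    = rx6800["vram_gb"] / rtx3070["vram_gb"]           # 2x

print(f"~{price_premium:.0%} more money for ~{perf_uplift[0]:.0%}-{perf_uplift[1]:.0%} "
      f"more performance and {vram_ratio:.0f}x the VRAM")
```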
 

fermcr

Member
Well, look at it like this: you can buy a 3070 for $500 with 8 GB of VRAM, or for just $80 more you can get an extra 10-15% performance along with double the VRAM at 16 GB.

The 6800 makes the 3070 completely redundant; it can't compete on performance and has only half the VRAM for only an $80 saving. I mean, I think it would have been better if AMD had priced the 6800 at $550 to really bring the heat, but $580 is still reasonable.

Why would someone buy a 3070 knowing it has only 8GB VRAM in 2020/2021 when for only $80 more you get double VRAM and go up almost an entire tier in performance?

AMD could kill the RTX 3070 with a 500€ 6800.
For now, we have to wait for reviews comparing both cards in several games to see if that extra RAM makes a difference.
 
Now that the dust has settled, something to note about the 6900XT is that the benchmarks had SAM + Rage Mode enabled.

[...]

Anyway, really interesting times ahead of us! Just don't expect a non-Rage Mode + SAM 6900XT to match the 3090; without those enabled it will most likely be a tiny bit behind.

Yes, I agree; the reference 6900XT is behind the 3090 by 5% or so, I estimate, but AMD seems to have capped performance at a level where they match that overpriced card whilst being more efficient. Crazy when you consider they were a generation behind only recently.

So be prepared to see AIB 6900XTs that are faster than the 3090 but draw similarly ridiculous amounts of power (350-400W).
 

ZywyPL

Banned
So I guess you would consider RDNA2 to really be more like a Zen Plus rather than a Zen 2 equivalent in GPUs? If so, that means RDNA3 will be the Zen 2 moment.

Zen 2 was just OK if you ask me - too many unnecessary cores/threads and too little actual performance - but Zen 3 is where AMD finally has an edge, in both single- and multi-threaded performance and, most importantly, in actual games. And it seems like the RDNA architecture is following in Zen's footsteps - RDNA1 was meh, much better than its GCN predecessors but still far behind the competition, while the second one is where the gap really narrows; the only checkbox left is RT performance, which I'm sure AMD will focus on with the RDNA3 cards.
 
RDNA1 was meh, much better than its GCN predecessors but still far behind the competition

Was it though?

Granted, they did not release a card to compete at the high end, I'll give you that, but the 5700 XT launched at the same price as the 2060 (non-Super, I think?) with performance on par with the much more expensive 2070.

This actually forced Nvidia to release the Super refreshes and drop prices.

Recently, Hardware Unboxed released a video benchmarking the 5700 XT with the most recent drivers (September 2020 drivers).

In this video the 5700 XT gains additional performance and matches the 2070 Super across the benchmark suite. The fine wine effect is real.

I think that was a fairly impressive first step given how badly AMD's previous GPUs were performing. And don't try to rewrite history; it was incredibly competitive with its equivalent Nvidia-tier GPU. You can look at the reviews at the time to see that. Not only that, but in little over a year since release it has gained additional performance and now competes almost perfectly with the 2070 Super. I would say that is pretty impressive.
 

notseqi

Gold Member
When ppl try to tell you how great DLSS is remember it's hype:

UeykcQW.jpg
I was impressed by how the upscale in Death Stranding looks from 240p and up. 240p + DLSS is of course not a real use case, and the differences become quite underwhelming at higher resolutions; in your example some lines look downright blurry when they shouldn't or wouldn't be. Pass.
 

Papacheeks

Banned
AMD's own footnote:

Boost Clock Frequency is the maximum frequency achievable on the GPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads. GD-151

‘Game Frequency’ is the expected GPU clock when running typical gaming applications, set to typical TGP (Total Graphics Power). Actual individual game clock results may vary. GD-147

AIBs go beyond reference all the time.
 
Btw, nobody has talked about VR performance yet, right? I wonder how the 6800 XT will compare to the 3080 in a VR environment.

Well, VR is pretty much just normal raster performance, so it should perform in line with the normal performance we see for both these and Nvidia's cards.

Almost nobody includes VR games when performing benchmarks right now.
 

Orta

Banned
Don't care about ray tracing. I saw the Watch Dogs video and there was barely any difference.

Will buy the 6800 XT card.

Yep. It looks nice, but I absolutely couldn't give a bugger whether it's in my game or not. We've been conned into believing it'll somehow make games better.
 

Papacheeks

Banned
He was wrong as fuck about Ampere.

Are you talking about this video:


Because outside of the actual shroud description, which was going off of leaked engineering samples that did have 3 fans on them (backed up by _rogame), he got a lot right on the actual numbers for memory bandwidth and CUDA cores. But it was for the wrong chip; it was for the 3070.

I mean, when people leak stuff they are going off of things they see in documentation or in a database listing. So a database entry for a model can be misinterpreted as to which model it actually is. His information on Big Navi was the same as Paul's, so they either talk to one another or have the same sources.

Because both were right to within $50 on the flagship, both were on the money on SPUs and memory config, and Moore's Law teased Infinity Cache or something like that way before Paul confirmed it and broke the story.
You're using an instance of him getting second-hand info that's super obscure. His rasterization estimate is close as well in terms of pure rasterization performance over Turing.

Even in his video he hints at the uptick in ray tracing on the mid-range, aka the 3070 and below, coming close to beating the top of Turing's stack. And he was right about that.
You can't take every word verbatim when watching his "leaks", because that's what they are: leaked information that you have to sift through. He doesn't claim it's all 100% accurate. But he was pretty on the money in his performance estimates, and listed close to what the 3070 actually has for CUDA cores.
And all of this was off of non-final engineering samples.

Paul from Red Gaming Tech seems to have better vetting for his info, but he even mentions Tom from Moore's Law in a lot of his videos.
So do what you want with that. If you think he's 100% full of shit, then more power to you. But I actually pay attention, don't take what he says as 100%, and use it to form some idea of what the cards will be.

And if you take that info and process it in that way, he's usually pretty damn close. He literally nailed everything in that Radeon presentation almost 6 months ago. Paul legit confirmed a lot of it with more concrete detail.

But of course, go ahead and shit on the guy for leaking second-hand information 4-5 months before any concrete info came out.
 