
AMD RX 6000 'Big Navi' GPU Event | 10/28/2020 @ 12PM EST/9AM PST/4PM UK

ZywyPL

Banned
Was it though?

Granted, they did not release a card to compete at the high end, I'll give you that, but the 5700XT launched at the same price as the 2060 (non-Super I think?) with performance on par with a 2070, for much less money than that card.

This actually forced Nvidia to release the Super refreshes and drop prices.

Recently, Hardware Unboxed released a video benchmarking the 5700XT with the most recent drivers (the September 2020 drivers).

In that video the 5700XT gains additional performance and matches the 2070 Super across the benchmark suite. The fine wine effect is real.

I think that was a fairly impressive first step given how badly AMD's previous GPUs were performing. And don't try to rewrite history: it was incredibly competitive with its equivalent Nvidia-tier GPU. You can look at the reviews from the time to see that. Not only that, but in little over a year from release it has gained enough additional performance to compete almost perfectly with the 2070 Super. I would say that is pretty impressive.

Yeah, but people who wanted that level of rasterization-only performance already bought a 1070/Ti or 1080/Ti years ago, and many of them didn't look at Turing GPUs either, since the rasterization performance was on par with Pascal while the RT performance was seriously lacking. For anyone with 5-7-year-old hardware who was genuinely forced to upgrade in that window, the RX 5700/XT were good value, but for everyone else it was simply LTTP.



When ppl try to tell you how great DLSS is remember it's hype:

[image]


What are the FPS though? Because that's DLSS's purpose. I agree that the initial hype was overblown, or rather that the feature didn't deliver what was promised. I was even one of the first to heavily criticize NV for what I thought back then was nothing but a poor gimmick, instead of them simply doubling the RT cores. But in its current state it doubles the performance with just a few drawbacks here or there, if any, depending on the title. And now AMD is working on a similar solution as well, not without reason: it's simply a more efficient approach in terms of transistor budget/die size than just stacking up CUs.
 

BluRayHiDef

Banned

Article said:
AMD has not been eager to share details on the ray tracing performance of its Radeon RX 6000 series at launch, but as it turns out, the manufacturer has revealed some details that can be directly compared to the NVIDIA Ampere series.

The AMD Radeon RX 6000 series features Ray Accelerators. Each Compute Unit carries one Accelerator:

[image]


These units are responsible for the hardware acceleration of ray tracing in games. The RX 6900 XT features 80 RAs, the RX 6800 XT features 72, and the RX 6800 has 60. The same Ray Accelerators can be found in the RDNA2-based gaming consoles.

The Radeon RX 6800 XT has 72 RAs while the GeForce RTX 3080 has 68 RT Cores. AMD has conducted tests using the Microsoft DXR SDK tool called ‘Procedural Geometry’:

AMD said:
Measured by AMD engineering labs 8/17/2020 on an AMD RDNA 2 based graphics card, using the Procedural Geometry sample application from Microsoft’s DXR SDK, the AMD RDNA 2 based graphics card gets up to 13.8x speedup (471 FPS) using HW based raytracing vs using the Software DXR fallback layer (34 FPS) at the same clocks. Performance may vary. RX-571

Redditor NegativeXyzen tested an ASUS RTX 3080 TUF and reported an average framerate of 630:

[image]


We have also run the benchmark on the GeForce RTX 3080 with stock clock and power targets, using a precompiled executable file. The RTX 3080 scored an average framerate of 635 across three 1-minute runs.

This means that the GeForce RTX 3080 is around 33% faster than the Radeon RX 6800 XT in this particular benchmark. Of course, there are a few things that need to be taken into account, such as the testing platform and the DXR benchmark resolution (AMD did not clarify, hence we used the default).



NVIDIA GeForce RTX 3080 DXR performance, CapFrameX, i7-8700K @ 5.0 GHz, DDR4-3200

The stock GeForce RTX 3090 has a framerate of 749, according to the screenshot posted by Hassan Mujtaba (Wccftech). This means that the RTX 3090 is 59% faster than the AMD RX 6800 XT in this DXR test.

[image]
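As a quick sanity check of those ratios, here is the arithmetic in Python, just plugging in the FPS figures quoted above and assuming the runs are actually comparable:

amd_hw_fps = 471      # RX 6800 XT per AMD's footnote, HW ray tracing
amd_sw_fps = 34       # same card, software DXR fallback layer
rtx_3080_fps = 635    # stock RTX 3080, average of three 1-minute runs
rtx_3090_fps = 749    # stock RTX 3090, per the Wccftech screenshot

print(f"HW vs SW fallback: {amd_hw_fps / amd_sw_fps:.1f}x")      # ~13.9x, in line with AMD's 13.8x claim
print(f"3080 vs 6800 XT: {rtx_3080_fps / amd_hw_fps - 1:.0%}")   # ~35% faster (the article rounds to ~33%)
print(f"3090 vs 6800 XT: {rtx_3090_fps / amd_hw_fps - 1:.0%}")   # ~59% faster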
 

Senua

Member
Are you talking about this video:


Because outside of the actual shroud description, which was going off of leaked engineering samples that did have 3 fans on them (backed up by _rogame), he got a lot right on the actual numbers for memory bandwidth and CUDA cores. But it was for the wrong chip; it was for the 3070.

I mean, when people leak stuff they are going off of things they see in documentation or in a database listing. So seeing a database entry with a model can be wrong in how we interpret which model it is. His information on Big Navi was the same as Paul's. So they either talk to one another or have the same sources.

Because both were right on the flagship being -$50, both were on the money on SPUs and memory config, and Moore's Law teased Infinity Cache or something like that way before Paul confirmed it and broke the story.
You're using an instance of him getting second-hand info that's super obscure. His rasterization estimate is close as well in terms of pure rasterization performance over Turing.

Even in his video he hints at the uptick in ray tracing on the mid-range, aka the 3070 and below, getting close to beating the top of the Turing stack. Which he was right about.
You can't take every word verbatim when watching his "leaks", because that's what they are: leaked information that you have to sift through. He doesn't claim it's all 100% accurate. But he was pretty on the money in his performance estimates, and listed close to what the 3070 has for CUDA cores.
And all of this was off of non-final engineering samples.

Paul from RedGamingTech seems to have better vetting for his info, but even he mentions Tom from Moore's Law in a lot of his videos.
So do what you want with that. If you think he's 100% full of shit, then more power to you. But I actually pay attention, don't take what he says as 100%, and use it to form some idea of what the cards will be.

And if you take that info and process it in that way, he's usually pretty damn close. He literally nailed everything in that Radeon presentation almost 6 months ago. Paul legit confirmed a lot of it with more concrete detail.

But of course, go ahead and shit on the guy for leaking second-hand information 4-5 months before any concrete info came out.

Sorry, are you his boyfriend? Remember the DLSS 3.0 shit? Ampere ray tracing performance to be 4x Turing? Ray tracing shouldn't lower performance much anymore, yadda yadda.



[image]
 

Papacheeks

Banned
Sorry, are you his boyfriend? Remember the DLSS 3.0 shit? Ampere ray tracing performance to be 4x Turing? Ray tracing shouldn't lower performance much anymore, yadda yadda.



[image]

I take it you didn't pay attention to his videos. DLSS 2.1 isn't 100% done. Like he'd know what they were officially going to call it? He meant the update to DLSS, which they ended up calling 2.1 instead of 3.0.

But you do you.

He literally had the same info as Paul, and Paul was right on the money. If he were so full of shit and didn't know what he was talking about, he wouldn't have NX Gamer and ex-Intel/Nvidia engineers on his podcast.
 

diffusionx

Gold Member
I'm really glad that this series of cards seems to be competent on both performance and power usage - now we see why Nvidia decided to rush out a paper launch of the 3080 and drop the price significantly. I am interested in seeing the benchmarks for the 6800 XT and would consider getting one over a 3080.

If AMD isn't outright lying in their 6900 XT slides, too, the 3090 looks like a total nonstarter especially at its price.
 

GHG

Gold Member

So just a bit faster than Turing but slower than Ampere when looking only at RT performance?
 

Senua

Member
I take it you didn't pay attention to his videos. DLSS 2.1 isn't 100% done. Like he'd know what they were officially going to call it? He meant the update to DLSS, which they ended up calling 2.1 instead of 3.0.

But you do you.

He literally had the same info as Paul, and Paul was right on the money. If he were so full of shit and didn't know what he was talking about, he wouldn't have NX Gamer and ex-Intel/Nvidia engineers on his podcast.

What, the driver-level "inject DLSS into any game with TAA" shit? Dayum, DLSS 2.1 is a beast!!!!

And just lol at the second part. NX Gamer, what an honour.
 

llien

Banned
jaggies and shimmering
Maybe there is a curse affecting people using GPUs from certain manufacturers, dunno dude; I've been gaming at lower than native most of my life (heck, poor me).
Even now, PS4 + GoW is a 1080p thing. I game at 4K on a 65" HDR screen and it looks amazing to me from a 2-3 meter distance.
 
Yeah, it seems that Nvidia will definitely have a clear win when it comes to ray tracing performance.

However having checked out what smart people are saying on tech twitter there seem to be a few things worth noting:

1. Most benchmarks and titles for RT currently are optimized for RTX hardware because, simply put, there were no other RT-capable cards.

2. While the Ampere cards will likely pull ahead a lot in pure path-traced games, most games use hybrid rendering and will continue to do so. Here the gap is not as large in most practical scenarios.

3. Nvidia's RTX seems to be based on DXR 1.0, whereas AMD's solution seems to rely on DXR 1.1, which has a different implementation. DXR 1.1 would potentially benefit AMD cards; of course this could be nonsense, but I'm just stating what I've seen mentioned.

4. Console games, especially XSX games, will be optimized for AMD's RT solution, so porting to PC should be relatively easy and possibly allow AMD to close the gap a little in these titles?

5. Super Resolution, AMD's rough answer to DLSS, is rumoured to be slightly worse in terms of quality but faster in performance. If this is true, and if the only "real" way to play with RT without tanking FPS is to use DLSS/Super Res, then this could potentially close a little bit of the performance gap in RT.

Of course, take all of the above with a grain of salt. Even if most or all of that comes true I still think Ampere will have the overall RT performance advantage this generation regardless. But it is worth thinking about as things progress.

The problem is that right now the available RT games are optimized for RTX cards/Nvidia's solution, so whether they even work on AMD cards right now is up in the air. But assuming they do, this would make AMD's RT performance appear even worse than it actually is, so I can understand why AMD may be hesitant about showing RT performance just yet.

In the next few weeks we should learn more about how the RT landscape looks in terms of compatibility and performance. It also could potentially look a bit different in 1-2 years time again so it will be interesting to see how this all develops.

At the moment though, Nvidia has, and for this generation at least will maintain, an RT performance lead. If RT is the most important factor for you and you need the best RT performance available right now, then Nvidia is the best bet for that.
 
So RDNA 2 is 7nm according to AMD, just like RDNA 1, and their RDNA 3 slide, which says it's in design, lists it as using an "Advanced node". What does that mean? 7nm+ or 5nm?

My guess is 5nm, as TSMC said that is on schedule. Before RDNA 3 they could possibly have a refresh of the top SKUs in late summer (and will fill out the rest of the product stack before that).
 

FireFly

Member
Are you talking about this video:


Because outside of the actual shroud description, which was going off of leaked engineering samples that did have 3 fans on them (backed up by _rogame), he got a lot right on the actual numbers for memory bandwidth and CUDA cores. But it was for the wrong chip; it was for the 3070.

I mean, when people leak stuff they are going off of things they see in documentation or in a database listing. So seeing a database entry with a model can be wrong in how we interpret which model it is. His information on Big Navi was the same as Paul's. So they either talk to one another or have the same sources.

The core stuff he got wrong about Ampere was the raytracing performance being several times faster, DLSS 3.0 with wider compatibility, the best cards being on 7nm and VRAM compression.


Basically he got all of the core features wrong. Ampere offers only marginal improvements in raytracing efficiency, not the huge advance he claimed. And such a huge advance was hard to believe anyway given how much raytracing still relies on conventional shader performance.

Regarding Infinity Cache, RedGamingTech broke the news on September 11, and the MLiD video was October 1st.




In the RedGamingTech video there is a ton of super specific info that is verified by the AMD presentation.
 

VFXVeteran

Banned
In the next couple of years both Nvidia and AMD will have new cards with more performance....I don't see the problem here. I think AMD delivered today.
AMD and Nvidia will not have totally new technology in 2 years; I don't think so. If that were the case, there would be no need to buy GPUs every 2 years, as their differences would be so dramatic it wouldn't make sense.
 
The core stuff he got wrong about Ampere was the raytracing performance being several times faster, DLSS 3.0 with wider compatibility, the best cards being on 7nm and VRAM compression.


Basically he got all of the core features wrong. Ampere offers only marginal improvements in raytracing efficiency, not the huge advance he claimed. And such a huge advance was hard to believe anyway given how much raytracing still relies on conventional shader performance.

Regarding Infinity Cache, RedGamingTech broke the news on September 11, and the MLiD video was October 1st.




In the RedGamingTech video there is a ton of super specific info that is verified by the AMD presentation.


Yep, RGT has proven to be super reliable, especially with the Infinity Cache. He said it would be 128MB on Sept 11, which was spot on as well.

Moore's Law is a better presenter, but he got seeded bad info by one of his Nvidia sources, with the RT and DLSS info being BS! His info seems to come after RGT, tweets, or Videocardz stuff.
 
Interesting, it looks like the initial batch of AIB partner cards are simply going to be the reference design with different box/logo?

Matches some rumours from before reveal that AMD were holding back info from AIBs.

We will probably have to wait until Jan+ maybe for "real" AIB designs with OC which I believe they are testing right now.
 

Papacheeks

Banned
The core stuff he got wrong about Ampere was the raytracing performance being several times faster, DLSS 3.0 with wider compatibility, the best cards being on 7nm and VRAM compression.


Basically he got all of the core features wrong. Ampere offers only marginal improvements in raytracing efficiency, not the huge advance he claimed. And such a huge advance was hard to believe anyway given how much raytracing still relies on conventional shader performance.

Regarding Infinity Cache, RedGamingTech broke the news on September 11, and the MLiD video was October 1st.




In the RedGamingTech video there is a ton of super specific info that is verified by the AMD presentation.


Moore's Law talked about memory efficiency for RDNA 2 long before. In terms of realtime performance, correct, but when you're leaking stuff that far back, do you think the information is based on engineering samples and not locked clocks?

All of his stuff in terms of percentages, if you look, is close to what "NVIDIA" showed in their presentation. Not actual real-world benchmarks, but their own metrics. Which is probably what he had, information-wise, at the time from internal people.

There are two different things being talked about and compared here. He had leaked info on Ampere 6 months ago and was given estimated information on the performance percentage over Turing. Which, if compared to Nvidia's presentation, was actually really close to what they were touting in "promotional" material.

He was also estimating a good amount based on the info given at the time. He broke the Samsung 8nm news. He said it could have been 7nm but wasn't sure. But his claims on what happened with the whole Samsung debacle were 100% accurate.

Down also to what their performance equated to compared to TSMC's 7nm process and its gains.

The whole metric you and others are using is tests done by gamers in specific games; how would Tom have known 6 months ago, from a database info dump, how these would behave on final drivers?

He was making educated guesses with the info he had at the time, which changes over the course of finalizing cards and getting final drivers ready.
 

Senua

Member
Moore's Law talked about memory efficiency for RDNA 2 long before. In terms of realtime performance, correct, but when you're leaking stuff that far back, do you think the information is based on engineering samples and not locked clocks?

All of his stuff in terms of percentages, if you look, is close to what "NVIDIA" showed in their presentation. Not actual real-world benchmarks, but their own metrics. Which is probably what he had, information-wise, at the time from internal people.

There are two different things being talked about and compared here. He had leaked info on Ampere 6 months ago and was given estimated information on the performance percentage over Turing. Which, if compared to Nvidia's presentation, was actually really close to what they were touting in "promotional" material.

He was also estimating a good amount based on the info given at the time. He broke the Samsung 8nm news. He said it could have been 7nm but wasn't sure. But his claims on what happened with the whole Samsung debacle were 100% accurate.

Down also to what their performance equated to compared to TSMC's 7nm process and its gains.

The whole metric you and others are using is tests done by gamers in specific games; how would Tom have known 6 months ago, from a database info dump, how these would behave on final drivers?

He was making educated guesses with the info he had at the time, which changes over the course of finalizing cards and getting final drivers ready.
[image]
 

Irobot82

Member
Interesting, it looks like the initial batch of AIB partner cards are simply going to be the reference design with different box/logo?

Matches some rumours from before reveal that AMD were holding back info from AIBs.

We will probably have to wait until Jan+ maybe for "real" AIB designs with OC which I believe they are testing right now.
Asus is already showing off three custom models, one being liquid cooled.
 

FireFly

Member
Moore's Law talked about memory efficiency for RDNA 2 long before. In terms of realtime performance, correct, but when you're leaking stuff that far back, do you think the information is based on engineering samples and not locked clocks?

All of his stuff in terms of percentages, if you look, is close to what "NVIDIA" showed in their presentation. Not actual real-world benchmarks, but their own metrics. Which is probably what he had, information-wise, at the time from internal people.

There are two different things being talked about and compared here. He had leaked info on Ampere 6 months ago and was given estimated information on the performance percentage over Turing. Which, if compared to Nvidia's presentation, was actually really close to what they were touting in "promotional" material.
Really? Nvidia achieved a 4x speedup in raytracing, did they? They have DLSS 3.0 which works on any game with TAA? They have a VRAM compression technology which compensates for Ampere's smaller VRAM footprint? Maybe you can point me to these parts of the Nvidia presentation, because I must have missed them.
 
I just woke up. Did AMDO what NVIDIDIN'T?

Well they delivered a competitive lineup of cards vs Nvidia.

6800XT
72CU
16GB
3080 perf
$649
128MB Infinity Cache
300w

6800
60CU
16GB
3070/2080ti + 10-18% perf
$580
128MB Infinity Cache
250w

6900XT
80CU
16GB
Nipping at the heels of 3090
128MB Infinity Cache
$999
300w

AMD also showed some cool stuff like SAM, where if you have a Ryzen 5000 series CPU it will grant extra performance to a 6000 series GPU.

There is a slight auto overclock called "Rage Mode" which should grant an extra 1-2% ish performance gain for "free".

Smaller chips than Nvidia. They have RT features, but likely slower than Nvidia here.

They are working on Super Resolution (a DLSS competitor) with MS, I think, but haven't demoed it yet; it will likely launch Dec/Jan (possibly later?) as a driver update.
 

Makoto-Yuki

Gold Member
That 6800XT is really tempting, but I'll wait for reviews.

Need to see how it'll run on a non-AMD CPU system. Also, I don't have fond memories of AMD driver support. It will also be tough giving up DLSS.
 
That 6800XT is really tempting, but I'll wait for reviews.

Need to see how it'll run on a non-AMD CPU system. Also, I don't have fond memories of AMD driver support. It will also be tough giving up DLSS.

Well, the benchmarks were shown without the SAM stuff that boosts RX 6000 series cards if you have a 5000 series Ryzen, and there it seemed to match the 3080.

With the SAM stuff turned on it was ahead of the 3080. So reviewers will most likely be benching on Ryzen 5000 series CPUs from now on.
 
Lolwhat?
2080Ti + 18%, which is roughly 3070 + 18% perf.

Not the original 3070ti based on GA104, the "new" 3070ti based on a cut down GA102.

Not announced yet but should be soon enough and probably available early-mid next year.

I'm extrapolating performance here as the GA102 3070ti will likely compete with the RX6800 once released.

Or maybe they will call it a 3080-lite or something like that, who knows. But that is going to be the direct perf competition for the RX6800.
 

duhmetree

Member
So I guess you would consider RDNA2 to really be more like a Zen Plus rather than a Zen 2 equivalent in GPUs? If so, that means RDNA3 will be the Zen 2 moment.
I wouldn't think it's a 1:1 scenario.

But I do believe that their next major iteration (Navi 31? RDNA 3) will take A CHUNK of market share from Nvidia. The hardware has caught up to the competition already. Software is still lagging behind. It should all come together next iteration IMO.

Edit - to add to it: Zen 3 will dominate the field. Most CPUs going forward will be Zen 3 IMO. And when there's a synergistic effect when paired with Navi, that becomes even more tantalizing.
 
Hmm, there are some interesting things AMD did in that presentation.

Their 6800 vs 3070 comparison was benchmarked only with a Ryzen 5000 using their frame-boosting technology. So unless you go for a fully new AMD system with a 500 series mobo (which most existing Ryzen owners won't have), it's very likely that it delivers very similar performance per dollar to the 3070.
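Rough math on that performance-per-dollar point (the $499 3070 MSRP is my own assumption, it isn't quoted in this thread, and I'm taking the 10-18% uplift figure from earlier in the thread at face value):

rx6800_price = 579   # assumed USD MSRP (rounded to $580 earlier in the thread)
rtx3070_price = 499  # assumed USD MSRP for the RTX 3070
for uplift in (1.10, 1.18):  # claimed 6800 performance relative to 2080 Ti / 3070-class
    ratio = (uplift / rx6800_price) * rtx3070_price
    print(f"at +{uplift - 1:.0%}: the 6800 offers {ratio:.2f}x the 3070's perf per dollar")

That lands at roughly 0.95x-1.02x, i.e. basically a wash, which is why the per-dollar picture only really changes if SAM delivers something extra on top.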

The 6800XT had a higher advantage over the 3080 at 1440p compared to the 4K numbers - is that possibly an effect of the cache being too small? It might get interesting as games grow from PS4 assets to PS5-era assets (aka the scenario where most AMD fans will brag about how future-proof 16GB of RAM is).
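A very rough back-of-envelope on that cache idea (the ~16 bytes of render-target data per pixel is purely my assumption; real engines vary wildly and also push textures and geometry through the cache):

bytes_per_pixel = 16   # assumed render-target footprint per pixel (G-buffer, depth, etc.)
cache_mb = 128         # Infinity Cache size
for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    working_set_mb = w * h * bytes_per_pixel / (1024 ** 2)
    print(f"{name}: ~{working_set_mb:.0f} MB of render targets vs {cache_mb} MB of cache")

So roughly ~56 MB at 1440p vs ~127 MB at 4K: under that assumption the render targets alone nearly fill the cache at 4K before anything else, which would fit the pattern of the 1440p advantage shrinking at 4K.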

The 6900XT was shown with a Ryzen 5000 and Rage Mode, which appears to be an automatic overclocking mode. So the claimed advantage in performance per watt might actually not hold, as we don't know what power the 6900XT consumes in that mode.


On the other side, we don't know the overclocking headroom; it's possible that AMD GPUs will overclock better than Ampere, which is pretty much maxed out from the factory by its power limits.
 

GHG

Gold Member
Oh OK, that's only a 6800 XT. Holy shit.

Some background on this:

a custom ASUS ROG STRIX Radeon RX 6800 XT part can boost as high as 2.5 GHz when running 3DMark 11. Interestingly, the engineering sample Schur talked about seemed to spend a lot of time at high clock speeds. In an update to his original post, Schur claimed that the ROG STRIX Radeon RX 6800 XT was running at clock speeds higher than 2.3 GHz 85 percent of the time.


Pretty funny if true when you consider the discussions that have taken place over the last couple of pages.

bUt iT tOoK a wAtEr cOoLeR is what they will say now.
 

Md Ray

Member
Some background on this:




Pretty funny if true when you consider the discussions that have taken place over the last couple of pages.

bUt iT tOoK a wAtEr cOoLeR is what they will say now.
I wonder what the clock speeds of the 40 CU and 36 CU parts (6700?) will be like.
 
Hold on, DXR RT shadow on PC? Not available on next-gen consoles? PC exclusive?

It's the first time I'm hearing about DiRT 5 having ray tracing.

No idea, not familiar with this game myself. Maybe it will be added to consoles in a patch?

Maybe it already has it but it was just never mentioned?
 

Ascend

Member
Hold on, DXR RT shadow on PC? Not available on next-gen consoles? PC exclusive?

It's the first time I'm hearing about DiRT 5 having ray tracing.
It's the first time I'm hearing about it too. It seems like they are now starting their campaign regarding their 'fancy' features, including ray tracing.
 