
AMD RX 6000 'Big Navi' GPU Event | 10/28/2020 @ 12PM EST/9AM PST/4PM UK

Bo_Hazem

Banned
You were never going to get an AMD card. Stop lying.

AMD has knocked Nvidia fanboys out of their comfort zone. Imagine how widely it'll be supported once AMD is dominating the gaming console market, and please don't mention Nvidia's leftovers in the Switch.

Every tech available on consoles will simply get boosted for team red on PC!
 

onesvenus

Member
[Image: AMD Radeon RX 6900 XT spec sheet]
All the announced cards have a 1:1.6 ratio between CUs and ROPs.
The assumption that the XSX has only 60 ROPs, and that the PS5 has an advantage there, could be wrong.
 
I was already going with Ryzen for my next CPU, and I'm pretty tempted by the 6900. The only bad thing for me is that I have a G-Sync monitor that I'd also have to upgrade.
 
All the announced cards have a 1:1.6 ratio between CUs and ROPs.
The assumption that the XSX has only 60 ROPs, and that the PS5 has an advantage there, could be wrong.

Just stop while you're ahead (or behind)

Interestingly, Microsoft has also indirectly confirmed that there are only 64 ROPs for the Xbox Series X too. I had long guessed this based on the leaked Arden test data from GitHub, but obviously it was possible that this would differ from the final production hardware.

But with the GPU specs for "GPU Evolution", Microsoft confirms a figure of 116 GigaPixels/second for its performance. This figure can easily be reached by taking 64 ROPs and multiplying by the clock frequency: 64 x 1825 MHz works out to roughly 116 GPixels/s. This ROP count would then be the same as Sony has for the PlayStation 5 GPU.
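For anyone who wants to check the arithmetic, it's just the standard fill-rate formula (ROPs times core clock); a quick sketch of the sums in Python:

# Pixel fill rate = ROPs x core clock. 64 ROPs at 1825 MHz:
rops = 64
clock_mhz = 1825

fill_rate_gpixels = rops * clock_mhz / 1000  # MPixels/s -> GPixels/s
print(f"{fill_rate_gpixels:.1f} GPixels/s")  # 116.8, matching the quoted ~116 figure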
 

nani17

are in a big trouble
I'll believe it when I see it. With no dedicated AI cores, it won't be as good as Nvidia's solution.

Yes, AMD's ray tracing won't be as good as Nvidia's, no chance, but they really did an amazing job today. As an Nvidia owner I have to admit they've done great.

You also have to factor in that their CPUs and GPUs are in the consoles, meaning it's easier for developers to include ray tracing across all platforms. Yes, there are going to be differences, but it could make things easier.
 

Ascend

Member
I'll be honest... I was expecting:
6800 @ $499
6800 XT @ $599
6900XT @ $799

The 6800 price I can live with, because it is also faster than I expected.

The 6800XT is just a tad too high in my book, but I can't really fault them. It competes very well with the 3080, but is cheaper... I wanted $100 cheaper, but we got "only" $50 cheaper. It's still good though, but if it was $599 I would for sure get it, taking into account that good AIB cards cost a bit more. Now, with it being $649, I don't know what the AIB prices will become.

The 6900XT I will never buy at that price. If it was $799 it would be the first time that I buy a top tier card, but alas, not going for it. It still kills the RTX 3090 though, which is good for us.
 
The XSX does use DirectX 12 Ultimate, but the compute units of the XSX have been customized at the hardware level to accelerate ray tracing.

The question is, do these new cards have hardware-level ray tracing via:

1) modified compute units
2) dedicated RT cores

I don't know the answer.


  • It's an RT coprocessor inside the same cluster of the CU as the texture mapping units (rough sketch of what that intersection work looks like below).
  • It performs exactly the same steps of the RT pipeline as the RT core in Nvidia's architectures does. Nvidia is not doing any step of the RT pipeline in its RT core that AMD isn't doing in its coprocessor!
  • Therefore it doesn't matter whether it sits inside or outside the compute unit.
  • (It might even be better inside the CU, because in theory that should use less energy for the same work and save die space.)
  • AMD could also have named it an RT core (if that name weren't taken).
  • In fact, Sony calls it (IIRC) an intersection engine and AMD calls it an intersection unit in its patents (edit: going by the newly released spec sheet, they are calling it a Ray Accelerator now).

that info is all out there in the fucking raytracing patents
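To make the "intersection engine / Ray Accelerator" part more concrete: what that unit does is run ray/box and ray/triangle tests over and over while walking the BVH. Here's a rough software sketch of the ray/box slab test, my own illustrative Python, not anything lifted from AMD's or Nvidia's documentation:

def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    # Slab method: intersect the ray against the three pairs of axis-aligned
    # planes and keep the overlapping parametric interval [t_near, t_far].
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far  # True if the ray hits the box

# Example: a ray from the origin along (1, 1, 1) hits a box around (5, 5, 5).
inv_dir = (1.0, 1.0, 1.0)  # per-axis reciprocal of the ray direction
print(ray_aabb_intersect((0.0, 0.0, 0.0), inv_dir, (4.5, 4.5, 4.5), (5.5, 5.5, 5.5)))  # True

The hardware does this (plus the ray/triangle test) as fixed-function logic, which is why where the unit physically sits matters far less than whether the steps get done at all.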



------------------------------------------------------------------------



On the topic of ray tracing performance: as of now we only have two (maybe questionable) data points.

The first one is Port Royal (the RT benchmark in 3DMark), from leaks:

[Image: leaked Port Royal benchmark chart]


18% down on the 3080. That's not brilliant. It should be considered that drivers might not yet be where they need to be. Igor's data point is 5415 points in 4K Port Royal; my Gigabyte 3080 Gaming OC also does around 5400 points at 4K. That said, there is no standardised 4K test in Port Royal (the standard Port Royal run uses a lower resolution), so we can't be sure the results are really comparable.
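Just to put those two 4K numbers side by side (taking both at face value, and remembering they may not come from the same preset):

# Comparing the two 4K Port Royal scores quoted above (both approximate).
leaked_card = 5415   # Igor's data point for the leaked card at 4K
my_3080_oc = 5400    # rough score for the Gigabyte 3080 Gaming OC at 4K

diff_pct = (leaked_card - my_3080_oc) / my_3080_oc * 100
print(f"{diff_pct:+.1f}%")  # about +0.3%, i.e. essentially a tie at 4K

Which is exactly why the comparison only means something if both runs really used the same non-standard 4K settings.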


The second data point is from a Chinese leaker board that's probably full of AIB employees and third-party engineers. It's a theoretical performance comparison of the different intersection testing types:

[Image: theoretical comparison of intersection testing performance]



So make of that what you will, but I think we will get lots more info in the coming hours/days.
 

kiphalfton

Member
Seen a lot of comments regarding the 6900 XT being so much better than the 3090... but it only has 16 GB of GDDR6 VRAM compared to 24 GB of GDDR6X. It's not really apples to apples, since Nvidia still has better tech. I can still see why people would buy the RTX 3090.
 

Alexios

Cores, shaders and BIOS oh my!
Finally, high-end competition again after like 3 generations, whether they come out on top or not. Even if they suffer with RT (but how often will that be used as anything more than an add-on paid for by NV, like PhysX back in the day, when the game/art was mostly made without it in mind using the standard current pipelines, and when the consoles are AMD-based themselves so will likely settle for minimal RT), that's a great development, especially if they can keep improving like they did with CPUs from Zen 1 to Zen 3. Hope these show to be even better value once prices/stock settle from third parties as well, for both companies. I'm still keeping my 1080 for now.
 

Papacheeks

Banned
Outperforms my ass. It's 8 GB less VRAM, worse ray tracing, no DLSS competitor; that's the reality. The 3090 is straight up an upgrade over that card as of now. They needed to price it sharply, but instead they went completely the other way and priced it up massively.

It's $500 less than the RTX 3090? It uses less energy, hits higher clocks, and will go above the game boost clock, on top of the optimizations they showed if you're using a Ryzen 5000 CPU. Add all of that extra performance up.

But you won't.
 

silentsod

Neo Member
This was much better than I was expecting and I'm going to be picking up an AMD card if it's available for my dad's 4k FS2020 rig.
 
You were never going to get an AMD card. Stop lying.
This goes to show how little knowledge you have about my wallet. Why did I not try to get a preorder for the 3080/90? Why have I been waiting for AMD to have their show? Hmmm....

“It is better to keep one's mouth shut and be thought a fool than to open it and remove all doubt.”
 
What I'll say is that this is good for the future of the GPU market.

As far as I'm concerned, as long as AMD has no answer to DLSS 2.0, it's not even an option to buy one of their cards; that's how incredible the tech is.

The 6800/6900 series is still important because AMD has pulled closer to Nvidia in a lot of places, but without DLSS it's just not a viable option IMO. What this does do is set all the other pieces in line for AMD, so that when they do get their own DLSS alternative they will truly be worth it.

Bringing a DLSS-style feature to consoles would be an absolute game changer for gaming, and I'd give AMD my money just for that reason.
 

Ascend

Member
Seen a lot of comments regarding the 6900 XT being so much better than the 3090... but it only has 16 GB of GDDR6 VRAM compared to 24 GB of GDDR6X. It's not really apples to apples, since Nvidia still has better tech. I can still see why people would buy the RTX 3090.
What does the 'better tech' offer you if actual performance is not better? A bigger thing to digitally wave around?

Next questions...
3080 with 10 GB GDDR6X at $699 or 6800 XT with 16GB GDDR6 at $649?
3080 with 10 GB GDDR6X at $699 or 6900 XT with 16GB GDDR6 at $999?
 
Obsolete in what sense? Consoles target 30fps. The gap between consoles and top PC cards this time around is actually far smaller than it was last gen.
I don't know. Maybe I'm just old and I remember times when consoles were at the frontline of cutting-edge graphics.
Also, calling roughly 2080 Ti performance out of date would have looked kinda mental two months ago.
That's very true.
 
Be an AMD fan for a day. You have been arguing a lot against AMD the past few days. Not knocking you but I see your complaint and I don't care for it.

I'm a fan of good, proven products (like their cpus). We don't know if the 6800, 6800xt and 6900xt will be that because we didn't get enough details which, of course, was by design. No one wants to show their whole hand if they can get the public to not buy a competitor's products even without all the information.

Perfectly fine to get excited, but if you're asking me to not call out the backtracking and hypocrisy that a lot of people have been engaging in, that's not going to happen. $650 was a logical price for a profit-driven company, because that's what they think they can get. Painting one of these cards as a savior and the other as a dud is stupid. They're likely going to be competitive in almost all aspects.

The 6900 XT is much more impressive if it turns out to only draw around 300 watts. I'd be jealous of that card as a gamer for sure, but I can't justify the price.
 

Ascend

Member
This goes to show how little knowledge you have about my wallet. Why did I not try to get a preorder for the 3080/90? Why have I been waiting for AMD to have their show? Hmmm....

“It is better to keep one's mouth shut and be thought a fool than to open it and remove all doubt.”
Uhuh... That's why you've been bashing AMD constantly lately... So maybe you should have taken your own advice for the past couple of weeks and kept your mouth shut. But you didn't. The next best time to start is right now. So... Go ahead. Take your own advice. It's better for all of us.
 
This might be slightly off topic, but do AMD GPUs have 'cores' the same way CPUs have cores? Are the cores in the GPU the compute units (not talking about Nvidia's Tensor cores)?

RDNA 3 is rumored to use a chiplet design similar to the Zen architecture. So the potential for future RDNA 3-based AMD GPUs would be:

1) Using a chiplet design connecting GPU cores with Infinity Fabric (an interconnect between cores)
2) Infinity Cache (128 MB to, let's say, 256 MB)
3) Faster memory (HBM or GDDR7?!)
4) Better and more efficient hardware acceleration of DLSS-type features and ray tracing?

Well, I'm not 100% sure on the question. They have CUs, and they have Color/Depth units (the ROPs). Each CU contains 64 stream processors (ALUs), so multiplying the CU count by 64 gives 2,304 ALUs for the PS5 (36 CUs) and 3,328 for the Series X (52 CUs). I'm guessing the CUs themselves would be the closest thing to cores?
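Quick sanity check on those numbers, assuming the usual RDNA figure of 64 stream processors per CU:

# Shader ALU count = CUs x 64 stream processors per RDNA CU.
ALUS_PER_CU = 64
consoles = {"PS5": 36, "Series X": 52}  # active CU counts

for name, cus in consoles.items():
    print(f"{name}: {cus} CUs -> {cus * ALUS_PER_CU} ALUs")
# PS5: 36 CUs -> 2304 ALUs
# Series X: 52 CUs -> 3328 ALUs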

On the RDNA 3 stuff, I think it might be safe to say all four of those come to fruition. I have my ideas on how the GPU chiplet connections would work (you would need to spin off some of the components into a tertiary chiplet, for starters).

Infinity Cache sizes would increase of course, and I actually think whatever their top-of-the-line 6000-series GPU is (that hasn't been revealed yet) could be using HBM2, so HBM support with RDNA 3 would be a bit of a no-brainer. RT and image upscaling features should improve going forward as well.

I don't know. Maybe I'm just old and I remember times when consoles were at the frontline of cutting-edge graphics.

That's very true.

If you look at the entirety of gaming, consoles have never been at the forefront of graphical pushes. Arcades (and, for a long time, microcomputers and PCs) always had them beat. Even when the Dreamcast came out, later Model 3 revisions released at roughly the same time were a bit more capable.

Maybe it gets a bit hazier once we're talking the 360/PS3 era, because those systems actually had quite a few advances that PCs wouldn't get for a bit. Thinking on it now though, the OG Xbox and GameCube had a lot of that as well.



This is a great video (and a great channel altogether) that shows some of the big differences between the PS2 and the Xbox/GameCube for that gen. Overall I think 6th gen was perhaps a time when consoles were at a "forefront"; 7th gen is pushing it, but in both cases things like arcades falling off helped a lot (plus, on the PC side, there was the GHz problem of the mid '00s forcing multi-core designs, while consoles like the 360 leapfrogged that by a year or so).
 
  • It's an RT coprocessor inside the same cluster of the CU as the texture mapping units.
  • It performs exactly the same steps of the RT pipeline as the RT core in Nvidia's architectures does. Nvidia is not doing any step of the RT pipeline in its RT core that AMD isn't doing in its coprocessor!
  • Therefore it doesn't matter whether it sits inside or outside the compute unit.
  • (It might even be better inside the CU, because in theory that should use less energy for the same work and save die space.)
  • AMD could also have named it an RT core (if that name weren't taken).
  • In fact, Sony calls it (IIRC) an intersection engine and AMD calls it an intersection unit in its patents (edit: going by the newly released spec sheet, they are calling it a Ray Accelerator now).

that info is all out there in the fucking raytracing patents



------------------------------------------------------------------------



On the topic of ray tracing performance: as of now we only have two (maybe questionable) data points.

The first one is Port Royal (the RT benchmark in 3DMark), from leaks:

[Image: leaked Port Royal benchmark chart]


18% down on the 3080. That's not brilliant. It should be considered that drivers might not yet be where they need to be. Igor's data point is 5415 points in 4K Port Royal; my Gigabyte 3080 Gaming OC also does around 5400 points at 4K. That said, there is no standardised 4K test in Port Royal (the standard Port Royal run uses a lower resolution), so we can't be sure the results are really comparable.


The second data point is from a Chinese leaker board that's probably full of AIB employees and third-party engineers. It's a theoretical performance comparison of the different intersection testing types:

[Image: theoretical comparison of intersection testing performance]



So make of that what you will, but I think we will get lots more info in the coming hours/days.

Arigato. Needed that clarification.
 

yurqqa

Member
I don't know. Maybe I'm just old and I remember times when consoles were at the frontline of cutting-edge graphics.

Arcades were the cutting edge and consoles were always a compromise that had some edge only because computers were not standardized and games were usually not optimized for them.

On the other hand, games optimized for PC, e.g. DOOM, ran badly on consoles.
 

Nubulax

Member
Stop estimating and comparing desktop GPU specs with the console counterpart, a closed system. It will always punch way above its weight; just look at how The Last of Us Part II looks on a "weak" 7-year-old PS4 Pro GPU.

This... People act like we didn't just see some amazing graphics in TLOU2, Ghosts, or even FF7 Remake THIS YEAR, on hardware that old and with a super outdated CPU.
 
Uhuh... That's why you've been bashing AMD constantly lately... So maybe you should have taken your own advice for the past couple of weeks and kept your mouth shut. But you didn't. The next best time to start is right now. So... Go ahead. Take your own advice. It's better for all of us.
Saying that AMD doesn't have an equivalent to DLSS or won't match the performance in ray tracing is now considered bashing?! Lmfaaaaao! So you constantly bash Nvidia by your own logic? Does that mean having preferences equates to bashing a competitor's product? Because I like mustard, I'm now bashing ketchup?! Lol, what kinda low-IQ thinking is that?
 
Hmm... how does the Ryzen 5000 series come into play when combined with RDNA 2? AMD stated 5-10% improved performance overall. I am very excited about their APUs combining Ryzen 5000 + RDNA 2 for laptops/desktops/tablets using DirectX 12 Ultimate, along with Windows Ink. EEEEEEekkk!!!

I can have an all-in-one PC, no matter the form factor, for 4K gaming at 60 fps with moderate settings!!!! Along with using Photoshop, movie editing, and drawing!!

:pie_savoring::pie_sfwth::pie_starstruck::cool::eek:
 
I'll be honest... I was expecting:
6800 @ $499
6800 XT @ $599
6900XT @ $799

The 6800 price I can live with, because it is also faster than I expected.

The 6800XT is just a tad too high in my book, but I can't really fault them. It competes very well with the 3080, but is cheaper... I wanted $100 cheaper, but we got "only" $50 cheaper. It's still good though, but if it was $599 I would for sure get it, taking into account that good AIB cards cost a bit more. Now, with it being $649, I don't know what the AIB prices will become.

The 6900XT I will never buy at that price. If it was $799 it would be the first time that I buy a top tier card, but alas, not going for it. It still kills the RTX 3090 though, which is good for us.
AMD aren't going to leave money on the table.
Half the reason they've been losing so hard is that their pricing, even when they were competitive, was so much lower than Nvidia's that they never made any money.

The market has spoken on the prices people are willing to pay per performance tier, and AMD aren't going to massively undercut those prices. This much should have been clear from Ryzen 5000 pricing.
 

evanft

Member
So let me make sure I got the breakdown right.

6900XT
- 16 GB GDDR6
- Trades blows with 3090, but mostly comes out on top
- $999

6800XT
- 16 GB GDDR6
- Trades blows with 3080. Basically matches it performance wise
- $649

6800
- 16 GB GDDR6
- Mostly beats 2080ti, which means it should beat a 3070
- $579

The 6800XT is only OK to me. It matches the 3080 for $50 less and with more VRAM, but I was hoping for a jump in performance. The 6900XT and 6800 are the real stars of the show. For $80 more than the 3070, you can double your VRAM and increase your performance. If I were looking at a 3070 before, I wouldn't be now. The 6900XT shits all over the 3090. They basically beat it for $500 less. I'm now VERY much considering this card instead of the 3080.

AMD may now hold the crown for best card at $500-$600 and they may end up with the crown for fastest GPU available.

What a turnaround. Really excited to see reviews and what nVidia decides to answer with.
 
If you look at the entirety of gaming, consoles have never been at the forefront of graphical pushes. Arcades (and, for a long time, microcomputers and PCs) always had them beat. Even when the Dreamcast came out, later Model 3 revisions released at roughly the same time were a bit more capable.

Maybe it gets a bit hazier once we're talking the 360/PS3 era, because those systems actually had quite a few advances that PCs wouldn't get for a bit. Thinking on it now though, the OG Xbox and GameCube had a lot of that as well.



This is a great video (and a great channel altogether) that shows some of the big differences between the PS2 and the Xbox/GameCube for that gen. Overall I think 6th gen was perhaps a time when consoles were at a "forefront"; 7th gen is pushing it, but in both cases things like arcades falling off helped a lot (plus, on the PC side, there was the GHz problem of the mid '00s forcing multi-core designs, while consoles like the 360 leapfrogged that by a year or so).


That's very sweet and all but like I said: I'm old. You have to go way, way back. :messenger_grinning_sweat:
 

rodrigolfp

Haptic Gamepads 4 Life
Stop estimating and comparing desktop GPU specs with the console counterpart, a closed system. It will always punch way above its weight; just look at how The Last of Us Part II looks on a "weak" 7-year-old PS4 Pro GPU.

These games only show what a 7-year-old CPU/GPU can do. If they were made for an i9 + RTX 3080, they would look abysmally better...
 
People need to rethink what they understand DLSS to be.
It's just Nvidia's brand for image upscaling and reconstruction. There are plenty of ways of doing this, which is why Nvidia has already changed its solution. After DLSS first came out with very controversial results, AMD shamed them with a much simpler solution; then came DLSS 2.0 and a rethink of how to use the Tensor cores, with much better results.
The Tensor cores are not necessary; they just speed up the work. They do low-precision math in large batches. AMD GPUs were already capable of doing the same without additional hardware, to a smaller degree. Even without dedicated extra hardware, RDNA2 will be able to offer image reconstruction and upscaling using industry standards, which is the way AMD prefers to do everything.
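To illustrate what "low-precision math in large batches" means, here's a tiny NumPy sketch of a batched mixed-precision matrix multiply. Illustrative only: the sizes are made up, and running it on a CPU obviously doesn't involve tensor cores; it just shows the shape of the workload.

import numpy as np

# A batch of small FP16 matrices, the kind of workload matrix hardware chews through.
batch, n = 64, 128
a = np.random.rand(batch, n, n).astype(np.float16)
b = np.random.rand(batch, n, n).astype(np.float16)

# Typical mixed-precision pattern: FP16 inputs, FP32 accumulation.
c = np.matmul(a.astype(np.float32), b.astype(np.float32))
print(c.shape, c.dtype)  # (64, 128, 128) float32

Regular shader ALUs can run the same math (RDNA supports packed FP16), just without dedicated matrix units, which is the point being made above: the question is speed, not capability.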

It doesn't matter that RDNA2 has less potential; what matters is how good AMD's software solution is.
The market has reached an agreement that this is a feature worth having; everything will support it soon, even Intel's new discrete GPUs.
 

rodrigolfp

Haptic Gamepads 4 Life
Any studio can make another Crysis, but they won't, so it's irrelevant what a top-end desktop GPU can do.
Yes, but just because devs will not make another Crysis doesn't mean that they can't, and that is the point. Plus there is always performance to consider. While consoles are doing a (pathetic) 30 fps, PC cards could push 120+.
 
What does the 'better tech' offer you if actual performance is not better? A bigger thing to digitally wave around?

Next questions...
3080 with 10 GB GDDR6X at $699 or 6800 XT with 16GB GDDR6 at $649?
3080 with 10 GB GDDR6X at $699 or 6900 XT with 16GB GDDR6 at $999?


But the performance is better. Look at Watch Dogs Legion, which just came out. It's extremely heavy on hardware at max settings and can't be played at 4K/60 natively on anything. But Nvidia has DLSS and ray tracing, which lets the game run at 4K/60. AMD will launch its $1,000 card and it will have to drop to 1440p to be playable, and without ray tracing.
 