RDNA3 rumor discussion

AMD snarked at Nvidia's supply problems by saying they wouldn't have a paper launch, only to have the 3090 wipe the floor with the whole RDNA 2 lineup. What's AMD's definition of a paper launch then, since they said Nvidia had one? I'd be curious.
I'm not talking about AMD's definition but the historical one, where a product is sampled to the press but stores get a handful of units only and are not restocked. I don't think the 3000 series launch was a paper one.

As I said above, 5nm, 6nm, 7nm, it doesn't matter. Silicon wafer manufacturers can't supply fast enough to meet demand. Some of TSMC's biggest suppliers are booked up to 2026. Lithography is not the biggest bottleneck here unless you want to push Apple out of their premium node.
Demand is currently collapsing, as you can see from AMD's/Intel's/Nvidia's earnings statements.

"Wafer demand in 2023 is expected to be flat with 2022 or decline slightly, while capacity is expected to grow about 7 per cent in 2023, signalling oversupply, according to Mr Dale Gai, research director at Counterpoint Research."

https://www.straitstimes.com/asia/east-asia/the-sudden-reversal-of-the-global-chip-shortage
 
Hoping they announce something competitive price- and performance-wise. The RX 6000 series does great in raster, but the lack of competitive ray tracing performance and of a decent DLSS alternative at the time hurt them.

The 5700XT I bought for £400 a few years back has really held its value for me. I hope they give me a reason to upgrade.
 



Seems to be as beefy as some of the AIB solutions. Really nice.
 
https://overclock3d.net/news/misc_hardware/amd_s_r_d_spending_has_exploded_in_2021/1

AMD's R&D spend for the full year of 2021 was 2.8 billion US dollars. That covers the whole company, every department, and future projects. So the budget for the ongoing RDNA series should be in the high double-digit millions, maybe triple-digit millions. I don't think that's very much, but what counts as too much or not enough is relative.
Yeah, so pure speculation. You actually haven't got the faintest idea of how much went into the R&D of RDNA 3.
 
It helps that Nvidia has burned through a lot of goodwill with customers over their recent releases (or non-release), which drew backlash not only from gamers and enthusiasts but from reviewers as well: trying to sell a 60 Ti/70-class GPU as an 80, and introducing extreme price hikes in the mid/high-end stack on top of melting adapters. AMD just needs to knock it out of the park by offering competitive products at cheaper prices. Undercut them by a significant amount across the stack, lead on value, and show us that their cards are worth considering, rather than playing the second-best, slightly cheaper option.

Sadly, I can't see AMD taking advantage of the situation, with current rumors saying they are only releasing the RX 7900 XT/XTX this year, leaving Nvidia the rest of the market for Q4 and maybe Q1 next year. And consider that most high-end buyers want an Nvidia card, not an AMD one, since it offers way more features and most likely much more potent ray tracing performance.

The big problem is that AMD has been behind Nvidia for too many generations, so Nvidia holds the majority of the mindshare. For AMD to change consumers' minds, they have to offer RT performance in the same ballpark as Nvidia's latest, similar feature sets, and CUDA-like performance for non-gaming tasks. The former I don't see happening any time soon. So the only way AMD can take significant market share is by being super aggressive and starting a pricing war. With a big recession hitting across the globe, this is the time to be marketing and releasing not only high-end but also mid-range and low-end cards to gamers, and to build up trust in the brand for the next push.
Not sure, this seems like a "can AMD please help drive the price of the RTX card I want to buy from nVIDIA lower so I can give nVIDIA my money?".

Just being aggressive on price may not be enough, especially if they cannot then back it up with volume, leaving PC vendors and resellers high and dry.
 
Not sure, this seems like a "can AMD please help drive the price of the RTX card I want to buy from nVIDIA lower so I can give nVIDIA my money?".

Just being aggressive on price may not be enough, especially if they cannot then back it up with volume, leaving PC vendors and resellers high and dry.
They need to offer at least the same performance or better for less, or it's pointless.

If they release a more powerful card for the same price, Nvidia will release the 4090 Ti and beat them.

If they offer the same performance or more for slightly less (5-10%), people will still buy Nvidia.

They need to release a card that can be competitive even if Nvidia slashes their prices: at least 20% less for the same performance.

I don't see that happening; it's highly likely they'll adapt to Nvidia's pricing scheme and try to squeeze out as much money as they can.
 
People are expecting a monster MCM with a die roughly equal to a 4090 in area, but none of the negatives.
The best estimates we have right now are 1x 308mm2 of N5 Graphics Complex Die (GCD; contains the WGPs) and 6x 37.5mm2 of N6 Memory Complex Dies (MCD; these house the GDDR PHYs and the Infinity Cache/xGMI/Infinity Fabric links to the GCD).
So you're looking at a total die area of around 533mm2, compared to AD102's 608mm2.

So no, not exactly a monster by any definition.
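
For anyone who wants to sanity-check those numbers, the arithmetic is trivial (a quick Python sketch; the areas are the leaked estimates quoted above, not confirmed specs):

```python
# Quick sanity check of the rumored Navi 31 die-area figures quoted above.
# All numbers are leaked estimates, not confirmed specs.
GCD_AREA_MM2 = 308.0    # single N5 Graphics Complex Die
MCD_AREA_MM2 = 37.5     # each N6 Memory Complex Die
MCD_COUNT = 6
AD102_AREA_MM2 = 608.0  # Nvidia's monolithic AD102, for comparison

total_navi31 = GCD_AREA_MM2 + MCD_COUNT * MCD_AREA_MM2
print(f"Navi 31 total silicon: {total_navi31:.0f} mm^2")   # ~533 mm^2
print(f"AD102 monolithic:      {AD102_AREA_MM2:.0f} mm^2")
print(f"Difference: {AD102_AREA_MM2 - total_navi31:.0f} mm^2 "
      f"(~{100 * (AD102_AREA_MM2 - total_navi31) / AD102_AREA_MM2:.0f}% less total area)")
```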

MCM is automatically gonna pull more heat compared to monolithic.

Ah, so you have the engineering samples? Yes, interconnects tend to draw more power, but how much power is that exactly?

MCM is going to add latency.

How does this impact GPU performance, if at all?
There aren't multiple GCDs; all of the compute is on a single die. They have simply disaggregated the compute from the fixed-function IO/cache, which doesn't scale as well. So I can't see this having a significantly negative impact. Besides, CPUs are far more latency sensitive than GPUs, and Ryzen multi-chip processors seem to handle it just fine, as the core physical design has absolutely exceptional latency handling. I'm sure a lot of that expertise has cross-pollinated into Radeon Technologies Group.

The more chiplets, the more crossbars, and the more often data has to make a jump at a node; it's the basics of NUMA topology.

Where did you get the impression that this is NUMA? Single compute die.

"B..b.. but Ryzen?" you say, CPU tasks not sensitive inter-GPM bandwidth and local data to latency like GPUs are.

GPUs are designed to hide latency far more than CPUs. GDDR memory has significantly higher latency than normal DDR; the tradeoff is higher bandwidth. That's why GPUs use high-bandwidth, high-latency GDDR, while CPUs use low-bandwidth, low-latency DDR.

AMD's MI200s and Nvidia's (2) H100 chipsets were MCM and were made for tasks with low latency requirements, such as scientific computing. NVLink's 900GB/s and the MI200's Infinity Fabric at 100GB/s per link (8 links providing 800GB/s) are still no match for the whopping 2.5TB/s Apple built for the M1 Ultra.
Do you know what packaging technology they're using for this GPU? Because Apple is using a technology that TSMC developed; it's probably pretty easy for AMD to license that same tech if they need high bandwidth between the dies.
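
To put those interconnect numbers side by side (a rough sketch; the figures are the vendor-quoted aggregate peaks mentioned in the post above, not measurements):

```python
# Die-to-die / chip-to-chip link bandwidths cited in the discussion above (GB/s).
# Vendor-quoted aggregate peaks, not measured numbers.
links = {
    "NVLink (H100, aggregate)": 900,
    "MI200 Infinity Fabric (8 x 100 GB/s links)": 8 * 100,
    "Apple M1 Ultra UltraFusion": 2500,
}
for name, gbs in sorted(links.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:45s} {gbs:5d} GB/s")
```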


That two-chiplet MCM basically doubled CPU performance, while the GPU side only got a +50% increase on their own freaking API! Because don't forget, this segmentation of tasks that are ultra sensitive to fast local packets of data, such as FSR/RT/ML, will have to be entirely invisible from the API's point of view, and since we're on PC, it's on AMD's shoulders to write the drivers for that.

Yes, that is why each MCD has 16MB (or maybe more) of cache: to maintain data locality. I'm sure that will do a great deal to help handle RT and BVH structures.

What else is rumored... oh, let's add 4GHz into the mix, 100 fewer watts, matching or surpassing a 4090 in rasterization, expensive communication crossbars that sit outside of lithography, yet still managing to come in $600 below the competition. Basically pulling performance out of a quantum parallel universe.

What? AMD took Navi 10, a GPU that clocked at 1900MHz on N7, and made Navi 21, a GPU that sits comfortably at 2500MHz and is often capable of more like 2800MHz, also on N7. They gained significant clockspeed without a node transition. Now they're moving from N7 to N5, which is often described as a bit of a unicorn node. Why would you think they wouldn't be able to extract similar gains in frequency yet again? I don't expect 4GHz, but 3.5GHz is certainly not outside the realms of possibility.
Especially if the architecture has been specifically designed to clock fast by way of its physical design. And RDNA is designed to clock high.
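
To illustrate that frequency argument with rough numbers (a back-of-the-envelope sketch; the clocks are the ones mentioned above and the projected uplift is purely an assumption, not a leak):

```python
# Navi 10 -> Navi 21 gained clocks on the same N7 node; the question is whether a
# similar relative uplift is plausible on top of the N7 -> N5 transition.
navi10_mhz = 1900   # typical Navi 10 clock, as quoted above
navi21_mhz = 2500   # comfortable Navi 21 clock (up to ~2800 with headroom)

same_node_gain = navi21_mhz / navi10_mhz
print(f"Same-node (N7) uplift: {same_node_gain:.2f}x")        # ~1.32x

# Assumption for illustration only: a comparable relative uplift on N5.
projected_mhz = navi21_mhz * same_node_gain
print(f"Hypothetical RDNA 3 clock: ~{projected_mhz:.0f} MHz")  # ~3.3 GHz, in the 3.5 GHz ballpark
```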

Prices are subject to change at the drop of a hat, so I'm not going to engage with you on what people expect the prices to be, because they're all probably wrong. However, it seems that you're just unwilling to accept that AMD might be able to engineer a more power-efficient architecture than Nvidia. The lead of Radeon, one David Wang, was a senior engineer who helped develop Cypress (of HD 5870 fame), which was vastly more power efficient than Fermi way back when. Also please note that the 4090 is overvolted into oblivion. You can drop the power down to 350W and you'll barely lose 5-10% performance. They've overclocked the shit out of it out of the gate to extract as much performance as possible. Wonder why that is?
Anyway, I guess AMD don't feel the need to overclock the shite out of their GPU. If they can get close enough, say within 10%, but at significantly less power, then that's good enough.
 
the 4090 is overvolted into oblivion. You can drop the power down to 350W and you'll barely lose 5-10% performance. They've overclocked the shit out of it out of the gate to extract as much performance as possible. Wonder why that is?

Because they needed to justify the price increase over the 3090 (especially in a market where GPU prices are being pressured down), and/or they're predicting a significantly more competitive offering from AMD at the top end.

But yes, in the past AMD would overvolt/overclock their GPUs like hell to better keep up with higher performance Nvidia offerings, like we saw with Vega. That could indeed be Nvidia's strategy here.
 
They need to offer at least the same performance or better for less, or it's pointless.

If they release a more powerful card for the same price, Nvidia will release the 4090 Ti and beat them.

If they offer the same performance or more for slightly less (5-10%), people will still buy Nvidia.

They need to release a card that can be competitive even if Nvidia slashes their prices: at least 20% less for the same performance.

I don't see that happening; it's highly likely they'll adapt to Nvidia's pricing scheme and try to squeeze out as much money as they can.
They need to have better ray tracing performance because of the Nvidia mindshare. RDNA 2 cards already matched or surpassed Nvidia cards in performance, but most people still wanted to buy a 3050 over a 6600 non-XT that was the same price with insanely more performance. I will say they also need a better content creation suite for those in that hobby; it's another reason a lot of people go Nvidia. Thursday is going to be a real interesting day for the DIY community. Hopefully AMD delivers so people will have to really think about their choices this generation on what to buy. Better for us.
 
They need to have better ray tracing performance because of the Nvidia mindshare. RDNA 2 cards already matched or surpassed Nvidia cards in performance, but most people still wanted to buy a 3050 over a 6600 non-XT that was the same price with insanely more performance. I will say they also need a better content creation suite for those in that hobby; it's another reason a lot of people go Nvidia. Thursday is going to be a real interesting day for the DIY community. Hopefully AMD delivers so people will have to really think about their choices this generation on what to buy. Better for us.
I don't believe for a second ray tracing can make a difference. It has to be price and performance. The RX 6000 series was only slightly cheaper than the RTX 3000 series (ignoring the crypto boom).

DLSS 2 has been a fantastic reason to go with Nvidia over AMD; they need to be cheaper and have FSR 3 be comparable to even have a chance of gaining some market share.
 
I don't believe for a second ray tracing can make a difference. It has to be price and performance. The RX 6000 series was only slightly cheaper than the RTX 3000 series (ignoring the crypto boom).

DLSS 2 has been a fantastic reason to go with Nvidia over AMD; they need to be cheaper and have FSR 3 be comparable to even have a chance of gaining some market share.
Ray tracing will surely make a huge difference. It has to land in between Ampere's and Lovelace's RT performance. Because what did you just say? DLSS has been a fantastic reason to go with Nvidia. Even with FSR, the RT performance on the 6000 series wasn't good because the cards weren't really designed with ray tracing in mind. Now RDNA 3 has been designed with RT performance in mind, and FSR 2.1 has been on par with DLSS 2. Those are the very first things folks bring up when talking about why they won't go with AMD in the first place: RT performance and DLSS. Now that AMD can hang with DLSS 2 with their FSR 2.1, and if RDNA 3 has respectable RT performance, the tide may change. I think AMD will be slightly cheaper again, not the 30% or 40% that the masses claim they need to be in order to be competitive. Nvidia's mindshare is currently taking a hit because of the cable fiasco and the 4080 12GB not being a 4080. But again, we'll get down to brass tacks on Thursday when the 7000 series is announced.
 
I don't believe for a second ray tracing can make a difference. It has to be price and performance. The RX 6000 series was only slightly cheaper than the RTX 3000 series (ignoring the crypto boom).

DLSS 2 has been a fantastic reason to go with Nvidia over AMD; they need to be cheaper and have FSR 3 be comparable to even have a chance of gaining some market share.

If price/performance while ignoring RT is what you believe will make a difference, then the RX 6000s pass with flying colors.
The 6800 XT was 50 dollars cheaper than the RTX 3080 but outperformed it more often than not, and if you count crypto/pandemic shenanigans the 6800 XT was markedly cheaper than the RTX 3080.
The 6800 XT was rivalling the RTX 3090/3080 Ti, which were well more expensive even sans the crypto madness.
The 6800 was ~50 bucks more expensive than the 3070 but was almost an entire GPU class above it.

Unless you are saying that at the 7800 (XT) level the GPUs should be 100 or more dollars cheaper while offering better performance than their direct competitors, I really don't know what more AMD can do from a raster perspective to win mindshare from Nvidia.

With the RX 7000s seemingly going for wider memory buses, then even at 4K, where the gap between the RX 6000s and the RTX 30s gets small, we should expect AMD's raster performance to not only match but likely beat all the direct RTX 40 competitors.
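
For context, the bus-width point boils down to simple bandwidth arithmetic (a sketch; the RDNA 3 bus width and memory speed used here are rumored figures, not confirmed):

```python
# Peak GDDR bandwidth: bus_width_bits / 8 bytes * data rate in Gbps.
def gddr_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(gddr_bandwidth_gb_s(256, 16))  # RX 6800 XT class: 256-bit @ 16 Gbps -> 512 GB/s
print(gddr_bandwidth_gb_s(384, 20))  # rumored Navi 31: 384-bit @ 20 Gbps -> 960 GB/s (assumption)
```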

[Chart: relative GPU performance at 2560x1440]
 
If price/performance while ignoring RT is what you believe will make a difference, then the RX 6000s pass with flying colors.
The 6800 XT was 50 dollars cheaper than the RTX 3080 but outperformed it more often than not, and if you count crypto/pandemic shenanigans the 6800 XT was markedly cheaper than the RTX 3080.
The 6800 XT was rivalling the RTX 3090/3080 Ti, which were well more expensive even sans the crypto madness.
The 6800 was ~50 bucks more expensive than the 3070 but was almost an entire GPU class above it.

All true...but I could still play through CP 2077 at a higher 4K DLSS 2 fidelity on my 3080 because the RT performance was so much better.
 
If price/performance while ignoring RT is what you believe will make a difference, then the RX 6000s pass with flying colors.
The 6800 XT was 50 dollars cheaper than the RTX 3080 but outperformed it more often than not, and if you count crypto/pandemic shenanigans the 6800 XT was markedly cheaper than the RTX 3080.
The 6800 XT was rivalling the RTX 3090/3080 Ti, which were well more expensive even sans the crypto madness.
The 6800 was ~50 bucks more expensive than the 3070 but was almost an entire GPU class above it.

Unless you are saying that at the 7800 (XT) level the GPUs should be 100 or more dollars cheaper while offering better performance than their direct competitors, I really don't know what more AMD can do from a raster perspective to win mindshare from Nvidia.

With the RX 7000s seemingly going for wider memory buses, then even at 4K, where the gap between the RX 6000s and the RTX 30s gets small, we should expect AMD's raster performance to not only match but likely beat all the direct RTX 40 competitors.

[Chart: relative GPU performance at 2560x1440]
If the relative performance increase is that good, this might be the first time I purchase an AMD GPU since the 6950. All I ask for is a viable DLSS 2.0 competitor. The current iteration of FSR looks crap and I don't like their CAS solution. I want proper ML-led upscaling. DLSS 4K Quality looks better than native and the IQ is utterly impressive.
 
If the relative performance increase is that good, this might be the first time I purchase an AMD GPU since the 6950. All I ask for is a viable DLSS 2.0 competitor. The current iteration of FSR looks crap and I don't like their CAS solution. I want proper ML-led upscaling. DLSS 4K Quality looks better than native and the IQ is utterly impressive.
This is true, and nothing anyone else does will come close, because no-one else has made the investment in AI that Nvidia has.
 
Ray tracing will surely make a huge difference. It has to land in between Ampere's and Lovelace's RT performance. Because what did you just say? DLSS has been a fantastic reason to go with Nvidia. Even with FSR, the RT performance on the 6000 series wasn't good because the cards weren't really designed with ray tracing in mind. Now RDNA 3 has been designed with RT performance in mind, and FSR 2.1 has been on par with DLSS 2. Those are the very first things folks bring up when talking about why they won't go with AMD in the first place: RT performance and DLSS. Now that AMD can hang with DLSS 2 with their FSR 2.1, and if RDNA 3 has respectable RT performance, the tide may change. I think AMD will be slightly cheaper again, not the 30% or 40% that the masses claim they need to be in order to be competitive. Nvidia's mindshare is currently taking a hit because of the cable fiasco and the 4080 12GB not being a 4080. But again, we'll get down to brass tacks on Thursday when the 7000 series is announced.
I believe they will be competitive and will have great RT performance (at least as good as Ampere, ideally better), but I still don't think RT is what will make people change their mind. Cut the price low enough and people won't care about RT. FSR3 as good as DLSS2 would be enough in my opinion. It all comes down to price.

Remember it's just a rumor...
If that is the XT and not the XTX, we're in for a ride.

If price/performance while ignoring RT is what you believe will make a difference, then the RX 6000s pass with flying colors.
The 6800 XT was 50 dollars cheaper than the RTX 3080 but outperformed it more often than not, and if you count crypto/pandemic shenanigans the 6800 XT was markedly cheaper than the RTX 3080.
The 6800 XT was rivalling the RTX 3090/3080 Ti, which were well more expensive even sans the crypto madness.
The 6800 was ~50 bucks more expensive than the 3070 but was almost an entire GPU class above it.

Unless you are saying that at the 7800 (XT) level the GPUs should be 100 or more dollars cheaper while offering better performance than their direct competitors, I really don't know what more AMD can do from a raster perspective to win mindshare from Nvidia.

With the RX 7000s seemingly going for wider memory buses, then even at 4K, where the gap between the RX 6000s and the RTX 30s gets small, we should expect AMD's raster performance to not only match but likely beat all the direct RTX 40 competitors.
I believe AMD needs to undercut Nvidia by at least 15% to be competitive. I'm not saying their cards aren't good, it's just that people will always find excuses to go for Nvidia.

A card as fast as the 4090 in raster being $100 cheaper won't cut it. Not even $200 cheaper. They can't compete on RT and can't compete with DLSS 2 in quality (for now). At that price point Nvidia won't even have to slash their prices and will still easily outsell them.

The only way I can see them gaining market share is by cutting prices to offer the same level of performance at the price of Nvidia's lower tier. In other words, their 4090 equivalent should be priced around the 4080 16 GB. Their strategy of sandwiching in between doesn't work; people will just buy Nvidia. The 6800 XT being $50 cheaper with much worse RT and no DLSS 2 equivalent wasn't a good deal in my opinion, and without mining those cards would have been quickly discounted.

To be honest, I'm not even saying they should do this or that; I'm just thinking it's almost impossible to simply take market share from Nvidia and they'll have to go at it hard. That is, assuming that's their goal. For all we know they might not be interested in selling more cards.
 
All true...but I could still play through CP 2077 at a higher 4K DLSS 2 fidelity on my 3080 because the RT performance was so much better.

If the relative performance increase is that good, this might be the first time I purchase an AMD GPU since the 6950. All I ask for is a viable DLSS 2.0 competitor. The current iteration of FSR looks crap and I don't like their CAS solution. I want proper ML-led upscaling. DLSS 4K Quality looks better than native and the IQ is utterly impressive.

Which is where Nvidia is really making headway.
People who want to ignore RT and AI are missing how Nvidia has nabbed mindshare since the RTX 2000s.
The 5600XT should have been a golden child of a GPU, but the RTX 2060 embarrassed it cuz RT and DLSS were making that card so much more worth its dollar.
They are giving feature sets that you might not actually use all the time but are nice to have for those games that will utilize them.

The RTX series changed so much because the added benefits made a huge difference in making people feel like they were getting more value for their buck.
At the top end, gamers know what these features are and will gravitate towards that feature set.

I believe AMD needs to undercut Nvidia by at least 15% to be competitive. I'm not saying their cards aren't good, it's just that people will always find excuses to go for Nvidia.

A card as fast as the 4090 in raster being $100 cheaper won't cut it. Not even $200 cheaper. They can't compete on RT and can't compete with DLSS 2 in quality (for now). At that price point Nvidia won't even have to slash their prices and will still easily outsell them.

The only way I can see them gaining market share is by cutting prices to offer the same level of performance at the price of Nvidia's lower tier. In other words, their 4090 equivalent should be priced around the 4080 16 GB. Their strategy of sandwiching in between doesn't work; people will just buy Nvidia. The 6800 XT being $50 cheaper with much worse RT and no DLSS 2 equivalent wasn't a good deal in my opinion, and without mining those cards would have been quickly discounted.

To be honest, I'm not even saying they should do this or that; I'm just thinking it's almost impossible to simply take market share from Nvidia and they'll have to go at it hard. That is, assuming that's their goal. For all we know they might not be interested in selling more cards.

Mate, didn't you just post that we were ignoring RT and looking purely at price/performance?
AMD isn't catching up on RT, at least not from everything that's leaked so far.
The RX 8000 series is expected to have something similar to Intel's RT implementation, which should push them right up there if not beyond Nvidia.
But with the RX 7000s they will lose in RT and will be a tier down at every junction.
AMD have a lot of work to do if they expect to catch up in AI too; they will need to dedicate die space for that.
FSR 3 or whatever could do a hell of a job, cuz the latest implementation is actually sharp enough to almost fool me.


AMD aren't foolish enough to actually price their 7900 XT the same as an RTX 4090.
It's gonna be markedly cheaper.
The 7800 family will also be much cheaper than the 4080 16GB.

But in my opinion, as long as it is lacking in RT/AI it will have a hard time convincing the hardcore fans (the people who actually buy expensive GPUs) to migrate.
 
Mate, didn't you just post that we were ignoring RT and looking purely at price/performance?
AMD isn't catching up on RT, at least not from everything that's leaked so far.
The RX 8000 series is expected to have something similar to Intel's RT implementation, which should push them right up there if not beyond Nvidia.
But with the RX 7000s they will lose in RT and will be a tier down at every junction.
AMD have a lot of work to do if they expect to catch up in AI too; they will need to dedicate die space for that.
FSR 3 or whatever could do a hell of a job, cuz the latest implementation is actually sharp enough to almost fool me.


AMD aren't foolish enough to actually price their 7900 XT the same as an RTX 4090.
It's gonna be markedly cheaper.
The 7800 family will also be much cheaper than the 4080 16GB.

But in my opinion, as long as it is lacking in RT/AI it will have a hard time convincing the hardcore fans (the people who actually buy expensive GPUs) to migrate.
I don't mean to say RT doesn't matter at all, just that it's secondary and not as important as price.

You get to choose between a 3080 and a 6800 XT. The 6800 XT is pretty much the same in rasterization, but doesn't have DLSS 2 and is worse in RT. People will think that is worth $50.
If I can pay $650, I can easily afford a $700 card. And since I'm spending that money because I want the best, I'll go for the best.
What if the 6800 XT was priced $150 less, though? You can bet people wouldn't give a shit about RT.

Again, not just undercutting, they need to be much cheaper. Which, as you said, should easily be the case against the 4090. Their previous strategy was trying to slot in between Nvidia's offerings, and all that does is probably convince people to add a bit more and buy the Nvidia alternative anyway.
 
This is exciting. My 4090 from Amazon got delayed so I am more interested in this than I would be otherwise.

If AMD has a reasonable card for $1,200 that significantly outdoes the 3090 and will launch relatively soon, I'll certainly give it consideration.

How well do AMD cards cope in a home theater setting with HDMI? If I keep my PC on but turn off the TV and AVR then come back an hour later and turn the TV on does the card wake up smoothly? Will it still put out Audio without intervention? This use case has been problematic in the past but has been pretty good with the RTX 3000 series. Sometimes youtube thinks audio is fucked if Kodi is open but not in use. And what about HDR? I sometimes have to fight with it. My TV needs to be put in Game Mode to allow HDR and then I need to toggle it in Windows. It untoggles itself randomly and sometimes that puts the TV out of game mode. With nvidia I know the issues and work around them on reflex. I am curious and scared how AMD would perform in this scenario.
 
I believe AMD needs to undercut Nvidia by at least 15% to be competitive. I'm not saying they aren't good, it's just that people will always find excuses to go for Nvidia.

A card as fast as the 4090 in raster being $100 cheaper won't cut it. Not even $200 cheaper. They can't compete on RT, can't compete with DLSS2 in quality (for now). At that price point Nvidia won't even slash their price and outsell them easily.

The only way I can see them gaining market share if by cutting their price to offer the same level of performance as the same price as their lower tier. In other words, their 4090 equivalent should be priced around the 4080 16 GB. Their strategy of sandwiching doesn't work, people will buy Nvidia. The 6800xt being $50 cheaper with much worse RT and no DLSS2 equivalent wasn't a good deal in my opinion and in a situation without mining those cards would have been quickly discounted.

To be honest I am not even say they should do this or that, I'm just thinking it's almost impossible to simply gain some market share from Nvidia and they'll have to go at it hard. That is, assuming that's their goal. For what we know they might not be interested in selling more cards.
I think this is right. If their top card is $1399 and it ties 4090 in raster performance but performs like a 3090 in ray tracing + has no good answer for DLSS…. most of the people willing to spend that much on a GPU will not be OK with that compromise.

At $1199 it'd be competing directly against the 4080. That is a much more favorable comparison IMO.
 
If Navi 31 were a monolithic GPU on 5nm it would likely be ~400mm^2, so the 4080 16 GB is going to be the best comparison point.
 
I think this is right. If their top card is $1399 and it ties 4090 in raster performance but performs like a 3090 in ray tracing + has no good answer for DLSS…. most of the people willing to spend that much on a GPU will not be OK with that compromise.

At $1199 it'd be competing directly against the 4080. That is a much more favorable comparison IMO.

FSR 2.1 is already a good answer to DLSS 2; there just aren't many games that use it right now.

 
If Navi 31 was a monolithic GPU on 5nm it would likely be ~400mm^2, so the 4080 16 GB is going to be the best comparison point.
It might not be as linear as that.
Bringing the memory controllers and SRAM onto the main die could increase the area by more than 100mm^2; IIRC, SRAM doesn't scale down very well on smaller nodes.
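
A rough illustration of why the monolithic estimate might be optimistic (the shrink factors below are illustrative assumptions, not TSMC figures):

```python
# Logic shrinks well from N6 to N5, but SRAM and GDDR PHYs barely shrink,
# so folding the MCDs back into a monolithic N5 die may cost more area than expected.
gcd_n5_mm2 = 308.0            # compute die, already on N5 (leaked estimate)
mcd_total_n6_mm2 = 6 * 37.5   # ~225 mm^2 of Infinity Cache + GDDR PHY on N6

for label, shrink in [("optimistic, logic-like shrink", 0.65),
                      ("pessimistic, SRAM/PHY-dominated shrink", 0.90)]:
    monolithic = gcd_n5_mm2 + mcd_total_n6_mm2 * shrink
    print(f"{label}: ~{monolithic:.0f} mm^2 monolithic on N5")
# Roughly 454-511 mm^2 - noticeably above the ~400 mm^2 figure quoted above.
```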
 
If AMD tries to undercut Nvidia and position a high-end raster card that competes with a 4090 against the 4080, they would just be admitting that RTX and DLSS are far superior and matter a lot.

Except that FSR is nearly as good now, and ray tracing is at best a neat effect and at worst a gimmick. Until games are ray tracing exclusive it won't be that big a deal. Even now it's only a handful of games where you see a tangible difference.

However, given the price, anyone who's willing to pay over $1000 is already splurging, so $100-200 won't give them pause, and they will go with Nvidia and its slight benefits.

Really curious to see what happens in a few days. But this whole trend of high-end GPUs going over $1000 is kinda lame, and it ends up pulling other GPUs well above console prices. Maybe AMD will shock us, but I imagine they will price near Nvidia and ride the high margins with a low market share.
 
I've got one game with FSR 2.1 and it looks great. Visually, it's a wash for me. The real difference between that and DLSS is the number of games supporting it.
 
If price/performance while ignoring RT is what you believe will make a difference, then the RX 6000s pass with flying colors.
The 6800 XT was 50 dollars cheaper than the RTX 3080 but outperformed it more often than not, and if you count crypto/pandemic shenanigans the 6800 XT was markedly cheaper than the RTX 3080.
The 6800 XT was rivalling the RTX 3090/3080 Ti, which were well more expensive even sans the crypto madness.
The 6800 was ~50 bucks more expensive than the 3070 but was almost an entire GPU class above it.
That's demonstrably false. At 1080p, the 6800 XT is faster. At 1440p, they're equal. At 4K, the 3080 is faster. The crypto boom also affected AMD cards last I checked, with the likes of the 5700 XT selling for $1,000 USD.

Throw in RT performance and DLSS vs FSR 2.0, the 3080 is more often than not the better performer since most buyers at that performance tier will play at 1440p and above, not 1080p.

As for rivaling the 3080 Ti/3090, well, duh, but the 3080 does as well because those cards are only like 5-10% faster. The 3080 on the flip side also rivals the 6900 XT and outright stomps the 6950 XT in RT workloads.

It's 16GB and FSR vs 10/12GB and RT+DLSS. I wouldn't say the 6800 XT is passing with flying colors unless you can find it for a chunk less than the 3080.
 
The best estimates we have right now are 1x 308mm2 of N5 Graphics Complex Die (GCD; contains the WGPs) and 6x 37.5mm2 of N6 Memory Complex Dies (MCD; these house the GDDR PHYs and the Infinity Cache/xGMI/Infinity Fabric links to the GCD).
So you're looking at a total die area of around 533mm2, compared to AD102's 608mm2.

So no, not exactly a monster by any definition.

Quite a monster when it's got so many modules that have to interconnect outside of lithography. I thought their flagship would have 2 GCDs? That was the rumor.

Ah, so you have the engineering samples? Yes, interconnects tend to draw more power, but how much power is that exactly?

Wow. "dO yOu Ahve SaAMPles?!" If we stop for this question, then shut down any conversations around this GPU.
It's more, that's all there needs to be said versus a monolithic design. The point is if you match performance with interconnects, you cannot also be more efficient in power than monolithic. Something will have to give.

Infinity fabric for that processor is costing 50% of the power delivery, GPUs would require even faster links. I don't think a single GCD or 2 GCD would be as drastic as that processor, but there's a cost.

How does this impact GPU performance, if at all?
There aren't multiple GCDs; all of the compute is on a single die. They have simply disaggregated the compute from the fixed-function IO/cache, which doesn't scale as well. So I can't see this having a significantly negative impact. Besides, CPUs are far more latency sensitive than GPUs, and Ryzen multi-chip processors seem to handle it just fine, as the core physical design has absolutely exceptional latency handling. I'm sure a lot of that expertise has cross-pollinated into Radeon Technologies Group.

Where did you get the impression that this is NUMA? Single compute die.

So no channel and memory interleaving then? Bold plan, Cotton. Infinity Fabric IS Non-Uniform Memory Access. Otherwise a CU/node that tries to communicate with memory that sits further away would choke (stall) while waiting.

GPUs are designed to hide latency far more than CPUs. GDDR memory has significantly higher latency than normal DDR; the tradeoff is higher bandwidth. That's why GPUs use high-bandwidth, high-latency GDDR, while CPUs use low-bandwidth, low-latency DDR.


Do you know what packaging technology they're using for this GPU? Because Apple is using a technology that TSMC developed; it's probably pretty easy for AMD to license that same tech if they need high bandwidth between the dies.

You're confusing the time sensitivity of GPUs by thinking that the high bandwidth requirement isn't a tradeoff between feeding those thousands of parallel compute units and latency. Of course feeding them is more important, because a bandwidth-starved GPU becomes VRAM latency bound, which causes far bigger stalls than the DDR vs GDDR latency difference ever would.

Yes, that is why each MCD has 16MB (or maybe more) of cache: to maintain data locality. I'm sure that will do a great deal to help handle RT and BVH structures.

What? AMD took Navi 10, a GPU that clocked at 1900MHz on N7, and made Navi 21, a GPU that sits comfortably at 2500MHz and is often capable of more like 2800MHz, also on N7. They gained significant clockspeed without a node transition. Now they're moving from N7 to N5, which is often described as a bit of a unicorn node. Why would you think they wouldn't be able to extract similar gains in frequency yet again? I don't expect 4GHz, but 3.5GHz is certainly not outside the realms of possibility.
Especially if the architecture has been specifically designed to clock fast by way of its physical design. And RDNA is designed to clock high.

Prices are subject to change at the drop of a hat, so I'm not going to engage with you on what people expect the prices to be, because they're all probably wrong. However, it seems that you're just unwilling to accept that AMD might be able to engineer a more power-efficient architecture than Nvidia. The lead of Radeon, one David Wang, was a senior engineer who helped develop Cypress (of HD 5870 fame), which was vastly more power efficient than Fermi way back when. Also please note that the 4090 is overvolted into oblivion. You can drop the power down to 350W and you'll barely lose 5-10% performance. They've overclocked the shit out of it out of the gate to extract as much performance as possible. Wonder why that is?
Anyway, I guess AMD don't feel the need to overclock the shite out of their GPU. If they can get close enough, say within 10%, but at significantly less power, then that's good enough.

Eventually AMD will up the GCD count, if not already going with 2 GCDs for their flagship, just like Intel plans on tiling their Xe GPUs; otherwise it doesn't make sense to bring up Infinity Fabric, and it makes even less sense to move away from monolithic.

Gaming GPUs have totally different time sensitivities and workloads than professional compute/bitcoin/etc. Splitting memory across pools means that modern RT/ML, which effectively requires information from other parts of the current/previous frame that might be stored in a pool without a direct path, will require a node jump via the Infinity Fabric (NUMA). All that for what advantage?

According to AMD's very own David Wang, Infinity Fabric works well with CPUs because NUMA is handled by the OS and is invisible to applications, making it easier to scale the workload. There's no such thing for GPUs right now, nothing from Intel/Apple/AMD/Nvidia. It seems like only a software solution would work.

Anyway, two days of waiting. Look, I'm all for AMD blowing the roof off; I had ATI/AMD products up until the 1060, but I'm not seeing realistic expectations. People are rooting for some kind of Disney sports-movie miracle twist where evil Nvidia decided to party instead of studying and lost a huge technological edge, while AMD runs a freaking charity handing out cheap prices to customers. I mean, come on. Something's gotta give, that's all I'm saying. You can't have everything better, especially now that they're both on the same top-tier foundry. Had Nvidia stayed with Samsung... then yeah, I could see it.
 


This really puts the value proposition of the 4000 series in perspective.

I hope AMD offer competitive prices because someone really needs to knock Nvidia down a notch
 
I've got one game with FSR 2.1 and it looks great. Visually, it's a wash for me. The real difference between that and DLSS is the number of games supporting it.
The neat thing about FSR is that it uses the same hooks as DLSS so you can replace the .dll files and run FSR instead of DLSS.
 
If AMD tries to undercut Nvidia and position a high-end raster card that competes with a 4090 against the 4080, they would just be admitting that RTX and DLSS are far superior and matter a lot.

Except that FSR is nearly as good now, and ray tracing is at best a neat effect and at worst a gimmick. Until games are ray tracing exclusive it won't be that big a deal. Even now it's only a handful of games where you see a tangible difference.

However, given the price, anyone who's willing to pay over $1000 is already splurging, so $100-200 won't give them pause, and they will go with Nvidia and its slight benefits.

Really curious to see what happens in a few days. But this whole trend of high-end GPUs going over $1000 is kinda lame, and it ends up pulling other GPUs well above console prices. Maybe AMD will shock us, but I imagine they will price near Nvidia and ride the high margins with a low market share.
They could tout their cost-saving chiplet approach. Or just price it lower and point fingers at Nvidia and their prices. Someone with some tact would know how to spin it: no melting cables, fits in your case, costs 25% less, performs close to the 4090. Are all GPU enthusiasts really fans of overengineering?


This really puts the value proposition of the 4000 series in perspective.

I hope AMD offer competitive prices because someone really needs to knock Nvidia down a notch
It does, and it is one of my biggest gripes about the 4090 and the rest of the series. The 4090 is basically the 3080 Super of this gen and will be outclassed by the Ti variant, but no way in fuck would I want to wait a year to get a GPU that gets cooked by the next-gen one that once again has exclusive features that don't need to be withheld. I want at least two years of not dealing with trying to find a GPU. The 4080 is a fucking joke, and the former 4080 12GB is practically a 3080 with DLSS 3.

I'm half wanting to jump to AMD just because of how much a cunt Jensen is. The 4080 is such a piece of shit at 1200 bucks. It's almost like a giant marketing experiment, put 4080 on something and see how many morons will buy at that price. If it works, great. If not, do something less cunty next gen but keep things as they are and sell a ton of 4060s at 500 bucks a pop.
 
Jeez, aren't there any YouTube videos saying it's going to be a piece of shit? I need some anti-hype.
Don't you understand the hype cycle? The thing has to be revealed with a slight flaw, and then YouTube will explode with how that flaw makes AMD DEAD.
 

AMD Radeon RX 7000 "RDNA 3" GPU Lineup Rumor: 2x Faster Raster & Over 2x Ray Tracing Performance Versus RDNA 2, Power Efficiency Looks Amazing!




  • 5nm Process Node
  • Advanced Chiplet Packaging
  • Rearchitected Compute Unit
  • Optimized Graphics Pipeline
  • Next-Gen AMD Infinity Cache
  • Enhanced Ray Tracing Capabilities
  • Refined Adaptive Power Management
  • >50% Perf/Watt vs RDNA 2



FYI, AMD RT needs to improve more than 2X.
 
Interesting, but for anyone uninitiated to ATV: add plenty of salt.
Plus he's just piggybacking off the Angstronomics stuff that came out a while ago. Nothing new in that video beyond him running a few silicon wafer-related calculations.

Also, another video from MLID dropped today, but it was pretty light on actual new information. He claims he's been told the 7900 XTX reference card will not beat the 4090 in raster, but that's it really... everything else was just his opinions/hopes (i.e. that AMD should take the fight to Nvidia while team green is getting bad press over things like the melting power adapters and the cancellation of the 4080 12GB).

Honestly, the leaks around RDNA 3 so far have been kinda lame/underwhelming (or non-existent compared to RDNA 2). I cannot wait for the reveal so we can see who was genuine and who's been talking out of their ass.
 
FYI, AMD RT needs to improve more than 2X.
What AMD needs is a sub-$1k card with 20GB+ of fast memory and a level of performance in line with what their CPUs can actually handle.
The 4090 is overkill, that's my post-launch hype summary. I could've gone with a lower-end card and used the money left over for a better screen instead, but I don't do waiting, so...
 
What AMD needs is a sub-$1k card with 20GB+ of fast memory and a level of performance in line with what their CPUs can actually handle.
The 4090 is overkill, that's my post-launch hype summary. I could've gone with a lower-end card and used the money left over for a better screen instead, but I don't do waiting, so...
Based on the leaked 3.3 GHz clock speed and 12,288 stream processors, it would be about 81 TFLOPS of compute.

Ray tracing is a compute-driven path with BVH acceleration hardware (a search-engine-style accelerator).

The RX 6950 XT has about 24 TFLOPS of compute with 2 of 3 BVH acceleration hardware features; the result is about half of an RTX 3090 FE (>38 TFLOPS FP32 compute with 3 of 3 BVH acceleration hardware features).

For Navi 31, AMD is throwing a lot of compute FLOPS at the ray tracing problem.

Real-time ray tracing is a compute-TFLOPS and search-engine problem.
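
For reference, those TFLOPS figures fall straight out of the shader counts and clocks (a sketch; the Navi 31 numbers are leaked rumors and the boost clocks are approximate):

```python
# FP32 throughput: 2 ops per FMA * shader count * clock in GHz, in TFLOPS.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000

print(fp32_tflops(12288, 3.30))  # rumored Navi 31: ~81 TFLOPS (unconfirmed)
print(fp32_tflops(5120, 2.31))   # RX 6950 XT at ~2.31 GHz boost: ~24 TFLOPS
print(fp32_tflops(10496, 1.86))  # RTX 3090 FE at ~1.86 GHz typical boost: ~39 TFLOPS
```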
 