RDNA3 rumor discussion

Here is an older list of how much GPUs cost to make; they are very cheap to produce.
[images: GPU manufacturing cost breakdown charts]

It's very complicated to compare old GPUs on old process technology with current GPUs on the latest nodes. Wafer costs have exploded since the 7nm generation...
 
A 180W laptop/mobile GPU? Guys... tell me how you'll power that for more than 10 minutes.
LOL, obviously not on battery. You have to be plugged in.

The problem in that form factor is not how to power it, but how to cool it. Those laptops normally have much less power-hungry CPUs, and the GPU switches to the integrated one when running on battery.

Nintendo could make a home console with a high-end mobile graphics card and a mobile CPU from AMD, 16 GB of RAM, and a basic 256 GB SSD for $350, still make money on it, and destroy the PS5 and Xbox Series X.

The Switch and other portable devices try to stay between 15 and 20 watts. You can't go higher if you want decent battery life.

The Steam Deck, with last gen tech (RDNA 2) manages to perform more or less like a base PS4 with a much better CPU. If the next generation can double performance per watt, we might expect something as powerful as a PS4 Pro in that form factor.

The power of a 3090 in your hands at that power efficiency is probably 5+ years away, though arguably it's completely unnecessary for the resolutions mobile devices will target.
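
As a rough back-of-envelope illustration of that perf-per-watt argument (the figures below are assumed ballpark numbers for a Steam-Deck-class handheld, not official specs):

```python
# Back-of-envelope sketch of the perf-per-watt claim above.
# All figures are assumed ballpark numbers, not official specs.

HANDHELD_POWER_W = 15.0      # typical sustained GPU power budget for a handheld
CURRENT_TFLOPS = 1.6         # assumed Steam-Deck-class FP32 throughput
PS4_TFLOPS = 1.84            # commonly quoted base PS4 figure
PS4_PRO_TFLOPS = 4.2         # commonly quoted PS4 Pro figure

current_perf_per_watt = CURRENT_TFLOPS / HANDHELD_POWER_W
next_gen_tflops = (current_perf_per_watt * 2.0) * HANDHELD_POWER_W  # "double perf/W"

print(f"current handheld : ~{CURRENT_TFLOPS:.1f} TFLOPS ({current_perf_per_watt:.2f} TFLOPS/W)")
print(f"next gen at {HANDHELD_POWER_W:.0f} W : ~{next_gen_tflops:.1f} TFLOPS")
print(f"base PS4 ~{PS4_TFLOPS} TFLOPS, PS4 Pro ~{PS4_PRO_TFLOPS} TFLOPS")
# ~3.2 TFLOPS lands between a base PS4 and a PS4 Pro on raw compute alone,
# before any architectural or upscaling gains, so "PS4 Pro-ish" is plausible
# but not guaranteed by the raw numbers.
```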
 
Yes, I know. I was talking about a home console from Nintendo.
 
It's very complicated to compare old GPUs on old process technology with current GPUs on the latest nodes. Wafer costs have exploded since the 7nm generation...
Yes, you are right, but they are still way cheaper to make than what they sell for. A 4090 costs no more than $500-600 to make; even if it costs $800, Nvidia is selling it for $1,500.
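
Taking the numbers in that post at face value (they are the poster's guesses, not confirmed BOM figures), the implied gross margin works out roughly like this:

```python
# Implied gross margin from the (unverified) cost guesses quoted above.
def gross_margin(selling_price: float, unit_cost: float) -> float:
    """Gross margin as a fraction of the selling price."""
    return (selling_price - unit_cost) / selling_price

selling_price = 1500.0                      # claimed 4090 price in the post
for unit_cost in (500.0, 600.0, 800.0):     # the post's range of cost guesses
    print(f"cost ${unit_cost:.0f} -> margin {gross_margin(selling_price, unit_cost):.0%}")
# cost $500 -> margin 67%
# cost $600 -> margin 60%
# cost $800 -> margin 47%
# Note: this ignores R&D, software, packaging, board partner and retail cuts,
# so it overstates what Nvidia actually keeps per card.
```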
 
Nintendo could make a home console with a high-end mobile graphics card and a mobile CPU from AMD, 16 GB of RAM, and a basic 256 GB SSD for $350, still make money on it, and destroy the PS5 and Xbox Series X.
Nintendo stopped chasing raw power long ago.

Next console is gonna be a tablet or some other gimmick once again.

If they release a console more powerful than a PS5, they can't release low-budget games anymore; people are gonna expect Sony/MS-level graphics, and that shit is not cheap to make.

Why do that when your games sell 20 million each with probably 1/10 of the Sony/MS budget to make them? It would be fucking idiotic.
 
Are they wrong? AAA graphics whores brought us 80-buck games and will bring us 100-buck games. Development costs keep increasing for these "blockbuster" titles. At the same time they play it safe, re-releasing games without any real advancement. Honestly, I prefer BotW II-level graphics at 4K over TLOU Remake Remaster Definitive Studio Cut or Modern Warfare 3-2-1 v2.
 
Hm... I am curious what the rumors are for RTX 40 mobile GPUs... will they be equivalent to RTX 30 series desktop performance? :unsure: Hope the laptop doesn't melt. :LOL:🔥

I think RDNA3 is gonna hit the sweet spot. It's all about balancing things. Intel and NVIDIA are drawing all that power to brute-force the fuck out of their products. AMD is about Zen (pun intended).
 
I think we can expect performance more or less between a 3080 and a 3090 from Nvidia. Their mobile top end will probably be AD104, which is what was supposed to power the 4080 12 GB.

Of course the power draw will be limited, but seeing how you can cut a ton of power from the 4090 and only lose about 5% of performance, I can imagine they might be able to reach at least 3080 performance with 3090 being a possibility.
AMD might beat them since efficiency is on their side and they can probably do more with less power.

Regardless of who you go with, the new machines are going to be ridiculously powerful for 1080/1440p. They're also going to be extremely CPU limited since all the new beasts seem to be power hungry....
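
On the "cut a ton of power, lose ~5%" point: on desktop cards this is usually done by lowering the board power limit, which nvidia-smi exposes directly. A minimal sketch of how that could be scripted (the 300 W value is just an illustrative assumption, and changing the limit normally needs admin rights):

```python
# Minimal sketch: read and lower an NVIDIA board power limit via nvidia-smi.
# nvidia-smi is NVIDIA's real management utility; the wattage below is only an
# example, and setting the limit generally requires root/administrator rights.
import subprocess

def query_power_limit() -> str:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.limit", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def set_power_limit(watts: int) -> None:
    # -pl sets the board power limit; it must stay within the card's allowed range.
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

if __name__ == "__main__":
    print("current limit:", query_power_limit())
    # set_power_limit(300)   # e.g. cap a 450 W card at 300 W and re-benchmark
```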
 
Are they wrong? AAA graphics whores brought us 80-buck games and will bring us 100-buck games. Development costs keep increasing for these "blockbuster" titles. At the same time they play it safe, re-releasing games without any real advancement. Honestly, I prefer BotW II-level graphics at 4K over TLOU Remake Remaster Definitive Studio Cut or Modern Warfare 3-2-1 v2.
Matter of taste; I still thank god that we have developers who push new tech and graphics.

But no, Nintendo is not wrong in doing what they do if their fans don't give two fucks about graphics and performance (because it's not only graphics; Switch versions usually have the worst performance of basically any game on the market that runs on multiple platforms).

You can be happy playing every multi-platform game (and most exclusives) with low details at a "shit solid" 30 frames and 720p; many of us just have higher standards in 2022.

It's not like Nintendo games trade graphics for the pinnacle of gameplay; many of their games are an absolute cakewalk with zero challenge, from Luigi to Odyssey to BotW to Pokemon (I think Metroid is the only one with some challenge left). So yeah, it's just a matter of liking their style; they don't make inherently better games because they give up on graphics and performance, that is just a forum fairytale.


But we are off topic, dude; we can continue via PM if you want to keep discussing.

P.S. You are right when you say the other two play it safer these days, but it's not like Nintendo hasn't used the same fucking brands for the past 30 years :lollipop_grinning_sweat:; at least the other two put some budget into their projects when they sell a remaster for 70 dollars :lollipop_grinning_sweat:
 
This is BS.

20 years ago the 9700 Pro launched with proper DX9 shader support and the ability to run AA and AF at playable framerates. When the FX finally released, its full-precision shader performance was so poor that in stuff like Source engine games (Half-Life 2 being a big one at the time), if it didn't run in a compatibility mode with worse image quality, it ran like absolute dog shit. Then the 9800 XT released and cemented ATi's position. X800 vs 6800 was a close-run thing. X1800 vs 7800 was a slight NV advantage, but the refreshes, X1900 and X1950 vs the 7900, were ATi all the way.

After that, the HD 2000 and 3000 series were significantly worse than NV's 8000 and 9000 series parts, and this is where NV really started to gain market share and snowball. AMD had a great chance with the 4000 series to actually take the crown but instead went for a 'small die' strategy, which meant RV770 was tiny but still delivered around 90% of GTX 280 performance. They should have made an RV790 with 1,200 shaders and just taken the performance crown. The 5000 series was also faster than the 480, and it brought angle-independent AF, which was a huge IQ improvement.

The 6000 series was a bit of a sidestep and NV made a good leap with the 500 series, but again AMD took the crown with the 7000 series (which for some reason was about 33% underclocked at launch, with parts easily capable of 30% OCs), and even when the 680 came out, OC vs OC Tahiti was faster. The 290X was faster than Titan, and the 780 Ti came out with a full die to compete, but price-wise AMD was the better perf/$. Unfortunately, the reputation issues from the 2000 and 3000 series stuck, so even when AMD offered better perf/$ with the 4000, 5000, 7000 and 200 series, NV still sold more. That led to stuff like Fiji not being as fast as the 980 Ti and then Vega 64 being a big letdown. RDNA was a good arch but AMD never released a properly top-tier part, and then we arrive at RDNA2, where it is competitive in raster but NV does have a few more features and is much better at RT, although it comes at a very large premium.

But no. NV have not been the market leader for 20 years.


It doesn't help that for every other architecture Nvidia uses its marketshare dominance and developer influence to change the goalposts on what constitutes "better performance", so that AMD can never really know what to bet their transistors on.

The Fermi cards were all about introducing compute and GPGPU to game engines, and in 2012 AMD came up with GCN that was effectively superior to Nvidia in GPU compute. Up to this point we'd see hardware reviewers using synthetic benchmarks for GPU compute to review gaming GPUs.
But in that same year with Kepler NVidia brought down the importance of GPU compute and made everything about geometry performance. In the following years we saw stuff like those super tessellated concrete slabs in Crysis 2 that killed performance on all cards (but less on Nvidia cards), and eventually we saw reviewers using synthetic benchmarks for geometry and tessellation performance to evaluate gaming GPUs.
It then took AMD around 5 years to implement driver-level compute culling on Vega / GCN5 and finally nullify Nvidia's geometry performance advantage. But at that time even developers had come up with their own engine-integrated compute culling which was better than AMD's driver-level (most of them had been working on GCN consoles, after all).
With RDNA1 AMD finally brought competitive power/performance on rasterization and eventually surpassed NVidia with RDNA2. But at that point NVidia already started to shift the "performance that matters" to raytracing, and what we saw afterwards was a bunch of Nvidia-sponsored games with small IQ improvements but pretty mediocre performance with raytracing on their own GPUs even when using temporal upscaling (but much worse performance on RDNA2, so it was worth it).

Now, even if we get competitive raytracing performance on RDNA3, Nvidia will probably try to shift the attention to DLSS3 on their optical units which AMD doesn't have, and History will keep repeating itself.
 
Unless they're able to turn things around drastically, DLSS3 is going to be useful only on the super high-end of cards for reaching stuff like 1440p/4k@240hz+.

Nvidia has marketed it as a "you can double your framerate" feature, but people will quickly realize that playing a game that looks like it's running at 100 Hz but has the same input latency as a game at 40 fps isn't that great.
I'd rather lower the fidelity a bit and reach 4k@120hz (with FSR/DLSS) than have much higher input lag.

If FSR3 can get closer to DLSS 2 (there are rumors saying it's hardware based), I think AMD has nothing to worry about.
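
A quick way to see the latency argument: frame generation raises the displayed frame rate, but the game still samples input at the underlying render rate. The numbers below are illustrative assumptions, not measured DLSS3 figures:

```python
# Illustrative frame-generation latency math (assumed numbers, not measurements).
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

native_fps = 40.0                 # rate at which the game renders and samples input
displayed_fps = 2 * native_fps    # every other displayed frame is interpolated

print(f"displayed        : {displayed_fps:.0f} fps ({frame_time_ms(displayed_fps):.1f} ms per shown frame)")
print(f"input sampled at : {native_fps:.0f} fps ({frame_time_ms(native_fps):.1f} ms between samples)")
print(f"native 80 fps    : {frame_time_ms(80.0):.1f} ms between samples")
# Motion looks ~80 fps smooth, but the game still reacts on a ~25 ms grid
# (interpolation even adds a little delay, since it has to hold a frame back),
# whereas truly rendering at 80 fps would react on a ~12.5 ms grid.
```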
 
I'm sure it will be hardware-accelerated, given how Intel's GPUs feature specific hardware to maximise use of XeSS.
 
From what I am hearing, Nvidia could be done. RDNA3 is a monster.
The rumors of those cards doubling performance seem legit, but let's keep the hype in check. Nvidia isn't done. Nvidia can slash the price dramatically and AMD can still fuck up with their pricing.

The only way AMD can destroy Nvidia is by releasing the 7900xt at 999 and the 7900xtx at around 1199, but that realistically won't happen.
They'll price the 7900xt at the same price as, or slightly higher than, the 4080 (while probably being 20-30% faster), and the 7900xtx at the same price as the 4090.

And we'll suck it up because that will still be better value than the competition.
I'm sure it will be hardware-accelerated, given how Intel's GPUs feature specific hardware to maximise use of XeSS.
Yeah, I agree. Hopefully it'll be ready for launch and it will be as easy to implement as 2.0.
 
All aboard the AMD hype train once again, where all rumours must be true and Nvidia is really screwed this time. You should believe all the rumours, even the ones that break the laws of physics, explained to you by YouTubers with no fucking clue, throwing shit out there hoping something will eventually stick. MCM with a bigger total surface area than a 4090? Fewer watts with MCM vs monolithic on a lesser TSMC node? Breaks 4GHz? No latency from crossbar nodes? All that for $600 less? Wow.

 
The rumors of those cards doubling performance seem legit, but let's keep the hype in check. Nvidia isn't done. Nvidia can slash the price dramatically and AMD can still fuck up with their pricing.

The only way AMD can destroy Nvidia is by releasing the 7900xt at 999 and the 7900xtx at around 1199, but that realistically won't happen.
They'll price the 7900xt at the same price as, or slightly higher than, the 4080 (while probably being 20-30% faster), and the 7900xtx at the same price as the 4090.

And we'll suck it up because that will still be better value than the competition.

Yeah, I agree. Hopefully it'll be ready for launch and it will be as easy to implement as 2.0.
Lol at those prices. No chance.
 
Seems like there were far more Nvidia rumors than AMD.
At least the wait is not too long, Thursday will be pretty exciting.
I am starting to care less and less about raytracing these days. I'm realizing it's only going to be a big deal once games go raytracing-exclusive. For now the visual difference in most games is not substantial enough to really buy one card brand over another, but if AMD and Nvidia are just $50 or $100 apart I can see people going Nvidia over AMD in the high-end segment.
 
Well, how about a Switch 2 where the dock has a powerful GPU built in?
But then they still have to develop the game for the weakest console; you would play the same game, just in 4K 120 or so. That is not what I want from Nintendo. I want Mario games with the new Mario movie's Pixar/DreamWorks quality, with voice acting, a new world, a new era.
 
What I want to see is that Radeon Pro with 32 GB of RAM. Next year, most likely... but it would be wonderful if AMD did a surprise attack with a beast like that, stacked with V-Cache.
 
Let's face it. All AMD has to do is be competitive and reasonably priced. If they have a 4090 competitor, then price it at the same ridiculous price as the 4080 16GB. Price the 4080 competitor at the same absurd price Nvidia attempted to launch the 4070 12GB at. The prices already are obscene. If AMD wants to gain substantial market share then they have to keep the prices in check.
 

You guys still don't get it uh?

AMD doesn't reserve anywhere near enough wafers to gain "substantial" market share. They have to split capacity between much better-margin markets such as CPUs, servers, console APUs, etc. They've been at the bottom of the market share chart, so they reserve wafers accordingly. TSMC is already booked beyond reason, and AMD wouldn't take the risk of planning a LOT of GPUs when the market has basically shown them the finger even when they had decent products at a good price; they would be at risk of Nvidia undercutting them for giggles and getting stuck with a mountain of GPUs.

Climbing to 50% market share would take multiple generations of GPU launches, and, let's be honest, Nvidia would have to fuck up for multiple generations (how likely is that?).

The 3090 had sold more units than the entire RDNA 2 lineup after nearly a year on the market.

More recent stats by GPU groups
[image: GPU market share chart]


When AMD says they won't do a paper launch like they did with the 6000 series, they're pulling your leg, as they simply do not have anywhere near the capacity to close that gap. They would have to disrupt the entire TSMC production line to make a dent. How much do you think TSMC would charge AMD for that? Do you think AMD has the balls to compete against Apple's/Nvidia's/Sony's/Microsoft's/etc. silicon wafer allowances? That GPU's price would skyrocket because every other manufacturer would start a bidding war for the capacity, which would result in them basically squeezing blood out of AMD.

AMD could come out with the second coming of Christ in GPU form this gen, and they still wouldn't be able to supply the market if there were huge demand.

This is where, every generation, AMD fans dream of law-breaking specs, but also of GPU production out of a magic hat at magical prices. I was an AMD fan too for a lonnngggg time; I know the cycle, and you too can snap out of it.

 
Interesting post, thanks for your input. But I thought TSMC would build more fabs because of the high demand? Or AMD should ask TSMC whether they could build a fab as a joint venture, if such a thing is possible.
 

TSMC is building a fab in the USA as a backup plan in case China invades Taiwan; the USA would be grabbed by the balls otherwise.

Even giants like Sony managed a collaboration with TSMC only for 28nm

https://pr.tsmc.com/english/news/2880

AMD has nowhere near the cash it would take to start a collaboration with TSMC on a new top-of-the-line 4nm~5nm fab. Not even Apple does. If TSMC were open to such a thing, much bigger players than AMD would bid. To pay for that you need many customers bidding on production slots too, so it's not like it would be run exclusively for your product line. Not to mention the time it would take to set up; even the US fab is expected in 2024-2025, and it'll be a ramp-up in production that will take a while to match Taiwan's.
 
So the US did it; did they have to pay a lot for that? If the US one opens, couldn't AMD also produce there if they pay, like they do with TSMC?
 
NVIDIA has an ARM-based CPU/GPU, but it's for servers. Maybe AMD should start thinking about making ARM-based Zen/RDNA parts to compete with Qualcomm? Imagine Zen/RDNA in your Android phones, Android tablets, and Windows-on-ARM laptops and tablets?!

The trend for Apple and Google is that they want their own in-house CPU/GPU/NPU. Samsung will soon have their own in-house CPU/GPU/NPU for ARM. Microsoft doesn't want to make their own in-house CPU/GPU/NPU, but recently released Project Volterra development kits, similar to Apple's transition kits with Rosetta, to get x86 apps converted to ARM-native. This could give AMD some competitive advantage if they were to make ARM-based chips.
 
You guys still don't get it uh?

AMD doesn't reserve anywhere near enough wafers to gain "substantial" market share. They have to split capacity between much better-margin markets such as CPUs, servers, console APUs, etc. They've been at the bottom of the market share chart, so they reserve wafers accordingly. TSMC is already booked beyond reason, and AMD wouldn't take the risk of planning a LOT of GPUs when the market has basically shown them the finger even when they had decent products at a good price; they would be at risk of Nvidia undercutting them for giggles and getting stuck with a mountain of GPUs.


Are you really trying to make the case that AMD isn't actively attempting to increase market share? That is ludicrous.

All you have done is demonstrate that Nvidia is indeed the market leader. That AMD has to be more competitive in price/perf. That it takes multiple GPU generations to make a dent.

Don't forget that there was a crypto mining boom affecting cards getting into consumer hands. Also worth noting is the fact that AMD didn't release the full GPU stack for a few years, not to mention Nvidia went with Samsung 8nm and potentially didn't face the same shortage issues. Nvidia hard countered upon the launch of 6000 series with a full lineup of Super cards. All the above are factors at play.

You should listen to Lisa Su discuss the desire for increased GPU market share in one of her stockholder meeting/financial analyst days. You have a point regarding the difficulty in attaining fab time for chips but you have pushed that idea too far.
 
Yeah they won't have a huge supply.
Not because they can't get it - they absolutely can and they have. They'd just rather allocate more of that capacity to Server CPUs.

Also, what do you mean by "law-breaking specs"?
What laws are being broken exactly?
 
Do you think AMD has the balls to compete against Apple's/Nvidia's/Sony's/Microsoft's/etc. silicon wafer allowances?
Xbox and PS5 silicon comes from AMD's allowance, under their whole "Semi-Custom Solutions" arm, and that'd be only N7 for now. I assume they only need to meet contractual obligations there. I'm almost certain AMD are also the second biggest customer for TSMC's N5, behind Apple. "Bidding wars" for contracts happened in the past for current nodes. Yes, trying to take the whole capacity would be expensive and unlikely - as would building a new fab in collaboration with TSMC. They are definitely split a lot of different ways with the large capacity they have and realistically they're going to shift whatever they can in the highest profit segments.
 
When AMD says they won't do a paper launch like they did with the 6000 series, they're pulling your leg, as they simply do not have anywhere near the capacity to close that gap. They would have to disrupt the entire TSMC production line to make a dent. How much do you think TSMC would charge AMD for that? Do you think AMD has the balls to compete against Apple's/Nvidia's/Sony's/Microsoft's/etc. silicon wafer allowances? That GPU's price would skyrocket because every other manufacturer would start a bidding war for the capacity, which would result in them basically squeezing blood out of AMD.
Not having a huge supply != doing a paper launch. Also AMD have reportedly already cut back on 5nm reservations for their CPUs, and Navi 33 will be on 6nm, which saves more 5nm wafers.

Edit: They are also going to get significantly more dies per wafer due to the chiplet design.
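
To see why smaller dies help so much here, this is the standard dies-per-wafer and defect-yield arithmetic; the die areas and defect density below are assumed illustrative values, not actual RDNA3 or Ada figures:

```python
import math

# Classic dies-per-wafer approximation plus a simple Poisson defect-yield model.
# Die areas and defect density are assumed illustrative values.
WAFER_DIAMETER_MM = 300.0
DEFECTS_PER_CM2 = 0.1

def dies_per_wafer(die_area_mm2: float, diameter_mm: float = WAFER_DIAMETER_MM) -> int:
    radius = diameter_mm / 2.0
    # Usable wafer area divided by die area, minus an edge-loss correction term.
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2.0 * die_area_mm2))

def yield_fraction(die_area_mm2: float, d0_per_cm2: float = DEFECTS_PER_CM2) -> float:
    # Poisson model: probability that a die catches zero random defects.
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

for name, area in [("large monolithic die", 600.0),
                   ("graphics chiplet", 300.0),
                   ("cache/IO chiplet", 40.0)]:
    n, y = dies_per_wafer(area), yield_fraction(area)
    print(f"{name:20s} {area:5.0f} mm^2 -> {n:4d} dies/wafer, ~{y:.0%} yield, ~{n * y:.0f} good dies")
# Smaller dies both fit the wafer better and survive defects more often, which is
# the economic case for chiplets (ignoring packaging cost and interconnect overhead).
```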
 
I don't think they'll gain much market share even if it's super good. Since RDNA1 they've been slowly earning more respect. I think they need one more confidence-building generation. RDNA 4 might be the real breakout.
 
It helps that Nvidia have burned a lot of goodwill with customers through their recent releases (or non-releases), which received a lot of backlash not only from gamers and enthusiasts but from reviewers as well: trying to sell a 60 Ti/70-class GPU as an 80, introducing extreme price hikes in the mid/high-end stack, on top of melting adapters. AMD just needs to knock it out of the park, offering competitive products at cheaper prices. Undercut them by a significant amount across the stack for value, and show us that their cards are worth considering, rather than playing the second-best, slightly cheaper option.

Sadly, I can't see AMD taking advantage of the situation, with current rumors saying they are only releasing the RX 7900 XT/XTX this year, leaving Nvidia the market for Q4 and maybe Q1 next year. Take into consideration that most high-end buyers want an Nvidia card, not AMD: one that offers way more features and most likely much more potent raytracing performance.

The big problem is that AMD has been behind Nvidia for too long, for too many generations, so Nvidia has the majority of the mindshare. For AMD to change consumers' minds, they have to offer RT performance in the same ballpark as Nvidia's latest and offer similar feature sets, with CUDA-like performance for non-gaming tasks. The former I don't see happening any time soon. Thus, the only way AMD can take big market share is to be super aggressive and go into a pricing war. With a big-time recession hitting across the globe, this is the time to be marketing and releasing not only high-end but also mid-range and low-end cards to gamers. Build up trust in the brand for the next push.
 
Not having a huge supply != doing a paper launch. Also AMD have reportedly already cut back on 5nm reservations for their CPUs, and Navi 33 will be on 6nm, which saves more 5nm wafers.

Edit: They are also going to get significantly more dies per wafer due to the chiplet design.

It's going to be 5nm and 6nm in one GPU, depending on the chiplet.
 
Now, even if we get competitive raytracing performance on RDNA3, Nvidia will probably try to shift the attention to DLSS3 on their optical units which AMD doesn't have, and History will keep repeating itself.
Amusingly, this is the game the GPU market has been playing since before they were even called GPUs, and Nvidia was always there doing it too, with every generation: from championing hardware-accelerated T&L on PC at the end of the '90s, to GPU-accelerated particles with APEX (and some PhysX) at the end of the '00s, and everything else in between, plus the rest of what you mentioned that followed.

But for better or worse, they always tried to position themselves as thought leaders in the market rather than follow and react. That is also not without risks, as a number of these things did flop for them, and RT was a pretty big risky bet as well (one that arguably cost them some market share in the short term).
It is obvious that in their current position they can afford to make dumber bets like DLSS3 and get away with it, though.
 
So the US did it; did they have to pay a lot for that? If the US one opens, couldn't AMD also produce there if they pay, like they do with TSMC?

Sure. I would guess that TSMC USA will cost more than TSMC in Taiwan (American labor, etc.). TSMC is basically the reason Biden said they would protect Taiwan with military action if China stepped in; semiconductors are now that important, it's crazy.

Are you really trying to make the case that AMD isn't actively attempting to increase market share? That is ludicrous.
All you have done is demonstrate that Nvidia is indeed the market leader. That AMD has to be more competitive in price/perf. That it takes multiple GPU generations to make a dent.

They have better high-margin businesses than trying to wage a GPU price war against a company that has much deeper pockets than they do. They are competing on two fronts, with Intel (1.2x AMD's market cap) and Nvidia (3.5x AMD's market cap). Intel seems to be very aggressive on pricing, which will likely force AMD to slash prices on the 7000 series. Why would AMD start a price war with Nvidia? Even lower margins? Why did they not slash RDNA 2 prices way deeper? They were in a very good position, especially with Nvidia being on a weaker node at Samsung.

It's not that they wouldn't want to increase market share, but that they are not on a level playing field. Nvidia is not asleep at the wheel like Intel was for years. Even with Intel literally handing them the market on a silver platter, AMD managed to go from 17% to 32% over 5 generations of the popular Ryzen. Making a dent in the GPU market is optimistic.

Don't forget that there was a crypto mining boom affecting cards getting into consumer hands. Also worth noting is the fact that AMD didn't release the full GPU stack for a few years, not to mention Nvidia went with Samsung 8nm and potentially didn't face the same shortage issues. Nvidia hard countered upon the launch of 6000 series with a full lineup of Super cards. All the above are factors at play.

Crypto mining affected everyone; I'm not sure what the point is. Are you trying to say that there's a ton of 6000 series cards out there, but they went to crypto miners? The same crypto that favored fast VRAM, like the 3000 series? Nvidia used Samsung with worse yields and countered with the Supers, yes, so what? What's the angle here, that AMD fumbled their chance for RDNA 2 to dominate the market?

You should listen to Lisa Su discuss the desire for increased GPU market share in one of her stockholder meeting/financial analyst days. You have a point regarding the difficulty in attaining fab time for chips but you have pushed that idea too far.

Maybe AMD can go with Samsung to unbottleneck themselves; it seems like it was an advantage for Nvidia...

Yeah they won't have a huge supply.
Not because they can't get it - they absolutely can and they have. They'd just rather allocate more of that capacity to Server CPUs.

Also, what do you mean by "law-breaking specs"?
What laws are being broken exactly?

People are expecting a monster MCM with a die area roughly equal to a 4090, but none of the negatives. MCM is automatically going to pull more heat compared to monolithic. MCM is going to add latency. The more chiplets, the more crossbars, and the more often data has to make a jump at a node; it's the basics of NUMA topology. "B..b.. but Ryzen?" you say. CPU tasks are not sensitive to inter-GPM bandwidth and local data latency the way GPUs are. AMD's MI200 and Nvidia's (2x) H100 chipsets were MCM and were made for tasks with relaxed latency requirements such as scientific computing. NVLink's 900 GB/s and the MI200's Infinity Fabric at 100 GB/s per link, with 8 links providing 800 GB/s, are still no match for the whopping 2.5 TB/s Apple made for the M1 Ultra. That two-chiplet MCM basically doubled CPU performance, while the GPU only saw about a +50% increase, on their own freaking API! And don't forget, this segmentation of tasks that are ultra sensitive to fast local packets of data, such as FSR/RT/ML, will have to be entirely invisible from the API's point of view, and since we're on PC, it's on AMD's shoulders to make drivers for that.
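
For reference, the interconnect bandwidth figures quoted in that paragraph, side by side (numbers as given in the post, not independently verified):

```python
# Interconnect bandwidth figures as quoted above (GB/s), for comparison only.
links = {
    "NVLink (as quoted)": 900,
    "MI200 Infinity Fabric (8 x 100 GB/s, as quoted)": 800,
    "Apple M1 Ultra die-to-die (as quoted)": 2500,
}
for name, gb_per_s in sorted(links.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:48s} {gb_per_s:5d} GB/s")
```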

What else is rumored... oh, let's add 4GHz into the mix, 100 fewer watts, matching or surpassing a 4090 in rasterization, expensive communication crossbars that add packaging cost outside of lithography, yet still coming in $600 below the competition. Pulling performance out of a quantum parallel universe, basically.

AMD is simply magic


Or maybe something will have to give. You can't stretch the limits of performance and have the best of everything at once, something tech YouTubers don't seem to understand.

Xbox and PS5 silicon comes from AMD's allowance, under their whole "Semi-Custom Solutions" arm, and that'd be only N7 for now. I assume they only need to meet contractual obligations there.

And what an obligation. These consoles are manufacturing juggernauts; not even the entire Ampere/RDNA 2 install base would come close, and they'll continue to have huge supply requirements for years to come. Add in some Pro/Slim models on top of it, already signed and reserved.

The node has little to no impact on allocation nowadays unless you want to compete against Apple. The problem is the shortage of silicon wafers; the lithography part is not really the problem. TSMC's 300mm wafers are contested by customers who will process them on anything from 2 microns down to 5nm. They still have to send some wafers so that everyone gets some fucking WiFi with a screen on their fridge/toaster/espresso machine/washer/car.

I'm almost certain AMD are also the second biggest customer for TSMC's N5, behind Apple. "Bidding wars" for contracts happened in the past for current nodes. Yes, trying to take the whole capacity would be expensive and unlikely - as would building a new fab in collaboration with TSMC. They are definitely split a lot of different ways with the large capacity they have and realistically they're going to shift whatever they can in the highest profit segments.

Exactly.

Not having a huge supply != doing a paper launch. Also AMD have reportedly already cut back on 5nm reservations for their CPUs, and Navi 33 will be on 6nm, which saves more 5nm wafers.

AMD snarked at Nvidia's supply problems by saying they wouldn't have a paper launch, only to have the 3090 alone outsell the whole RDNA 2 lineup. What's AMD's definition of a paper launch then, since they said Nvidia had one? I would be curious.

As I said above: 5nm, 6nm, 7nm, it doesn't matter. Silicon wafer manufacturers can't supply fast enough for the demand; some of TSMC's biggest suppliers are booked up to 2026. Lithography is not the biggest bottleneck here unless you want to push Apple off their premium node.
 
I'm excited to see what AMD announces this week. I don't expect them to match the 4090 on ray tracing performance but if they can get close on rasterized performance then I'll be very impressed.
 
Thus, the only way AMD can take big market share is to be super aggressive and go into a pricing war. With a big-time recession hitting across the globe, this is the time to be marketing and releasing not only high-end but also mid-range and low-end cards to gamers. Build up trust in the brand for the next push.

With Nvidia stuck with the old stock, this is AMD's golden opportunity to release a compelling offering to those low and mid tier buyers. Sounds like they might not take advantage of that though.
 
I'm excited to see what AMD announces this week. I don't expect them to match the 4090 on ray tracing performance but if they can get close on rasterized performance then I'll be very impressed.
I don't even care about them matching it. They just need to release a sensibly priced card with a sensible power output like the HD4800-series.

I keep seeing all these posts about buying 4090 cards but I honestly can't imagine why most people would even remotely need them. I use a 2080Ti with a 4k144 monitor and there's still only like 5-10 games that truly justify an upgrade. A small handful runs in the 30-40fps range but everything else is basically at 70fps+.
 
With Nvidia stuck with the old stock, this is AMD's golden opportunity to release a compelling offering to those low and mid tier buyers. Sounds like they might not take advantage of that though.
This is true. And we probably won't see Nvidia launch the 4070 for some time either - my guess is that they will in Q3, 2023.

AMD could market RDNA 3 cards as being faster, more efficient and better value than Nvidia's 3000 series, all while being more future-proof in terms of connectivity (DP 2.0); comparable to or faster in raster than Nvidia's 4000 series cards, particularly at lower resolutions, with still-performant RT (if moderately worse than Ada); and at a price which isn't too different from last-generation cards.

So long as AMD has something in Q1 next year, like the RX 7800 XT and maybe the 7700 XT (not long after), they could get a good amount of sales from people holding out for the next considerable jump from a mainstream card.
 
If I were AMD, I would launch the top-tier card(s?) at a lower price than the 4090 (to entice consumers, assuming that it will not beat Nvidia at the very highest tier), but also paper launch a reference-model mid-tier GPU (maybe a 7700XT?) at an aggressively low price to wreak havoc on Ampere mid-tier pricing and give Nvidia a headache. Of course this also means messing up pricing for its own RDNA2 cards, but if we assume AMD is not dealing with as major an inventory issue as Nvidia is with Ampere, then it might be a risk worth taking.

I could be looking at this wrong but I think AMD's game plan should be to exploit Nvidia's Ampere inventory stockpiling problem as much as possible. The only reason Nvidia can get away with the silly 4090 pricing (to buoy prices for Ampere) is because clearly nobody is interested in RDNA2 cards (beyond maybe the 6600XT, which seems to be getting a lot of recommendations).

Then again, assuming that it'll take Nvidia and AIB partners a while to work through that Ampere stock, there's simply no rush.

I'm bad enough at regular chess, so I'm totally hopeless with this 4D chess stuff.
 