
Nvidia RTX 4090 Ti Reportedly Cancelled Due to GPU Melting Power Supplies

Status: Not open for further replies.

kiphalfton

Member
They didn’t change it to 16. 16 existed day one along with 12.

They're probably just renaming it because of the heat they got, and they don't want to tarnish the 80 series when the 3090 is more powerful than the 4080 12GB. This will cause sales of the 16GB card to tank because of the confusion.

What happened was something like:
Jensen: we need to charge a lot for our 4070. What should we do?
Marketing team: let's call it a 4080.
Jensen: that's a brilliant idea, just like me. Let's do it.


Most probably, the guy on the marketing team was fired today.

That's all fine and dandy, but it doesn't really matter if the 16GB RTX 4080 is still $1199.

I'm guessing what they'll do is rebadge the 12GB RTX 4080 as the RTX 4070 Ti or RTX 4070 Super. If they're feeling really ballsy, it will just be the RTX 4070 (since that hasn't yet been announced).
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Yes, I assume that, because they need it if RDNA3 is good.

Six months from now is too late for Nvidia.
Lol.
Definitely not one month from the 4090.
I wouldn't be shocked if they don't even announce it this year, let alone release it.

At best I could see them showing it off at CES '23, otherwise sometime mid next year.

Even if RDNA3 is good... I doubt it will pose much of a threat to the RTX 4090.

There are only ~2000 CUDA cores left on AD102 for them to utilize; even if RDNA3's range-topper is well beyond the 4090, they [Nvidia] would have to be working with near-perfect-yield chips to put in the 4090 Ti.
It would create artificial scarcity for the 4090.
Releasing a range-topper one month after having just released your range-topper is super, super scummy, even for Nvidia.
It may be the plan, but I think the outrage over the 4080 12GB should be tempering their expectations of how much shit people are willing to eat.
Both the 4080 Ti and 4090 Ti are likely mid/late-next-year cards.
No chance they come out so soon after the launch of the 4090... there aren't enough AD102 chips to go around just yet, and cannibalizing your own product doesn't make much sense except in yield-forced situations (4070 and 4060 Ti).
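
For reference, the "~2000 CUDA cores left" figure falls straight out of the public spec sheets (full AD102 die vs. the cut-down 4090); a quick check:

```python
# Where the "~2000 CUDA cores left on AD102" figure comes from (public specs:
# full AD102 = 144 SMs, RTX 4090 = 128 SMs, 128 CUDA cores per Ada SM).
AD102_FULL_SMS = 144
RTX_4090_SMS = 128
CORES_PER_SM = 128

full_die = AD102_FULL_SMS * CORES_PER_SM   # 18432
rtx_4090 = RTX_4090_SMS * CORES_PER_SM     # 16384
print(f"Headroom for a hypothetical 4090 Ti: {full_die - rtx_4090} CUDA cores")  # 2048
```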
 

SantaC

Member
Lol.
Definitely not one month from the 4090.
I wouldn't be shocked if they don't even announce it this year, let alone release it.

At best I could see them showing it off at CES '23, otherwise sometime mid next year.

Even if RDNA3 is good... I doubt it will pose much of a threat to the RTX 4090.

There are only ~2000 CUDA cores left on AD102 for them to utilize; even if RDNA3's range-topper is well beyond the 4090, they [Nvidia] would have to be working with near-perfect-yield chips to put in the 4090 Ti.
It would create artificial scarcity for the 4090.
Releasing a range-topper one month after having just released your range-topper is super, super scummy, even for Nvidia.
It may be the plan, but I think the outrage over the 4080 12GB should be tempering their expectations of how much shit people are willing to eat.
Both the 4080 Ti and 4090 Ti are likely mid/late-next-year cards.
No chance they come out so soon after the launch of the 4090... there aren't enough AD102 chips to go around just yet, and cannibalizing your own product doesn't make much sense except in yield-forced situations (4070 and 4060 Ti).
RDNA3 is MCM/MCD; it's a chiplet-style GPU, just like their Ryzen processors. It would not surprise me if they upset Nvidia.

Nvidia is in damage mode right now.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
RDNA3 is MCM/MCD; it's a chiplet-style GPU, just like their Ryzen processors. It would not surprise me if they upset Nvidia.

Nvidia is in damage mode right now.
Damage mode because of the 4080 12GB.
AD102 is a resounding success.
They achieved what they set out to achieve: near-2x performance gen-on-gen.

AMD would have to perform some miracle: have their ML tech in place, achieve near-2.5x performance gen-on-gen in pure raster, and get something like a 4x perf boost in RT to even be worth worrying about.

Disaggregated chip designs aren't a magic bullet; that doesn't actually mean shit till we see the thing actually perform.
Remember, the 6900 XT in many cases could beat the RTX 3090, but DLSS and RT performance meant it didn't actually matter.
If AMD don't have all their ducks in a row for RDNA3, then it's another gen of "yes, AMD are good... no, we still aren't buying it."
 

SantaC

Member
Damage mode because of the 4080 12GB.
AD102 is a resounding success.
They achieved what they set out to achieve: near-2x performance gen-on-gen.

AMD would have to perform some miracle: have their ML tech in place, achieve near-2.5x performance gen-on-gen in pure raster, and get something like a 4x perf boost in RT to even be worth worrying about.

Disaggregated chip designs aren't a magic bullet; that doesn't actually mean shit till we see the thing actually perform.
Remember, the 6900 XT in many cases could beat the RTX 3090, but DLSS and RT performance meant it didn't actually matter.
If AMD don't have all their ducks in a row for RDNA3, then it's another gen of "yes, AMD are good... no, we still aren't buying it."
A chiplet-designed GPU will blow Nvidia's ancient monolithic architecture out of the water.
 

This is a fascinating but somewhat deep topic. As transistor counts scale, the number of functionally active transistors isn't scaling linearly; there is a huge amount of dead or temporally 'dormant' silicon that is task- and time-dependent. Indeed, when they do the routing and placement, they design for these thermal envelopes and typical workloads. The passive draw increases linearly, I believe, but the active draw doesn't and is designed for, so you shouldn't expect an actual situation where the gates 'melt'; that's a crazy idea.

Now, the overall power draw on an underperforming or shitty power supply is another issue altogether, and there are others here who are vastly more knowledgeable than me, so I'll leave it to them.
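
A rough first-order sketch of the active vs. passive draw point, using the standard CMOS power model with made-up illustrative numbers (none of these values describe any real chip):

```python
# First-order CMOS power model: dynamic power scales with how much of the chip
# actually switches (activity factor), static/leakage power with transistor count.
def chip_power_w(switched_cap_nf, volts, freq_ghz, activity, leakage_w):
    dynamic_w = activity * (switched_cap_nf * 1e-9) * volts**2 * (freq_ghz * 1e9)
    return dynamic_w + leakage_w

# Illustrative only: doubling the transistor budget (capacitance and leakage)
# while a real workload keeps a smaller fraction of it busy does not double power.
print(chip_power_w(150, 1.0, 2.5, activity=0.5, leakage_w=30))   # ~217 W
print(chip_power_w(300, 1.0, 2.5, activity=0.3, leakage_w=60))   # ~285 W
```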
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
A chiplet-designed GPU will blow Nvidia's ancient monolithic architecture out of the water.
Just like Ryzen's chiplet designs (sans 3D) blew away Intel's ancient monolithic designs?
I'll bookmark your post just in case you are right...




Not really; it's just so I can gloat in two weeks.
 

SantaC

Member
Just like Ryzen's chiplet designs (sans 3D) blew away Intel's ancient monolithic designs?
I'll bookmark your post just in case you are right...




Not really; it's just so I can gloat in two weeks.
AMD did gain like 100% on Intel and did beat Intel when Zen 3 came out. How is your memory?
 

Xyphie

Member
Why would Navi3x be faster just because it has chiplets? You don't do chiplets for performance; it's about cost and scaling.

We can infer from just the die sizes alone that Navi 31 will probably be a slower chip than AD102.

AD102
Monolithic: 608 mm² on TSMC 4N (N5P with some Nvidia tweaks)

Navi 31
1x GCD: 308 mm² on TSMC N5P
6x MCD: 6 × 37.5 mm² on TSMC N6 = 225 mm²

308 + 225 = 533 mm²

The MCDs will be mostly PHYs and SRAM, which we know scale worse than other logic, but even if we assume absolutely no scaling benefit there is still a solid ~15% die-size difference between them in favour of the Nvidia chip.
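
The same arithmetic as a quick sanity check, using only the figures quoted above:

```python
# Die-area comparison using the figures quoted above.
ad102_mm2 = 608                    # monolithic AD102, TSMC 4N
navi31_gcd_mm2 = 308               # 1x GCD, TSMC N5P
navi31_mcd_mm2 = 6 * 37.5          # 6x MCD, TSMC N6 -> 225 mm²

navi31_total = navi31_gcd_mm2 + navi31_mcd_mm2        # 533 mm²
print(f"Navi 31 total: {navi31_total:.0f} mm²")
print(f"AD102 is larger by {ad102_mm2 / navi31_total - 1:.0%}")  # ≈14%, roughly the ~15% quoted above
```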
 

VAVA Mk2

Member
 

mhirano

Member
Ridiculously fake claims.
GPUs have had power limits and temperature-triggered throttling for more than a decade.
The 4090 Ti was not released because it is not needed at the moment and would only cannibalize 4090 sales.
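
For anyone who wants to see those limits for themselves, a minimal sketch using the nvidia-ml-py (pynvml) bindings, assuming the package and an Nvidia driver are installed:

```python
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Enforced board power limit, reported in milliwatts.
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000
# Temperature at which the driver starts throttling clocks.
slowdown_c = pynvml.nvmlDeviceGetTemperatureThreshold(
    handle, pynvml.NVML_TEMPERATURE_THRESHOLD_SLOWDOWN)
current_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)

print(f"Power limit: {limit_w:.0f} W, throttle at {slowdown_c} C, now {current_c} C")
pynvml.nvmlShutdown()
```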
 

Chiggs

Gold Member
Hey, guys...can we knock off the cheap shots at Moore's Law is Dead? He's clearly in-the-know, and it's not like any asshole can sign up for a YouTube account and spew bullshit.

Thanks!
 

Buggy Loop

Member
Damage mode because of the 4080 12GB.
AD102 is a resounding success.
They achieved what they set out to achieve: near-2x performance gen-on-gen.

AMD would have to perform some miracle: have their ML tech in place, achieve near-2.5x performance gen-on-gen in pure raster, and get something like a 4x perf boost in RT to even be worth worrying about.

Disaggregated chip designs aren't a magic bullet; that doesn't actually mean shit till we see the thing actually perform.
Remember, the 6900 XT in many cases could beat the RTX 3090, but DLSS and RT performance meant it didn't actually matter.
If AMD don't have all their ducks in a row for RDNA3, then it's another gen of "yes, AMD are good... no, we still aren't buying it."

You can repeat ten times to the same person that MCM vs. monolithic is not a magical solution, that it really depends on the foundry and its limits, and that it's basically an optimization curve of finding the point where monolithic's advantages run out and MCM becomes worth it, and it will still fly over their head.

There's no such thing as MCM destroying monolithic if the foundry provides a good node, transistor density and yields, and, I mean... it's fucking TSMC.

GPUs are directly tied to transistor scaling and there's clearly room to grow. The only reason you want smaller chips is yields, or some monster GPU area with tons of inter-GPM connections (think supercomputers, or simply $$$ and additional losses in watts).

Adding extra, slower communication buses ($$$, since they're not made by lithography) and extra latency just to get more computational power than monolithic? GPU parallelization is super sensitive to inter-GPM bandwidth and data locality, unlike CPUs.

Say you improved the intercommunication as best you could (still slower than monolithic), what about software? Stuff like DLSS, which uses temporal information from surrounding frames: you think splitting that workload is easy, when you can't afford to lose milliseconds without the performance tanking? Now add the constant back and forth between the shader pipeline and the BVH traversal results when you use DX12 DXR, and somehow split the local storage for all of that? You have to make this invisible to developers, and that requires a ton of programming so the workload split appears transparent to the API. That's a lot of blind faith to believe a company that is always struggling with drivers and API tech would nail it on the first try.

Although kopite7kimi deleted his tweet, Nvidia had both an MCM and a monolithic version for the Ada Lovelace series. And of course there's Hopper:

Two H100 chips connected by cache-coherent NVLink at 900 GB/s. Faster than the MI200's Infinity Fabric. But even that is for high-latency applications.

Even Apple, with their 2.5 TB/s link for the M1 Ultra: between the M1 Ultra and the M1 Max it's roughly twice the performance for CPU. For GPU? More like a +51% increase for double the GPU cores, on their own freaking API, Metal.

Why? NUMA topology. The more nodes (two or more crossbars) you add, the more times everything that needs fast, small transactions between cache and memory (oh, like gaming...) has to hop across an interconnect to get where it needs to be, and the longer it takes to get a result: more latency. More chiplets means more nodes, more hops, more latency. That's fine for supercomputers (or, in Apple's case, some production suite, because yeah... gaming on Mac...) but not so much for gaming. CPUs don't care, since they aren't as sensitive as GPUs to crazy-fast transactions and parallelization. People extrapolating from Ryzen CPUs to GPUs are out of their fucking minds.

You want to just slap on as many chiplets as you can to hit 100 TF? Is the goal to go way beyond the 4090's 608 mm²? Compared to MCM, monolithic designs consume less power, always. If you thought Ada Lovelace required big coolers and power draw, buckle up, pal.

It's the eternal cycle of hoping AMD finds a quantum hole in the universe and channels it into a GPU that trounces Nvidia in computational power, is cheap, is more power efficient, has perfect drivers, and matches Nvidia in every aspect. Even though it doesn't make any fucking sense when we look at the facts and science behind MCM and the decades of research every single chip manufacturer has done on this subject.
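
To make the hop argument concrete, a toy model with purely hypothetical numbers (not measurements from any real interconnect): every extra chiplet boundary a small transaction has to cross adds a fixed hop cost on top of the local access latency.

```python
# Toy NUMA-hop model: purely hypothetical latencies, for illustration only.
LOCAL_ACCESS_NS = 20   # assumed on-die cache access
HOP_COST_NS = 60       # assumed cost of crossing one chiplet boundary

def transaction_latency_ns(hops: int) -> int:
    """Latency of one small cache/memory transaction crossing `hops` boundaries."""
    return LOCAL_ACCESS_NS + hops * HOP_COST_NS

for hops in range(4):
    print(f"{hops} hop(s): {transaction_latency_ns(hops)} ns")
# Every node added to the topology means more hops on average, hence more latency
# for exactly the small, frequent transactions that real-time rendering depends on.
```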
 

TheGrat1

Member
Reading through this thread, the news seems like bullshit. However, even if it were not, this would be the opposite of bad press for the card; it would just make people want it more.


First guy to smoke = PSU; joint = 4090 Ti; Loc Dawg = average PC gamer.
 

Haint

Member
Even Apple, with their 2.5 TB/s link for the M1 Ultra: between the M1 Ultra and the M1 Max it's roughly twice the performance for CPU. For GPU? More like a +51% increase for double the GPU cores, on their own freaking API, Metal.

Why? NUMA topology. The more nodes (two or more crossbars) you add, the more times everything that needs fast, small transactions between cache and memory (oh, like gaming...) has to hop across an interconnect to get where it needs to be, and the longer it takes to get a result: more latency. More chiplets means more nodes, more hops, more latency. That's fine for supercomputers (or, in Apple's case, some production suite, because yeah... gaming on Mac...) but not so much for gaming. CPUs don't care, since they aren't as sensitive as GPUs to crazy-fast transactions and parallelization. People extrapolating from Ryzen CPUs to GPUs are out of their fucking minds.

You want to just slap on as many chiplets as you can to hit 100 TF? Is the goal to go way beyond the 4090's 608 mm²? Compared to MCM, monolithic designs consume less power, always. If you thought Ada Lovelace required big coolers and power draw, buckle up, pal.

It's the eternal cycle of hoping AMD finds a quantum hole in the universe and channels it into a GPU that trounces Nvidia in computational power, is cheap, is more power efficient, has perfect drivers, and matches Nvidia in every aspect. Even though it doesn't make any fucking sense when we look at the facts and science behind MCM and the decades of research every single chip manufacturer has done on this subject.

To be fair, the 4090 doesn't always scale that well either. With a >50% boost to both core count AND clocks over the 3090, several benchmarks only see a 40-50% advantage.
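
A rough back-of-the-envelope version of that scaling argument, using approximate public specs (boost clocks rounded):

```python
# Naive throughput scaling vs. observed gains (approximate public specs).
cores_3090, cores_4090 = 10496, 16384       # CUDA cores
clock_3090, clock_4090 = 1.70, 2.52         # boost clocks in GHz, approx.

naive = (cores_4090 / cores_3090) * (clock_4090 / clock_3090)
print(f"Naive shader-throughput scaling: {naive:.2f}x")   # ~2.3x
# Benchmarks landing at ~1.4-1.5x point to other bottlenecks (memory bandwidth,
# CPU limits, occupancy) rather than raw core count times clock speed.
```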
 

sertopico

Member
What a bullshit article. Power supplies have protection systems, you know...

Reading that the card self-destructs is hilarious lol
 

ShirAhava

Plays with kids toys, in the adult gaming world
GAF doesn't seem to buy this, so it's prob very true lmao

I hope the design and/or process isn't inherently flawed
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
GAF doesn't seem to buy this, so it's prob very true lmao

I hope the design and/or process isn't inherently flawed
It's ~2000 more CUDA cores than the 4090.
The 4090 at full power runs cool.
The 4090 at 60% power loses almost no performance.
The 4090 at 60% is a 360W card.

The 4090 Ti might run marginally hotter and at a higher TDP, but it certainly isn't going to be a problem for a 1000W PSU, let alone a 1200W or 1600W one.
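
Quick arithmetic on that 60% figure; the 360 W number works out if it is taken against a ~600 W maximum board power limit rather than the 450 W stock TGP (an assumption about which baseline the power slider percentage refers to):

```python
# 60% power-limit arithmetic under two possible baselines.
STOCK_TGP_W = 450      # RTX 4090 reference TGP
MAX_LIMIT_W = 600      # typical AIB maximum / 12VHPWR connector ceiling

for base_w in (STOCK_TGP_W, MAX_LIMIT_W):
    print(f"60% of {base_w} W = {0.6 * base_w:.0f} W")
# 60% of 450 W = 270 W
# 60% of 600 W = 360 W
```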

You can repeat ten times to the same person that MCM vs. monolithic is not a magical solution, that it really depends on the foundry and its limits, and that it's basically an optimization curve of finding the point where monolithic's advantages run out and MCM becomes worth it, and it will still fly over their head.

There's no such thing as MCM destroying monolithic if the foundry provides a good node, transistor density and yields, and, I mean... it's fucking TSMC.

GPUs are directly tied to transistor scaling and there's clearly room to grow. The only reason you want smaller chips is yields, or some monster GPU area with tons of inter-GPM connections (think supercomputers, or simply $$$ and additional losses in watts).

Adding extra, slower communication buses ($$$, since they're not made by lithography) and extra latency just to get more computational power than monolithic? GPU parallelization is super sensitive to inter-GPM bandwidth and data locality, unlike CPUs.

Say you improved the intercommunication as best you could (still slower than monolithic), what about software? Stuff like DLSS, which uses temporal information from surrounding frames: you think splitting that workload is easy, when you can't afford to lose milliseconds without the performance tanking? Now add the constant back and forth between the shader pipeline and the BVH traversal results when you use DX12 DXR, and somehow split the local storage for all of that? You have to make this invisible to developers, and that requires a ton of programming so the workload split appears transparent to the API. That's a lot of blind faith to believe a company that is always struggling with drivers and API tech would nail it on the first try.

Although kopite7kimi deleted his tweet, Nvidia had both an MCM and a monolithic version for the Ada Lovelace series. And of course there's Hopper:

Two H100 chips connected by cache-coherent NVLink at 900 GB/s. Faster than the MI200's Infinity Fabric. But even that is for high-latency applications.

Even Apple, with their 2.5 TB/s link for the M1 Ultra: between the M1 Ultra and the M1 Max it's roughly twice the performance for CPU. For GPU? More like a +51% increase for double the GPU cores, on their own freaking API, Metal.

Why? NUMA topology. The more nodes (two or more crossbars) you add, the more times everything that needs fast, small transactions between cache and memory (oh, like gaming...) has to hop across an interconnect to get where it needs to be, and the longer it takes to get a result: more latency. More chiplets means more nodes, more hops, more latency. That's fine for supercomputers (or, in Apple's case, some production suite, because yeah... gaming on Mac...) but not so much for gaming. CPUs don't care, since they aren't as sensitive as GPUs to crazy-fast transactions and parallelization. People extrapolating from Ryzen CPUs to GPUs are out of their fucking minds.

You want to just slap on as many chiplets as you can to hit 100 TF? Is the goal to go way beyond the 4090's 608 mm²? Compared to MCM, monolithic designs consume less power, always. If you thought Ada Lovelace required big coolers and power draw, buckle up, pal.

It's the eternal cycle of hoping AMD finds a quantum hole in the universe and channels it into a GPU that trounces Nvidia in computational power, is cheap, is more power efficient, has perfect drivers, and matches Nvidia in every aspect. Even though it doesn't make any fucking sense when we look at the facts and science behind MCM and the decades of research every single chip manufacturer has done on this subject.


Exactly.
People fall for marketing speak and assume it's some magic bullet that just does everything better.
I almost feel bad for them, cuz while RDNA3 will impress, it's not going to be some 200 TFLOP GPU just because it's MCM; look at the overall size of the chip(lets).
This time both AMD and Nvidia are on the same TSMC node, so we can guess with some degree of accuracy how powerful the big, big Navi is going to be.
It's unlikely to even be much of a threat to the 4090 and is probably gonna get walked 9 times outta 10.
9 times outta 10 cuz I'm sure Ubisoft have new versions of Dunia and Anvil that love AMD even more than the current versions do.
 

nemiroff

Gold Member
Smells like a fishy story to me... From reading reviews, the 4090 is doing more than fine, and at around 60°C at that. All of this drama because of a bit of factory overclocking? Nah.
 
A source close to YouTuber Moore's Law is Dead

You'd sooner believe tarot card readers predicting what color jacket and undies Jensen will wear at CES 2023 than this fraud, with his (badly) educated guesses scraped from all over the internet (think Reddit, LTT forums, or even 4chan) and disguised as "leaks".

That fool, "HipHopGamer," is more believable than this clickbaiting muppet.

the-office-steve-carell.gif
 

rushgore

Member
I always joked about Novideo, but their GPUs are now literally providing no video because they're too busy frying your computer.
The 5000 series better be all about power-efficiency improvements; this shit is fucking stupid.
This shit is already miles better than Ampere when it comes to performance per watt.
 