
NVIDIA Allegedly Begins Testing Its Fastest Next-Gen GPU, The AD102, For GeForce RTX 4090 Graphics Card, Features 24 Gbps GDDR6X Memory

....but xx60 might also be 8GB which I know people will call a backslide versus the 3060

yes-seinfeld.gif

8gb on anything below the 60 series and Nvidia needs to get bodied by AMD this round, dlss or no dlss

But I really don't think Nvidia is that stupid, despite some fanboys ready to defend that decision.
 
Last edited:

lukilladog

Member
There is no reality where Nvidia are going to sell a card twice as fast as a 3080 for "$699" or even "$799" or "$899".

Maybe the reality of slim mining profits and nvda stock under $200... plus a recession with people spending less money on this stuff.
 
Last edited:

Rbk_3

Member
I run a custom loop because they are fun to build, especially when doing ITX builds, but even an AIO would do wonders on any modern GPU. The large die sizes and spread out nature of the boards make them love large block cooling.
Yeah, the Hybrid Kit on my 3090 drops my temps 15 degrees easily. It is a shame they are not more common.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
They already did this with the 2080, which was ironically also a series that was developed and planned during a crypto boom amid perpetual GPU sell-outs/shortages. There is no downside for Nvidia here. If demand for whatever reason flatlines and sales tank, they can self-correct with price drops or Super/Ti rebadges of everything to get them back in line. AMD holds single-digit dGPU market share; they would have to release cards 50-100% faster than the 4XXX's at several hundred dollars less to claw even 20% or 30% away from Nvidia. Nvidia is a literal monopoly in dedicated gaming GPUs; people don't realize how utterly insignificant AMD truly is in the space, they effectively don't even exist.
The 2080 was a special case.
Turing in general was a special case because of how much die space was going towards RT cores and Tensor cores.
In pure raster, Turing wasn't that great an upgrade.
I can fully understand people who skipped Turing if they already had a 1070 or above.
<--- Had a 1070

Indeed they do have huge market share, but I'd think they still want to have slides that show gen-on-gen increases.
They usually launch the xx80 and xx70 first.
If the slides show a small jump, they will have to do what they did with the 3060, which is go back two generations to show just how much better the chip is (the 3060 slides even ignored the card it was actually replacing, the 2060S, cuz they knew that comparison would look weak).
yes-seinfeld.gif

8gb on anything below the 60 series and Nvidia needs to get bodied by AMD this round, dlss or no dlss

But I really don't think Nvidia is that stupid, despite some fanboys ready to defend that decision.
The AD xx60 will likely trade blows with the 3070 Ti... an 8GB card.
If it really is AD104-based then it'll likely be 12GB again... if it's AD103-based then 8GB is likely, to keep costs down.
My thinking is Nvidia won't bother making a xx60 Ti like the 30 series and will have both the xx70 and xx60 share a chip.
Bad-yield xx70s become xx60s.
The memory options, if that is the case, are 8GB and 16GB.
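For anyone wondering why the bus width pins down the capacity options like that: each GDDR6/GDDR6X package hangs off its own 32-bit channel and currently ships in 1 GB or 2 GB densities, so the package count (and with it the possible capacities) falls straight out of the bus width. A minimal sketch, assuming the rumored Ada bus widths and a single-sided board:

```python
# Rough sketch: possible VRAM capacities from bus width and GDDR6/GDDR6X package density.
# Each package occupies a 32-bit channel; current densities are 1 GB (8 Gb) or 2 GB (16 Gb).
# The bus widths below are the rumored Ada assignments, not confirmed specs.

def vram_options(bus_width_bits: int) -> list[int]:
    packages = bus_width_bits // 32
    return [packages * density_gb for density_gb in (1, 2)]  # GB, single-sided

for chip, bus in [("AD103 (256-bit)", 256), ("AD104 (192-bit)", 192)]:
    print(chip, "->", vram_options(bus), "GB")
# AD103 (256-bit) -> [8, 16] GB
# AD104 (192-bit) -> [6, 12] GB
```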
 
I'll almost certainly upgrade when the 4080ti is available, time will tell how shitty pricing is by then. The economy is still catching up to the massive shit the world as a whole just took on itself over the last two years. It seems no lessons have been learned in the process so I might be looking at a $1500 tank of gas which might make a $5K GPU look like a bargain.
 

FireFly

Member
They already did this with the 2080, which was ironically also a series that was developed and planned during a crypto boom amid perpetual GPU sell-outs/shortages. There is no downside for Nvidia here. If demand for whatever reason flatlines and sales tank, they can self-correct with price drops or Super/Ti rebadges of everything to get them back in line. AMD holds single-digit dGPU market share; they would have to release cards 50-100% faster than the 4XXX's at several hundred dollars less to claw even 20% or 30% away from Nvidia. Nvidia is a literal monopoly in dedicated gaming GPUs; people don't realize how utterly insignificant AMD truly is in the space, they effectively don't even exist.
The downside is having a potentially underwhelming launch and lukewarm reviews, like we saw with the 2xxx series. Even during the shortage, we saw semi-sensible MSRPs from Nvidia for the 3060 and 3070 Ti, and now GPU prices are coming back down to normal. Having high market share doesn't mean Nvidia are happy to see it erode, and Navi 33 looks like it will deliver 6900 XT performance at around the $500 price point.
 

DenchDeckard

Moderated wildly
I just got a 1000 watt RMx Corsair PSU. I'm gonna need them to allow me to buy the specific Nvidia power cable, I guess. I want a 4080 with a minimum of 16GB RAM, but I may go all out this time and get the 4090 if it really pumps out some extra performance. The TDP is an absolute joke, and I wish that Intel and AMD could put up more of a fight to knock Nvidia down a peg or two so that they would invest in a more efficient design. I'm gonna need a scaffolding structure to support the GPU.
 
The 2080 was a special case.
Turing in general was a special case because of how much die space was going towards RT cores and Tensor cores.
In pure raster, Turing wasn't that great an upgrade.
I can fully understand people who skipped Turing if they already had a 1070 or above.
<--- Had a 1070

Indeed they do have huge market share, but I'd think they still want to have slides that show gen-on-gen increases.
They usually launch the xx80 and xx70 first.
If the slides show a small jump, they will have to do what they did with the 3060, which is go back two generations to show just how much better the chip is (the 3060 slides even ignored the card it was actually replacing, the 2060S, cuz they knew that comparison would look weak).

The AD xx60 will likely trade blows with the 3070 Ti... an 8GB card.
If it really is AD104-based then it'll likely be 12GB again... if it's AD103-based then 8GB is likely, to keep costs down.
My thinking is Nvidia won't bother making a xx60 Ti like the 30 series and will have both the xx70 and xx60 share a chip.
Bad-yield xx70s become xx60s.
The memory options, if that is the case, are 8GB and 16GB.
It won't be 8GB on the 4060; I would bet you money on it. The 192-bit bus ensures it, even if Nvidia had lost their minds and wanted to use 8GB. Dude, there is a damn 2060 12GB model, ffs.

Also I expect 4060 to match 3080 in performance.

All those 8gb ampere cards are pure junk with no longevity, only sellable because of a horrible buyers market and crypto. 10gb 3080 is also a joke.

Love my 12gb 3060, could use more performance at 4k but at least I got what I paid for and didn't get cheated out of vram. Excellent 1080p high fps card, and a good 4k card if you know how to tweak settings. Gets 4k60 on Witcher 3 high/max :)
 
Last edited:

nemiroff

Gold Member
That TDP is ridic. I'm on the medium-power bandwagon from now on. Future upgrades will never go higher in TDP than what my system currently has.
I'm unfortunately in the same group, so to speak. ~800W for a peak gaming PC session is an astonishing amount of heat to unleash in a normal-sized room unless you're living in the Arctic.
 

DinoD

Member
I can perhaps flog this to my wife, as it potentially doubles as the heating solution in winter months.
 

Md Ray

Member
All that power and it will still suffer from those nasty hitches and stutters in DX12 titles.

NVIDIA and other GPU vendors should work with MS and game devs to eliminate shader compilation stutters from games first.
 
Last edited:

KungFucius

King Snowflake
yes-seinfeld.gif

8gb on anything below the 60 series and Nvidia needs to get bodied by AMD this round, dlss or no dlss

But I really don't think Nvidia is that stupid, despite some fanboys ready to defend that decision.
The size of the memory is not the only factor. I had a game that said it needed 11 GB for max settings. I loled because I had a 3090 and thought it semi-justified not sticking with a 3080. When I ran the game it used something under 8GB. 11GB was probably the spec needed for the 2080 Ti. I wish I still had a 3080 to see if the game would use more RAM because of the lower bandwidth. The point is the size is not the only factor. Most argue more is needed to future-proof, but they just want margin. Nvidia are world-class GPU designers; they chose the best configuration for the market knowing that people who are really into tech will be upgrading each gen regardless. They don't add more memory during the cycle because the launch cards are skimped; they add more memory to make the newer revs more attractive later in the cycle, to add confidence to buyers who might be tempted to wait for next gen. It does them no favors that game devs/publishers give memory specs that are not accurate across all GPUs.
 
My predictions for where the chips fall:

AD102 4090 ti, 4090, 4080ti
AD103 4080, 4070ti
AD104 4070, 4060ti
AD106 4060
AD107 4050

I'm sure that will be all wrong because I didn't leave them with a mobile specific chip for the high-end mobile cards. :messenger_tears_of_joy:

I'll assume that they are dropping the width of the memory bus to try and offset the costs of the cache. The additional cache will hopefully allow them to see decent gains with only modest increases in cores.
 
Last edited:
More and more, I wish we had done this years ago when we built our home. With the power our house runs, charging our Tesla, and such, being solar powered would be just awesome now. I need to look into this again.
It's for sure not a cheap investment, but like you we mainly use EVs and it's just free energy at this point, so it's totally worth the cost to us.

We currently have a Mach E and with the level 2 charger it can get a full charge in about 12 hours all from solar

Side note: we just bought a Wrangler 4XE 2 days ago and it gets a "full" charge in about 2 hours since its all-electric range is only about 20 miles :)
 

nightmare-slain

Gold Member
the price and power consumption is going to be eye watering.

can't really justify PC gaming anymore when cards are so expensive and power hungry. i'll grab a Series X when my GPU isn't holding up anymore.

Do i really need a fucking 1000w psu for this thing??
probably yeah.

an overclocked CPU can run anywhere between 200-400W. so if you have a 400-600W gpu that's 600-1000W just for cpu/gpu. need to also take into account your motherboard, storage drives, fans, and liquid cooler (if you have one).

i think 750W is going to be the absolute minimum for builds now but i feel that would be cutting it close. probably be safer going with an 850W. i think most people will just go with a 1000W and if you build a really beefy system then you'll need to go for a 1200/1300W.
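As a rough sanity check on those figures, here's a minimal sketch of the headroom math; the component draws and the 20% margin are illustrative assumptions, not measurements from any particular build:

```python
# Back-of-the-envelope PSU sizing. All wattages here are illustrative assumptions,
# not measured figures for any specific build.
def recommended_psu(cpu_w: int, gpu_w: int, other_w: int = 150, headroom: float = 0.20) -> int:
    """Sum the major draws, add headroom, round up to a common PSU size."""
    total = (cpu_w + gpu_w + other_w) * (1 + headroom)
    common_sizes = [650, 750, 850, 1000, 1200, 1300, 1600]
    return next(size for size in common_sizes if size >= total)

print(recommended_psu(cpu_w=200, gpu_w=450))  # -> 1000
print(recommended_psu(cpu_w=300, gpu_w=600))  # -> 1300
```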
 
Last edited by a moderator:
The size of the memory is not the only factor. I had a game that said it needed 11 GB for max settings. I loled because I had a 3090 and thought it semi-justified not sticking with a 3080. When I ran the game it used something under 8GB. 11GB was probably the spec needed for the 2080 Ti. I wish I still had a 3080 to see if the game would use more RAM because of the lower bandwidth. The point is the size is not the only factor. Most argue more is needed to future-proof, but they just want margin. Nvidia are world-class GPU designers; they chose the best configuration for the market knowing that people who are really into tech will be upgrading each gen regardless. They don't add more memory during the cycle because the launch cards are skimped; they add more memory to make the newer revs more attractive later in the cycle, to add confidence to buyers who might be tempted to wait for next gen. It does them no favors that game devs/publishers give memory specs that are not accurate across all GPUs.
They skimp on memory a lot. The 3080 10GB already has issues at 4K max textures in games like Far Cry, so in the future it's an issue that didn't need to be there.

Just like the 2GB GTX 680 vs. the 4GB version; the 2GB version would stutter with max textures very early on in the generation, but the 4GB was fine.

The 2080 Ti does not need more VRAM than the 3080 because it is weaker; it doesn't work like that. Rather, the 3080 will be able to use more VRAM because it's more powerful. The fact that both it and the 1080 Ti have an extra gig over the 3080 is laughable, and they only designed the 3080 that way because of the market, not because they're the perfect company.

More VRAM is important so the card can perform its best for at least a few cycles, and the 12GB 3060 is good there; the faster 3060 Ti/3070 will stutter or have lower textures in games before the 3060 despite being more expensive, and that is just stupid.
 
Last edited:

GymWolf

Gold Member
the price and power consumption is going to be eye watering.

can't really justify PC gaming anymore when cards are so expensive and power hungry. i'll grab a Series X when my GPU isn't holding up anymore.


probably yeah.

an overclocked CPU can run anywhere between 200-400W. so if you have a 400-600W gpu that's 600-1000W just for cpu/gpu. need to also take into account your motherboard, storage drives, fans, and liquid cooler (if you have one).

i think 750W is going to be the absolute minimum for builds now but i feel that would be cutting it close. probably be safer going with an 850W. i think most people will just go with a 1000W and if you build a really beefy system then you'll need to go for a 1200/1300W.
I don't overclock, never did.

And I'm probably gonna end up with a 4070/4080, or a 3080 Ti if the 4000 series launch is gonna be a shitshow.
Pair that with an Intel 12th-gen chip or a similar AMD offering, or whatever is gonna be in the 250-300 dollar price range during the 4000 series launch period.

So I guess that 850-1000W is gonna be the sweet spot for me.
 
Last edited:

Dream-Knife

Banned
10752, the RTX 3080 has 8960 (round off to 9000).
3080 has 8704.
Hmm, I may just keep my 3090 and skip next gen since my current card is still a beast at 4k and has enough vram to last a long time.

Do you guys plan to get the 4090 at launch or wait for the inevitable 4090 Ti?
3090ti -> 4090 -> 4090ti if you have to have the best.
It won't be 8GB on the 4060; I would bet you money on it. The 192-bit bus ensures it, even if Nvidia had lost their minds and wanted to use 8GB. Dude, there is a damn 2060 12GB model, ffs.

Also I expect 4060 to match 3080 in performance.

All those 8gb ampere cards are pure junk with no longevity, only sellable because of a horrible buyers market and crypto. 10gb 3080 is also a joke.

Love my 12gb 3060, could use more performance at 4k but at least I got what I paid for and didn't get cheated out of vram. Excellent 1080p high fps card, and a good 4k card if you know how to tweak settings. Gets 4k60 on Witcher 3 high/max :)
3060 is a 1080p/1440p card bud. If you're upgrading to 40 series you could have gotten one of those cards with less vram and had better performance. It's not like the longevity even matters at that point.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
It won't be 8GB on the 4060; I would bet you money on it. The 192-bit bus ensures it, even if Nvidia had lost their minds and wanted to use 8GB. Dude, there is a damn 2060 12GB model, ffs.

Also I expect 4060 to match 3080 in performance.

All those 8gb ampere cards are pure junk with no longevity, only sellable because of a horrible buyers market and crypto. 10gb 3080 is also a joke.

Love my 12gb 3060, could use more performance at 4k but at least I got what I paid for and didn't get cheated out of vram. Excellent 1080p high fps card, and a good 4k card if you know how to tweak settings. Gets 4k60 on Witcher 3 high/max :)

They skimp on memory a lot. The 3080 10GB already has issues at 4K max textures in games like Far Cry, so in the future it's an issue that didn't need to be there.

Just like the 2GB GTX 680 vs. the 4GB version; the 2GB version would stutter with max textures very early on in the generation, but the 4GB was fine.

The 2080 Ti does not need more VRAM than the 3080 because it is weaker; it doesn't work like that. Rather, the 3080 will be able to use more VRAM because it's more powerful. The fact that both it and the 1080 Ti have an extra gig over the 3080 is laughable, and they only designed the 3080 that way because of the market, not because they're the perfect company.

More VRAM is important so the card can perform its best for at least a few cycles, and the 12GB 3060 is good there; the faster 3060 Ti/3070 will stutter or have lower textures in games before the 3060 despite being more expensive, and that is just stupid.
Did you just my 3060 has 12GB of VRAM me?

Those 8GB cards you are laughing at will outperform your 3060 10 out of 10 times.
The 3060 only got 12GB of VRAM because 6GB would have been an insult.
The 3060 is vastly vastly vastly weaker than the 3070; even with that extra VRAM, the 3070's performance and memory interface will have it eating the 3060's breakfast until you run a game that's nothing but 8K textures on 2 polygons repeated over and over.
The 3070 won't be stuttering or lowering textures before the 3060, cuz the 3060 will have already choked itself to death by the time we reach a point where the 3070 NEEDs to lower texture quality to be stable.

The 3060 is already struggling to hit frikken 40fps at 1440p when the 3070 can basically lock 60.
As games get more intensive the 3060 will be a sub 30fps card well before it ever gets a chance to even use that 12GB of VRAM.
3080 has 8704.
Didn't Nvidia replace the 3080 10G with the 3080 12G, just like they replaced the 2060 with the 2060S?
I thought the 3080 12G was basically the 3080S, but I guess the price increase and the fact they are still found on the shelves (shortage notwithstanding) means Nvidia wants them to be concurrent SKUs.
Odd choice, but idk.
I listed the 3080 12G's CUDA count cuz it's currently the de facto 3080; it's what the 4080 would be replacing.
 
NVIDIA's next-gen AD102 GPU is currently being tested for the GeForce RTX 4090 graphics card, as reported by Kopite7kimi.

NVIDIA GeForce RTX 4090 Rumors: Flagship AD102 GPU Enters Test Phase, 24 Gbps GDDR6X Memory Ready

There has been speculation that NVIDIA might go for a GeForce RTX 50 series naming scheme instead of the expected GeForce RTX 40 series branding, but it looks like NVIDIA has decided to stick with the 40 series naming scheme, according to Kopite7kimi. Other than that, one big milestone is that NVIDIA may already have started testing and evaluating its flagship Ada Lovelace GPU, the AD102, which will power a series of graphics cards such as the RTX 4090 and the RTX 4080.



The leaked PCB design features 12 memory solder points, all compatible with Micron's GDDR6X memory. Higher-end cards might go with single-sided, dual-capacity memory, since that offers the best power/temperature balance, and feature up to 24 GB capacities at higher speeds (up to 24 Gbps). As for the mainstream segment, we are likely to see 20 Gbps+ designs in 8 GB and up to 16 GB flavors, which can help reduce power since the power regulation for the memory will be dropped to 3 VRMs.

As for cooling these monster PCBs, NVIDIA is reportedly going to reuse their triple-slot BFGPU design while board partners are going to utilize 3.5 and even quad-slot cooling solutions weighing over 2 kg. Most AIBs might just end up utilizing AIO and Hybrid cooling designs, something that you will be seeing in the RTX 3090 Ti. The cards are expected to feature up to a 28 phase VRM design on the flagship NVIDIA GeForce RTX 4090 graphics card so all that extra cooling will be put to good use.


NVIDIA Ada Lovelace & Ampere GPU Comparison

Ada Lovelace GPU | SMs | CUDA Cores | Top SKU | Memory Bus | Ampere GPU | SMs | CUDA Cores | Top SKU | Memory Bus | SM Increase (% Over Ampere)
AD102 | 144 | 18432 | RTX 4090? | 384-bit | GA102 | 84 | 10752 | RTX 3090 Ti | 384-bit | +71%
AD103 | 84 | 10752 | RTX 4070? | 256-bit | GA103S | 60 | 7680 | RTX 3080 Ti | 256-bit | +40%
AD104 | 60 | 7680 | RTX 4060? | 192-bit | GA104 | 48 | 6144 | RTX 3070 Ti | 256-bit | +25%
AD106 | 36 | 4608 | RTX 4050 Ti? | 128-bit | GA106 | 30 | 3840 | RTX 3060 | 192-bit | +20%
AD107 | 24 | 3072 | RTX 4050? | 128-bit | GA107 | 20 | 2560 | RTX 3050 | 128-bit | +20%
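For what it's worth, the "SM Increase" column in that table is just the ratio of the two SM counts; a quick check using the rumored figures above:

```python
# Reproduce the "SM Increase (% Over Ampere)" column from the rumored SM counts above.
pairs = {"AD102 vs GA102": (144, 84), "AD103 vs GA103S": (84, 60),
         "AD104 vs GA104": (60, 48), "AD106 vs GA106": (36, 30),
         "AD107 vs GA107": (24, 20)}
for name, (ada_sms, ampere_sms) in pairs.items():
    print(f"{name}: +{round((ada_sms / ampere_sms - 1) * 100)}%")
# AD102 vs GA102: +71%  |  AD103 vs GA103S: +40%  |  AD104 vs GA104: +25%
# AD106 vs GA106: +20%  |  AD107 vs GA107: +20%
```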


NVIDIA GeForce RTX 4090 'AD102 GPU' Graphics Card PCB Design - What We Know

The NVIDIA GeForce RTX 40 series graphics cards equipped with the AD102 GPU are expected to offer TDPs of up to 600W. That is at least what the current BIOS shipping to board partners is rated at, so the rumors about 450-600W TDPs might be true, but we haven't yet seen the final figures. The power ratings are usually on the high side during the testing phase, so those could be optimized by the time the cards actually launch. The cards will be outfitted with PCIe Gen 5 power connectors and will ship with a 4 x 8-pin to 1 x 16-pin adapter to support the huge power draw. The upcoming GeForce RTX 3090 Ti itself will be shipping with a 3 x 8-pin to 1 x 16-pin adapter.
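The adapter choice follows from simple connector math, assuming the usual ratings of 150 W per PCIe 8-pin, 75 W from the slot, and the 600 W ceiling of the new 16-pin (12VHPWR) connector:

```python
# Why a 4 x 8-pin adapter: each PCIe 8-pin is rated for 150 W and the slot adds 75 W,
# while the new 16-pin (12VHPWR) connector tops out at 600 W.
EIGHT_PIN_W, SLOT_W, SIXTEEN_PIN_MAX_W = 150, 75, 600

def adapter_power_budget(num_8pin: int) -> int:
    return min(num_8pin * EIGHT_PIN_W, SIXTEEN_PIN_MAX_W) + SLOT_W

print(adapter_power_budget(3))  # RTX 3090 Ti style adapter -> 525 W
print(adapter_power_budget(4))  # rumored RTX 4090 adapter  -> 675 W
```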

More : https://wccftech.com/nvidia-geforce...ics-card-testing-begins-24-gbps-gddr6x-rumor/

At 600W? No thanks, I don't have room for space heaters. Plus, where's that 2x improvement over Ampere some prominent leaksters were touting? I don't see it here. It's also too late to make any under-the-hood changes to the architecture, since such things are finalized early, with thorough testing commencing nearly 5-6 months prior to launch. I hope those 600W power figures get reduced, because using ATX 3.0's max power delivery this early on is excessive to say the least. Hopefully it does improve; thermals are just as important as performance and transistor density.
 

STARSBarry

Gold Member
Hmm, I may just keep my 3090 and skip next gen since my current card is still a beast at 4k and has enough vram to last a long time.

Do you guys plan to get the 4090 at launch or wait for the inevitable 4090 Ti?

Honestly I never upgrade every generation; sometimes I skip two if it turns out to be anything like the 2000 series, which had barely any performance gain over the 1000 series. My 980 Tis easily made it through.

With the increased power consumption, I want to see if they can tweak that in the inevitable 5000 series. More raw performance is great, but I don't want to have to run my 1000-watt PSU at full tilt all the time; I like the headroom.

RTX 3080 I think will last me a good while, especially since I took a 2k 240hz over a 4k 120hz monitor for this gen.
 
The 3070 won't be stuttering or lowering textures before the 3060, cuz the 3060 will have already choked itself to death by the time we reach a point where the 3070 NEEDs to lower texture quality to be stable.

I remember when people said this about the 1080p cards with 4GB of VRAM and then 8GB. The PS5 and XSX aren't getting any stronger in the GPU department as time goes on, but that won't stop devs from utilizing the memory available to them, which should increase memory needs across all target resolutions. You have to figure that 8GB cards will hit the junk heap in the very near future. But the XSS will still be working with limited memory so I guess we'll see how it plays out.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I remember when people said this about the 1080p cards with 4GB of VRAM and then 8GB. The PS5 and XSX aren't getting any stronger in the GPU department as time goes on, but that won't stop devs from utilizing the memory available to them, which should increase memory needs across all target resolutions. You have to figure that 8GB cards will hit the junk heap in the very near future. But the XSS will still be working with limited memory so I guess we'll see how it plays out.
I'm not saying 8GB of VRAM is some sweet spot.
I was actually waiting for the 3070 Ti 16G... then they cancelled it (fuckers).
Depending on how the new high-cache Ada chips perform, I might jump on or hold out for the generation, as I do believe VRAM (chip dependent) will matter as games start to squeeze past 10GB at 3440x1440.


What I am saying is the 8GB 3070 will walk the 12GB 3060 at every resolution and for the foreseeable future, because the gap between them is too vast to be made up by having more VRAM.
The 3060's chip is just weak, and by the time the 3070 is struggling the 3060 will already be well below 30fps, i.e. already in unplayable territory.
As is, right now the 3060 is struggling to maintain playable framerates... how is that going to get better as games get more intense?
That's what I don't understand about 3060 owners: they are acting like when VRAM becomes a factor the actual chip suddenly gets better... it's already weak as is; when that much VRAM is actually needed, the chip will be trash tier.
 
I'm not saying 8GB of VRAM is some sweet spot.
I was actually waiting for the 3070 Ti 16G... then they cancelled it (fuckers).
Depending on how the new high-cache Ada chips perform, I might jump on or hold out for the generation, as I do believe VRAM (chip dependent) will matter as games start to squeeze past 10GB at 3440x1440.


What I am saying is the 8GB 3070 will walk the 12GB 3060 at every resolution and for the foreseeable future, because the gap between them is too vast to be made up by having more VRAM.
The 3060's chip is just weak, and by the time the 3070 is struggling the 3060 will already be well below 30fps, i.e. already in unplayable territory.
As is, right now the 3060 is struggling to maintain playable framerates... how is that going to get better as games get more intense?
That's what I don't understand about 3060 owners: they are acting like when VRAM becomes a factor the actual chip suddenly gets better... it's already weak as is; when that much VRAM is actually needed, the chip will be trash tier.

Very true, that 3060 won't get any stronger in a literal sense with time, but it won't get weaker either. It is a rough equivalent to XSX and PS5 as long as you keep the resolution low enough to handle the low bandwidth, and that should hold true for the most part. The 3060 Ti and 3070 have been shown to tank if the VRAM is pushed harder than the 8GB allows (look at the 4K lows here vs. the RX 6700).

Right now that's only an issue at 4K, because that's the only scenario where the VRAM was heavily challenged. But, once we transition to the consoles fully utilizing their VRAM at say 1440p or even 1080p internal render resolution, the 3060 Ti and 3070 are going to be in trouble, meanwhile the 3060 could still be rolling along at 1080p or maybe 900p thanks to the large amount of memory. We've seen this before with some of the AMD cards, where they have a much longer shelf life because of generous VRAM allotments. This is never more of an issue than during a console generation change. For those that don't upgrade often, I'd say 12GB or greater going forward.

It's a bit like how the 8GB RX480 and the 4GB GTX 970 were close to each other in performance at launch, but the RX480 would improve in comparative performance as time went on. Obviously, the literal GPU power offered by both remained constant.
 
Last edited:
Did you just my 3060 has 12GB of VRAM me?
Did you just you only need 8gb, me?
I'm not saying 8GB of VRAM is some sweet spot.
I was actually waiting for the 3070 Ti 16G... then they cancelled it (fuckers).
Depending on how the new high-cache Ada chips perform, I might jump on or hold out for the generation, as I do believe VRAM (chip dependent) will matter as games start to squeeze past 10GB at 3440x1440.


What I am saying is the 8GB 3070 will walk the 12GB 3060 at every resolution and for the foreseeable future, because the gap between them is too vast to be made up by having more VRAM.
The 3060's chip is just weak, and by the time the 3070 is struggling the 3060 will already be well below 30fps, i.e. already in unplayable territory.
As is, right now the 3060 is struggling to maintain playable framerates... how is that going to get better as games get more intense?
That's what I don't understand about 3060 owners: they are acting like when VRAM becomes a factor the actual chip suddenly gets better... it's already weak as is; when that much VRAM is actually needed, the chip will be trash tier.
Riveting.

You would have been the same expert that would have advised getting the 2GB GTX 680 over the 4GB, despite the 2GB card not even being able to toggle max textures and settings in GTA V, a PS3 cross-gen game.

Thanks for playing my game, now i'll just get back to playing games in 4k on the 3060 because I actually know how to tweak settings and not just set it to ultra... well, textures are always ultra ;)

No but seriously, you continue to miss the point. The point is that the 3070, 3060 ti, 3080, 3070 ti don't have enough vram for as powerful as they are, and the 3060 does. It's not to say the 3060 is the better card in all instances (in terms of performance, anyway, because it is a better product), but that at least the 3060 has enough vram so that it doesn't hit that limit before it just runs out of power.

Because the 1050 Ti I had was the 4GB version, even though I had to reduce settings often to reach 60fps, I never had to turn down textures. The 1060 3GB, on the other hand, is a more powerful card, but you would have to turn down textures on it in certain games while the 1050 Ti kept them higher. Doom Eternal, for example, HATES the 3GB 1060 and it absolutely tanks, but the 1050 Ti can cope with the higher settings.

Don't know why i'm bothering though, I know you'll just continue to trash the 3060 despite it giving me really good results in my games.
 
Last edited:
Very true, that 3060 won't get any stronger in a literal sense with time, but it won't get weaker either. It is a rough equivalent to XSX and PS5 as long as you keep the resolution low enough to handle the low bandwidth, and that should hold true for the most part. The 3060 Ti and 3070 have been shown to tank if the VRAM is pushed harder than the 8GB allows (look at the 4K lows here vs. the RX 6700). Right now that's only an issue at 4K, because that's the only scenario where the VRAM was heavily challenged. But, once we transition to the consoles fully utilizing their VRAM at say 1440p or even 1080p internal render resolution, the 3060 Ti and 3070 are going to be in trouble, meanwhile the 3060 could still be rolling along at 1080p or maybe 900p thanks to the large amount of memory. We've seen this before with some of the AMD cards, where they have a much longer shelf life because of generous VRAM allotments. This is never more of an issue than during a console generation change. For those that don't upgrade often, I'd say 12GB or greater going forward.

I don't know why this annoying subset of PC gamers, who should be asking for more hardware resources and not settling, consistently says "YoU onLY neEED this RiGhT now, you OnLY neED 6 CoreZ" etc. etc.

No, no you don't. You need more, and particularly you need more for future software.

And then they say, well, you'll have moved on to newer hardware by the time it's an issue, admitting what I'm saying in the first place.

Except you do get smoother performance and are able to toggle higher settings RIGHT NOW by having a better CPU and more VRAM (particularly at 4K), but hey, the peasant PC brigade can keep beating their drum all they want I guess.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Very true, that 3060 won't get any stronger in a literal sense with time, but it won't get weaker either. It is a rough equivalent to XSX and PS5 as long as you keep the resolution low enough to handle the low bandwidth, and that should hold true for the most part. The 3060 Ti and 3070 have been shown to tank if the VRAM is pushed harder than the 8GB allows (look at the 4K lows here vs. the RX 6700).

Right now that's only an issue at 4K, because that's the only scenario where the VRAM was heavily challenged. But, once we transition to the consoles fully utilizing their VRAM at say 1440p or even 1080p internal render resolution, the 3060 Ti and 3070 are going to be in trouble, meanwhile the 3060 could still be rolling along at 1080p or maybe 900p thanks to the large amount of memory. We've seen this before with some of the AMD cards, where they have a much longer shelf life because of generous VRAM allotments. This is never more of an issue than during a console generation change. For those that don't upgrade often, I'd say 12GB or greater going forward.

It's a bit like how the 8GB RX480 and the 4GB GTX 970 were close to each other in performance at launch, but the RX480 would improve in comparative performance as time went on. Obviously, the literal GPU power offered by both remained constant.

I'd like to see a 3060 12G vs 3070 8G benchmark.
Cuz that's what the discussion is.
The chip being weaker means its VRAM is practically irrelevant, because even as the generation goes on it won't be able to keep up with the 3070.

Except you do get smoother performance and are able to toggle higher settings RIGHT NOW by having a better CPU and more VRAM (particularly at 4K), but hey, the peasant PC brigade can keep beating their drum all they want I guess.

Show me one game where, at like-for-like settings, even at 4K with Ultra textures, the 3060 gets smoother performance than the 3070.
I'd like to see that 3060 play any modern game going forward at 4K at whatever settings (Ultra textures), considering its CUDA and bandwidth limitations.


Odd that you are preaching about NOT settling, yet settled for a 3060 with its super limited bandwidth and CUDA count; it's barely faster than the card it's replacing, the 2060S.
 

dave_d

Member
I’m rocking my 2080ti until these “4” series cards are out. If they are readily available on or around launch for MSRP then I’ll get one day 1.
The way things are going I'll probably skip the 4 series (already have a 3070) and just do a new build in 2024. I mean even if I wanted to get one of these cards I'd probably need a new power supply and maybe extra cooling. At that point I should be considering a new build.
 

VFXVeteran

Banned
This kind of news is what console gamers should be paying attention to. Nvidia is going to dictate what next-gen will look like in the future. 3xxx series just came out and those boards are already powerful. 4xxx series is just going to be ridiculous. They might even have a 5xxx series before the console generation is over.
 

Darius87

Member
This kind of news is what console gamers should be paying attention to. Nvidia is going to dictate what next-gen will look like in the future. 3xxx series just came out and those boards are already powerful. 4xxx series is just going to be ridiculous. They might even have a 5xxx series before the console generation is over.
Game companies dictate what next-gen will look like; Nvidia hasn't made any games, just some tech demos for their top cards.
Current-gen cards and consoles already have enough power for photorealistic graphics; it's a matter of time, money and talent.
When I read the topic, the only thing that comes to mind is "expensive".
 
This kind of news is what console gamers should be paying attention to. Nvidia is going to dictate what next-gen will look like in the future. 3xxx series just came out and those boards are already powerful. 4xxx series is just going to be ridiculous. They might even have a 5xxx series before the console generation is over.
The most popular graphics card on Steam was the GTX 1060.

Nvidia is going to dictate what next gen will look like??😂🤣
 

SmokSmog

Member
My thoughts; it all depends on RDNA3 competitiveness:

Optimistic:
RTX 4090 AD102 140SM 384Bit 24GB
RTX 4080Ti AD102 128SM 384Bit 24GB
RTX 4080 AD102 114SM 320Bit 20GB
RTX 4070Ti AD103 84SM 256Bit 16GB
RTX 4070 AD103 80SM 256Bit 16GB
RTX 4060Ti AD104 60SM 192Bit 12GB
RTX 4060 AD104 52SM 192Bit 12GB
RTX 4050Ti AD106 36SM 128Bit 8GB
RTX 4050 AD107 24SM 128Bit 4/8GB



This is the middle ground

RTX 4090 AD102 140SM 384Bit 24Gb/s 24GB
RTX 4080Ti AD102 114SM 320Bit 24Gb/s 20GB
RTX 4080 AD103 84SM 256Bit 24Gb/s 16GB
RTX 4070 AD103 72SM 256Bit 21Gb/s 16GB
RTX 4060Ti AD104 60SM 192Bit 21Gb/s 12GB
RTX 4060 AD104 52SM 192Bit 18Gb/s 12GB
RTX 4050Ti AD106 36SM 128Bit 18Gb/s 8GB
RTX 4050 AD107 24SM 128Bit 18Gb/s 8GB

Pessimistic:
RTX 4090 AD102 140SM 384Bit 24GB
RTX 4080Ti AD102 114SM 320Bit 20GB
RTX 4080 AD103 84SM 256Bit 16GB
RTX 4070 AD104 60SM 192Bit 12GB
RTX 4060Ti AD104 52SM 192Bit 12GB
RTX 4060 AD106 36SM 128Bit 8GB
RTX 4050Ti AD107 24SM 128Bit 8GB

Remember that Micron is finally producing the 2GB GDDR6X modules they launched with the RTX 3090 Ti. Ampere had a garbage memory setup because of 1GB GDDR6X modules; this is why the RTX 3080 had only 10GB and the RTX 3080 Ti 12GB. The RTX 3070 was using standard 1GB GDDR6 modules from Samsung, which was producing 2GB modules too (RDNA2 and the RTX 3060 were using them), but Nvidia didn't put 16GB on the RTX 3070 so as not to make the RTX 3080 look bad.
The RTX 3090 had 24GB with a double-sided PCB where memory modules were connected in clamshell mode (2 modules per 32-bit controller). It was expensive and hot.
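To make the clamshell point concrete, here's a small sketch of how capacity falls out of bus width, module density, and whether the board runs one or two packages per 32-bit controller:

```python
# VRAM capacity from bus width, module density, and clamshell mode (two packages
# per 32-bit controller, as on the double-sided RTX 3090 board).
def capacity_gb(bus_bits: int, module_gb: int, clamshell: bool = False) -> int:
    channels = bus_bits // 32
    return channels * module_gb * (2 if clamshell else 1)

print(capacity_gb(384, 1, clamshell=True))  # RTX 3090:    24 GB from 24 x 1 GB modules
print(capacity_gb(384, 2))                  # RTX 3090 Ti: 24 GB from 12 x 2 GB modules
print(capacity_gb(320, 1))                  # RTX 3080:    10 GB from 10 x 1 GB modules
```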
 
Last edited:

VFXVeteran

Banned
Game companies dictate what next-gen will look like; Nvidia hasn't made any games, just some tech demos for their top cards.
Current-gen cards and consoles already have enough power for photorealistic graphics; it's a matter of time, money and talent.
When I read the topic, the only thing that comes to mind is "expensive".
Ok. So now game developers create hardware for next-gen? You sound silly - typical hater because innovation isn't happening on your precious box.

A developer cannot make a game without knowing what limits they have. Hardware dictates that, not software. Developers work WITH the hardware. Not a single developer said they wanted ray tracing last gen and Nvidia/AMD said "yes sir! Right away sir!".

And sure... consoles have enough power from their 2080-like performance to give photorealistic graphics, so no need for updated hardware -- that is, until there are announcements for "mid-gen" refreshes. And THEN people like you will get excited about new hardware pushing even BETTER photoreal graphics, right?
 
I'd like to see a 3060 12G vs 3070 8G benchmark.
Cuz that's what the discussion is.
The chip being weaker means its VRAM is practically irrelevant, because even as the generation goes on it won't be able to keep up with the 3070.

We can't give an example of those specific cards because we can't see the future yet. But we can look at the past and find situations where weaker cards with higher VRAM eventually overtook more powerful alternatives with less memory. The 3070 will always be the more powerful card, but if devs start utilizing more memory in assets to build the scene (likely with the consoles soon to be the baseline tech), the 3070 will not be able to reach its full potential and the 3060 and 3070 will get closer together in performance. Now if the amount of geometry greatly increased, etc., that would affect the 3060 and 3070 in equal measure, but there is less upward room there since both the consoles are weaker than a 6700.

Here is a look at the GTX 770 2GB vs. R9 380x 4GB

At the launch of the R9 380x, the GTX 770 2GB could hold its own and even be the better performer of the two.

As can be seen in this 380 review: https://www.pcgamer.com/sapphire-radeon-r9-380-4gb-review/

Now we can fast forward to 2020 and look at a collection of games running at similar settings on the two cards, with the R9 380x clearly having an advantage in most (if not all) of them.

It isn't a case of the GTX 770 losing ability as a GPU; it's the same silicon as before. What's changed is that devs closed the door on 2GB cards and started taking advantage of 3 and 4GB of VRAM (since the consoles had 5GB unified available), and that created a bottleneck for the 770. If devs start to utilize more than 8GB consistently at XSX/PS5-quality visual settings, the 3070 is going to have to start compromising quite a bit to keep up (likely needing to work with more XSS-like textures and settings) even though the GPU is capable of more. Hard to say how the 3060 handles such games, but it won't have a VRAM bottleneck in the same way.
 
Last edited:
Odd that you are preaching about NOT settling, yet settled for a 3060 with its super limited bandwidth and CUDA count; it's barely faster than the card it's replacing, the 2060S.
I didn't settle, I just wanted to get what I paid for. I was not going to buy a card that is more powerful than 1080ti, yet has less vram, and that applied to everything above the 3060 at the time. (Btw, 3060 is basically a 1080ti with an extra gig, so your narrative that it's weak doesn't hold water 😙 not to mention dlss!)

Had the 3070 had 12gb, I'd have gladly bought it. But I saved some money with 3060 and got an uncompromised product.

Now there is the 3080 12gb, but ngreedia wanted a hefty premium for it and now we are on the cusp of the 4000 series, so why would I buy it?

When the product is good and has no shortcuts, I don't mind paying a premium hence why I have a 5800x3D on the way.

Oh and yeah I knew you'd gloss over my doom and gta examples, and my point is NOT strictly about 3060 vs 3070 no matter how you want it to be.

SportsFan581 is trying to tell you as well, but you just want to bury your head in the sand, so carry on mate.
 
Last edited:

FireFly

Member
We can't give an example of those specific cards because we can't see the future yet. But we can look at the past and find situations where weaker cards with higher VRAM eventually overtook more powerful alternatives with less memory. The 3070 will always be the more powerful card, but if devs start utilizing more memory in assets to build the scene (likely with the consoles soon to be the baseline tech), the 3070 will not be able to reach its full potential and the 3060 and 3070 will get closer together in performance. Now if the amount of geometry greatly increased, etc., that would affect the 3060 and 3070 in equal measure, but there is less upward room there since both the consoles are weaker than a 6700.
The difference is we are moving to a model where data will be streamed in as needed, aided by DirectStorage. And it remains to be seen how big the VRAM pool will need to be to support this. For example, the UE5 city demo seems to fit within the 8GB limit, and the Doom Eternal example is one where using the highest texture pool size doesn't actually result in any apparent increase in visual quality.

When console games move to 1440p as standard, it may increase the 4K VRAM requirements, but it will also increase the performance requirements for 4K, likely making the 3070 non-viable for that resolution, unless you stick most settings on low. Cyberpunk for example looks to average around 50 FPS at medium w/4K on the 3070.
 

OZ9000

Banned
This kind of news is what console gamers should be paying attention to. Nvidia is going to dictate what next-gen will look like in the future. 3xxx series just came out and those boards are already powerful. 4xxx series is just going to be ridiculous. They might even have a 5xxx series before the console generation is over.
The 3XXX cards aren't that great. They do not guarantee 4K60 in every game.

The 4XXX will only be impressive when every single PS5/Xbox title runs at 4K60 without a single hitch/fps drop.
 
Last edited:
The difference is we are moving to a model where data will be streamed in as needed, aided by DirectStorage. And it remains to be seen how big the VRAM pool will need to be to support this. For example, the UE5 city demo seems to fit within the 8GB limit, and the Doom Eternal example is one where using the highest texture pool size doesn't actually result in any apparent increase in visual quality.

When console games move to 1440p as standard, it may increase the 4K VRAM requirements, but it will also increase the performance requirements for 4K, likely making the 3070 non-viable for that resolution, unless you stick most settings on low. Cyberpunk for example looks to average around 50 FPS at medium w/4K on the 3070.

The amount of data you stream in and out might not change the scenario that much from the past. While you would be dropping and replacing VRAM contents at a higher rate, if the dev is targeting 10 or 12GB for the consoles, you'll still have the issue of the 8GB cards potentially needing to dump more from memory and then go back and pick up things that the consoles and GPUs with more memory are still holding. The UE5 City demo does seem to be targeting 8GB; I think there will be next-gen-only games that target a higher number.
 