
Let's face it: GPU manufacturers have hit the wall and things are only going to get worse from here.

Most people have mobile phones that do plenty of shit like a PC, so PCs aren't exactly necessary for many people. So if you're one of those people who just want to play games and can do the rest on a phone, get a fucking console and stop trying to convince people that PCs "might be a bit more expensive but do other stuff as well"… I've heard that shit for many years and it doesn't work anymore.
For people with real jobs and careers, this is not the case. I couldn't imagine doing what I do on a phone. I think your argument is the least likely and laziest argument you could make. What I'm saying is true: PCs are far more capable than a console, and you'd have to be on the lower end of the intelligence spectrum not to see that. A lot of people use plenty of software that requires a PC. Don't be so biased, and don't think your situation applies to most other people.

That's less viable now. VRAM consumption is growing and there's an increasing reliance on DLSS etc. nVidia keep giving terrible VRAM on the lower end, and as we've seen from tests DLSS eats up VRAM.

So that 4060 8gb isn't going to last, and even the 4060 Ti 16gb is gimped by 128-bit bus so the bare minimum for any sort of future-proofing is the 4070 12gb at £480 which has seen a big jump from the £300 the 70 series used to cost. Except it's not really a 70, it's a 60 that nVidia changed the numbers on to further fuck consumers.
If this is the case, why can't a 1080 Ti with 11GB of GDDR5X outperform a 4070 with 12GB of GDDR6X? Speed has a lot to do with it, and GDDR7 and higher bandwidth allow for faster asset streaming. GPUs aren't meant to store anything long-term, but instead to stream it very quickly. Other tasks are handled by the separate pool of DDR5 on your system. Unlike consoles, PCs have two separate pools of RAM, so while the GPU might have 12GB, your PC will also have an additional 16-192GB.
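To put rough numbers on the bus-width point: peak VRAM bandwidth is just bus width times per-pin data rate. A minimal Python sketch, using approximate launch specs quoted from memory (treat the exact Gbps figures as assumptions):

```python
# Peak memory bandwidth = (bus width in bytes) * per-pin data rate.
# Spec numbers are approximate launch figures, quoted from memory.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "GTX 1080 Ti (352-bit GDDR5X @ ~11 Gbps)": (352, 11),
    "RTX 4060 Ti (128-bit GDDR6  @ ~18 Gbps)": (128, 18),
    "RTX 4070    (192-bit GDDR6X @ ~21 Gbps)": (192, 21),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: ~{bandwidth_gb_s(bus, rate):.0f} GB/s")
```

Which is why a 128-bit card can end up with less raw bandwidth than a seven-year-old 352-bit one, bigger L2 cache notwithstanding; faster GDDR6X/GDDR7 per-pin rates only partly compensate for narrower buses.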
 

Xtib81

Member
PS6 will 100% disappoint in raw numbers and visuals. We are at the limits of $499 and 230W TDP. Nvidia-powered PCs will leave consoles further in the distance.

This, right here. Power consumption and price need to be taken into account. As such, we have pretty much reached a ceiling. Hopefully, AI can mitigate that.
 

MikeM

Member
PS6 will 100% disappoint in raw numbers and visuals. We are at the limits of $499 and 230W TDP. Nvidia-powered PCs will leave consoles further in the distance.
Visually, AMD and consoles have the most to gain. Sure, Nvidia will continue to lead barring some AMD miracle, but PSSR will surely be beefier in the PS6, and future AMD GPUs will benefit from that knowledge being applied to an AI-based FSR4.
 

Ev1L AuRoN

Member
I believe that what is happening is that it is getting harder for ordinary people to perceive the differences. Ray tracing is an excellent example: although the difference can be massive in some cases, folks don't know or care about those changes; most casual gamers care about image clarity, high-res textures and a good AA implementation.

Bouncing lighting, shadow physics, and perfectly aligned reflections are concepts that a lot of people can see but can't quantify as a process or understand in terms of computational load. Games keep evolving and the leap this gen is big; it just gets harder to notice if you don't know what to look for, unlike previous generations.
 

DenchDeckard

Moderated wildly
Well, in 2028 I expect 50 real TFLOPS at 250W (so basically a 4090).

Honestly, you will probably get 24 to 30 TF and an improved PSSR if they are targeting $599 to $799.

Drop the graphics chase and let developers learn to be creative again, relying on things such as the SSD, like Cerny said, to create experiences built in ways we haven't seen before.

Give me 60fps and strip back to that. Learn to create tech and tricks to impress us.

Most importantly, think of the actual gameplay and engagement, and let's get back to that.

My faith is in Nintendo. Restricting themselves to drastically inferior hardware helps them think outside of the box and create experiences that revolve around the gameplay.
 

64bitmodels

Reverse groomer.
My faith is in Nintendo. Restricting themselves to drastically inferior hardware helps them think outside of the box and create experiences that revolve around the gameplay.
this used to be the case, but modern-day Nintendo makes video gamey ass video games that don't necessarily rely on many hardware gimmicks. This was a thing back in the Wii & DS days, but not so much now.

Unless they try cracking VR or something similarly outlandish, it's not gonna be anything wacky. Which is why the Switch 2 isn't particularly exciting for me, unless the heaps of rumors are all proven wrong and their new device is a crazy new system.
 

Kenpachii

Member
I believe that what is happening is that it is getting harder for ordinary people to perceive the differences. Ray tracing is an excellent example: although the difference can be massive in some cases, folks don't know or care about those changes; most casual gamers care about image clarity, high-res textures and a good AA implementation.

Bouncing lighting, shadow physics, and perfectly aligned reflections are concepts that a lot of people can see but can't quantify as a process or understand in terms of computational load. Games keep evolving and the leap this gen is big; it just gets harder to notice if you don't know what to look for, unlike previous generations.

I think the majority of people just start the game and start playing without visiting the graphics settings even once (I see this A LOT on Twitch). The only reason they would enter the graphics tab is if the performance is super bad and they need to lower settings; otherwise they just tank it, or refund and say it's shit optimized.

Another thing is the "if you don't know, you don't know" kind of thing. I know exactly what to look for with RT features, but a buddy of mine who is an avid PC gamer has no clue what to look for. For him RTX is just DLSS that gives him more performance, and the only way he knows about DLSS at all is because I mentioned it.

I have no doubt people have no clue what to look for when it comes to RTX.
 

Panajev2001a

GAF's Pleasant Genius
I believe that what is happening is that it is getting harder for ordinary people to perceive the differences. Ray tracing is an excellent example: although the difference can be massive in some cases, folks don't know or care about those changes; most casual gamers care about image clarity, high-res textures and a good AA implementation.

Bouncing lighting, shadow physics, and perfectly aligned reflections are concepts that a lot of people can see but can't quantify as a process or understand in terms of computational load. Games keep evolving and the leap this gen is big; it just gets harder to notice if you don't know what to look for, unlike previous generations.
True, but there are some buts :). Now, gamers might not realise that games like Indiana Jones are too big in scope and rich in detail to be brute forced by baking lighting and shadows (the game could not have been realised with fully baked lighting), but it is the difference between getting the game at all (thanks to RTGI) or not.
What they will notice is the shadow and shading pop-in, the temporal / screen space artefacts, light leakage, etc… as you took a few steps forward but some steps back too overall.
 

DenchDeckard

Moderated wildly
this used to be the case, but modern-day Nintendo makes video gamey ass video games that don't necessarily rely on many hardware gimmicks. This was a thing back in the Wii & DS days, but not so much now.

Unless they try cracking VR or something similarly outlandish, it's not gonna be anything wacky. Which is why the Switch 2 isn't particularly exciting for me, unless the heaps of rumors are all proven wrong and their new device is a crazy new system.

I feel Nintendo pushes conventional gameplay and systems quite far, tbh.

Also, the feel of their games on top-priority releases is second to none. The movement, exploration, etc. Everything is second to none for me.

Maybe it's just me.
 

64bitmodels

Reverse groomer.
I feel Nintendo pushes conventional gameplay and systems quite far, tbh.

Also, the feel of their games on top-priority releases is second to none. The movement, exploration, etc. Everything is second to none for me.

Maybe it's just me.
I just don't feel like what they make at the moment necessitates a new console.
 

Panajev2001a

GAF's Pleasant Genius
I just don't feel like what they make at the moment necessitates a new console.
Mmhh… considering they are already using every trick under the sun to keep the framerate high in games like Odyssey and TotK/BotW… I think there is a case. Games are limited to 3 GB of RAM and, comparatively, very very slow internal storage. Let alone other limitations… I'm not sure how there isn't a case for new HW.
 

Buggy Loop

Gold Member
You might not remember, but years ago there were many companies producing wafers. But they all fell off.
Credit is due to TSMC for being able to deliver consistently.
But now we are all paying the cost of TSMC's success.
Intel and Samsung are still able to set up high-end process nodes, especially if they get government help.
But they still lack the technical know-how to keep pace with TSMC.
It would be very beneficial for us consumers if Intel and Samsung managed to get their nodes to compete with TSMC.

Hopefully Intel foundries go live and have good yields

Same for Samsung's node; maybe massive Switch 2 production and Nvidia going back there will whip Samsung into getting their shit together. In fact, something has probably already changed if Switch 2 is made there. A possible 100M SoCs is not something you want bad yields with.
 
I've been saying this for years, including about the CPU market, which has fared even worse than the GPU one. We have relied on transistor shrinkage as the primary driving force behind all performance gains for the last 60 years, but you can only shrink something down so much before you reach a brick wall called physics. We are now approaching that wall. Over the next 10 years, one of two things will happen: we will discover a new material to make dies out of with smaller atoms than silicon, or we will see a complete halt of transistor shrinkage and need to find alternative means of performance gains (likely through software).

Oh also that gap from the 980 Ti to the 1080 Ti is precisely why I sat on my 1080 Ti until the day the 4090 came out. If you look at the raw performance of ANY other card released between those two GPUs, it's a fucking joke. Frankly even the 4090 is only 325% faster in actual raw raster performance. That's 6 years for a performance gap we used to get in 3.
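Reading "325% faster" loosely as roughly 3.25x overall throughput, the implied yearly improvement rates behind that complaint look something like this (a back-of-envelope sketch, not measured data):

```python
# Implied compound annual improvement for a given total speedup over N years.
def annual_rate(total_speedup: float, years: float) -> float:
    return total_speedup ** (1 / years) - 1

print(f"~3.25x over 6 years -> ~{annual_rate(3.25, 6):.0%} per year")  # ~22%/yr
print(f"~3.25x over 3 years -> ~{annual_rate(3.25, 3):.0%} per year")  # ~48%/yr
```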
 

Celcius

°Temp. member
But what does that really mean in games? 4090 is great and only a handful of games really push it. What need is there for a 5090 at 4K? People probably should start skipping gens for GPUs.
Have you tried running FF16 at 4K without DLSS? Have you tried running path-traced games at 4K? There are also new 4K 240Hz monitors, and it takes a lot of GPU power to drive those.
 

winjer

Gold Member
There is another reason why GPU performance has not grown so much in these past years.
And that is because GPUs now have to make room for Ray-Tracing units and Tensor Cores.
I have seen estimates that place RT units at using about 20-30% of a GPU die space. And Tensor Cores, at about 10%.
Of course there are advantages to having these units. But they do take space from Shader units, caches, rasters, etc.
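Taking those estimates at face value (they are the poster's figures, and the next reply disputes them), the die-area trade-off looks roughly like this sketch, using a hypothetical 600 mm² die:

```python
# Illustration only: the 25% RT / 10% tensor shares are the estimates above,
# and the 600 mm^2 die is a hypothetical figure, not any specific GPU.
die_mm2 = 600.0
rt_share, tensor_share = 0.25, 0.10
dedicated = die_mm2 * (rt_share + tensor_share)
print(f"RT + tensor units: ~{dedicated:.0f} mm^2")
print(f"Left for shaders, caches, rasterizers, etc.: ~{die_mm2 - dedicated:.0f} mm^2")
```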
 

Zathalus

Member
There is another reason why GPU performance has not grown so much in these past years.
And that is because GPUs now have to make room for Ray-Tracing units and Tensor Cores.
I have seen estimates that place RT units at using about 20-30% of a GPU die space. And Tensor Cores, at about 10%.
Of course there are advantages to having these units. But they do take space from Shader units, caches, rasters, etc.
Not nearly that much at all from what I recall.

 

SweetTooth

Gold Member
I think all the HW designs Cerny's team has delivered, including the PS Vita, have been quite great and have improved over time. I think after all this time it is safe to say they are a quite good design team making good use of the budget they are given.

Bolded is quite an understatement; Sony's HW design team led by Cerny is legendary. Vita, PS4/Pro and PS5/Pro are all masterfully designed. I'm sure the PS6 will surprise everyone.
 

Soodanim

Member
If this is the case, why can't a 1080 Ti with 11GB of GDDR5X outperform a 4070 with 12GB of GDDR6X? Speed has a lot to do with it, and GDDR7 and higher bandwidth allow for faster asset streaming. GPUs aren't meant to store anything long-term, but instead to stream it very quickly. Other tasks are handled by the separate pool of DDR5 on your system. Unlike consoles, PCs have two separate pools of RAM, so while the GPU might have 12GB, your PC will also have an additional 16-192GB.
If that's the case, why doesn't the 4090 have 8gb VRAM?
 
I'm facing it! I waited 4 or 5 years to jump back in on PC. It seems like it will be 2 decades of waiting to reach 4K/120 native with transformative RT and GI without AI smudging.
 
Sure, but the estimate doesn't really change in terms of margins

The percentage doesn't change, but the absolute margin TSMC makes per chip changes with yield, i.e. if Nvidia pays $10,000 per wafer for a GPU on TSMC's N6 EUV process and gets 500 good chips per wafer, then TSMC's 20% margin works out to $4 per chip, whereas at 350 chips per wafer it works out to $5.71 per chip.
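A minimal sketch of that per-chip arithmetic (the $10,000 wafer price and 20% margin are the example's assumptions, not disclosed TSMC figures):

```python
# TSMC's margin on one wafer, spread over however many good chips come out of it.
def margin_per_chip(wafer_price_usd: float, good_chips: int, margin: float) -> float:
    return wafer_price_usd * margin / good_chips

print(f"${margin_per_chip(10_000, 500, 0.20):.2f} per chip at 500 good dies/wafer")  # $4.00
print(f"${margin_per_chip(10_000, 350, 0.20):.2f} per chip at 350 good dies/wafer")  # $5.71
```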
 
This is better for the video game industry (not the GPU makers); now the devs have to squeeze what we have. Time for the talented devs to shine and the mediocre ones to git good.

You mean more time for GAAS games that do little other than try to extract as much money as possible from Whales.
 
If the rumors are true, the 5080/5090 is a major waste of money. If you want max settings at 4K/60 FPS you can already get that with a 4090, and with a 4080 in some cases.

The price is too high and there aren't enough games releasing that even push these cards.

I think the major issue also is devs aren't even trying; games are so badly optimized nowadays.
 

Soodanim

Member
Probably to make way for larger assets, but my statement doesn't logically extend like that. You can achieve more with less given the bandwidth. But an increase in numbers is always welcome.
I found the video I referenced. 3060 12gb vs 4060 8gb in various scenarios.



In the first few minutes there are comparisons where 4060 fails to keep up with its predecessor with or without DLSS despite the speed advantage because of the gimped VRAM.

All the system RAM in the world doesn't matter if your VRAM is maxed and you get stutters and drops.

Small VRAM isn't going to be a long term viable solution, and nVidia knows this. They're offering a paltry 8gb knowing people will have to upgrade far sooner or pay out for the higher tier cards. Hence what I said about needing to go up to a 12gb 4070 to have any sort of future proofing, because 8gb is so easily not enough right now let alone going forward.
 

Bojji

Member
Big ass VRAM is worthless UNLESS games start to use it. 8GB cards were winning against less powerful 11GB cards for years, but once games started to go above 8GB, those cards became shit tier.
 

buenoblue

Member
Ha, ha, ha... meanwhile I'm gaming on a 65-inch QD-OLED, Atmos sound, 4K(ish), up to 120fps graphics 🤷‍♂️ This is the golden era of gaming for me (I'm 48). Games have never looked and played as good as what I'm playing now 😎
 

hinch7

Member
If the rumors are true, the 5080/5090 is a major waste of money. If you want max settings at 4K/60 FPS you can already get that with a 4090, and with a 4080 in some cases.

The price is too high and there aren't enough games releasing that even push these cards.

I think the major issue also is devs aren't even trying; games are so badly optimized nowadays.
The upcoming mid-range is more than enough to match and exceed console settings for a good while. If you're used to being on the bleeding edge (or close to it), then yeah, you're going to have to pay Titan levels of prices.
 

Haint

Member
The PS5 Pro, at $699 (not sure why people get excited and have to prop the price to $799, it does not make your argument stronger) was not designed like the PS5 was or the PS6 will be (its main goal was to avoid as many solutions that would require custom PS5 Pro work from developers separate from the PS5 work they were already doing as possible, stating that it would not be the main goal behind PS6 outside of a BC mode) . When all is said and done I think PS5 Pro will actually deliver better end results for players than PS4 Pro too.
The economy has changed over time (this is the first generation where the base price of the consoles actually rose over time), and Sony has decided to increase the margins on a premium product aimed at enthusiasts.

IMHO, you are vastly underestimating RT improvements, there is lots to do there ( https://gfxspeak.com/featured/the-levels-tracing/ ), and AI assisted rendering ones (there is so much that denoising / ray reconstruction can deliver to make the RT improvements even more meaningful).
This is on top of the CPU changing (you can expect a compact core based on Zen 6/Zen 6+), RAM increasing (likely 32 GB with a significant bandwidth increase too), etc… Personally quite excited to see what DualSense 2 will bring in terms of enhanced haptics and controller features, let alone cheaper and wireless (plus optional wired) PSVR3 (with eye-tracked foveated rendering, PSSR, and async space warp recently added in GT7 thanks to the new SDK, we are close to have the same exact visuals on PSVR2 and PS5 Pro which will make devs support even easier… I think Sony persevering can make PSVR3 a success they do not need to spend too much effort to support either).
The Pro is $780, chief; probably 99% of the people interested in buying an $800 console have a pretty extensive library of disc games and owned the disc-based base PS5. It's a significant loss of value not having the drive. That's why everyone colloquially references it as $800, because it is.
 

Gamer79

Predicts the worst decade for Sony starting 2022
I will stick to my 4070 and then will get a 4090 when I can get one under $600. I am willing to wait
 

welshrat

Member
I said this about the pro and will say it again about the PS6 in this same vein

People will be very let down when they see the raw numbers on the PS6

Oh and I don't need any new hobbies :)
People really need to stop caring about raw numbers. I haven't cared for a good 5 years. I only care about FPS and image quality (different numbers :) ). I wonder if those that care about raw figures simply do so for willy waving and don't enjoy the actual games. The Pro is a good example of this. It has enhanced my experience by quite some margin, yet the figures and the naysayers imply it's not worth it.
 

Clear

CliffyB's Cock Holster
Has anyone worrying about this stuff considered that the real limitation is the sheer amount of work and complexity involved in creating software that maximizes the potential of the hardware?

The reason why middleware engines like UE have become dominant is that rolling your own engine is such a monumental task that very few are willing to entertain the idea; and that's just creating the toolset, not a finished product.

I hate to point this out, but has movie CG improved by that much over the past decade or so? No, it hasn't. In fact, if anything it's declined due to teams being given inadequate time for polish passes.

This is way less complex work than games because there's no interactive aspect to consider, and the creators have absolute control over what is and isn't presented and essentially have unlimited compute capacity to create their vision offline.

Crazy thought; maybe you should spend more time appreciating WHAT IS, and how much work has gone into it, rather than belly-aching over the "failure" to attain some fantasy level that nobody ever seems to quantify outside of metrics that are dubiously relevant.
 
I found the video I referenced. 3060 12gb vs 4060 8gb in various scenarios.



In the first few minutes there are comparisons where 4060 fails to keep up with its predecessor with or without DLSS despite the speed advantage because of the gimped VRAM.

All the system RAM in the world doesn't matter if your VRAM is maxed and you get stutters and drops.

Small VRAM isn't going to be a long term viable solution, and nVidia knows this. They're offering a paltry 8gb knowing people will have to upgrade far sooner or pay out for the higher tier cards. Hence what I said about needing to go up to a 12gb 4070 to have any sort of future proofing, because 8gb is so easily not enough right now let alone going forward.

4 whole GB is a different story, as the comparison I gave was a difference of 1GB of VRAM and 3 generations between the 1080 Ti and the 4070. I don't think the same could be said about 4GB when you think about scaling and the way that works. But in that video he states that while the game has a better average FPS, it does suffer from not having the whole 8GB of VRAM to work with and has to rely on system memory. So there technically is better performance on the 4060, but the lack of memory caused assets to be loaded into system RAM because the VRAM was overloaded. That being said, the 4060 at 8GB was $399 at launch while the 4060 Ti at 16GB was $499 at launch, and that extra $100 goes quite a long way. To be frank, the 3060 was $329 at launch, lower than both. There is a very clear benefit to going to the 4060 Ti for approximately $170 more than the 3060's original price, as you will gain two times the amount of RAM, which is clearly the better deal. Does this make the 4060 bad? No, but temper expectations, as you are hitting the lowest tier of Nvidia graphics card, which is still no slouch since it can run Horizon at a reasonable FPS at High settings. But clearly, at such a budget price, DLSS was in mind when this card was developed.

I don't know if you know this or not, but the 50 series GPUs will be on PCIe 5.0 whereas the 40 series is still on PCIe 4.0, which will increase bandwidth and most likely take full advantage of the PCIe lanes. 12GB still seems reasonable for the 5070 for the full generation, and there will most likely be a Ti version with 16GB. It's also GDDR7, which is approximately 33% more performant than GDDR6 with less heat and energy cost. Putting it on PCIe 5.0 also doubles the bandwidth versus PCIe 4.0, to the NVMe as well as to system RAM. I do believe the 4060 was a misstep, but also an attempt to hit the low-end market at a good price; for $100 more you have a solid low-end card that still significantly outperforms AMD's 7700 XT, which released at $449 with 12GB. In my opinion AMD is not the better deal. I'd rather have spent the extra $50 for the extra 4GB of VRAM and better overall performance. People can be upset about Nvidia all they want, but dollars to performance, AMD is the bigger ripoff.
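For a rough sense of those bandwidth claims (the PCIe figures are the standard x16 link rates; the GDDR7 per-pin rate is an assumption, since early parts are quoted around 28 Gbps):

```python
# Approximate one-direction bandwidth of an x16 link per PCIe generation,
# and VRAM bandwidth for a hypothetical 192-bit card on GDDR6X vs GDDR7.
pcie_x16_gb_s = {"PCIe 4.0 x16": 32, "PCIe 5.0 x16": 64}
for gen, bw in pcie_x16_gb_s.items():
    print(f"{gen}: ~{bw} GB/s")

def vram_bandwidth(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(f"192-bit GDDR6X @ ~21 Gbps: ~{vram_bandwidth(192, 21):.0f} GB/s")
print(f"192-bit GDDR7  @ ~28 Gbps: ~{vram_bandwidth(192, 28):.0f} GB/s (~33% more)")
```

So a wider PCIe link does help the spill case, but it is still nowhere near local VRAM speed.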

Where do you work that primarily requires a computer but doesn't provide one?
They provide one, but I'm also able to use my own. I do work that requires me to understand server specs and other technical limitations as well as their benefits, as I have to calculate bandwidth and storage constantly. But also, if you own any sort of business, have a technical hobby, code, develop games, record music, 3D model, 3D print, run a truly locally hosted home automation system, use virtual machines, and so on, it's difficult to do that without a true computational device. I've run Windows, macOS, iOS, Ubuntu, and other OSes; I even have a couple of Raspberry Pis and so on. I wouldn't be where I'm at if I didn't have a PC at an early age. As I watch more and more people ditch PCs for their phone and tablet, the more I see people becoming technologically illiterate. You will sometimes see them post here about how hard PC gaming is. To combat this idiocy, I've built my son a gaming PC and he's crushing it and will most likely have a great job in the future because of it. It pays to have a good understanding of computers.
 

Astray

Member
Moore's Law has hit a wall, and Rasterization has also kind of run into a limitation of high cost to implement.

The future will be through improvements in raytracing (to deliver leading image fidelity at less cost to implement) and AI-enhanced upscaling and rendering (to be able to cover raytracing's performance hit without losing image quality). This requires different thinking around architecture design and a culture shift amongst end users to make raytracing capability the baseline (otherwise you're in the current situation where most big games essentially have to be built twice, in a sense).

The first player in graphics to understand this was Nvidia, Intel realized it second, and AMD is waking up to it too late.
 

Soodanim

Member
4 whole GB is a different story, as the comparison I gave was a difference of 1GB of VRAM and 3 generations between the 1080 Ti and the 4070. I don't think the same could be said about 4GB when you think about scaling and the way that works. But in that video he states that while the game has a better average FPS, it does suffer from not having the whole 8GB of VRAM to work with and has to rely on system memory. So there technically is better performance on the 4060, but the lack of memory caused assets to be loaded into system RAM because the VRAM was overloaded. That being said, the 4060 at 8GB was $399 at launch while the 4060 Ti at 16GB was $499 at launch, and that extra $100 goes quite a long way. To be frank, the 3060 was $329 at launch, lower than both. There is a very clear benefit to going to the 4060 Ti for approximately $170 more than the 3060's original price, as you will gain two times the amount of RAM, which is clearly the better deal. Does this make the 4060 bad? No, but temper expectations, as you are hitting the lowest tier of Nvidia graphics card, which is still no slouch since it can run Horizon at a reasonable FPS at High settings. But clearly, at such a budget price, DLSS was in mind when this card was developed.

I don't know if you know this or not, but the 50 series GPUs will be on PCIe 5.0 whereas the 40 series is still on PCIe 4.0, which will increase bandwidth and most likely take full advantage of the PCIe lanes. 12GB still seems reasonable for the 5070 for the full generation, and there will most likely be a Ti version with 16GB. It's also GDDR7, which is approximately 33% more performant than GDDR6 with less heat and energy cost. Putting it on PCIe 5.0 also doubles the bandwidth versus PCIe 4.0, to the NVMe as well as to system RAM. I do believe the 4060 was a misstep, but also an attempt to hit the low-end market at a good price; for $100 more you have a solid low-end card that still significantly outperforms AMD's 7700 XT, which released at $449 with 12GB. In my opinion AMD is not the better deal. I'd rather have spent the extra $50 for the extra 4GB of VRAM and better overall performance. People can be upset about Nvidia all they want, but dollars to performance, AMD is the bigger ripoff.
I appreciate the input (future tech improvements) and I don't want to be rude, but your 1GB difference scenario is an extreme example that has little relevance - I was always talking about jumps of 4GB before the 1GB example, and all you said is that newer GPUs are faster which I never spoke about at all. I'm strictly talking about low VRAM, nothing else. I know the 4060 is faster than the 3060, but the VRAM holds it back and causes it to actually be slower in the real world than the 3060 in those scenarios because of the VRAM. If the 4060 had 12GB like it should have, it wouldn't have lost in any test against the 3060. That's the point. VRAM matters. I didn't post the video to dispute its findings, I posted it to provide a source -- to show you what I meant.

The fact that nVidia made the choice to cut back the VRAM of the 60 model gen-on-gen is very telling, and again speaks to my point that nVidia are brazenly limiting hardware because they don't want another 660 Ti/970/1060 Ti type situation. They make certain hardware tiers worse than they should be at their price points because they have a de facto monopoly. If they had competition all of their cards would have more VRAM. But they don't, and from what we hear it's likely the 5060 will also be 8GB - they won't even match VRAM from 2 generations before at the (alleged) same tier. Not can't, won't. I hope I'm wrong.
 
I appreciate the input (future tech improvements) and I don't want to be rude, but your 1GB difference scenario is an extreme example that has little relevance - I was always talking about jumps of 4GB before the 1GB example, and all you said is that newer GPUs are faster which I never spoke about at all. I'm strictly talking about low VRAM, nothing else. I know the 4060 is faster than the 3060, but the VRAM holds it back and causes it to actually be slower in the real world than the 3060 in those scenarios because of the VRAM. If the 4060 had 12GB like it should have, it wouldn't have lost in any test against the 3060. That's the point. VRAM matters. I didn't post the video to dispute its findings, I posted it to provide a source -- to show you what I meant.

The fact that nVidia made the choice to cut back the VRAM of the 60 model gen-on-gen is very telling, and again speaks to my point that nVidia are brazenly limiting hardware because they don't want another 660 Ti/970/1060 Ti type situation. They make certain hardware tiers worse than they should be at their price points because they have a de facto monopoly. If they had competition all of their cards would have more VRAM. But they don't, and from what we hear it's likely the 5060 will also be 8GB - they won't even match VRAM from 2 generations before at the (alleged) same tier. Not can't, won't. I hope I'm wrong.
The reason I brought up speed is because the assets don't stay in VRAM and are constantly streaming. The faster they can stream, the less VRAM is needed, as there will be less bottlenecking, which is what's happening in that video you shared. Once it hits peak VRAM usage, the stuttering begins.

That being said, I do see the concern, and I see what you mean. I'm hoping that the wider the pipe gets, the better the VRAM situation will be, but that only helps at the margins; it doesn't if you're missing a third of the VRAM. I also believe the cost of manufacturing these cards is going up, which is driving the prices they are putting out, as AMD isn't all that far behind Nvidia when it comes to pricing. But even with these VRAM limitations AMD still isn't on par with the performance Nvidia can deliver, so Nvidia is technically still the better value. I just feel we are hitting Ohm's law, which is why there's been so much emphasis on DLSS.
 

Bojji

Member
Speed differences are too small to really be relevant. Once a game runs out of VRAM and spills into system RAM, you are fucked:

[attached benchmark screenshot]
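To put rough numbers on why the spill hurts so much (approximate figures, and the x8 link width is specific to 4060-class boards):

```python
# Local VRAM vs the fallback path to system RAM over PCIe, approximate figures.
vram_gb_s     = 288   # e.g. a 128-bit GDDR6 card like the 4060 Ti
pcie4_x8_gb_s = 16    # 4060-class boards use a PCIe 4.0 x8 link

print(f"Local VRAM:               ~{vram_gb_s} GB/s")
print(f"Spill to system RAM (x8): ~{pcie4_x8_gb_s} GB/s  "
      f"(~{vram_gb_s / pcie4_x8_gb_s:.0f}x slower, before latency)")
```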


 