More next-gen graphics card rumors - AMD FP32 ~75 Teraflops | Nvidia FP32 ~85 Teraflops

I can sell my old GTX 1080 for $800… this card is about 4-5 years old, but due to the stupid shortages cards are all inflated by about 3/4 of their actual value.

I don't expect the new ones to be cheap… and I don't expect people offloading their 3080/3090s to sell them cheap either… we have about 3 years, I reckon, before prices return to normal.
 
With the way things are looking, I might be chill with my 2070 until like 2025 or so

No complaints here. My Steam backlog is exhaustive

Cheers boiz
 
If RDNA 3 includes multi-chiplets for the GPU, then each chiplet will require its own 512-bit interface, so the total PHY area taken up by the 512-bit bus will be similar to the area of the Infinity Cache. But I guess you have to think that if the IC is required anyway to help with the inter-chiplet comms, the decision has pretty much already been made for you, and you can get away with a smaller 256-bit bus and save on both chiplet area and power consumption costs. It's a win-win.
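
To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch of effective bandwidth for a 256-bit GDDR6 bus plus a large on-die cache versus a raw 512-bit bus. The pin speed, cache hit rate and cache bandwidth below are illustrative assumptions on my part, not leaked RDNA 3 specs:

```python
# Back-of-the-envelope: 256-bit GDDR6 + Infinity Cache vs a raw 512-bit bus.
# All figures (18 Gbps GDDR6, ~60% cache hit rate, ~2 TB/s cache bandwidth)
# are illustrative assumptions, not actual RDNA 3 specs.

def gddr_bandwidth_gbps(bus_width_bits, pin_speed_gbps):
    """Raw VRAM bandwidth in GB/s: bus width (bits) * pin speed (Gbps) / 8."""
    return bus_width_bits * pin_speed_gbps / 8

def effective_bandwidth(vram_gbps, cache_gbps, hit_rate):
    """Blend of cache and VRAM bandwidth, weighted by how often the cache hits."""
    return hit_rate * cache_gbps + (1 - hit_rate) * vram_gbps

raw_512 = gddr_bandwidth_gbps(512, 18)          # ~1152 GB/s, big PHY area
raw_256 = gddr_bandwidth_gbps(256, 18)          # ~576 GB/s, half the PHY area
with_cache = effective_bandwidth(raw_256, 2000, hit_rate=0.6)

print(f"512-bit bus, no cache:        {raw_512:.0f} GB/s")
print(f"256-bit bus, no cache:        {raw_256:.0f} GB/s")
print(f"256-bit bus + cache (approx): {with_cache:.0f} GB/s effective")
```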
Never thought about it this way, thanks.

Makes me want to see RDNA3 layout even more.
 
The 4090 will be the card to finally get me to leave my trusty 1080 Ti behind. Not even the 3090 was reason enough. After 4 years I demanded more than just 1.8-2.2x performance. This looks like it will be it. Here's hoping we also get one more amazing desktop VR kit so I can have my final build. Just once more.

I'm thinking the 5090 is when we will start seeing truly crazy things with ray tracing and like 150 TFLOPS.
 


unknown.png


I guess I'd say it would be at minimum 70 nvidia flops. Could easily be triple

Damn, that's pretty.
 
How would anyone be able to tell the difference between 70 titie flops vs 90 titie flops?

Upscaled 8K at 120Hz??! Or native 8K at 120Hz with ray tracing + DLSS + HDR?!
 
How would anyone be able to tell the difference between 70 titie flops vs 90 titie flops?

Upscaled 8K at 120Hz??! Or native 8K at 120Hz with ray tracing + DLSS + HDR?!

With their eyes. We aren't going to hit these teraflop numbers and all of a sudden have every graphical feature we could dream of. There is still a vast ocean between the quality these tiny little machines can produce while rendering 30 frames per second and the quality of a Pixar movie that takes hours per frame.

We are far from diminishing returns, and one GPU being 1.4 times stronger than another will continue to produce very visible differences in quality.
We barely even have ray tracing, so yes, one GPU will still have to either turn off ray tracing or drop the resolution/framerate very visibly. If you don't care and say you don't see a difference, that's fine.

Even on a 3090 we still can't have an open world where things actually cast shadows if they are more than 10 feet away. Forza is still full of pop-in and totally incorrect lighting and reflections. Maybe when we get 900 teraflops we will actually have to worry about whether 2000 teraflops is that big a difference. But chances are it will still be.
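
Just for a sense of scale, here's a quick calculation of the per-pixel compute budget at native 8K 120Hz. The 75 TFLOPS figure is the rumored number from this thread; everything else is plain arithmetic for illustration, not a real renderer comparison:

```python
# Rough per-pixel compute budget at native 8K 120 Hz.
# 75 TFLOPS is the rumored figure from the thread title; the rest is
# just arithmetic for scale, not a real renderer comparison.

width, height, fps = 7680, 4320, 120
pixels_per_second = width * height * fps            # ~3.98 billion pixels/s

gpu_tflops = 75
flops_per_pixel = (gpu_tflops * 1e12) / pixels_per_second

# An offline film renderer spending an hour per frame has orders of magnitude
# more time (and compute) per pixel, which is the gap described above.
frame_budget_ms = 1000 / fps

print(f"Pixels per second at 8K120:  {pixels_per_second/1e9:.2f} billion")
print(f"FLOPs available per pixel:   {flops_per_pixel:,.0f} per frame")
print(f"Frame time budget:           {frame_budget_ms:.2f} ms vs hours per frame offline")
```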
 
Probably with the 5000 series we will have the first path-traced games (base console games with extra features on PC).
 
Except that's not true, unless the game's engine relies very heavily on mesh compute. Otherwise, you're seeing giant TF leaps but modest (at best) increases in culling throughput, rasterization throughput, pixel fillrate, texture/texel fillrate, etc.

You know, things that are a bit more important for gaming-related performance, at least until mesh shading becomes more universally used in commercial AAA games. But even with that in mind, for a long while it's at most just going to lead to higher-resolution textures and maybe a few more effects. Game budgets will absolutely not scale enough to meaningfully use 75 TF / 92 TF or whatever of compute power in any way other than as resolution and texture boosters.
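
To illustrate why the TF number can balloon while fillrate barely moves: FP32 TFLOPS scales with ALU count times clock, pixel fillrate with ROP count times clock. The shader/ROP counts and clocks below are made-up illustrative figures, not specs for any real card:

```python
# FP32 TFLOPS scales with ALU count x clock, while pixel fillrate scales
# with ROP count x clock. If a new generation mostly piles on ALUs, the
# TF figure explodes but fillrate grows far more modestly.
# All numbers below are made up for illustration.

def fp32_tflops(shader_alus, clock_ghz):
    # 2 ops per ALU per clock (fused multiply-add)
    return shader_alus * 2 * clock_ghz / 1000

def pixel_fillrate_gpix(rops, clock_ghz):
    # 1 pixel per ROP per clock
    return rops * clock_ghz

old_gen = {"alus": 5120,  "rops": 128, "clock": 2.0}   # hypothetical
new_gen = {"alus": 12288, "rops": 192, "clock": 3.0}   # hypothetical

for name, gpu in (("old gen", old_gen), ("new gen", new_gen)):
    tf = fp32_tflops(gpu["alus"], gpu["clock"])
    fill = pixel_fillrate_gpix(gpu["rops"], gpu["clock"])
    print(f"{name}: {tf:.1f} TFLOPS, {fill:.0f} Gpixels/s fillrate")
```

With these made-up numbers the TF figure jumps ~3.6x while fillrate only goes up ~2.25x, which is the mismatch being described.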



Those HBM3 specs look kind of low; in fact they look closer to HBMNext, which IIRC is more or less Micron's version of the HBM2E that SK-Hynix has had for a few years now. The HBM3 specs I've seen mentioned are closer to 5 Gbps per pin, and one company, I think, speculated it could reach 7 Gbps per pin.

Here is some more information on more recent HBM3 developments

That being said, they could always clock the pins below spec if it means hitting a certain power budget. But at that point, you have to start weighing whether the power savings are worth the likely premium HBM3 would carry versus GDDR6/GDDR6X (maybe GDDR7, but I don't think that's coming anytime soon).
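
As a rough sanity check on those numbers, per-stack HBM bandwidth is just the interface width times the per-pin speed. The pin speeds below are the ones mentioned above; the 1024-bit stack width has been standard for HBM generations so far:

```python
# Per-stack HBM bandwidth = interface width (bits) * pin speed (Gbps) / 8.
# HBM stacks to date use a 1024-bit interface; the pin speeds are the
# ones discussed above (3.6 Gbps HBM2E-class vs ~5-7 Gbps HBM3 claims).

def hbm_stack_bandwidth_gbps(pin_speed_gbps, interface_bits=1024):
    return interface_bits * pin_speed_gbps / 8

for label, pin_speed in [("HBM2E-class, 3.6 Gbps", 3.6),
                         ("HBM3 claim, 5.0 Gbps", 5.0),
                         ("HBM3 speculation, 7.0 Gbps", 7.0)]:
    bw = hbm_stack_bandwidth_gbps(pin_speed)
    print(f"{label}: {bw:.0f} GB/s per stack, {4 * bw / 1000:.2f} TB/s with 4 stacks")
```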

Dunno what you are rambling about. 80+ TFLOP GPUs are going to be a gigantic leap over 30 TFLOP GPUs, and there are plenty of games where you already want that performance. While the 3080/3090 are fast GPUs, they are nowhere near the pinnacle of performance; we need a lot more for that.

c22ac9349a3712431d31c3a1be26fc87.jpg


770862de3364e4fc5c1350dcaa717ae7.jpg



b695fe0de849cdbff7449bf6de6a6019.png


RDR2

558b96f4307fbddc3d056ec45216899c.jpg


Add in downsampling, which kills performance entirely (such as 8K, or even 7K for ultrawide), DLAA (so no DLSS) to remove the shimmering that a lot of games struggle with (such as Horizon Zero Dawn / Forza 5), or high-Hz gaming at higher settings, and the performance is already gone. And let's not even talk about next-gen titles that will push things even further.
I can tell you this: if the 4090 packs 24GB of VRAM, sits at ~$1k, and has double the performance of the 3080 or higher, I will upgrade in a heartbeat; there are enough games that will make use of it.

It was case by case. You can come up with examples of PC games that didn't run well at launch, but many were completely playable and next-level, going back to the late '90s. The main draw was that you could play something not possible on a console. That's basically a thing of the past. No reason we can't have both now: all the console games and the marvels that take full advantage of PC grunt, especially considering the disparity in power that's coming.

Because the biggest limitations for consoles were
- the companies' own rules and culture
- hardware limitations

Once we moved into the PS4 era, the hardware was pretty much fine for any game and the online infrastructure was well developed. The only problem was still the corporations holding devs back: long turnaround times for patches, limited ways of pushing patches (or even punishing devs for it). All reasons why most innovation gets done on PC as a result.

Still, everything can be done on every platform at this point. The major problem we see right now is talent and ideas not developing. I hoped that with the move to SSDs we would see new types of gameplay solutions where we instantly shift through dimensions and play in different worlds at the same time, to add a new layer to gameplay complexity, not the R&C way but actually instant. I hope we see something like that. But who knows.
 
So, do people think GPU prices will ever come back down to normal? Like getting a very high-end card (just not a Ti/Titan) for ~500? I jumped on my 1080 for 506 after availability issues had just started to become widespread back then (IIRC), and it was usually going for higher prices. Those days seem to be behind us forever, to be honest. Even when the chip drought ends, if there's no competition, I don't see Nvidia wanting to sell anything but low-end stuff at such prices, keeping the high end at 800 or 1000 or more (I mean, the 3080 launched at like 700 euros, never mind what it skyrocketed to after distribution issues), just not the crazy scalper prices.

Has there been any real pushback from big publishers? If there are no high-end GPUs around (to play games, not mine), what's the reason to develop/release games that utilize them, other than a deal with Nvidia to include RTX or whatever? Just targeting the low end, and keeping your games cross-generation on console too (since they have their own issues in the "next gen" there), so that you reach the most people possible should be a valid strategy, no? RTX maxed-out videos will only be impressive for so long before people realize they won't be getting that at home.

Especially since the issues have lasted way longer than expected, it could be a blow for the platform if games keep increasing their requirements just because way better hardware theoretically exists but can't be bought by most gamers, people who were buying high-end cards up until a couple of years ago but now find their hobby turning into some deluxe premium pricing tier as the baseline, with anything less not offering a very good experience or even much of an upgrade.
This is my thought as well. It goes for consoles too. If the number of people that actually have the hardware is too small, then what incentive is there to develop for that hardware?
We've seen this time and again on consoles, and it applies to PC as well.
Look at the PS Vita. The thing was a monster handheld at the time, but barely any 3rd-party (or 1st-party) development was done because the install base was too small.
If Nvidia, AMD, Sony, MS, Nintendo, etc. don't get their scalper, mining and retailer problems in line, then why should devs push that tech? Not enough to warrant the cost.

They need to lobby governments or something to crack down on mining (unpopular opinion as some here are miners, but you guys are the damn problem). Shortages will be solved by new fabs, which take time, but in the meantime there are things that can be done, like mining crackdowns and pushing retailers to run actual one-unit-per-person queues and waiting lists (like EVGA does for GPUs; I had to wait 8 months for a 3060 Ti, but I got it at MSRP and didn't have to stress. Still don't have a PS5 as Sony doesn't do such things). We should be able to go to a website, make a deposit, and be put on a waiting list with an ETA and a number in line. Then when it's available, the rest of the funds are charged and your product ships, or you pick it up in store.

Also, there should be more measures to stop bots. Why don't these retailers do more to combat bots on their sites? Once something is in your shopping cart for checkout, it shouldn't be grabbable by a bot. It's equivalent to having a TV in your cart at Best Buy, about to have it rung up at the register, and some asshole swipes it from you. That's how it currently is at places like Walmart and Amazon. It's BS and frustrating.

All these things need to change to get more products into gamers' hands instead of greedy resellers and greedy eBay scumbags. I want gaming to be pushed and for it to be fun for all. Things are shit with the shortages, but at the same time much more could be done to alleviate this and make it more lucrative for developers to make products aiming for those markets.
 
Yeah, the additional packaging steps required will invariably impact overall yields and volumes.

I'm not sure they'd be able to get 20+ million MCMs per annum with HBM... not currently at least.



I'm not sure if 4GB chips exist yet, but 16 chips in clamshell mode like the PS4 is easily possible.



We can only dream.

That said, HBM + infinity cache is redundant when the HBM gives you TB/s worth of bandwidth to memory.
It's not really redundant; they can work together. The whole point is data locality. The cache keeps the data local to the GPU, and more cache means less of a need to fetch from VRAM. However, as we know from RDNA2, the cache hit rate isn't perfect, which is where HBM can massively help tighten things up.

The biggest factor is actually cost. HBM is expensive. Too expensive for consumer applications, so AMD will likely trade off and use G6, as it's fast enough to get the job done but also much cheaper. If they could get HBM to cost the same per GB as G6 and the packaging costs aren't catastrophic, it has huge advantages.
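
A quick way to see how hit rate and raw VRAM bandwidth interact: only cache misses go out to VRAM, so the lower the hit rate, the more raw bandwidth you need behind the cache. The hit rates and the 1 TB/s demand figure below are illustrative, not real RDNA2/RDNA3 numbers:

```python
# Only cache misses go out to VRAM, so the VRAM bandwidth the GPU actually
# needs is roughly (total bandwidth demand) * (1 - hit rate).
# Hit rates and the 1 TB/s demand figure are illustrative, not real specs.

def vram_bandwidth_needed(demand_gbps, hit_rate):
    return demand_gbps * (1 - hit_rate)

demand = 1000  # GB/s of bandwidth the shaders would like to consume

for hit_rate in (0.75, 0.60, 0.40):
    need = vram_bandwidth_needed(demand, hit_rate)
    print(f"hit rate {hit_rate:.0%}: needs ~{need:.0f} GB/s from VRAM")

# At high resolutions the hit rate drops (the working set outgrows the cache),
# which is exactly where a TB/s-class memory like HBM picks up the slack.
```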
 
With the way things are looking, I might be chill with my 2070 until like 2025 or so

No complaints here. My Steam backlog is exhaustive

Cheers boiz
What sucks is the prospect of your 2070 dying in a year or two; that would leave you GPU-less and unable to play. I have a backup 1650 Super and a 1060 3GB model. I could sell both and come back with $500-600 easy right now. I won't, as I don't want to contribute to this nightmare, and also because it's better to have a backup. Sure, a 1650 Super is nowhere near my 3060 Ti, but it's better than no GPU (I don't even have integrated graphics as I went Ryzen this time), or having to use one of the old i3 Dells I have in the attic.

If your card died, how long would it take to get a new one, and at what cost? This shit needs to be solved, and soon. Warranties will run dry eventually.
 
Because the biggest limitations for consoles were
- the companies' own rules and culture
- hardware limitations

Once we moved into the PS4 era, the hardware was pretty much fine for any game and the online infrastructure was well developed. The only problem was still the corporations holding devs back: long turnaround times for patches, limited ways of pushing patches (or even punishing devs for it). All reasons why most innovation gets done on PC as a result.

Still, everything can be done on every platform at this point. The major problem we see right now is talent and ideas not developing. I hoped that with the move to SSDs we would see new types of gameplay solutions where we instantly shift through dimensions and play in different worlds at the same time, to add a new layer to gameplay complexity, not the R&C way but actually instant. I hope we see something like that. But who knows.


Well then, this goes along with my original post. There was a time when having an uber-powerful PC meant something. You've arrived at that conclusion in this post, even though I disagree that the PS4 was some turning point. It was actually less capable at launch with that god-awful CPU; the Xbox 360 was far more well rounded at launch. And by comparison, a powerful PC GPU in 2013 was a huge leap over the PS4's GPU, and that was supposedly its strength. Making a game based around powerful PC hardware in the PS4's prime would have resulted in something not possible on console.

The real reason is the industry moved to engine scalability with consoles. It's all about money and being able to put games on everything, not pushing top-tier cards. There's no real financial benefit to targeting several million ultra-powerful PCs; the install base is too low to make a return on the investment. The data shows us the vast majority of PC owners have middling PCs.
So we can make ourselves believe that high-end PC-exclusive games wouldn't be better, but that's just to make us feel better.
But it makes sense. The industry will go where the money is, and that's middle-ground PCs and consoles. Can't blame them.
 
It's not really redundant; they can work together. The whole point is data locality. The cache keeps the data local to the GPU, and more cache means less of a need to fetch from VRAM. However, as we know from RDNA2, the cache hit rate isn't perfect, which is where HBM can massively help tighten things up.

My argument isn't that they can't work together or that the additional Infinity Cache wouldn't be beneficial to system performance; of course it would. But increasing on-die SRAM cache itself has a cost that can be significant depending on the process node the chip is fabricated on.

Everything is a compromise on cost, and given that GPUs are generally designed to be less sensitive to latency than, say, CPUs, the question would be whether the addition of InfCache would be worth the cost to justify its inclusion when the main RAM is able to provide TB/s worth of bandwidth.

The biggest factor is actually cost. HBM is expensive. Too expensive for consumer applications, so AMD will likely trade off and use G6, as it's fast enough to get the job done but also much cheaper. If they could get HBM to cost the same per GB as G6 and the packaging costs aren't catastrophic, it has huge advantages.

I don't see a realistic scenario where HBM cost per GB drops to a similar level as GDDR. The interposer, substrate and wider packaging costs will always be there. And the additional number of fabrication process steps for 3D HBM versus 2D GDDR means that for HBM cost per GB to fall below GDDR, it can only happen through economies of scale, which means market demand would have to shift largely wholesale from GDDR to HBM. That can only be driven by application memory bandwidth requirements exceeding what GDDR can achieve. With the GDDR roadmap still allowing for future node shrinks to provide higher-capacity and faster chips, e.g. GDDR7 and beyond, it's likely that GDDR bandwidth will be "enough" for a good while yet.
 
My argument isn't that they can't work together or that the additional Infinity Cache wouldn't be beneficial to system performance; of course it would. But increasing on-die SRAM cache itself has a cost that can be significant depending on the process node the chip is fabricated on.

Everything is a compromise on cost, and given that GPUs are generally designed to be less sensitive to latency than, say, CPUs, the question would be whether the addition of InfCache would be worth the cost to justify its inclusion when the main RAM is able to provide TB/s worth of bandwidth.



I don't see a realistic scenario where HBM cost per GB drops to a similar level as GDDR. The interposer, substrate and wider packaging costs will always be there. And the additional number of fabrication process steps for 3D HBM versus 2D GDDR means that for HBM cost per GB to fall below GDDR, it can only happen through economies of scale, which means market demand would have to shift largely wholesale from GDDR to HBM. That can only be driven by application memory bandwidth requirements exceeding what GDDR can achieve. With the GDDR roadmap still allowing for future node shrinks to provide higher-capacity and faster chips, e.g. GDDR7 and beyond, it's likely that GDDR bandwidth will be "enough" for a good while yet.
Thing is, if they're moving to 3D chiplet-based packaging for the GPU anyway, the complexity is already there. Going a step further and adding HBM, in theory, won't add to the cost as significantly as it would for a traditional monolithic die setup.

Looking at the rumours for Navi 31, I've seen stuff out there that suggests 512MB of cache, with the cache separated onto multiple (I think 32MB) MCDs and the actual GPU cores on GCDs. So the package complexity is already there. CoWoS is already being used by the Instinct MI250X.

So with regards to the question of whether you need Infinity Cache if HBM is there, or HBM if IC is there... I think we'll find AMD's answer soon enough.
For what it's worth, I think they'll be sticking with GDDR VRAM but with narrower buses, especially for monolithic dies. The disadvantage is that you won't see any gains in VRAM quantity unless densities go up.
But for the biggest, most powerful Navi 31 SKU, with 2 GPU dies and however many cache dies... HBM2E or HBM3 can offer a wider range and higher overall quantities of VRAM, and its raw bandwidth and low latency could basically act as a form of L4 cache, something like HBCC on Vega.

I don't pretend to be an expert on these things, however so I could be completely wrong.
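
On the bus width vs. VRAM quantity point above: GDDR capacity is basically the bus width divided by 32 bits per chip, times the chip density, doubled if you run clamshell. A quick sketch with today's common 2GB GDDR6 chips, just for illustration:

```python
# GDDR VRAM capacity: each chip sits on a 32-bit slice of the bus, so
# capacity = (bus width / 32) * GB per chip, doubled in clamshell mode.
# The 2 GB density is today's common GDDR6 chip size; 4 GB chips would
# be the "densities go up" case mentioned above.

def vram_capacity_gb(bus_width_bits, gb_per_chip, clamshell=False):
    chips = bus_width_bits // 32
    if clamshell:
        chips *= 2
    return chips * gb_per_chip

print(f"256-bit, 2GB chips:            {vram_capacity_gb(256, 2)} GB")
print(f"192-bit, 2GB chips:            {vram_capacity_gb(192, 2)} GB")
print(f"256-bit, 2GB chips, clamshell: {vram_capacity_gb(256, 2, clamshell=True)} GB")
```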
 
Dunno what you are rambling about. 80+ TFLOP GPUs are going to be a gigantic leap over 30 TFLOP GPUs, and there are plenty of games where you already want that performance. While the 3080/3090 are fast GPUs, they are nowhere near the pinnacle of performance; we need a lot more for that.

c22ac9349a3712431d31c3a1be26fc87.jpg


770862de3364e4fc5c1350dcaa717ae7.jpg



b695fe0de849cdbff7449bf6de6a6019.png


RDR2

558b96f4307fbddc3d056ec45216899c.jpg


Add in downsampling, which kills performance entirely (such as 8K, or even 7K for ultrawide), DLAA (so no DLSS) to remove the shimmering that a lot of games struggle with (such as Horizon Zero Dawn / Forza 5), or high-Hz gaming at higher settings, and the performance is already gone. And let's not even talk about next-gen titles that will push things even further.
I can tell you this: if the 4090 packs 24GB of VRAM, sits at ~$1k, and has double the performance of the 3080 or higher, I will upgrade in a heartbeat; there are enough games that will make use of it.
Fair enough; I wasn't thinking too much of mesh shaders when I made the earlier post, and that is something which is going to rely a lot on compute, ALUs and what have you. I was thinking more in terms of hardware accelerators to offload specific tasks, hence why such big TF leaps seemed excessive to me.

I still think that for consoles in particular, 10th-gen systems with TF numbers that big are going to be too costly and too power-hungry for a console footprint, even on something like 3nm EUV (maybe doable on 2nm, though that would also incur more cost per wafer and be less mature). But then again, I'm starting to think 10th gen won't even have "consoles" the way we know them right now.
 
I need a visual example of what these cards will be able to do. If it's how I imagine, then mix it with the metaverse and we'll have a real life Oasis from Ready Player One.
 
I don't understand what the problem with cost is. We are overpaying for GPUs right now because of miners and scalpers anyway; it won't make any difference, just go all in.
 
I don't understand what the problem with cost is. We are overpaying for GPUs right now because of miners and scalpers anyway; it won't make any difference, just go all in.
I agree.
We didn't think the PS5 & XBSS/X would have PCIe 4.0 SSDs.

SSDs were expensive, but console makers had a choice: stay in the stone age or move the majority of gamers forward.

The same will happen with HBM.
The advantages more than outweigh the one disadvantage: cost.
 
I'm very excited about next year's GPUs, but I am terrified at the thought of the incoming power consumption. The way these rumors are set up, my 750W PSU won't be nearly enough. Can't imagine myself buying one since getting the 6800 XT, but it's still fun to see new tech. I'm very interested in seeing how far AMD has come along in the ray tracing realm. Not a selling point for me personally, but I do want to see the strides made, to see if they can achieve parity with, if not better RT than, Nvidia. Jensen would not be pleased, lmao.
 
I'm very excited about next year's GPUs, but I am terrified at the thought of the incoming power consumption. The way these rumors are set up, my 750W PSU won't be nearly enough. Can't imagine myself buying one since getting the 6800 XT, but it's still fun to see new tech. I'm very interested in seeing how far AMD has come along in the ray tracing realm. Not a selling point for me personally, but I do want to see the strides made, to see if they can achieve parity with, if not better RT than, Nvidia. Jensen would not be pleased, lmao.


I grabbed a Deepcool 850W when it went on sale. My friend and I both got 3070s, and he was laughing at me for buying an 850W PSU. Nvidia has never failed on the improvement front, and now it's time for AMD to do the same. What we are missing is that these companies must be focusing on the next Pro machines now. Things will get more interesting in the upcoming year.
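
For anyone sizing a PSU around these rumors, the usual rule of thumb is to keep total system draw (GPU board power + CPU + the rest) comfortably under the PSU rating, with headroom for transient spikes. The wattages and spike factor below are hypothetical placeholders on my part, not confirmed specs for any upcoming card:

```python
# Rough PSU headroom check. All wattages are hypothetical placeholders,
# not confirmed figures for any next-gen GPU.

def psu_headroom(psu_watts, gpu_watts, cpu_watts, rest_watts=100,
                 transient_factor=1.3):
    """Return (steady-state draw, estimated transient peak, remaining headroom)."""
    steady = gpu_watts + cpu_watts + rest_watts
    # High-end GPUs can spike above board power for milliseconds;
    # 1.3x is an illustrative fudge factor, not a measured value.
    peak = gpu_watts * transient_factor + cpu_watts + rest_watts
    return steady, peak, psu_watts - peak

for psu in (750, 850, 1000):
    steady, peak, headroom = psu_headroom(psu, gpu_watts=450, cpu_watts=200)
    print(f"{psu}W PSU: ~{steady}W steady, ~{peak:.0f}W peak, "
          f"{headroom:.0f}W headroom at peak")
```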
 
Honestly, it all depends on how miners and scalpers affect the next gen of cards more than anything else; being able to buy one under normal conditions is more important than TF and power consumption.
 
I think more would be too expensive, and I bet they are already working on it. But time will tell.

We all know they can easily price these machines at 499 or 599 depending on the tech. I won't be surprised if the Pro machines come with a premium price. If you want full 4K ray tracing, then pay a premium price for it. PC people are already paying 700-800 dollars for 3070-level GPUs.
 
Honestly, it all depends on how miners and scalpers affect the next gen of cards more than anything else; being able to buy one under normal conditions is more important than TF and power consumption.
I truly believe that the market will start to correct itself by Q3/Q4 when these GPUs come out. The circumstances should be better. A lot of people won't have that "free money" check that was given out during the pandemic, which I believe is one of the major factors people are overlooking. A LOT of people used those checks to purchase hardware they otherwise wouldn't have been able to, which in turn had scalpers licking their chops. Then the manufacturers and AIBs followed suit, seeing that people had no problem burning money at unusual rates. Miners were always around and will always be here. The last two years have been more about 1. big corporations taking advantage of a situation, 2. scalpers on the rise because of said situation, and 3. more people having disposable income to spend absurd money on a GPU they normally wouldn't have. My opinion, of course.

I do believe the prices will go up a little more because of the new tech and the beef supposedly in these GPUs. But I don't think it'll be super dumb prices like we've seen the last two years. I'd be very surprised to see the 4090/7900 XT starting at $2,000/2,500. I think they'll be more like $1,600 for reference and $1,800 for partner models. But I've been wrong before, lol.

tusharngf, time is showing you to be a genius for grabbing that 850W PSU, haha.
 
Remember when it mattered to have uber-powerful PCs? And there were games that were made specifically for them? Now we're playing console versions at higher resolutions.
Like Cyberpunk, right?

I just hope I can ignore them and wait for the 5090/8900 gen. I mean the alternative is weeks tied to my PC waiting for drops and failing.
 
No more than 15 TF at best; they have to maintain the $400 price range.
I don't believe that. A 5 TF increase is nothing; it's pointless to release a refresh for that.
I'm sure we will have more than 20 TF for the Pro version.
Why release a 15 TF console if PCs have 100 TF available?
 