
AMD Radeon RX 7900 XT Flagship RDNA 3 Graphics Card Could Reach Almost 100 TFLOPs, Navi 31 GPU Frequency Hitting Over 3 GHz

LPDDR4X SDRAM. They're arranged SoP (system-on-package) style mounted on top of each other. Speed is around 4266 MT/s.

This article might provide some more useful info on the M1 range if you're interested.

Since all parts of this are Apple's, and granted it's ARM-based, can Intel and AMD do the same thing to knock NVIDIA out on x86? I've got a strong feeling it's headed this way..... :unsure:
 
The current flagship (6900 XT, which I own) is a 300W GPU.

The 7900 XT being a 67% increase over the 6900 XT in power consumption seems a bit suspicious.... (Unless AMD threw power efficiency design to the wind, of course)
 

Dream-Knife

Banned
LPDDR4X SDRAM. They're arranged SoP (system-on-package) style mounted on top of each other. Speed is around 4266 MT/s.

This article might provide some more useful info on the M1 range if you're interested.
Sorry for the noob question, but how does memory bandwidth work? On Nvidia it will give a number like 9500 MHz, and on AMD it will give you 2000 MHz. Then they say AMD has 16 Gbps and Nvidia has 19 Gbps, but then they talk about bandwidth being up to 1 TB/s.
 
Hopefully. Depends on how many of those games use mesh shading (TF perf is actually more useful for that than for the fixed-function graphics pipeline). Also probably on how much of a rasterization bump we get.
I totally forgot about mesh shading. The 3DMark benchmark suggests insane performance gains, so I wonder if this feature has been implemented in some games already?
 
I totally forgot about mesh shading. The 3DMark benchmark suggests insane performance gains, so I wonder if this feature has been implemented in some games already?

It's definitely in the Matrix demo, at least parts of it. But I'm not sure if any commercial games are currently using mesh shaders (or primitive shaders, for that matter). They are probably still exclusively on the fixed function graphics pipeline since most games have been cross-gen so far.

AV1 is taking so long. There will be some great new possibilities once it's out.



"We are working with Twitch on the next generation of game streaming. AV1 will enable Twitch viewers to watch at up to 1440p 120 FPS at 8mbps" or greatly improved iq but still 60 fps.

Oh that's gun b gud. This should also hopefully trickle down for lower resolutions and framerates too (I usually set Twitch streams at low bitrate unless there are certain moments I'm actually watching more attentively, then I might jump the resolution up to the source. Otherwise I treat them like audio podcasts).

Hopefully Twitch changes the audio bitrate at lower resolutions; just do them on two different encode paths like YouTube does.

Sorry for the noob question, but how does memory bandwidth work? On Nvidia it will give a number like 9500 MHz, and on AMD it will give you 2000 MHz. Then they say AMD has 16 Gbps and Nvidia has 19 Gbps, but then they talk about bandwidth being up to 1 TB/s.

Personally I don't even look at the memory controller clocks when it comes to GPUs, just the bus size and module bandwidth.

If you have, say, a 14 Gbps (gigabits per second) GDDR6 module, like the current-gen systems do, then each I/O pin on the module used for data can transfer at 14 Gbps, or 1.75 GB (gigabytes) per second (divide any figure in bits by eight to get bytes; there are eight bits in each byte). Then multiply that by the number of I/O data pins; GDDR modules are 32-bit, so they have 32 I/O data pins. That's how you get 56 GB/s module bandwidth.

Then look at the bus size; this is also in bits. The PS5 has a 256-bit GDDR6 memory bus; since it isn't using clamshell mode, each module runs at its full width (32-bit, vs. 16-bit for clamshell configurations). That means you can put eight 32-bit modules on the bus. So multiply the module bandwidth by the number of modules, i.e. the bus width divided by the module width (in this case, 256/32 = 8), and you get 448 GB/s, the GDDR6 bandwidth in the PS5.
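If it helps to see that arithmetic laid out, here's a quick sketch (Python purely for illustration; the 14 Gbps and 256-bit figures are the PS5-style example above):

```python
# Sketch of the GDDR6 math above: pin speed x I/O pins per module gives the
# module bandwidth, then multiply by how many modules fit on the bus.
# Divide bit figures by 8 to get bytes.

def module_bandwidth_gbs(pin_speed_gbps: float, module_width_bits: int = 32) -> float:
    """Per-module bandwidth in GB/s."""
    return pin_speed_gbps * module_width_bits / 8

def bus_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int, module_width_bits: int = 32) -> float:
    """Total bandwidth: per-module bandwidth x (bus width / module width)."""
    return module_bandwidth_gbs(pin_speed_gbps, module_width_bits) * (bus_width_bits // module_width_bits)

print(module_bandwidth_gbs(14))    # 56.0 GB/s per 14 Gbps GDDR6 module
print(bus_bandwidth_gbs(14, 256))  # 448.0 GB/s on a 256-bit bus (PS5)
```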

You can use that same method for figuring out Series X, Series S, and pretty much any other modern GPU. HBM designs are different because they have a lot more data I/O lanes (128 vs 32, therefore they are 128-bit memory devices vs. 32-bit memory devices) and are designed for stacking via TSVs (through-silicon vias) in typical stacks of 4-Hi, 8-Hi, 12-Hi and (supposedly, for HBM3) 16-Hi. Stack sizes tell you how many modules are in the stack; you can then look at the capacity per module multiplied by the number in the stack to find out the total capacity per stack. You can also multiply the per-module data I/O pinout amount by the number of modules in the stack to determine the bus size of the stack.
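As a sketch of that stack math, assuming a hypothetical 8-Hi stack of 128-bit dies with 2 GB per die (the per-die capacity is just an illustrative number, not a spec from the post):

```python
# Hypothetical HBM stack example following the per-die multiplication above:
# an 8-Hi stack of 128-bit dies, 2 GB per die (illustrative figures only).

dies_in_stack = 8          # 8-Hi stack
bus_per_die_bits = 128     # data I/O lanes per die
capacity_per_die_gb = 2    # assumed capacity per die

stack_bus_bits = bus_per_die_bits * dies_in_stack        # 1024-bit stack interface
stack_capacity_gb = capacity_per_die_gb * dies_in_stack  # 16 GB per stack

print(stack_bus_bits, stack_capacity_gb)  # 1024, 16
```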

For figuring per-module bandwidth you use the same method as for GDDR memories. DDR system RAM is a bit different; you actually want to use the module speed, usually quoted in MHz (more precisely MT/s, or megatransfers per second). For example, DDR4-3200 is 3200 MT/s; multiply 3200 by 64 (the number of data I/O bits for DDR memories), then divide that amount by 8 (to translate bits into bytes) for 25,600 MB/s, or 25.6 GB/s. Some people would split the 3200 into 1600 MHz x 2, since the actual memory clock is 1600 MHz and it's doubled due to the way DDR works, but if you already know that you can skip that step.
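And the DDR version of the same calculation, assuming DDR4-3200 on a standard 64-bit channel:

```python
# DDR4-3200 on one 64-bit channel: 3200 MT/s x 64 bits / 8 = 25,600 MB/s.

def ddr_channel_bandwidth_gbs(transfer_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Single-channel DDR bandwidth in GB/s."""
    return transfer_rate_mts * bus_width_bits / 8 / 1000

print(ddr_channel_bandwidth_gbs(3200))  # 25.6 GB/s
```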

Since all parts of this are Apple's, and granted it's ARM-based, can Intel and AMD do the same thing to knock NVIDIA out on x86? I've got a strong feeling it's headed this way..... :unsure:

They're already trying to do that xD. Take a look at AMD's Instinct MI100 designs; that's an indication of where RDNA 4 and especially RDNA 5 will go design-wise. Intel already has Ponte Vecchio and its own MCD designs going.

What I'm more interested in is if (or more like when) AMD, Intel & Nvidia move away from GDDR for mainstream GPUs and start using HBM. And I'm especially interested in whether any of them design future GPUs around HBM-PIM technologies, because that will probably represent another paradigm shift IMHO (and 10th-gen consoles would benefit a ton from it as well).
 
The performance was good, but they were power hungry dead-end products. There is a reason that architecture was largely abandoned. It wasn't a failure, but it wasn't the success it needed to be long term.
Vega was pretty power hungry (though the 64 was particularly inefficient, hence why I didn't mention it), but Polaris wasn't THAT much hungrier than the 1060.

I agree that it was a dead end architecture though.
 

Key points (I don't fully understand what this means from a tech perspective):
-Single GCD instead of 2
-Increased MCD count, which act like memory controllers
-192 MB-384 MB Infinity Cache
-48 work group processors split into 6 shader engines
-1 GCD = N31
Source: Red Gaming Tech

New AMD driver giving increased DX11 performance:



Source: Red Gaming Tech
 
Last edited:

LordOfChaos

Member
So Intel might have, like, a month to themselves on the market during the inventory dry-up period before these next-gen cards hit.
 

Ivan

Member
I'm very excited about this big jump in performance, but at the same time I'm bitter because the last game that made us truly care about that kind of stuff was made in 2007.

I wonder if there is any business model, with all of the big guys helping, that would make PC-centric development happen again...

Most games would still be multiplatform, but there has to be SOMETHING for the crowd that's paying top prices and is as enthusiastic as ever.

Crysis 4 could be that kind of an experiment, who knows...

What else do you think would make sense as a PC-only, hardware-pushing game?
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I don't care about the 7900 XT. I want to know about the 7800 XT and whether it will be better than a 4070.
The 7800 XT should easily walk a 4070.
Noting that a 6800 XT competes with a 3080 in raster.
From the current leaks there's quite a lot of space between Navi 32 and Navi 31.
I'd assume a 7800 XT fits somewhere in between there..... even if Navi 32 is the 7800 XT, that already looks able to work a 4070 as it is right now.

 
People are underestimating AMD too much. The current RDNA 2 chips are already on par with Ampere in rasterization. At 4K they lag a bit because of less bandwidth and Ampere's extra bloated flops. No matter how strong Ada will be, AMD can easily double performance by doubling the hardware for RDNA 3, and it's expected to have higher bandwidth, more IC, IPC improvements with stronger RT, and increased clocks. Of course, Nvidia will still come out ahead in some areas because of all that dedicated hardware, but I expect the difference to be smaller than it is now.
 
I think RDNA 3 will have the advantage in price-to-performance as always, plus efficiency and lower power usage. The RDNA 3 architecture is the key to the iGPUs/APUs arriving at the end of this year or the beginning of next year in small and slim form factors; along with updated drivers and FSR 2.0, you've got a winning combo.

I'm glad NVIDIA will brute-force performance, but at the cost of higher prices, higher power usage, and a discrete-only form factor.
 

twilo99

Gold Member
I've been on AMD cards for almost 3 years now, no issues.

I did try the 3070ti last year but switched to a 6800xt shortly after because the performance was better and the price was about the same.

Looking forward to RDNA3.
 

manfestival

Member
I've had the 6800 XT since March or April of 2021. I've only had small issues, like in the first month, that were resolved by a driver update. Any other issues I've had were caused by my overclocking tinkering, which the card/software resolves itself. I just wish this card ran a little cooler. I know reviews were impressed with the OC thermal performance, but... I guess I want more efficiency/etc.
 

Jigsaah

Gold Member
Holy Fuck the Wattage. That might be the reason I don't get one, neither Lovelace nor this AMD card. Until there are enough games or applications I use to warrant this kind of power... why would I take this on? I would need to upgrade multiple components all over again. I've got 750 watts right now. Lovelace taking up 600? So a new power supply, a new CPU, and possibly a new mobo if I decide to go back to Intel. Absolutely insane.
 
Holy Fuck the Wattage. That might be the reason I don't get one, neither Lovelace nor this AMD card. Until there are enough games or applications I use to warrant this kind of power... why would I take this on? I would need to upgrade multiple components all over again. I've got 750 watts right now. Lovelace taking up 600? So a new power supply, a new CPU, and possibly a new mobo if I decide to go back to Intel. Absolutely insane.

Just don't get the top end.
The midrange should be faster than the current tops and use less power.
 
Does anyone have an idea when the 7000 series will release?
My Vega is still hanging in there, but this 6750 XT is looking really nice...
 

Bo_Hazem

Banned
Are you strictly talking about desktops/laptops? Apple's ARM-based CPUs have been at least one generation ahead of the competition for at least a decade now...

Yes, more like Apple vs. PCs/laptops. On phones/tablets it's still comparable until you bring the M1 to the table: Apple wins in some, Android in others, but the M1 wins it all.
 
100 TFLOPs....
Jesus H Christ.
While I'm not a PC gamer, I love to see the GPU makers pushing performance as much as they can.
But nothing will take advantage of it.
Supersampling, frame rate? There are a lot of ways you can take advantage of additional performance. To me, if you can't use your GPU to the maximum, you're dumb 😄
 
Supersampling, frame rate? There are a lot of ways you can take advantage of additional performance. To me, if you can't use your GPU to the maximum, you're dumb 😄
The devs won't.
GPUs get released quicker than devs can take advantage of them.
If they're halfway through their game and suddenly a new PC card comes out with 100 TFLOPs, they aren't going to go back and redo all their mesh models to take advantage of it, because every year a newer, more powerful card comes out.
But still, I love seeing new cards come out.
 
The devs won't.
GPUs get released quicker than devs can take advantage of them.
If they're halfway through their game and suddenly a new PC card comes out with 100 TFLOPs, they aren't going to go back and redo all their mesh models to take advantage of it, because every year a newer, more powerful card comes out.
But still, I love seeing new cards come out.
That's why I'm talking about basic things like resolution (including supersampling) and frame rate. Those things alone can greatly enhance any game, and the ceiling here is very high (supersampling with a very high refresh rate). Don't forget about VR and things like Unreal Engine, which can eat pretty much any GPU.
In short, I don't see any problem with "too much power" in the GPU space.
 

GreatnessRD

Member
As a 6800 XT owner, I'm really curious to see how AMD competes at the ray tracing level. Even though ray tracing means nothing to me personally, I still chuckle that the RT performance on the card brings it to its knees. So I'm definitely curious to see how they bounce back in that department. Especially since FSR 2.0 looks to bring parity with DLSS 2.3 or whatnot.
 
Apologies, that is correct winjer

Will RDNA3 use LESS TDP than NVIDIA Lovelace? Does more TDP mean more TFLOPS and performance?
TDP = Thermal Design Power
It's complicated because it's never used in quite the same way, but in essence it's the amount of heat the GPU is designed to cope with. Remember, every watt of power a GPU consumes is equal to a watt of heat produced.
A 300W PC is acting like a 300W heater.
 

Panajev2001a

GAF's Pleasant Genius


The video is very technical (I don't fully understand it). How does this help with current RDNA 2 products and future RDNA 3 and RDNA 4?

For sure you will have engineers taking the input of these software improvements to decide what to add in terms of HW support on top of just adding more resources (more CUs, more RT cores per CU, and/or more units dedicated to accelerating ML operations at low power), but it is not a 1:1 relationship: some of these best practices might work even better with new HW, while there might be things you stop doing because they are not actually needed or as efficient on new HW (it happened with the PS2 GS to PS3 RSX transition, where state changes were almost free at a very fine-grained level on PS2, and on PS3 you had to rely a lot more on batching and on avoiding unnecessary buffer flushes).

This video was describing new best practices to lower cost and/or improve image quality purely via software. The interesting option is finding the best way to speed things up in a way that makes it easiest for ML algorithms to process the final output and render something visually convincing. Tying the two steps together a bit more, so to speak, if possible.
 

tusharngf

Member

AMD Radeon RX 7900 XT RDNA 3 “Navi 31” Graphics Card Specs, Performance, Price & Availability – Everything We Know So Far​




AMD Navi 31 'Plum Bonito' GPU - The Next-Gen RDNA 3 Powerhouse​

  • 5nm Process Node
  • Advanced Chiplet Packaging
  • Rearchitected Compute Unit
  • Optimized Graphics Pipeline
  • Next-Gen AMD Infinity Cache
  • >50% Perf/Watt vs RDNA 2

AMD RDNA GPU (Generational Comparison) Preliminary:​

GPU Name | Navi 10 | Navi 21 | Navi 31
GPU Process | 7nm | 7nm | 5nm (6nm?)
GPU Package | Monolithic | Monolithic | MCD (Multi-Chiplet Die)
Shader Engines | 2 | 4 | 6
GPU WGPs | 20 | 40 | 30 (per MCD), 60 (in total)
SPs Per WGP | 128 | 128 | 256
Compute Units (Per Die) | 40 | 80 | 120 (per MCD), 240 (in total)
Cores (Per Die) | 2560 | 5120 | 7680
Cores (Total) | 2560 | 5120 | 15360 (2 x MCD)
Peak Clock | 1905 MHz | 2250 MHz | 2500 MHz
FP32 Compute (TFLOPs) | 9.7 | 23 | 38.4
Memory Bus | 256-bit | 256-bit | 256-bit
Memory Type | GDDR6 | GDDR6 | GDDR6
Memory Capacity | 8 GB | 16 GB | 32 GB
Infinity Cache | N/A | 128 MB | 512 MB
Flagship SKU | Radeon RX 5700 XT | Radeon RX 6900 XTX | Radeon RX 7950 XT
TBP | 225W | 330W | 500W
Launch | Q3 2019 | Q4 2020 | Q4 2022


According to the latest information, the AMD Navi 31 GPU with the RDNA 3 architecture is expected to offer a single GCD with 48 WGPs, 12 SAs, and 6 SEs. That works out to a total of 12,288 stream processors, which is lower than the previously rumored count. This would also lower the overall compute performance unless AMD goes crazy with over 3.0 GHz clock frequencies on its flagship part. The Navi 31 GPU will also carry 6 MCDs, each featuring 64 MB of Infinity Cache and likely a 64-bit (32-bit x 2) memory controller, giving the chip a 384-bit bus interface.

As for clock speeds, the AMD Navi 31 GPU is said to hit or even exceed 3 GHz. NVIDIA's flagship GPUs are also said to offer close to 2.8 GHz clock speeds, but AMD has had a clear advantage in clock speeds over NVIDIA during the past generation, so that's expected to continue. A 3 GHz clock speed means we can expect over 75 TFLOPs of FP32 performance on the new flagship, roughly three times the current RDNA 2 flagship, the RX 6950 XT (23.8 TFLOPs).
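For anyone who wants to sanity-check the quoted figures, here's a rough sketch of the arithmetic (assuming 256 SPs per WGP as in the table above, 2 FP32 ops per SP per clock, and the rumored ~3 GHz boost clock; none of this is confirmed spec):

```python
# Rough sanity check of the rumored Navi 31 numbers quoted in this article.
# Assumptions: 256 SPs per WGP (per the table above), 2 FP32 FLOPs per SP per
# clock (one FMA), and the rumored ~3.0 GHz boost clock.

def fp32_tflops(stream_processors: int, clock_ghz: float) -> float:
    """FP32 throughput in TFLOPs: SPs x 2 FLOPs/clock x clock (GHz) / 1000."""
    return stream_processors * 2 * clock_ghz / 1000

navi31_sps = 48 * 256   # 48 WGPs x 256 SPs/WGP = 12,288 stream processors
navi31_bus = 6 * 64     # 6 MCDs x 64-bit controllers = 384-bit bus

print(navi31_sps, navi31_bus)        # 12288, 384
print(fp32_tflops(navi31_sps, 3.0))  # ~73.7 TFLOPs at exactly 3.0 GHz;
                                     # "over 75 TFLOPs" implies clocks a bit above 3 GHz
print(fp32_tflops(5120, 2.324))      # ~23.8 TFLOPs (RX 6950 XT, for comparison)
```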

[Image: AMD RDNA 3 Navi 3x GPU SKUs]


  • AMD Radeon RX 7900 XT: ~75 TFLOPs (FP32) (assuming a 3.0 GHz clock)
  • AMD Radeon RX 6950 XT: 23.80 TFLOPs (FP32) (2324 MHz Boost Clock)
  • AMD Radeon RX 6900 XT: 23.04 TFLOPs (FP32) (2250 MHz Boost clock)
  • AMD Radeon RX 6800 XT: 20.74 TFLOPs (FP32) (2250 MHz Boost clock)
  • AMD Radeon RX 6800: 16.17 TFLOPs (FP32) (2105 MHz Boost clock)
Source: https://wccftech.com/roundup/amd-radeon-rx-7900-xt/
 