
AMD RDNA 4 GPUs To Incorporate Brand New Ray Tracing Engine, Vastly Different Than RDNA 3

Bernoulli

M2 slut
AMD's Next-Gen RDNA 4 GPUs To Feature Major Ray Tracing Engine Changes Versus RDNA 3
The latest rumor comes from @Kepler_L2, who has shared patches for AMD's next-gen RDNA 4 GPUs and their RT (ray tracing) engine. It is stated that the AMD RDNA 4 GPUs, which are expected to launch with the Radeon RX 8000 graphics card lineup, will incorporate a new RT engine that should yield much better performance than the existing engine featured on RDNA 3 chips.



 

winjer

Gold Member
RDNA 3 was pretty much a waste. RT is their Achilles' heel, so hopefully they get that straightened out.

The channel Ancient Gameplay just made a comparison where he took a few RDNA2 and RDNA3 GPUs with the same CU count and ran them at the same memory and core clocks.
There are some differences in the amount of L3 cache. But the result is that there wasn't a big jump in performance, with or without RT.
There were much bigger gains with AI: the 7800XT and 7600 had 3x the performance of the 6800 and 6650XT.
But overall, a bit of a pointless generation for AMD.

 

Agent_4Seven

Tears of Nintendo
100% uplift with each generation? Are we back to the late 90s?
Unfortunately no, but that's the point - it needs to happen.

We rarely ever had a 100% uplift gen over gen.
It was very common back in the Voodoo and Riva days and such; I'm old enough to remember all this shit.
You literally couldn't run anything well if you hadn't bought a Voodoo or the like, or the games you tried to play were extremely compromised and didn't work the way they were intended.
Yes, those were the times of experimentation and the birth of realtime 3D graphics, with lots of competitors, but compare all that to today's +20 FPS to minimum FPS for $1,000, year over year...
But even before all these insane price hikes, performance gains were still nowhere close to those golden times. I've zero faith in AMD cuz they're just like NVIDIA, charging insane prices for their GPUs instead of severely undercutting NVIDIA and improving overall. Where's a 7900/XT for $600-700 at launch? Until something like that happens, we'll all be forced to pay insane prices for even entry-level GPUs.
 
Last edited:

winjer

Gold Member
I don’t recall the last time this happened. I’m not even sure the 8800 GTX was 100% faster than its predecessor.

It depends on the game, but the 8800 Ultra had double the performance of the 7900 GTX.
[F.E.A.R. benchmark chart, 1600 resolution, 4xAA/16xAF]
 

winjer

Gold Member
Unfortunately no, but that's the point - it needs to happen.


It was very common back in the Voodoo and Riva days and such; I'm old enough to remember all this shit.
You literally couldn't run anything well if you hadn't bought a Voodoo or the like, or the games you tried to play were extremely compromised and didn't work the way they were intended.
Yes, those were the times of experimentation and the birth of realtime 3D graphics, with lots of competitors, but compare all that to today's +20 FPS to minimum FPS for $1,000, year over year...
But even before all these insane price hikes, performance gains were still nowhere close to those golden times. I've zero faith in AMD cuz they're just like NVIDIA, charging insane prices for their GPUs instead of severely undercutting NVIDIA and improving overall. Where's a 7900/XT for $600-700 at launch? Until something like that happens, we'll all be forced to pay insane prices for even entry-level GPUs.

That will never happen again.
Mainly because advancements in process nodes are significantly slower, and much more expensive.
 

Bitmap Frogs

Mr. Community
I recall MLID mentioning that the big news for RDNA4 re: ray tracing is that some critical calculation wasn't supported in hardware by the RT implementation of RDNA2 and RDNA3, and reportedly it will be in RDNA4.
 

Buggy Loop

Member
I expect a huge turnaround on r/AMD that RT now matters, after saying for the last 3 gens it didn’t 🙃

Better late to the party than never

But I have a feeling Nvidia will turn RT on its head again. It's already outdated to think you'll brute force with the original RT pipelines; they're balls deep into AI, and their neural radiance cache solution for RT could be the beginning of a different RT block philosophy. Why even brute force your way with Monte Carlo or ReSTIR? AI knows what real light should look like. Anyway, just a guess, but NRC is probably their next step according to their papers.
 
We knew this the moment we saw those patents about new dedicated RT traversal hardware from both Cerny and AMD. And Cerny's patented technologies always end up in his gaming devices.

This is not going to close the gap, obviously, but at least they won't be 2 or 3 generations behind Nvidia. I'm expecting they'll still be one RT generation behind, at about 4070 RT level (by the time Nvidia releases their 5XXX gen).
 
Last edited:

Buggy Loop

Member
We knew this the moment we saw those patents about new dedicated RT traversal hardware from both Cerny and AMD. And Cerny's patented technologies always end up in his gaming devices.

This is not going to close the gap, obviously, but at least they won't be 2 or 3 generations behind Nvidia. I'm expecting they'll still be one RT generation behind, at about 4070 RT level (by the time Nvidia releases their 5XXX gen).

Nvidia should not be the focus for AMD for the foreseeable future; Intel is in the rear-view mirror. While Intel has good RT, it's highly inefficient for the silicon size, so AMD with this new RT will/should be plenty fine, as I doubt Intel fixes everything gen over gen.
 

shamoomoo

Member
RDNA 3 was pretty much a waste. RT is their Achilles' heel, so hopefully they get that straightened out.
Was it? Most games with a good implementation of ray tracing were Nvidia-sponsored; very few games were released for PC with RT that didn't have backing from Nvidia.
 

winjer

Gold Member
Nvidia should not be the focus for AMD for the foreseeable future; Intel is in the rear-view mirror. While Intel has good RT, it's highly inefficient for the silicon size, so AMD with this new RT will/should be plenty fine, as I doubt Intel fixes everything gen over gen.

Inefficient is an understatement.
The A770 is twice the size of the RX 7600, for the same performance. And the 7600 still has some die space used up by 32 MB of L3.
Like you say, AMD has to improve its RT and ML units. But Intel has to fix the whole GPU.
 

Cryio

Member
I have a 7900 XTX, currently AMD's top card. I also have a 1080p monitor. If need be, I can always downsample from 4K/8K or other higher-than-1080p resolutions.

I can enable RT and max it out willy-nilly everywhere. Need more performance? Enable or mod in FSR2 or XeSS for upscaling. Modded-in FSR2, for better or worse, looks VASTLY better than most native FSR2 implementations.

Need even more "performance"? LukeFz's mod allows you to add FSR3 Frame Generation to almost ANY game that has DLSS2, FSR2 or XeSS (and runs natively in DX12). So I can just about do 4K120 with RT in basically everything, assuming FSR3 Performance upscaling and Frame Generation are enabled.

In my current backlog of games, for Unreal Engine titles on versions 4.22-4.27, I'm also playing with ini variables to force or tweak RT in every UE4 game I can (something along the lines of the snippet below).
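For anyone wanting to try the same thing, here's a rough sketch of the kind of Engine.ini tweaks involved. The exact cvars, and whether they do anything at all, vary by UE4 version and by how the game's shaders were cooked, and the game has to be running in DX12 on RT-capable hardware, so treat this as a starting point rather than a guaranteed recipe:

[SystemSettings]
; master toggle for the ray tracing pipeline (requires DX12)
r.RayTracing=1
; the GPU skin cache must be enabled for RT shaders to work
r.SkinCache.CompileShaders=1
; per-effect toggles; the performance cost varies wildly per game
r.RayTracing.Reflections=1
r.RayTracing.Shadows=1
r.RayTracing.GlobalIllumination=1

These usually go into the game's user Engine.ini under Saved\Config\WindowsNoEditor\.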
 
Last edited:
I will believe it when I see it. AMD has yet to demonstrate that they can compete with Nvidia... Even Intel is not doing that much worse, and it has just started :messenger_tears_of_joy:
It depends on the game, but the 8800 Ultra had double the performance of the 7900 GTX.
[F.E.A.R. benchmark chart, 1600 resolution, 4xAA/16xAF]
Those 8 series cards were glorious.
 

Cryio

Member
Is a 7900 XTX universally at least as fast in ray tracing as a 3090 Ti or 4070 Ti / Super? Not entirely. But it is faster than a 3080 Ti / 4070 Super all the time.

And there are Nvidia and Intel cards slower than those mentioned above.

I'm very curious to see what RDNA4 will bring. Their RT was limited by a uArch design dating back to RDNA1, and they were only able to slightly tweak it here and there.

I expect RDNA5 (maybe with a different name, or even RDNA4) to be a complete uArch redesign that focuses far more heavily on RT.
 
I expect a huge turnaround on r/AMD that RT now matters, after saying for the last 3 gens it didn’t 🙃

I know it's a shitty excuse on their side, but they did have a point: RT isn't even standard across games right now, though I suspect that'll change from this year onwards. Throughout the 20 and 30 Series, implementations were scarce and mediocre across the majority of titles, and a lot of the visual upgrades weren't even worth it considering the performance hit.
 

Cryio

Member
Like ~3.5 years ago, in Oct 2020, Digital Foundry did a video on Ghostrunner 1. It was one of the first Unreal Engine 4 games with RT. It shipped on UE 4.22, which was the first version with RT capabilities. For reference, the latest version of UE4 at the time of the game's release was 4.25 (the game would later be upgraded to UE 4.26 for the XSX/PS5 release, even on PC).

Performance numbers for those RT features, for shadows, AO and reflections, are hilarious in retrospect. Most newer UE4 games with any kind of RT are either way lighter in hardware demands or have better visual implementations.
 

gundalf

Member
Hmm, I wonder if AMD is making their GPU architecture more ML/AI friendly and better ray tracing is a side effect of that.
 

Sethbacca

Member
There was a point when AMD was in the same position against Intel in the CPU arena that they're currently in against Nvidia in the GPU space, and it has me thinking that if anyone can do it, it's AMD. Nvidia could do with being knocked down a few notches at this point.
 

Buggy Loop

Member
There was a point when AMD was in the same position against Intel in the CPU arena that they're currently in against Nvidia in the GPU space, and it has me thinking that if anyone can do it, it's AMD. Nvidia could do with being knocked down a few notches at this point.

Nope

It's completely delusional to think Nvidia vs AMD in GPUs is in any way similar to Intel vs AMD in CPUs.

Intel was sitting on their fucking laurels and didn't want to go to other foundries; they stagnated, and it still took AMD multiple Zen generations to get ahead. And CPUs are relatively much simpler in what they are supposed to do, while GPUs keep changing drastically and aren't even in their final form.

Nvidia doesn't need competition to bring in new features or to be the most present in universities and research fields on the topics of AI & RT, and they themselves publish a ton of papers on those subjects. They're not sitting on their laurels one fucking bit.

That comparison keeps coming up, but it's completely ridiculous.
 

Sethbacca

Member
Nope

It's completely delusional to think Nvidia vs AMD in GPUs is in any way similar to Intel vs AMD in CPUs.

Intel was sitting on their fucking laurels and didn't want to go to other foundries; they stagnated, and it still took AMD multiple Zen generations to get ahead. And CPUs are relatively much simpler in what they are supposed to do, while GPUs keep changing drastically and aren't even in their final form.

Nvidia doesn't need competition to bring in new features or to be the most present in universities and research fields on the topics of AI & RT, and they themselves publish a ton of papers on those subjects. They're not sitting on their laurels one fucking bit.

That comparison keeps coming up, but it's completely ridiculous.
Oh, there's no doubt Nvidia has clearly been an innovator in the space, but there are ways AMD could sneak in the back door, mostly by doing the stuff that Nvidia does but making it open rather than proprietary, and beating them on value rather than features. To do that, though, they have to hit a price/performance point that is actually appealing. I get what you're saying though.
 

FireFly

Member
Nope

It's completely delusional to think Nvidia vs AMD in GPUs is in any way similar to Intel vs AMD in CPUs.

Intel was sitting on their fucking laurels and didn't want to go to other foundries; they stagnated, and it still took AMD multiple Zen generations to get ahead. And CPUs are relatively much simpler in what they are supposed to do, while GPUs keep changing drastically and aren't even in their final form.

Nvidia doesn't need competition to bring in new features or to be the most present in universities and research fields on the topics of AI & RT, and they themselves publish a ton of papers on those subjects. They're not sitting on their laurels one fucking bit.

That comparison keeps coming up, but it's completely ridiculous.
With GPUs you can be competitive on price even at an architectural deficit, because the lineup spans a wide performance range. That's how Intel is able to sell their new GPUs. But CPUs depend heavily on single threaded and gaming performance, which relies on IPC. And it normally takes many generations to improve IPC significantly.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
As well as get back to 100% performance uplift with each new generation, which'll never happen, as well as better competition and prices from AMD.
Get back to 100% performance uplift.

When did we have 100% performance uplifts gen on gen?

Hell, it's only between the GTX 570 and the GTX 1070 that you would feel a 100% improvement.
It's the reason most PC builders recommend upgrading every 2 to 3 generations, so you actually feel like things have moved forward.
Rarely is a gen-on-gen upgrade worth it.

Even the absolutely bonkers 4090 is still about 60 - 70% gen on gen.
 

LordOfChaos

Member
Well, it sure needs to be.

If you compare them based on their raster tiers, even newcomers like Intel and Apple, on their first tries, sit higher on the RT-to-raster performance scale than AMD.

 
Last edited:

SlimySnake

Flashless at the Golden Globes
Nope

It's completely delusional to think Nvidia vs AMD in GPUs is in any way similar to Intel vs AMD in CPUs.

Intel was sitting on their fucking laurels and didn't want to go to other foundries; they stagnated, and it still took AMD multiple Zen generations to get ahead. And CPUs are relatively much simpler in what they are supposed to do, while GPUs keep changing drastically and aren't even in their final form.

Nvidia doesn't need competition to bring in new features or to be the most present in universities and research fields on the topics of AI & RT, and they themselves publish a ton of papers on those subjects. They're not sitting on their laurels one fucking bit.

That comparison keeps coming up, but it's completely ridiculous.
Honestly, since 2018 they haven't improved their ray tracing performance. They said the 4000 series would offer better RT performance for the same processing power, but that turned out to be minimal at best.

Maybe they can't improve it further, but it's been 6 years and their IPC gains have been non-existent. Just brute forcing performance is simply not good enough, because that means the cards get bigger, hotter and more expensive. It's literally 1:1 what happened with Intel. The only difference is that AMD didn't improve RDNA like they did Zen.

You could point to AI, but again, it's mostly on the software side. The 40 series cards will have the same DLSS upgrades as the 20 series cards from 2018. Aside from frame gen, which should be unlocked for all cards, but that's another topic. Now if they can use their AI cores to somehow improve path tracing or ray tracing performance, then great, but using them exclusively for DLSS for 6 years is sitting on their laurels.

Maybe they take the next big leap forward with the 5000 series, but as someone who has owned both the 2080 and 3080, I'm rather underwhelmed by the progress. The 4080 being only 50% more powerful than the 3080 despite being $500 more expensive is not a great look.
 

Buggy Loop

Member
Honestly, since 2018 they haven't improved their ray tracing performance.



Turing 1x ray/triangle intersection rate
Ampere 2x ray/triangle intersection rate
Ada 4x ray/triangle intersection rate

While old RT titles don't really stress this as much as path tracing does, Turing gets crippled by Cyberpunk 2077 Overdrive, where a high-poly, geometrically complex open world is just too freaking hard for 1x ray/triangle.

They said the 4000 series would offer better RT performance for the same processing power, but that turned out to be minimal at best.

A 4070 with 46 RT cores keeps up with a 3080 with 68 RT cores in CP2077 OD. Yes, Ampere runs at lower clocks, but that alone doesn't make up for a nearly 1/3 reduction in RT cores.

Maybe they can't improve it further, but it's been 6 years and their IPC gains have been non-existent. Just brute forcing performance is simply not good enough, because that means the cards get bigger, hotter and more expensive. It's literally 1:1 what happened with Intel. The only difference is that AMD didn't improve RDNA like they did Zen.

3080 628mm^2 → 4080 379mm^2

Again, what parallel universe is this?

1:1 what happened with Intel :messenger_tears_of_joy:

You could point to AI, but again, it's mostly on the software side. The 40 series cards will have the same DLSS upgrades as the 20 series cards from 2018. Aside from frame gen, which should be unlocked for all cards, but that's another topic. Now if they can use their AI cores to somehow improve path tracing or ray tracing performance, then great, but using them exclusively for DLSS for 6 years is sitting on their laurels.

Maybe they take the next big leap forward with the 5000 series, but as someone who has owned both the 2080 and 3080, I'm rather underwhelmed by the progress. The 4080 being only 50% more powerful than the 3080 despite being $500 more expensive is not a great look.

AMD followed the same pricing scheme of just scaling price with the gen-to-gen performance gain too. They're both in a shit situation on pricing, and neither is a great look. Welcome to GPUs in 2024.

CUDA, G-Sync, AI, DLSS, RTX, ray reconstruction, ReSTIR, RTX Remix, AI HDR, etc. They're ahead in every perceivable way technologically and, again, have many things in the pipeline if we look at their papers. All with >80% market share, a near monopoly, and no competition in sight to push them to do these innovations.

There's nothing 1:1 with Intel; what insanity is this.

Can you even name something Intel did since 2002's hyper-threading? I can't. At least not up to Ryzen 1st gen. That left nearly 15 years for AMD to get their shit together.

The only way AMD grabs a portion of the market is if Nvidia gets so high up in AI gold money that gaming GPUs actually become a hindrance and "wasted" silicon for them, since they'd make way more cash using those wafers for AI cards, giving up on desktop GPUs entirely as they ascend to another realm of a >$10 trillion market.
 
It's always nice to see AMD making improvements to their technology, but the time when they were the ones offering value for money is long gone, and, for me personally, I could never switch from NVIDIA because they are just so far ahead at this point that it would absolutely feel like a downgrade. Not just RTX (and path tracing) but also DLSS, frame generation and all the other good stuff that makes me want to stick with NVIDIA.
 

SmokedMeat

Gamer™
Was it? Most games with a good implementation of ray tracing were Nvidia-sponsored; very few games were released for PC with RT that didn't have backing from Nvidia.

Outside of a couple of outliers like Cyberpunk and Alan Wake 2, sponsored games don't get any special treatment or function better than non-sponsored games. Hell, AMD sponsored The Last of Us.
 

SlimySnake

Flashless at the Golden Globes
Turing 1x ray/triangle intersection rate
Ampere 2x ray/triangle intersection rate
Ada 4x ray/triangle intersection rate

While old RT titles don't really stress this as much as path tracing does, Turing gets crippled by Cyberpunk 2077 Overdrive, where a high-poly, geometrically complex open world is just too freaking hard for 1x ray/triangle.

A 4070 with 46 RT cores keeps up with a 3080 with 68 RT cores in CP2077 OD. Yes, Ampere runs at lower clocks, but that alone doesn't make up for a nearly 1/3 reduction in RT cores.
I stand corrected.

However, I have done TFLOPS calculations on the 4070 and 3080, and they come out to roughly 32 and 33 TFLOPS respectively, thanks to the massive clock speed boost the 4000 series cards get; it's 2.7 vs 1.9 GHz in most tests I've seen (rough math below). If there is an RT performance boost on the 4070, I'm not seeing it. It's virtually identical to the 3080, lagging behind by a few fps in some RT games including Cyberpunk, performing virtually in line with its TFLOPS delta.
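For reference, the rough math behind those numbers, assuming the usual 2 FLOPs per FP32 core per clock and the boost clocks mentioned above:

4070: 5888 FP32 cores x 2.7 GHz x 2 ≈ 31.8 TFLOPS
3080: 8704 FP32 cores x 1.9 GHz x 2 ≈ 33.1 TFLOPS

So on paper they're basically the same compute throughput, which is why the raw fps ends up so close.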

In 2020, their $500 3070 was equal to their $1,200 2080 Ti. In 2023, their $600 4070 can barely keep up with their $700 3080. Who cares if Nvidia can do more with fewer RT cores when they simply couldn't provide that value to the customer? Again, I'm not seeing it, and while they were able to reduce shader cores and RT cores, they had to pump up the clocks to an insane degree.

And yes, they went from 8nm to 4nm; of course they will shrink the die size. But the cards themselves are bigger. I literally can't fit a 4080 or 4090 in my case because they needed giant heatsinks for the clocks they are pushing. They are indeed brute forcing this, much like Intel, which has clocked their CPUs way higher than the Zens and is now pushing 6+ GHz.

CUDA, G-Sync, AI, DLSS, RTX, ray reconstruction, ReSTIR, RTX Remix, AI HDR, etc.
Aside from DLSS and RTX, which I already gave them credit for back in 2018, that's a pretty lame lineup of innovations. I don't even know who asked for AI HDR when every game already ships with it. And ray reconstruction is available in one game, and it's fixing issues caused by their RTX implementation in the first place. That's not an innovation; in software engineering, we call that a defect and a bug fix. G-Sync has been around since forever; I was talking about 2018 onwards.
 

SoloCamo

Member
I will believe it when I see it. AMD has yet to demonstrate that they can compete with Nvidia... Even Intel is not doing that much worse, and it has just started :messenger_tears_of_joy:

Those 8 series cards were glorious.

Yup, those were revolutionary at the time. I remember having an X1950 Pro (AGP model, no less) and being blown away by the performance jump.
 
Yup, those were revolutionary at the time. I remember having an X1950 Pro (AGP model, no less) and being blown away by the performance jump.
Yeah. Around that time I remember I got a PC with a 9800 GT and a Core 2 Duo E8400, just launched. Also 2 GB RAM. Was an amazing combo.

Went from a 6600 GT, a Pentium 4 and 512 MB RAM. Though before the 6600 GT I had a 5200.
 
Last edited:

Bry0

Member
Yeah. Around that time I remember I got a PC with a 9800 GT and a Core 2 Duo E8400, just launched. Also 2 GB RAM. Was an amazing combo.

Went from a 6600 GT, a Pentium 4 and 512 MB RAM. Though before the 6600 GT I had a 5200.
Great time as far as performance jumps. Was such an exciting time. I remember seeing people doing 8800 SLI rigs and seething with jealousy. I would've killed to have a PC like yours at the time. I was stuck on a Pentium 4 and a 9600 Pro (the ATi one) until like 2009. Was broke at the time.
 

SoloCamo

Member
Great time as far as performance jumps. Was such an exciting time. I remember seeing people doing 8800 SLI rigs and seething with jealousy. I would've killed to have a PC like yours at the time. I was stuck on a Pentium 4 and a 9600 Pro (the ATi one) until like 2009. Was broke at the time.

Yup, went from an ATi Rage 128 to a Voodoo 3 (can't remember exact models) to a GeForce 4 Ti 4200 to an ATi X800 Pro, then to an ATi X1950 Pro (AGP as well)... I held onto that X1950 Pro paired with an Athlon 64 X2 4200+ (Socket 939) until I moved said CPU into a Socket 939 PC with PCIe that I bought for literally $5.00 at a church thrift store in 2011. Bought a GTX 550 Ti to play Skyrim on said system and regretfully upgraded the same year to an FX-8120... Been on the PC upgrade train since.

The PC bought at the church was a nice HP desktop... the reason it was so cheap was that as soon as Windows XP loaded up there were endless porn virus popups. I'm sure a loving husband didn't want wifey to see that.
 
Last edited:

Buggy Loop

Member
You are right about all of those except G-Sync.
Almost all current monitors and TVs support FreeSync. It's just that some companies prefer to rebrand AMD's FreeSync as G-Sync Compatible.
What was Nvidia's G-Sync is now basically dead.

Still, they were the first out the door; that was my point about innovation (even though it's a closed ecosystem). Yes, now everything is standard and G-Sync modules have nearly all disappeared except on their top line.

What I mean by all this is that Nvidia isn't twiddling their thumbs like Intel did. I seriously don't believe in a Ryzen story here on the GPU side.
 