
AMD RDNA 4 GPUs To Incorporate Brand New Ray Tracing Engine, Vastly Different Than RDNA 3

Great time as far as performance jumps go. Was such an exciting time. I remember seeing people doing 8800 SLI rigs and seething with jealousy. I would've killed to have a PC like yours at the time. I was stuck on a Pentium 4 and a 9600 Pro (the ATi one) until like 2009. Was broke at the time.
Yeah. It was the time of Crysis benchmarks and 8800 Ultra.
 

Kataploom

Gold Member
I expect a huge turnaround on r/AMD, with RT suddenly mattering after they spent the last 3 gens saying it didn't 🙃

Better late to the party than never

But I have a feeling Nvidia will turn RT on its head again. It's already outdated to think you'll brute-force it with the original RT pipelines; they're balls deep into AI, and their Neural Radiance Cache solution for RT could be the beginning of a different RT block philosophy. Why even brute force your way with Monte Carlo or ReSTIR? AI knows what real light should look like. Anyway, just a guess, but NRC is probably their next step according to their papers.
Uh, actually the problem everyone has with RT, including myself, is that it is too expensive for the results you get compared to rasterized lighting. Most of the great showcase spots are limited to specific sections, so why lose performance across 80% of the game for that? If RT weren't that expensive, or even free, nobody would be saying we don't care about RT; it's just not worth the performance impact.

AMD having new RT hardware is great, nobody can deny that, whether it makes RT actually worth it for most players is another matter.
 
Last edited:

winjer

Gold Member
Still the first out the door, that was my point about innovation (even though it's a closed ecosystem). Yes, now everything is standard and Gsync modules have nearly all disappeared except for their top line.

What I mean in all this is that Nvidia isn't twiddling their thumbs like Intel did. I seriously don't believe in a Ryzen story here on the GPU side.

Yes, nvidia was first, but in this case, AMD did win. One of the few wins, in recent times.
Freesync is the standard on PCs and consoles, on monitors and TVs, even if some manufacturers rebranded it as Gsync Compatible.

You are correct, nvidia is not Intel. Jensen Huang is not an incompetent fool like Krzanich.
So it's unlikely that AMD will overtake nvidia, be it in the GPU space or in AI.
 

shamoomoo

Banned
Outside of a couple of outliers like Cyberpunk and Alan Wake 2, sponsored games don't get any special treatment or function better than non-sponsored games. Hell, AMD sponsored The Last of Us.
The point is there are few games with a relatively good implementation of RT independent of Nvidia on PC. Yeah, AMD could sponsor games to make sure RT runs decently on their GPUs.
 

Loxus

Member
With that being said, so far these are the SE counts of RDNA4.
Navi44 - 16WGP (2SE, 32CU, 64ROPs)
Navi48 - 32WGP (4SE, 64CU, 128ROPs)

So the WGP per SE is still the same as RDNA3.
8WGP or 16CU per SE.
PS5 Pro having 4SE is more probable than 2SE.
2SE with that many CUs seems inefficient as well.

For higher-end RDNA4, going by Navi44/48, it appears to still be 8WGP or 16CU per SE.
And it seems each SED houses 2 Shader Engines.


Navi41
1SED = 2SE/16WGP(32CU)
6SED = 12SE/96WGP(192CU)
1AID = 2×64bit or (4×32bit GDDR PHY)
2AID = 4×64bit GDDR PHY = 256bit

Navi40
1SED = 2SE/16WGP(32CU)
9SED = 18SE/144WGP(288CU)
1AID = 2×64bit or (4×32bit GDDR PHY)
3AID = 6×64bit GDDR PHY = 384bit
In one of my previous posts I detailed some RDNA4 info from various sources.

I suggested a SED consists of 2SE/16WGP(32CU).

Some more RDNA4 info dropped, which suggests I may be correct.


So Navi48 may be 2 SED.
Navi44 = 1 SED = 2SE/16WGP(32CU)
Navi48 = 2 SED = 4SE/32WGP(64CU)

But as always with rumors and leaks, take them with a grain of salt.
 
Last edited:

winjer

Gold Member
In one of my previous posts I detailed some RDNA4 info from various sources.

I suggested a SED consists of 2SE/16WGP(32CU).

Some more RDNA4 info dropped, which suggests I may be correct.


So Navi48 may be 2 SED.
Navi44 = 1 SED = 2SE/16WGP(32CU)
Navi48 = 2 SED = 4SE/32WGP(64CU)

But as always with rumors and leaks, take them with a grain of salt.


This could be the reason why leaks were pointing to such small GPU dies.
AMD might be able to tile a bunch of them to make bigger SKUs.
 

Trogdor1123

Member
Is there an equivalent shift in GPUs to the difference between x86 and Arm? I'm assuming they're completely different but don't actually know.
 
Bumping this old thread as I wanted to share this. The dude from HardwareUnboxed was on MLiD's podcast today and mentioned that he's spoken to a few people who have been "testing" RDNA 4 and that they were disappointed by its performance. He said that RDNA 4 hasn't panned out the way AMD wanted, similar to RDNA 3, and that the "chiplet" strategy hasn't worked out well for them.

 
Bumping this old thread as I wanted to share this. The dude from HardwareUnboxed was on MLiD's podcast today and mentioned that he's spoken to a few people who have been "testing" RDNA 4 and that they were disappointed by its performance. He said that RDNA 4 hasn't panned out the way AMD wanted, similar to RDNA 3, and that the "chiplet" strategy hasn't worked out well for them.



RDNA 4 is monolithic, so the chiplet strategy has nothing to do with it, except for the bigger chips that they won't release anyway.
 

Bernardougf

Member
After the price of the Pro and Sony's shitty GaaS push, I'm waiting for the 5 series... I might jump off PlayStation for the first time since the PS2.
 
RDNA 4 is monolithic, so the chiplet strategy has nothing to do with it, except for the bigger chips that they won't release anyway.
The chiplet strategy was in reference to scalability, as AMD were struggling to scale the chiplet design up to higher-end SKUs. Also, according to MLiD, this is one of the reasons why AMD cancelled the RDNA 4 high end (pretty sure he leaked PCB design shots for that).

The fact that AMD are not even using chiplets on RDNA 4 is proof of this. It's probably why Nvidia haven't gone with such a design for their GPUs either.
 
Bumping this old thread as I wanted to share this. The dude from HardwareUnboxed was on MLiD's podcast today and mentioned that he's spoken to a few people who have been "testing" RDNA 4 and that they were disappointed by its performance. He said that RDNA 4 hasn't panned out the way AMD wanted, similar to RDNA 3, and that the "chiplet" strategy hasn't worked out well for them.


What chiplets are those?
 

ap_puff

Banned
What chiplets are those?
Multi-chip modules: basically they split up various parts of the GPU or CPU and fabricate them separately, then stitch them together using TSMC's advanced packaging technology to make them work as if they were a single part.

The reason they do this is that silicon wafer production isn't perfect and even a single defect can ruin a die. So, to get the maximum number of working chips, they design them so parts can be disabled if defective, or in this case split them into chiplets so the failed ones don't cost as much.
 
Last edited:
Multi-chip modules: basically they split up various parts of the GPU or CPU and fabricate them separately, then stitch them together using TSMC's advanced packaging technology to make them work as if they were a single part.

The reason they do this is that silicon wafer production isn't perfect and even a single defect can ruin a die. So, to get the maximum number of working chips, they design them so parts can be disabled if defective, or in this case split them into chiplets so the failed ones don't cost as much.
I know, but how does that relate to RDNA4? What we know of their product stack is that they're not aiming high, which means it's unlikely they're using chiplets, as that would fuck with margins too much.
 
The chiplet strategy was in reference to scalability, as AMD were struggling to scale the chiplet design up to higher-end SKUs. Also, according to MLiD, this is one of the reasons why AMD cancelled the RDNA 4 high end (pretty sure he leaked PCB design shots for that).

The fact that AMD are not even using chiplets on RDNA 4 is proof of this. It's probably why Nvidia haven't gone with such a design for their GPUs either.
Nvidia originally planned to use chiplets for Ada Lovelace but found it so unworkable that they stayed with monolithic. Blackwell will also be monolithic for consumer GPUs.

GPUs are incredibly timing- and latency-dependent for realtime rendering of video games, and chiplets introduce very complicated management of those things. This is fine when you're just running AI applications where no one cares if it takes 1 ms or 1 second, but you have roughly 16.7 ms between frames at 60 fps, and if you can't consistently bring frame data to present in that window then you're not gonna make it as a GPU.

I don't think GPUs will go to chiplets for a long time yet, because neither Nvidia nor AMD are going to be putting expensive NVLink or whatever the AMD version is on consumer GPUs.
 

Buggy Loop

Member
I know, but how does that relate to RDNA4? What we know of their product stack is that they're not aiming high, which means it's unlikely they're using chiplets, as that would fuck with margins too much.

Chiplets still make sense for what they are doing: memory cache, controllers, etc.

I think they're just laying down the foundation and ironing out the problems. RDNA 4 probably ain't gonna be "it".
 

ap_puff

Banned
I know, but how does that relate to RDNA4? What we know of their product stack is that they're not aiming high, which means it's unlikely they're using chiplets, as that would fuck with margins too much.
I think the implication was that there was originally a design with several of these chiplets, that it was going to be a -90 class world beater, and that they could scale it up and down however much to suit the market. I think MLiD was at some point predicting 192 CUs (cbf to go back and find the video cause most of it is bullshit). But the chiplet strategy was a real thing, and one of the AMD engineers gave an interview about it around the time RDNA3 launched.

*Edit* found it
 
Last edited:

PSlayer

Member
As well as get back to a 100% performance uplift with each new generation, which'll never happen, as well as better competition and prices from AMD.
Honestly I don't think it is even possible with current silicon technology.

Imo a 50% uplift is the maximum I would expect from a really good gen.
 

SolidQ

Member
What we know of their product stack is that they're not aiming high, which means it's unlikely they're using chiplets
N43/N42/N41/N40 were chiplet cards. That's moving to RDNA5; they need time to work on it, because it's not the data center MI300X, it's more complex.
N44 is still monolithic; N43 was reworked into the monolithic N48.
N48 = 32 WGP
RX 7900XTX = 48WGP
N40 was 144 WGP
 
Last edited:

PSlayer

Member
Bumping this old thread as I wanted to share this. The dude from HardwareUnboxed was on MLiD's podcast today and mentioned that he's spoken to a few people who have been "testing" RDNA 4 and that they were disappointed by its performance. He said that RDNA 4 hasn't panned out the way AMD wanted, similar to RDNA 3, and that the "chiplet" strategy hasn't worked out well for them.


Yeah, Moore's Law Is Dead himself said almost a year ago that RDNA4 was a bad gen and that AMD was cutting the high-end cards.
 

Bry0

Member
N43/N42/N41/N40 were chiplet cards. That's moving to RDNA5; they need time to work on it, because it's not the data center MI300X, it's more complex.
N44 is still monolithic; N43 was reworked into the monolithic N48.
N48 = 32 WGP
RX 7900XTX = 48WGP
N40 was 144 WGP
This is also my understanding. AMD is giving us what they could salvage, in a sense.
 

Panajev2001a

GAF's Pleasant Genius
Bumping this old thread as I wanted to share this. The dude from HardwareUnboxed was on MLiD's podcast today and mentioned that he's spoken to a few people who have been "testing" RDNA 4 and that they were disappointed by its performance. He said that RDNA 4 hasn't panned out the way AMD wanted, similar to RDNA 3, and that the "chiplet" strategy hasn't worked out well for them.


Might be that the ray tracing engine is either put to better use in the PS5 Pro or is a bit beyond what base RDNA4 offers… mmmh…
 

flying_sq

Member
It will need to be 20%-30% better than a 4080 in RR. I'm sure Nvidia will see to that with the 5000 series improvements.
 

ap_puff

Banned
It does because that's the hardware Sony is using for the Pro.
But that's not the context of the clip. They're talking about RDNA4 not having high-end desktop chips because allegedly the MCM strategy is not scaling well to higher CU/chiplet counts. That has nothing to do with RT performance or how well the PS5 Pro is using AMD hardware.
 

StereoVsn

Gold Member
I really wish Intel hadn't botched their GPU efforts. Now with AMD also dropping the ball, Leather Jacket Man can charge whatever the f he wants for the Nvidia 5000 series, from low end to high end.

This kind of sucks.
 

kevboard

Member
I have some doubts, but fingers crossed.

all they need is a 5060 competitor that has drivers that don't shit the bed tbh.

their RT is great, their ML reconstruction is great... it really is the drivers that they need to get to a state where you don't have to be worried about every single game you might play possibly not working.
 

kevboard

Member
Is it even announced to be a thing? Afaik the only 2nd gen Intel GPUs we've even heard about are the integrated graphics in laptops

no card has been announced, but the same is technically true for Nvidia's 5000 series too. all we got there is leaks.

them using them in APUs first might basically be them gathering data to get drivers ready for their discrete GPU launch.

it's really hard to predict what Intel will do given that it's only their second generation.
 

Dorfdad

Gold Member
RDNA 3 was pretty much a waste. RT is their Achilles Heel, so hopefully they get that straightened out.
Who cares? They already stated they weren't going to do high-end cards anymore, so a mid-tier card with half-assed ray tracing isn't worth discussing. Now if they want to play with Nvidia again, that's a different story, but they publicly stated they're out of that game, so I'm out of the AMD GPU scene.
 

ap_puff

Banned
no card has been announced, but the same is technically true for Nvidia's 5000 series too. all we got there is leaks.

them using them in APUs first might basically be them gathering data to get drivers ready for their discrete GPU launch.

it's really hard to predict what Intel will do given that it's only their second generation.
Normally I'd agree, but with Intel really being short on cash and how badly Arc sold, I wouldn't bet on them committing to a discrete GPU launch. That's a lot of money down the drain if no one buys it again, and Nvidia can easily choke them out.
 
A solid step up in performance + a reasonable price would make me switch.

I would really like to ditch windows and go to Linux full time and AMD hardware would make that easier.
I'm pretty much in the same boat. I've been riding with a 6600 XT for 3 years and would like a solid performance-per-watt improvement for a reasonable price.
 