[ETA Prime] Ryzen Z2 Extreme vs Z1 Extreme Testing

Topher



Summarized the testing below. The blank cells are where he skipped the test.

Based on this, the only reason to get a Z2 Extreme is if you really want to max out performance on battery, but even at 17 watts you are not seeing massive gains here. At 25 watts, the difference is very small: between 1 and 5 fps.

[Benchmark comparison images]
 
So the main benefit seems to be that the Z2 can reach the Z1's 25W performance at 17W. But how high can you push both in terms of clock speeds? Does the Z2 have a higher upper limit?
 
I heard the upgrade wasn't really good, but this seems bad. A 1-5 fps increase when you can get a Z1 for half the price of a Z2? Yeah, that won't cut it. I see Z1s go on sale for around $500 at times.
 
If this SoC were packing RDNA4 and supported FSR4, it would have been a game changer for me. Right now I think the Claw 8 AI+ is the better pick with a more forward-thinking GPU, despite Intel still playing catch-up with their drivers.
 
10 to 20 percent uplift on average.

Not bad if you are jumping in, but I wouldn't upgrade from a ROG Ally X to the new Xbox variant. That OG X is still a beast.
 
Too little, too expensive. The bigger upgrade is some handheld devices now switching to 8'' displays. The new MSI, for instance, has the Z2E and an 8'' display, which IMO makes it the best handheld on the market right now. Previously it only came with Intel.
With that said, if you already own a Z1E/7840U device, this is hardly worth even considering an upgrade. Everyone is waiting for something more substantial. The ROG Ally X is still a top-tier handheld, and probably the best bang for the buck right now.
 


Summarized the testing below. The blank cells are where he skipped the test.

Based on this, the only reason to get a Z2 Extreme is if you really want to max out performance on battery, but even at 17 watts you are not seeing massive gains here. At 25 watts, the difference is very small: between 1 and 5 fps.

[Benchmark comparison images]

Can't wait till we're done with RDNA3. Jesus, that shit launched in 2022, yet it's 2025 and it's still the best mobile GPU AMD has. Such a shame there are no RDNA4 APUs and we're going to have to wait till the second gen of Zen 6 for a successor, so 2027. Bro, five years of RDNA3, is this hell?

Hopefully Intel Panther Lake gets plenty of handheld adoption, because unlike AMD, Intel launched Alchemist (Xe¹) in 2022, Battlemage (Xe²) in 2024, and Celestial (Xe³) in 2025 with Panther Lake. Xe³ should be much better than RDNA3, and Intel has had AI upscaling via XeSS since Xe¹, while AMD has made FSR4, their first AI upscaler, exclusive to RDNA4. What a shit show.
 
I heard the upgrade wasn't really good, but this seems bad. A 1-5 fps increase when you can get a Z1 for half the price of a Z2? Yeah, that won't cut it. I see Z1s go on sale for around $500 at times.
It's still RDNA3. There is no RDNA4 for mobile, so the gains are small, but this uses Zen 5, which is a better, more modern CPU. We won't have a new GPU gen till 2027 on mobile/handhelds, which is when we'll get UDNA (RDNA5); RDNA5 is going to be a huge leap over RDNA3.

In the meantime we have Intel coming in with Xe³ in Panther Lake at the end of the year, which should be better than RDNA3, and then Nvidia down the line with their NV1 chip. AMD themselves are focusing on bringing Zen 6 in 2026 and UDNA after that.
 
Too little, too expensive. The bigger upgrade is some handheld devices now switching to 8'' displays. The new MSI, for instance, has the Z2E and an 8'' display, which IMO makes it the best handheld on the market right now. Previously it only came with Intel.
With that said, if you already own a Z1E/7840U device, this is hardly worth even considering an upgrade. Everyone is waiting for something more substantial. The ROG Ally X is still a top-tier handheld, and probably the best bang for the buck right now.
We won't get a new GPU from AMD in mobile anytime soon. Even the Zen 6 APUs in 2026 are bringing, guess what? RDNA3 GPUs yet again, same ol' shit. UDNA will come after that in 2027.
 
25W is meh, but 17W gives you basically Z1E 25W performance. Devices that rock the big 80Wh batteries will get some impressive battery life at 17W: 3+ hours in heavy AAA games, for example, and much more in older titles.
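As a rough sanity check on that battery life claim, here's the naive math (the 6W figure for display/memory/conversion losses is a made-up assumption, not a measured number):

```python
def runtime_hours(battery_wh: float, soc_watts: float, overhead_watts: float) -> float:
    """Naive estimate: battery capacity divided by total system draw."""
    return battery_wh / (soc_watts + overhead_watts)

# 80 Wh pack, SoC at a 17 W TDP, plus an assumed ~6 W for screen/RAM/losses
print(round(runtime_hours(80, 17, 6), 1))  # ~3.5 hours
```

At 25W the same math gives roughly 2.6 hours, which is why the 17W mode matters so much on these devices.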
 
Can't wait till we're done with RDNA3. Jesus, that shit launched in 2022, yet it's 2025 and it's still the best mobile GPU AMD has. Such a shame there are no RDNA4 APUs and we're going to have to wait till the second gen of Zen 6 for a successor, so 2027. Bro, five years of RDNA3, is this hell?

Hopefully Intel Panther Lake gets plenty of handheld adoption, because unlike AMD, Intel launched Alchemist (Xe¹) in 2022, Battlemage (Xe²) in 2024, and Celestial (Xe³) in 2025 with Panther Lake. Xe³ should be much better than RDNA3, and Intel has had AI upscaling via XeSS since Xe¹, while AMD has made FSR4, their first AI upscaler, exclusive to RDNA4. What a shit show.
Watching MSI launch an AMD SoC version is even more ridiculous, especially when the Claw 8 AI+ got so much praise.
 
Unfortunately, all these AMD APUs are horribly unbalanced. All that computing power is being throttled by paltry bandwidth. It was already bandwidth-limited on the Z1, and the Z2 introduced almost no bandwidth efficiencies and no significant improvement despite the heavy increase in compute. To compound the disaster, it's still RDNA3, which means no ML reconstruction to help target low resolutions with decent image quality. Basically a shitshow. Now you know why Valve isn't releasing any new Steam Decks: the geniuses at AMD have decided there will be no RDNA4 in APUs until their UDNA APUs launch, which is close to 2027. So basically, despite the heavy cost and massive compute, it will still end up inferior to a Switch 2 in image quality with relatively comparable graphics. Leave it to AMD to make a shitshow out of a massive opportunity.
 
At 25W, where it actually matters for any mid-range to heavy game (even small games, to be honest, as I try to push for the highest fps and graphics possible, so I will always use 25W), the difference is a joke.

Not worth the upgrade at all. And the fact that both still use FSR 3.x and not FSR 4 is an automatic no for me (especially when these weak-ass APUs need all the help they can get).
 
I heard the upgrade wasn't really good, but this seems bad. A 1-5 fps increase when you can get a Z1 for half the price of a Z2? Yeah, that won't cut it. I see Z1s go on sale for around $500 at times.
Yeah, you can regularly find Z1E Allys on sale at $500, or around $350-400 on eBay.

There is no world in which mostly single-digit increases are worth upgrading to devices which will surely be $800 and above.
 
Unfortunately, all these AMD APUs are horribly unbalanced. All that computing power is being throttled by paltry bandwidth. It was already bandwidth-limited on the Z1, and the Z2 introduced almost no bandwidth efficiencies and no significant improvement despite the heavy increase in compute. To compound the disaster, it's still RDNA3, which means no ML reconstruction to help target low resolutions with decent image quality. Basically a shitshow. Now you know why Valve isn't releasing any new Steam Decks: the geniuses at AMD have decided there will be no RDNA4 in APUs until their UDNA APUs launch, which is close to 2027. So basically, despite the heavy cost and massive compute, it will still end up inferior to a Switch 2 in image quality with relatively comparable graphics. Leave it to AMD to make a shitshow out of a massive opportunity.
There's nothing affordable AMD can do about the bandwidth disparity for now. LPDDR6 is not out yet, Strix Halo is a power hog (still coming to some handhelds, but lol), increasing the bus size is expensive and power hungry, and adding a cache comes with more costs. These techniques are coming to AMD APUs in the future, but not before Zen 6 arrives. This is why running games at low resolutions is such a boon: it's way less bandwidth intensive and more CPU limited.
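To put a number on the low-resolution point: framebuffer traffic scales roughly with pixel count, so dropping the render resolution cuts bandwidth demand fast (a crude model that ignores texture sampling and compression):

```python
def rel_pixel_traffic(w: int, h: int, base_w: int = 1920, base_h: int = 1080) -> float:
    """Pixel count relative to a 1080p baseline, as a proxy for framebuffer traffic."""
    return (w * h) / (base_w * base_h)

print(round(rel_pixel_traffic(1280, 720), 2))  # 0.44 -> 720p touches ~44% of the pixels
```

Which is part of why these APUs often hold up far better at 720p than the raw compute gap would suggest.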
 
Unfortunately, all these AMD APUs are horribly unbalanced. All that computing power is being throttled by paltry bandwidth. It was already bandwidth-limited on the Z1, and the Z2 introduced almost no bandwidth efficiencies and no significant improvement despite the heavy increase in compute. To compound the disaster, it's still RDNA3, which means no ML reconstruction to help target low resolutions with decent image quality. Basically a shitshow. Now you know why Valve isn't releasing any new Steam Decks: the geniuses at AMD have decided there will be no RDNA4 in APUs until their UDNA APUs launch, which is close to 2027. So basically, despite the heavy cost and massive compute, it will still end up inferior to a Switch 2 in image quality with relatively comparable graphics. Leave it to AMD to make a shitshow out of a massive opportunity.

Said it before, and you're absolutely right.

All handhelds are currently bottlenecked by LPDDR5X. Throwing more compute power at the problem is useless. You either wait for a huge boost in mobile memory bandwidth, or you use a ton of cache, which is expensive and doesn't scale well to lower nodes 🤷‍♂️
 
Said it before, and you're absolutely right.

All handhelds are currently bottlenecked by LPDDR5X. Throwing more compute power at the problem is useless. You either wait for a huge boost in mobile memory bandwidth, or you use a ton of cache, which is expensive and doesn't scale well to lower nodes 🤷‍♂️
Pretty much. The least they could do is make an architecture that brings in bandwidth efficiencies designed around low-power devices. I mean, it's embarrassing that even the Nvidia architecture from two generations back is much more efficient in bandwidth use than the current flagship AMD handheld tech.
 
Yeah, as expected, based on the full HX 370 or whatever the AMD chip is called.

So if you are coming in new, it might be worth it to get a Z2E device, but if you already have one, laying out $900-1000 for marginal upgrades is crazy. Heck, even if you have a Steam Deck, it's not worth it, IMO.

So time to wait two more years, basically, which is kind of sad. Unless Intel pulls off a miracle on the hardware side and also improves their upscaling at the same time.
 
The sad thing is that the HX 370, which this is based on, has nearly 100 TOPS, so it likely could run FSR4.
But this one can't, so we'll probably not see it on either.


Does the Z2E have an NPU at all? Maybe when MS opens up AutoSR to all of Win 11, it will be useful to upscale games from 720p to 1080p on an 8'' screen. I hope so; there's no reason to lock AutoSR to the ARM Surface laptops, which seem like a flop.
 
Man, I want to upgrade my Ally X but there's nothing to upgrade to, wtf...
Hopefully driver updates will get a bit more performance and efficiency out of the Z2E and widen the gap even more at 17W.

If the Z2E at 17W can outperform the Z1E at 25W, that's at least something.

It seems like 25W and above just runs hard into memory bandwidth limitations.
 
Does the Z2E have an NPU at all?
No, it's a 'gaming' APU, so they removed the AI block, same as with the Z1E.

Throwing more compute power at the problem is useless. You either wait for a huge boost in mobile memory bandwidth, or you use a ton of cache, which is expensive and doesn't scale well to lower nodes 🤷‍♂️
I don't think cache really solves this either; a lot of 'desktop optimized' graphics is simply built for high bandwidth, and you can't just cache your way around that.
E.g. TAA and upscalers rely on a lot of memory ops (and thus they absolutely tank performance on mobile APUs, to the point where running an upscaler can actually be slower than native with MSAA in many cases); you'd need to re-engineer the algorithm to localise the operations before a cache would even help.
This is also partly why the benchmarking of these devices is rather spotty atm: all reviewers just assume that FSR/XeSS/TAA on is how things must run for optimal performance, but these chipsets don't behave the same way the stationary versions do under those conditions.

It'd be nice if AMD actually bothered to port FSR4 over and optimise it for the NPUs; then we might get something that isn't bandwidth-starved by definition, and maybe even looks better than the XeSS alternative.
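A quick back-of-envelope illustrates why those memory-heavy temporal passes hurt on a shared LPDDR5X bus (all per-pixel byte counts below are illustrative assumptions, not profiled figures):

```python
# Rough per-frame memory traffic of a single TAA-style temporal pass at 1080p.
width, height, fps = 1920, 1080, 60
pixels = width * height

# Assumed per-pixel traffic: read + write history color (RGBA16F, 8 B each way),
# read current color (8 B), motion vectors (8 B), depth (4 B)
bytes_per_pixel = 8 + 8 + 8 + 8 + 4

traffic_gb_s = pixels * bytes_per_pixel * fps / 1e9
print(f"{traffic_gb_s:.1f} GB/s")  # ~4.5 GB/s for this one pass alone
```

A few GB/s per pass is nothing on a desktop card with dedicated VRAM, but on an APU it competes with the CPU and the rest of the frame for a shared bus in the ~100 GB/s class.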
 
No, it's a 'gaming' APU, so they removed the AI block, same as with the Z1E.

I don't think cache really solves this either; a lot of 'desktop optimized' graphics is simply built for high bandwidth, and you can't just cache your way around that.
E.g. TAA and upscalers rely on a lot of memory ops (and thus they absolutely tank performance on mobile APUs, to the point where running an upscaler can actually be slower than native with MSAA in many cases); you'd need to re-engineer the algorithm to localise the operations before a cache would even help.
This is also partly why the benchmarking of these devices is rather spotty atm: all reviewers just assume that FSR/XeSS/TAA on is how things must run for optimal performance, but these chipsets don't behave the same way the stationary versions do under those conditions.

It'd be nice if AMD actually bothered to port FSR4 over and optimise it for the NPUs; then we might get something that isn't bandwidth-starved by definition, and maybe even looks better than the XeSS alternative.

I thought KeplerL2 said that this is the approach Sony would take for their handheld: cache.

RDNA 2 also got around needing high-bandwidth memory, like Nvidia, by using cache.

There's probably a limit to what it can achieve, but if there's juice to squeeze out of LPDDR5X for more compute, I can't think of another option.
 
I thought KeplerL2 said that this is the approach Sony would take for their handheld: cache.

RDNA 2 also got around needing high-bandwidth memory, like Nvidia, by using cache.

There's probably a limit to what it can achieve, but if there's juice to squeeze out of LPDDR5X for more compute, I can't think of another option.
Yeah, lots of cache and much better memory compression.
 
Do they perhaps have enough NPU for AI texture compression? I guess it's one inevitable thing coming down the line. AMD has a lot of research on it, alongside what Nvidia announced.
Also, a big question is whether the Sony handheld will be on the custom UDNA platform, which would bring FSR4. If they launch in 2027/2028, it might happen. And hopefully LPDDR6 will be available by then.
 
I thought KeplerL2 said that this is the approach Sony would take for their handheld: cache.
I mean, for the purposes of running PS5 software I could see the approach working, since there's a concrete target to test against and validate what works for most software and how it compares relative to the target.

For PS6 it's a lot less useful, since the software doesn't even exist yet, and you may very well end up in another Series S situation where the hardware dramatically underperforms the original performance targets. It's even worse for developers, since there are no performance guarantees, only moving targets, and caches are a lot harder to optimise for than memory pools of different bandwidths.

There's probably a limit to what it can achieve, but if there's juice to squeeze out of LPDDR5X for more compute, I can't think of another option.
I mean, the obvious one would be eDRAM, where you get substantially more bang for the silicon, but the downside is there's no free lunch for software utilisation. If this were a standalone handheld (not 'PS6, but also PS5/PS4 handheld') I would argue eDRAM would be the better option than cache any time, but since (according to present rumours) it's supposed to run software not even written for it, yeah, it's a bit of a tricky situation.

Do they perhaps have enough NPU for AI texture compression?
The current state of the art in AI texture compression is decidedly not general purpose (it applies to pretty specific use cases/situations, i.e. it will need human handholding to implement over standard methods on a case-by-case basis), and I'm not convinced it'll ever be a general-purpose differentiator.
Also, since the 'big' console would presumably have the same, that would only be there to maintain parity, not bridge the bandwidth gap?
 
Pretty much. The least they could do is make an architecture that brings in bandwidth efficiencies designed around low-power devices. I mean, it's embarrassing that even the Nvidia architecture from two generations back is much more efficient in bandwidth use than the current flagship AMD handheld tech.
It's not clear that Ampere is much more efficient in its memory bandwidth usage. The Switch 2 in docked mode and the ROG Ally offer seemingly similar performance for the same bandwidth.
 
He pushed the Z2E in docked mode: modified cooling and 60W. Unless there is a performance improvement when the ROG Ally Xbox launches, I might jump on a Z1E ROG Ally X when the prices drop. Price to performance is looking like it might be the way to go (depending on how much they discount the Z1E models).

**I feel all of ETA Prime's videos should come with a trigger warning for those opposed to frame gen, since he does not shy away from it.

 
Any breakdown on what RDNA3.5 means in terms of features? I know it has no AI.
Basically no FSR4. That's all you need to know.

Which means the image-quality upscaling is shit, especially from this low a res.

Wait for a device with FSR4/DLSS, or grab something like the Z1E used or at a really good price. The performance gain at 25W is not really that big; the only big advantage is the low-power mode, around 15W. But then again, these things are so weak you want to push them to the max power they can deliver in the hope of stable, good frames.

The Z2E is a disappointment.

Also, the ROG Ally / ROG Ally X / Xbox ROG Ally series all share the same screen. That screen size is shit for the device size, and the bezels are very distracting; it's very annoying.
 
Basically no FSR4. That's all you need to know.

Which means the image-quality upscaling is shit, especially from this low a res.

Wait for a device with FSR4/DLSS, or grab something like the Z1E used or at a really good price. The performance gain at 25W is not really that big; the only big advantage is the low-power mode, around 15W. But then again, these things are so weak you want to push them to the max power they can deliver in the hope of stable, good frames.

The Z2E is a disappointment.

Also, the ROG Ally / ROG Ally X / Xbox ROG Ally series all share the same screen. That screen size is shit for the device size, and the bezels are very distracting; it's very annoying.

I wonder, though, if the AI variants will get FSR4.

The Xbox Ally X uses the Ryzen AI Z2 Extreme variant instead of the normal one, which has a 50 TOPS NPU.
So maybe AMD is planning on adding FSR4 support for those chips, even though they are RDNA 3.5 like the non-AI versions.
 
Basically no FSR4. That's all you need to know.

Which means the image-quality upscaling is shit, especially from this low a res.

Wait for a device with FSR4/DLSS, or grab something like the Z1E used or at a really good price. The performance gain at 25W is not really that big; the only big advantage is the low-power mode, around 15W. But then again, these things are so weak you want to push them to the max power they can deliver in the hope of stable, good frames.

The Z2E is a disappointment.

Also, the ROG Ally / ROG Ally X / Xbox ROG Ally series all share the same screen. That screen size is shit for the device size, and the bezels are very distracting; it's very annoying.


Hope MS opens up their AutoSR to everything, especially their own ROG Ally X, unless it's also not compatible or requires tensor cores. As far as I know, RDNA3 still has some basic AI functions. Running games at 720p upscaled to 1080p on a small portable screen is still good and can also save performance and power.
 
Hope MS opens up their AutoSR to everything, especially their own ROG Ally X, unless it's also not compatible or requires tensor cores. As far as I know, RDNA3 still has some basic AI functions. Running games at 720p upscaled to 1080p on a small portable screen is still good and can also save performance and power.

The issue with AutoSR is input lag, however, so you'll need to run at least 60fps to mitigate possible latency issues.

AutoSR essentially adds at least one frame of lag, which at 60fps is just ~17ms, but at 30fps would be ~33ms, on top of the game generally having higher latency the lower the framerate.
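The frame-of-lag arithmetic is just the frame time at a given rate; a one-liner makes it concrete:

```python
def added_lag_ms(fps: float, frames_of_lag: int = 1) -> float:
    """Extra latency from buffering N frames at a given framerate."""
    return frames_of_lag * 1000.0 / fps

print(round(added_lag_ms(60), 1))  # 16.7 ms at 60fps
print(round(added_lag_ms(30), 1))  # 33.3 ms at 30fps
```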
 
The issue with AutoSR is input lag, however, so you'll need to run at least 60fps to mitigate possible latency issues.

AutoSR essentially adds at least one frame of lag, which at 60fps is just ~17ms, but at 30fps would be ~33ms, on top of the game generally having higher latency the lower the framerate.

Can you force Anti-Lag 2? If not, I would imagine MS will work with AMD to get that implemented, especially with the recent partnership announcement.
 