
DF: AMD FSR 4 Upscaling Tested vs DLSS 3/4 - A Big Leap Forward - RDNA 4 Delivers!

Mibu no ookami

Demoted Member® Pro™
Yup, seems like no one watched Cerny's presentation.

Look who you're talking about... There's little reason to even discuss PSSR here, but here we are.

You again have people who are very angry that the PS5 Pro exists and that it's going to be the best device to play games on probably for the next 4 years (pound for pound).

That they don't realize that PSSR is ALSO going to evolve over the next 4 years (particularly with new hardware) is quite a statement. PSSR2 for lack of a better name will probably get all of the advantages of RDNA4/5.
 
Last edited:

Aaravos

Neo Member
Or maybe it's because you can only add so much to the APU. Maybe the PS5 Pro is a test bed and the PS6 will improve on PSSR? Or Sony might drop PSSR, use FSR 4 and improve on that.
PlayStation is my preferred platform, so I want it to do well. Looking at some previous comments, FSR 4 wasn't an option because of compatibility. Can PSSR be improved during this gen? I was watching Cerny's DF interview and he wasn't clear.
 

SweetTooth

Gold Member
Look who you're talking about... There's little reason to even discuss PSSR here, but here we are.

You again have people who are very angry that the PS5 Pro exists and that it's going to be the best device to play games on probably for the next 4 years (pound for pound).

That they don't realize that PSSR is ALSO going to evolve over the next 4 years (particularly with new hardware) is quite a statement. PSSR2 for lack of a better name will probably get all of the advantages of RDNA4/5.

Funny thing is PSSR was out competing with DLSS 3 in its first iteration, but somehow it's bad to folks here lol

Also, Cerny said that Sony asked for targeted changes to the RDNA 2 arch before even releasing the PS5 in 2020, just to implement their custom ML for the Pro!

That's a company at the forefront of technology with one hell of an R&D team! You can't say the same thing about Microsoft or Nintendo (lol)... it's always Sony doing that!

They targeted a 5.5GB/s SSD and created a custom controller for it before PCIe 4 was even a thing!! When they set a target, they try to achieve it no matter the hurdles.
 
Last edited:

Kataploom

Gold Member
Good job AMD, you are still behind but you are catching up, it's the most we can ask of you. Now catch up in ray tracing and ray reconstruction so we can finally have a reason not to choose Nvidia (or at least give them some competition).
What's funny is that if Nvidia hadn't come out with the new Transformer model, FSR 4 would be superior.
 

Rosoboy19

Member
Nvidia has a massive mindshare advantage so it really makes no difference what AMD is doing. They will have to sell their cards for $200 to get anything meaningful going on in terms of market share.

It's very similar to the console market…
So AMD should just throw in the towel and let Nvidia dominate even more?

I’m glad they didn’t think this way back in 2016 when Intel was cleaning their clock with CPU market share.

I’ve had my doubts that AMD could make a comeback too, but this launch gives me hope.
 
Last edited:

SolidQ

Member
That's also an interesting moment:
8bf9faae8b77386cbbab5d2b36b066bb.jpg
 

Zathalus

Member
DX needs an update, it's too old already. Once it's updated, it will be a different story.
DirectX is not the reason Black Myth and Alan Wake 2 perform so much better on Nvidia 4000/5000 series GPUs. It's due to Opacity Micromaps, introduced with Ada, which give the performance boost with all the foliage tracing. SER and DMM help as well. Maybe the games could do with an update to take better advantage of RDNA4, but DirectX isn't the problem here.
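Rough illustration of why OMMs help with foliage (purely conceptual Python on my part, not actual DXR/Vulkan API code): the micromap pre-classifies each micro-triangle of an alpha-tested leaf as opaque, transparent or unknown, so traversal only pays for the expensive alpha-test/any-hit path on the ambiguous bits.

```python
# Conceptual sketch (not vendor API code): each triangle's alpha texture is
# pre-baked into micro-triangle states, so ray traversal only invokes the
# costly alpha-test callback for "unknown" regions instead of every leaf hit.
from enum import Enum

class MicroState(Enum):
    OPAQUE = 0        # always hits, no shader call needed
    TRANSPARENT = 1   # always misses, no shader call needed
    UNKNOWN = 2       # must run the expensive alpha-test shader

def bake_micromap(alpha_samples, cutoff=0.5):
    """Pre-classify micro-triangles from their alpha texture samples."""
    states = []
    for samples in alpha_samples:          # one list of alpha values per micro-tri
        if all(a >= cutoff for a in samples):
            states.append(MicroState.OPAQUE)
        elif all(a < cutoff for a in samples):
            states.append(MicroState.TRANSPARENT)
        else:
            states.append(MicroState.UNKNOWN)
    return states

def trace_hit(states, micro_index, run_alpha_test):
    """Only the UNKNOWN case pays for the any-hit style shader invocation."""
    s = states[micro_index]
    if s is MicroState.OPAQUE:
        return True
    if s is MicroState.TRANSPARENT:
        return False
    return run_alpha_test(micro_index)     # rare, expensive path

# Example: a leaf texture where most micro-tris are clearly opaque or clearly cut out
micromap = bake_micromap([[0.9, 1.0], [0.0, 0.1], [0.2, 0.8]])
print(trace_hit(micromap, 0, lambda i: True))   # True, no alpha test run
print(trace_hit(micromap, 2, lambda i: False))  # falls back to the alpha test
```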
 
Part of me doesn't think we'll get dedicated machine learning hardware for the NextBox and PS6, mainly due to die space limitations and cost. While inferior, PlayStation's custom solution, which uses larger vector registers and custom instructions to execute machine learning tasks like PSSR, was really "outside of the box" thinking. It definitely trades some performance for cost, but cost is the big factor.

Armchair prediction: PS6 with 60/64 CUs on an RDNA 3.5/4 hybrid, a GPU frequency of 2500MHz targeting around 1000+ INT8 TOPS without dedicated AI/Tensor cores, and 30-40 TF.
 
Sponsored games. We need to wait for AMD-sponsored games. AMD already announced a few path tracing games, which we don't know much about yet.
AMD only (finally) moved to a portable DLL model with FSR 3.1, so all older FSR games can never be upgraded to FSR 4.

But starting with FSR 3.1, games work like DLSS, where there's a simple DLL you can replace to upgrade the FSR version.

AMD paying for games to exclude DLSS was widely derided by the PC gaming community, especially back when FSR was fucking dogshit compared to DLSS, so I don't think they'll try to do that anymore. Especially since they don't want to start a sponsored-games arms race with the much wealthier Nvidia.
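To make the DLL point concrete, here's a rough sketch of what that swap amounts to in practice. The filename amd_fidelityfx_dx12.dll is an assumption on my part; it varies by title and API, so check what your game actually ships first.

```python
# Minimal sketch of the "portable DLL" idea: back up the game's shipped FSR
# upscaler DLL and drop a newer one in its place. The default filename below
# (amd_fidelityfx_dx12.dll) is an assumption -- it varies by title and API,
# so verify what your game actually ships before trying anything like this.
import shutil
from pathlib import Path

def swap_upscaler_dll(game_dir: str, new_dll: str,
                      dll_name: str = "amd_fidelityfx_dx12.dll") -> None:
    game_path = Path(game_dir)
    target = game_path / dll_name
    if not target.exists():
        raise FileNotFoundError(f"{dll_name} not found in {game_path}")
    backup = target.with_name(target.name + ".bak")
    if not backup.exists():                 # keep the original for easy rollback
        shutil.copy2(target, backup)
    shutil.copy2(new_dll, target)           # overwrite with the newer FSR DLL
    print(f"Replaced {target} (backup at {backup})")

# Usage (paths are placeholders):
# swap_upscaler_dll(r"C:\Games\SomeGame", r"C:\Downloads\amd_fidelityfx_dx12.dll")
```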
 
Last edited:

winjer

Member
I'm not sure if that is the case. The 9070 XT number includes sparsity (confirmed by AMD) while the Pro number is unclear. No mention of sparsity was made in the Pro technical presentation. So like for like it may be 390 TOPS vs 300 TOPS.

Dense and sparse math only differ in the way they handle zeros in a matrix.
Dense still processes the zeros as they are in the matrix. Sparsity skips them, a bit like a compression algorithm.
So sparsity is faster on matrices with many zeros, while it gains little on matrices with few zeros.
Also, sparsity uses less memory.
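A toy illustration of that in plain Python (my own sketch; real hardware uses structured sparsity rather than checking every element, but the principle is the same): a dense dot product pays for every element, a sparse one skips the zero weights, so the saving depends entirely on how many zeros the pruned model actually has.

```python
# Dense vs. sparse on a tiny vector: same result, fewer multiply-adds when
# the weights contain lots of zeros.
def dense_dot(weights, activations):
    ops = 0
    total = 0.0
    for w, a in zip(weights, activations):
        total += w * a                     # zeros still cost a multiply-add
        ops += 1
    return total, ops

def sparse_dot(weights, activations):
    ops = 0
    total = 0.0
    for w, a in zip(weights, activations):
        if w != 0.0:                       # skip zero weights entirely
            total += w * a
            ops += 1
    return total, ops

weights     = [0.5, 0.0, 0.0, 2.0, 0.0, 0.0, 1.0, 0.0]   # mostly zeros (pruned)
activations = [1.0, 3.0, 2.0, 0.5, 4.0, 1.0, 2.0, 5.0]

print(dense_dot(weights, activations))   # (3.5, 8)  -> 8 multiply-adds
print(sparse_dot(weights, activations))  # (3.5, 3)  -> same result, 3 multiply-adds
```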
 

Wolzard

Member
Part of me doesn't think we'll get dedicated machine learning hardware for the NextBox and PS6, mainly due to die space limitations and cost. While inferior, PlayStation's custom solution, which uses larger vector registers and custom instructions to execute machine learning tasks like PSSR, was really "outside of the box" thinking. It definitely trades some performance for cost, but cost is the big factor.

Armchair prediction: PS6 with 60/64 CUs on an RDNA 3.5/4 hybrid, a GPU frequency of 2500MHz targeting around 1000+ INT8 TOPS without dedicated AI/Tensor cores, and 30-40 TF.

They will probably have an NPU for this, just look at AMD's recent APUs.

d3815a0a3927443711a7d94e58c8ee43.jpg
 

Fafalada

Fafracer forever
Why did Sony not use RDNA 4's machine learning units?
We don't really know that they didn't.
It's 'custom' to their hardware - but no one is exactly releasing the spec of the added instruction set or anything else, so it's all guesswork.

The Int8 throughput of RDNA4 matches up with the PS5 Pro's stated spec though - so unless you're on the DF bandwagon that claims Sony inflated their spec the Nvidia way (contradicted by statements from Cerny - but hey, who knows), they are at least taking a similar approach to getting to those TOPS.
 

Pagusas

Elden Member
If you were after a 5070 card, would you pay 50 dollars more for an extra 4GB of VRAM, 16GB in total? If you would, then you need to compare the 5070 with the 9070 XT.

I don't buy midrange cards, only flagships, but if I was forced to build a budget machine, I'd go for a balanced build with a 9070 XT or try to find a well-priced 4070 Super. The 5070 Tis are just in a bad spot price-wise. But I'm the idiot type that will buy a $3k GPU and throw a $500 water block on it and be happy, so I'm not good for budget shopping.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Yeah funny how several people kept correcting me when I was skeptical of Pro compatibility with FSR 4.

"Acshually, it is just an inference engine, it can run on any ML hardware."

It's so hard for people to wait for actual info and dependencies. Question is, will PSSR still be relevant next gen? They might as well work with AMD and have just one solution in the market and use the resources to build other things, like ray reconstruction etc.

I think that's what Project Amethyst is really all about. If Sony wants to release a PS5 handheld it may have PSSR 2.0 on it. And by 2028 when Sony releases the PS6, PSSR might turn into Amethyst. Which may just end up being FSR 5.0 anyway.
 

Nex240

Neo Member
Look who you're talking about... There's little reason to even discuss PSSR here, but here we are.

You again have people who are very angry that the PS5 Pro exists and that it's going to be the best device to play games on probably for the next 4 years (pound for pound).

That they don't realize that PSSR is ALSO going to evolve over the next 4 years (particularly with new hardware) is quite a statement. PSSR2 for lack of a better name will probably get all of the advantages of RDNA4/5.
Lol pound for pound? Is this boxing?
 
Last edited:

Nex240

Neo Member
Funny thing is PSSR was out competing with DLSS 3 in its first iteration, but somehow it's bad to folks here lol

Also, Cerny said that Sony asked for targeted changes to the RDNA 2 arch before even releasing the PS5 in 2020, just to implement their custom ML for the Pro!

That's a company at the forefront of technology with one hell of an R&D team! You can't say the same thing about Microsoft or Nintendo (lol)... it's always Sony doing that!

They targeted a 5.5GB/s SSD and created a custom controller for it before PCIe 4 was even a thing!! When they set a target, they try to achieve it no matter the hurdles.
In what, Ratchet and Clank, where at 1800p it came close to DLSS? Are you memory-holing how at sub-1080p PSSR falls flat in games like Silent Hill 2 and Alan Wake 2?
PSSR is about as good as XeSS; it's a step behind FSR 4 and DLSS 4.
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
PlayStation is my preferred platform, so I want it to do well. Looking at some previous comments, FSR 4 wasn't an option because of compatibility. Can PSSR be improved during this gen? I was watching Cerny's DF interview and he wasn't clear.

PSSR has already been improved since the Pro released. Sony has updated PSSR a handful of times. Each better than the last.
 

viveks86

Member
Dense and sparse math only differ in the way they handle zeros in a matrix.
Dense still processes the zeros as they are in the matrix. Sparsity skips them, a bit like a compression algorithm.
So sparsity is faster on matrices with many zeros, while it gains little on matrices with few zeros.
Also, sparsity uses less memory.
I think those discussing this are well aware of it. The point being made is how TOPS is measured and communicated. If sparsity is included in the calculation, the effective TOPS number is doubled, which is the figure currently being circulated by Nvidia (and now AMD) for their cards. The assumption DF is making is that Cerny included sparsity in his 300 TOPS number, when there was no such confirmation.
 
Last edited:

Nex240

Neo Member
You are a very ignorant person if you think the phrase "Pound for pound" only applies to boxing.
If the PS5 Pro was $499, you might have a point. But at $700, or $800 if you need a disc drive, it's not the best value. Spending up to 800 bucks on a new machine and not getting new goodies like path tracing is pretty silly.
 

Crayon

Member
At my viewing distance I might be able to push this down to performance. Rn I can only put FSR 3 on quality and even that looks a bit wooly. (1440p)

Really good showing here from FSR 4. Better than I expected.

These scalers will all converge at some point, though. Maybe even inside 5 years they'll all be perfect at these base resolutions. Then it'll be a race to see how low the base res can go.
 
I'm not sure if that is the case. The 9070 XT number includes sparsity (confirmed by AMD) while the Pro number is unclear. No mention of sparsity was made in the Pro technical presentation. So like for like it may be 390 TOPS vs 300 TOPS.

It's not just the TOPS metric; it could also come down to the specific ML hardware implementation each company is using. It's possible that PSSR relies on less dedicated ML silicon than FSR 4 does and thus requires less power (the trade-off being slightly worse image quality). We know Sony are extremely fussy about how the console SoCs are designed (the famous example being the cut-down FPU units on the PS5's CPU).

This would also line up with several of Mark Cerny's comments about the Pro's ML implementation being heavily custom; if it were the same as RDNA 4, he would likely have mentioned it, as was the case with ray tracing.

Again, just speculation on my side, as I do wonder why Sony really went out of their way to design their own ML-based hardware and software solution, rather than just plucking it from AMD's feature set, which could have been easier and maybe even saved them money. The only explanation I can think of is designing something which is more efficient when considering power, budget and cost constraints.
 

Buggy Loop

Member
so good to see some competition, AMD just matched Nvidia's best CNN model with their own CNN model, let's hope they can match DLSS4 if they switch to a transformer

They already use some transformer elements in FSR 4, by the way. According to the info so far, they use a hybrid of CNN + Transformer, for the "best of both worlds" according to them. I think keeping CNN in the solution makes "best of both worlds" a bit of an overstatement, but it's a start.

I'm shocked they even have any transformer part in it. I think they can easily expand on it and improve further down the line. Wouldn't be surprised to see them close the gaps with DLSS 4 by end of 2025 or sometime 2026.

Along with their work on neural radiance cache path tracing, neural texture compression, and being part of the HLSL work on vector support for neural shaders, AMD is not in the dark ages anymore. Whoever at AMD changed their position on these technologies needs a huge promotion.

These scalers will all converge at some point, though. Maybe even inside 5 years they'll all be perfect at these base resolutions. Then it'll be a race to see how low the base res can go.

I wonder if at some point Microsoft will just say fuck it and make this the upscaler everyone uses in DirectX, for example. They'll all be very similar soon enough.
 
Last edited:

Barakov

Member
Seems to be a lot more performant than the Transformer model, at least in Rift Apart.

vsL6PMS.png


Honestly, the performance cost of DLSS4 is unacceptable in this case. It's 19% slower than DLSS3. Sure, you get much better IQ, but you're also knocked down an entire GPU class.
Good on AMD on finally closing the gap.
 

omegasc

Member
Yup, seems like no one watched Cerny's presentation.
That doesn't mean the ML part was ready, as they stated it's a custom version. Maybe just to cut costs, maybe AMD had trouble finishing that part. But the fact is the PS5 Pro's ML is custom, which can mean it's a mix between RDNA 4 and earlier, or an entirely different thing.
 

Bojji

Member
Look who you're talking about... There's little reason to even discuss PSSR here, but here we are.

You again have people who are very angry that the PS5 Pro exists and that it's going to be the best device to play games on probably for the next 4 years (pound for pound).

That they don't realize that PSSR is ALSO going to evolve over the next 4 years (particularly with new hardware) is quite a statement. PSSR2 for lack of a better name will probably get all of the advantages of RDNA4/5.

Funny thing is PSSR was out competing with DLSS 3 in its first iteration, but somehow it's bad to folks here lol

Also, Cerny said that Sony asked for targeted changes to the RDNA 2 arch before even releasing the PS5 in 2020, just to implement their custom ML for the Pro!

That's a company at the forefront of technology with one hell of an R&D team! You can't say the same thing about Microsoft or Nintendo (lol)... it's always Sony doing that!

They targeted a 5.5GB/s SSD and created a custom controller for it before PCIe 4 was even a thing!! When they set a target, they try to achieve it no matter the hurdles.

It was, which makes Sony's achievement all the more impressive!

PSSR is the worst ML upscaler. AMD delivered much better results on day one.

Sony fucked up, they should have waited for full RDNA4.
 

Bojji

Member
And have an even more expensive console than they already have? The console market has no appetite for expensive consoles.

Why more expensive? They already have parts of different GPU architectures (including RDNA4!) in their Frankenstein GPU. I doubt the price would change, but the console could have launched later. And nothing of value would have been lost; almost nothing big released between the Pro launch and now.
 
Last edited:

SKYF@ll

Member
I'm not sure if that is the case. The 9070 XT number includes sparsity (confirmed by AMD) while the Pro number is unclear. No mention of sparsity was made in the Pro technical presentation. So like for like it may be 390 TOPS vs 300 TOPS.
*Cerny: "We customized the AI accelerator so that 3x3 multiplication and accumulation at 8 bits can be performed in one cycle."
PS5 Pro (FP16): 64 WMMA × 2 elements × 2 (FLOP) × 2 AI accelerators × 60 CU × 2.17 GHz ≈ 67 TFLOPS
PS5 Pro (FP8): 64 WMMA × 9 elements* × 2 (OP) × 2 AI accelerators × 60 CU × 2.17 GHz ≈ 300 TOPS
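Quick sanity check of that arithmetic in Python (the factor breakdown is just my reading of the figures above, not an official spec sheet):

```python
# Reproduce the two throughput estimates quoted above.
def throughput_tera(wmma_lanes, elements, ops, accelerators, cus, clock_ghz):
    # ops per cycle per CU, scaled by CU count and clock, expressed in tera-ops
    return wmma_lanes * elements * ops * accelerators * cus * clock_ghz / 1000

fp16_tflops = throughput_tera(64, 2, 2, 2, 60, 2.17)   # ~66.7 TFLOPS
int8_tops   = throughput_tera(64, 9, 2, 2, 60, 2.17)   # ~300 TOPS (9 = 3x3 MAC per cycle)

print(f"FP16: {fp16_tflops:.1f} TFLOPS, FP8/INT8: {int8_tops:.1f} TOPS")
```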

Sparsity is a process that efficiently eliminates the calculation of 0, separate from the above specifications, right?
It's difficult and I'm confused.:messenger_grinning_sweat:
 

Mibu no ookami

Demoted Member® Pro™
PSSR is the worst ML upscaler. AMD delivered much better results on day one.

Sony fucked up, they should have waited for full RDNA4.

You're fundamentally unserious.

  • First you had to qualify it as ML, because your argument doesn't stand up to reality otherwise
    • There are no other consoles on the market with ML upscaling
  • PSSR out of the gate is already better than non-ML upscaling
    • And no, PSSR is far ahead of FSR1, do you know how we know that? Because PSSR is better than FSR3
  • You don't understand why they didn't go with RDNA4
With the PS5 Pro launching in 2024, it's likely going to be the only traditional console on the market for the next 4 years with machine learning upscaling.

I've qualified it as "traditional console" because the Switch 2 is going to be a hybrid, and while it might have DLSS, it's still going to be significantly lower powered than the base PS5 despite the ML because of its nature as a hybrid. Microsoft might drop a hybrid of their own, but it's unlikely to be sold at a loss or at cost like traditional consoles. This all goes to the nature of performance for the price.

By launching in 2024, Sony guaranteed itself to have the most compelling hardware for the price for the next 4 years, which is quite a long time in video games (generations used to be 5 years). This means the PS5 Pro is going to be the best buy on the market for years to come. When the PS5 family drops in price by 100 dollars after GTA6 launches and sales slow again, the PS5 Pro is going to be 600 dollars while the Switch 2 is probably going to be 400 dollars... The difference in these two devices is going to be tremendous through the end of the generation.

Time wasn't the only factor in not going with RDNA4. If you weren't just here for trolling and you were here to learn and absorb real information, whether it lines up with your ideology or not, you'd have heard Cerny say exactly why they didn't use RDNA4. It would have meant developers would have had to create entirely new libraries for games they had already made or for new games. It would have made developing for the PS5 Pro significantly harder and thus probably reduced the number of games that were Pro Enhanced. They took elements of RDNA3 and 4 and added them to their chipset, but largely kept the chipset the same, so the instructions are the same.

Again, it seems like many people's primary argument against the PS5 Pro (and PSSR) is that it isn't a PS6 and frankly that's just super lazy thinking.

And if the only advantage of going with RDNA4 was the ability to use FSR4, it wouldn't have been worth it. Sony wants to have their own ML separate from AMD. It'll be very interesting to see if that pays dividends in future console competition when games are running on FSR4 on one system and PSSR on another. Not to mention that the PS6 is going to have a massive list of PS5 Pro games that it'll be able to enhance (4 years' worth).
 

Lysandros

Member
Part of me doesn't think we'll get dedicated machine learning hardware for the NextBox and PS6, mainly due to die space limitations and cost. While inferior, PlayStation's custom solution, which uses larger vector registers and custom instructions to execute machine learning tasks like PSSR, was really "outside of the box" thinking. It definitely trades some performance for cost, but cost is the big factor.

Armchair prediction: PS6 with 60/64 CUs on an RDNA 3.5/4 hybrid, a GPU frequency of 2500MHz targeting around 1000+ INT8 TOPS without dedicated AI/Tensor cores, and 30-40 TF.
I think we have yet to see FSR 4 running with 300 TOPS while producing superior results at a similar processing cost per frame before declaring PSSR an 'inferior' custom solution.

Why would Sony stick to RDNA 3/4 for the PS6 in 2028?... This isn't Nintendo. It will surely incorporate hardware from a future/yet-to-be-released AMD arch along with original customizations, like in their past systems. The PS5 Pro is already RDNA 4 when it comes to RT hardware.
 
Last edited:
I think we have yet to see FSR 4 running with 300 TOPS while producing superior results at a similar processing cost per frame before declaring PSSR an 'inferior' custom solution.

Why would Sony stick to RDNA 3/4 for the PS6 in 2028?... This isn't Nintendo. It will surely incorporate hardware from a future/yet-to-be-released AMD arch along with original customizations, like in their past systems. The PS5 Pro is already RDNA 4 when it comes to RT hardware.

I totally forgot about UDNA. You think 2028? I was thinking more along the lines of '26/'27 with a healthy amount of cross-generational support for the PS5.
 

yamaci17

Member
Holy shit did people actually use FSR 3.1? That looks like unrelenting dog shit. It's honestly so bad that it's not even worth testing.
Casual people won't care.
My friend played Black Myth: Wukong with FSR set to performance... at 1080p.
I asked her, "how does the game look, any good?" (without any upscaling context, by the way, she doesn't know anything about that stuff). She said, "hmm, it's fine idk".

Another friend of mine doesn't even bother using "quality" mode as he thinks it will reduce his FPS. Most people have no clue about how any of this works.
 
Last edited: