Digital Foundry: Nintendo Switch 2 DLSS Image Quality Analysis: "Tiny" DLSS/Full-Fat DLSS Confirmed

The point was that there are several "black boxes" in the form of presets and other available parameters in a game (like RR). On PC it doesn't really matter: do you really care if Preset K is 4% more expensive than Preset E and your performance drops to 58fps or whatever, or that an artifact you didn't like was removed with some setting or by swapping a DLL? You as a user will adjust settings, and this wouldn't be mentioned anywhere; performance and framerates are hardly ever mentioned in comparisons.

Now, though, when you have fixed hardware that isn't capable of offering that better "black box" without affecting performance, DLSS gets split into "tiny DLSS" and "proper DLSS" instead of just being DLSS on fixed hardware, with devs trying to hit performance targets using fixed settings. It's just DLSS, but with the constraints of fixed hardware and fixed performance targets. The idea that "it isn't used anywhere else" applies to PSSR too, but that's not what's relevant; what's relevant is the fixed hardware, fixed targets, and developer choices for a DLSS/PSSR mode that may not be to the user's preference. You're seeing that now with Switch 2.

On PSSR, devs are making those parameter choices too. For AC:S, Sony helped Ubisoft fine-tune model parameters for PSSR, so it's not a complete black box:


These upscalers also add to frametime, and the higher your fps target and output resolution, the more costly they become (in terms of the proportion of frametime they take). 30fps targets help.
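Quick back-of-the-envelope illustration of that, using a hypothetical 1.5ms upscaler pass (a made-up figure, not a measured DLSS/PSSR number): the same fixed cost eats a bigger slice of the frame budget as the fps target rises.

```python
# Hypothetical fixed per-frame upscaling cost in milliseconds.
UPSCALER_MS = 1.5

# The same pass takes a larger share of the budget at higher fps targets.
for fps_target in (30, 60, 120):
    frame_budget_ms = 1000.0 / fps_target
    share = UPSCALER_MS / frame_budget_ms
    print(f"{fps_target}fps: budget {frame_budget_ms:.1f}ms, "
          f"upscaler takes {share:.1%} of the frame")
# 30fps -> ~4.5%, 60fps -> ~9%, 120fps -> ~18%
```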


It's still a black box; the things that can be tuned are the upscaler inputs.
And the weights of the model, meaning the point on the curve at which neurons activate.
These can make a significant difference in the quality of an upscaler, but it's not the same thing as retraining a model.
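To make "the point on the curve at which neurons activate" concrete, here's a toy single sigmoid neuron with made-up weight and bias values; changing them shifts where the output crosses the halfway point:

```python
import math

def neuron(x, w, b):
    # Sigmoid neuron: output crosses 0.5 where w*x + b = 0, i.e. x = -b/w,
    # so the weight and bias set the point on the curve where it "activates".
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Same input, different weights/biases: the activation point moves.
for w, b in [(1.0, 0.5), (4.0, -2.0)]:
    print(f"w={w}, b={b}: crosses 0.5 at x={-b / w:.2f}, "
          f"y(0)={neuron(0.0, w, b):.2f}")
```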
 
I don't follow you. A machine is a machine. OK, let's forget the word "commands" for a second. To me, a whole set of routines and behaviours by which a machine takes a data input and delivers a new data output is an algorithm. Why can't AI be considered an algorithm? It doesn't make sense to me.
So a PC is an algorithm then, even though we don't know what programs each component runs?
An algorithm is a defined sequence of steps for how inputs are converted to outputs - so a PC program is an algorithm, but the PC itself usually isn't considered one.
Same with AI - it's a black box that converts inputs to outputs, but the exact way is unknown (though there is a hidden "algorithm" inside, as that's the mathematical basis of what NNs do: if there is a hidden function/causal relationship, an NN can approximate it; basically it writes its own algorithm as it learns).
You don't call assembling a PC "writing an algorithm", and you shouldn't call it that for AI either.
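A toy sketch of that approximation point, with an entirely made-up network and hidden function (not anything DLSS/PSSR actually uses): the net only ever sees input/output pairs, and the "algorithm" it ends up with is nothing but its weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# The hidden function/causal relationship; the network never sees it directly.
def hidden_fn(x):
    return x ** 2

# Training data: input/output pairs only.
X = rng.uniform(-1, 1, size=(256, 1))
Y = hidden_fn(X)

# Tiny 1-16-1 MLP with tanh hidden activations.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    H = np.tanh(X @ W1 + b1)            # forward pass
    pred = H @ W2 + b2
    g_pred = 2 * (pred - Y) / len(X)    # backprop of mean squared error
    g_W2 = H.T @ g_pred; g_b2 = g_pred.sum(0)
    g_H = (g_pred @ W2.T) * (1 - H ** 2)
    g_W1 = X.T @ g_H; g_b1 = g_H.sum(0)
    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= lr * g                     # the learned "algorithm" is these weights

test = np.array([[0.5]])
print(np.tanh(test @ W1 + b1) @ W2 + b2)  # should land near hidden_fn(0.5) = 0.25
```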
 
So a PC is an algorithm then, even though we don't know what programs each component runs?
An algorithm is a defined sequence of steps for how inputs are converted to outputs - so a PC program is an algorithm, but the PC itself usually isn't considered one.
Same with AI - it's a black box that converts inputs to outputs, but the exact way is unknown (though there is a hidden "algorithm" inside, as that's the mathematical basis of what NNs do: if there is a hidden function/causal relationship, an NN can approximate it; basically it writes its own algorithm as it learns).
You don't call assembling a PC "writing an algorithm", and you shouldn't call it that for AI either.
I just said that. At the base there are always algorithms. And no, I never said a PC machine is an algorithm; humans design the algorithms that make a machine work.
 
But this is what I asked you: "which artifacts did people dislike in AC Shadows with DLSS 3?"
And I would never consider motion blurring an artifact. If you do, fair.
You're free to provide examples.
Yes, I do, and keep in mind that with DLSS 3 on hardware that isn't a Switch, a lot of people enable frame generation too, so we are not just talking about upscaling the image but also about the imperfections that come with that feature. There can be legitimate reasons for switching from DLSS 3 to 4.
If you think I'm trying to lambast the upscaling implementation of this game, I'm not; it's a very competent game, and PSSR/DLSS 3 produce very good results in it. I mentioned presets simply to show that there are user preferences and the ability to change things versus the consoles' fixed modes, so if a particular artifact bothers somebody, they can change presets or in-game settings. I'm going purely by the complaints I read when making the change from Preset E to Preset K, but I might test it myself in the evening to see the differences.
It's still a black box; the things that can be tuned are the upscaler inputs.
And the weights of the model, meaning the point on the curve at which neurons activate.
These can make a significant difference in the quality of an upscaler, but it's not the same thing as retraining a model.
You're thinking in terms of the developers not seeing the exact inner workings/code? I agree it's a black box, if that's what you're referring to. I'm not sure how important that is, though: there are parameters the developer can adjust, so it is an implementation of it at the end of the day, and they have multiple boxes to choose from too, pre and post Switch 2.
 
You're thinking in terms of the developers not seeing the exact inner workings/code? I agree it's a black box, if that's what you're referring to. I'm not sure how important that is, though: there are parameters the developer can adjust, so it is an implementation of it at the end of the day, and they have multiple boxes to choose from too, pre and post Switch 2.

A game dev can't change the parameters in a model. Though a company like Sony can choose the precision at which they are processed, in a way distilling the model to be less performance-intensive.
But this can only be done up to a certain point, before the quality degrades too much. AMD did have some success with FSR 4.0.2 using INT8. It doesn't look as good as the FP8 version, but most gamers won't notice the differences, especially on a TV at a distance.
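For anyone curious what reducing precision looks like in practice, here's a minimal made-up sketch of symmetric per-tensor INT8 weight quantization (the general technique, not AMD's actual FSR pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
w_fp32 = rng.normal(0, 0.2, size=1000).astype(np.float32)  # stand-in weights

# Symmetric per-tensor quantization: one scale maps [-max, max] onto [-127, 127].
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to measure what was lost to rounding.
w_back = w_int8.astype(np.float32) * scale
print("max abs rounding error:", np.abs(w_fp32 - w_back).max())
print("bytes: fp32", w_fp32.nbytes, "-> int8", w_int8.nbytes)  # 4x smaller
```

The rounding error is where the small quality loss comes from; the payoff is smaller, faster weight traffic.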

Even the weights: I'm not sure how much a company like Sony allows other devs to access them. Sony is probably the one that tweaks them and then sends the result to game devs.
I don't know how many PSSR models Sony makes available to external devs. I doubt they have many.
And the Switch 2 seems to have two: the CNN model, and a very distilled one that looks as bad as FSR 2.
 
I just said that. At the base there are always algorithms. And no, I never said a PC machine is an algorithm; humans design the algorithms that make a machine work.
If a PC is not an algorithm, then an AI model is not an algorithm either.
An AI model is just a framework, the same as a PC chassis. There are even hardware chips that do AI.
To do work, you need a program for both the PC and the AI model - they will not work as-is. Only the methods of "writing" these programs differ between the PC and the AI.

The actual programs of AI models are the weights in the model. The problem with migrating/incorporating them into another AI model is that they are essentially the lowest-level bytecode, and the bytecode of one model might not be compatible with another, similar to how bytecode for ARM will not work on x86.
And there is no way to convert an AI model's program into a high-level representation (and then back into the bytecode of a new model).
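A toy illustration of that incompatibility, with two entirely made-up models: one model's weights simply don't fit another architecture's shapes, the same way foreign bytecode doesn't fit a different ISA.

```python
import numpy as np

# Two tiny models doing "the same job" with different architectures
# (hidden size 8 vs 12) - their weight "programs" are not interchangeable.
weights_a = {"W1": np.zeros((1, 8)),  "W2": np.zeros((8, 1))}
model_b_shapes = {"W1": (1, 12), "W2": (12, 1)}

def load_weights(shapes, weights):
    # A loader can only accept weights whose shapes match its architecture.
    for name, shape in shapes.items():
        if weights[name].shape != shape:
            raise ValueError(f"{name}: got {weights[name].shape}, need {shape}")

try:
    load_weights(model_b_shapes, weights_a)  # model A's "program" into model B
except ValueError as e:
    print("incompatible:", e)
```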
 
If a PC is not an algorithm, then an AI model is not an algorithm either.
An AI model is just a framework, the same as a PC chassis. There are even hardware chips that do AI.
To do work, you need a program for both the PC and the AI model - they will not work as-is. Only the methods of "writing" these programs differ between the PC and the AI.

The actual programs of AI models are the weights in the model. The problem with migrating/incorporating them into another AI model is that they are essentially the lowest-level bytecode, and the bytecode of one model might not be compatible with another, similar to how bytecode for ARM will not work on x86.
And there is no way to convert an AI model's program into a high-level representation (and then back into the bytecode of a new model).
I don't understand what isn't clear. I'm not talking about the physical part but about the abstract logic of a machine. In my native language, everything that is conceptually designed to have a machine deliver new data does so through a specific algorithm. Can you tell me what exactly an algorithm is to you?
 
I don't understand what isn't clear. I'm not talking about the physical part but about the abstract logic of a machine. In my native language, everything that is conceptually designed to have a machine deliver new data does so through a specific algorithm. Can you tell me what exactly an algorithm is to you?
It's quite clear that you don't know/understand what NNs are and how they work, but you're trying to apply your schooling to concepts it's hardly applicable to.
An algorithm is a defined sequence of steps for how inputs are converted to outputs.
Putting inputs into a black box that converts them into outputs based on some mystery inside is not an algorithm, just as a PC itself is not an algorithm. It's a framework for running algorithms.
 