Not sure where you get this "guesses wrong" thing from. Every single analysis has shown the DLSS image to be near identical to native, even in cases where the native image is not blurred by TAA. Feel free to post an analysis that shows otherwise.
As for power consumption and processing, once again I can find nothing to this effect. Performance gains for both technologies are similar. DLSS does use tensor cores for its AI algorithm, but that cost is trivial. I've certainly seen near zero difference in GPU power consumption on my card when switching between FSR and DLSS.
And FSR is noticeable at small screen sizes; on my Steam Deck you can certainly notice a drop in quality even using FSR Quality. The motion issues do not suddenly disappear because of the screen size. It's not just a PC thing either: Jedi Survivor is a mess in performance mode when camera movement is involved (on both XSX and PS5).
As for future proofing, it should not really be a problem. DLSS can be disabled on any future system should Nintendo decide not to stick with Nvidia, if Nintendo even bothers with BC. It's not integral to the rendering paradigm: if PC games can switch between six different upscalers, then any console game can surely do so as well (see the sketch below).
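To illustrate that last point, here's a minimal sketch of my own (not from any shipping engine, all names hypothetical) of how a renderer might hide the upscaler choice behind a common interface, so a backend can be swapped or dropped on hardware that doesn't support it:

```cpp
#include <memory>

// Hypothetical abstraction: the renderer only talks to a generic upscaler
// interface, so the concrete backend (FSR, DLSS, or anything else) can be
// chosen at runtime and removed entirely on hardware that lacks it.
struct UpscalerInput  { /* low-res colour, depth, motion vectors ... */ };
struct UpscalerOutput { /* full-res colour target ... */ };

class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual void Upscale(const UpscalerInput& in, UpscalerOutput& out) = 0;
};

class FsrUpscaler final : public IUpscaler {
public:
    void Upscale(const UpscalerInput& in, UpscalerOutput& out) override {
        // dispatch the FSR compute passes (runs on any GPU)
    }
};

class DlssUpscaler final : public IUpscaler {
public:
    void Upscale(const UpscalerInput& in, UpscalerOutput& out) override {
        // call into the vendor SDK (only on hardware that exposes it)
    }
};

// Pick a backend based on what the current hardware supports.
std::unique_ptr<IUpscaler> CreateUpscaler(bool dlssSupported) {
    if (dlssSupported)
        return std::make_unique<DlssUpscaler>();
    return std::make_unique<FsrUpscaler>();
}
```

The point being that the upscaler sits behind the renderer as a replaceable post-process stage, not something the rest of the game depends on.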
Let's start with the last topic first.
Nintendo aren't going to accept a solution that doesn't allow them to port their games pixel-perfect relative to the original release. Games are art, and if the upscaling is part of the art, Nintendo - and their customers - will expect the ability to reproduce it pixel-perfectly in the future without paying a license fee, which is the biggest stumbling block.
Using FSR on Nvidia cards should show a sizeable difference in power draw, because the DLSS units are still powered even when not used, so the lack of any increase should convince you that FSR is light on power use when it has no dedicated silicon drawing power.
I've looked at quite a few of the analyses of FSR vs DLSS from DF from a few years back, and there were glaring guesses, like the one with the speakers, where DLSS had unbalanced the image compared to FSR and the native image. The same seemed true of the PC God of War comparison they did: the DLSS output was aesthetically more pleasing, but still less consistent with native, which means the PSNR would be higher for the FSR output.
Put it this way: why is everyone discussing these technologies and drawing conclusions in favour of Nvidia, when DF and the like haven't even done the formal scientific analysis to provide PSNR numbers in their opinionated comparisons?
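For anyone who wants numbers rather than opinion, PSNR against a native-resolution capture is trivial to compute. A rough sketch of my own, assuming same-sized 8-bit captures taken losslessly:

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// PSNR in dB between two same-sized 8-bit images (e.g. a native-resolution
// capture and an upscaled capture). Higher means closer to the reference.
double Psnr(const std::vector<uint8_t>& reference,
            const std::vector<uint8_t>& test,
            double maxValue = 255.0) {
    double mse = 0.0;
    for (size_t i = 0; i < reference.size(); ++i) {
        const double diff = static_cast<double>(reference[i]) - test[i];
        mse += diff * diff;
    }
    mse /= static_cast<double>(reference.size());
    if (mse == 0.0)
        return std::numeric_limits<double>::infinity();  // identical images
    return 10.0 * std::log10((maxValue * maxValue) / mse);
}
```

Run that on an FSR capture and a DLSS capture against the same native frame and you have an objective measure of the "consistency with native" being argued about above, instead of eyeballing crops.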
Using a broken game to say the motion issue is the same in all console use is a strawman. FSR works great as a free enhancement in console games, and developers on console can guide the algorithm to alleviate artefacts because the hardware is fixed. DLSS is useless technology for console gaming until its licensing is on a par with FSR's, but Nvidia would rather use it to justify their customers upgrading in the PC space - FYI, I use an Nvidia (RTX) card myself and have done for 20 years, so this isn't an AMD or Intel GPU customer's slanted opinion.