Let's start with the last topic first.
Nintendo aren't going to accept a solution that doesn't allow them to port their games pixel-perfect against the original release. Games are art, and if the upscaling is part of the art, Nintendo - and their customers - will expect the ability to achieve pixel perfection without paying a license fee in the future, which is the biggest stumbling block.
Using FSR on Nvidia cards should show a sizeable difference in power draw, because the DLSS units are still powered even when not used, so a lack of any increase should convince you that FSR is light on power use, given it has no dedicated silicon drawing power.
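If anyone wants to check that rather than take it on faith, here's a minimal sketch of the measurement, assuming an Nvidia card with nvidia-smi on the PATH; the function name and sampling window are mine, purely illustrative:

```python
# Minimal sketch: sample GPU board power via nvidia-smi while a game runs,
# once with FSR active and once with DLSS, then compare the averages.
# Assumes nvidia-smi is on PATH; reads the first GPU only.
import subprocess
import time

def sample_power_draw(seconds: int = 60, interval: float = 1.0) -> float:
    """Average board power (watts) over the sampling window."""
    readings = []
    end = time.time() + seconds
    while time.time() < end:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        readings.append(float(out.splitlines()[0]))  # first GPU only
        time.sleep(interval)
    return sum(readings) / len(readings)

# Run the same scene with each upscaler enabled, record both numbers:
print(f"Average draw: {sample_power_draw():.1f} W")
```

Run the same scene twice, once per upscaler, and the two averages tell you whether either path is pulling meaningfully more power.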
I've looked at quite a few of the FSR vs DLSS analyses DF did a few years back, and there were some glaring misses, like the one with the speakers, where DLSS had unbalanced the image compared to both FSR and the native render. The same seemed true of their PC God of War comparison: the DLSS output was aesthetically more pleasing, but still less consistent with native, which means the PSNR would be higher on the FSR solution.
Put it this way: why is everyone discussing these technologies and drawing conclusions in favour of Nvidia, when DF and the like haven't even done the formal scientific analysis to put PSNR numbers behind their opinionated comparisons?
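And that analysis isn't hard to do. A minimal sketch of the kind of PSNR comparison I mean, assuming lossless same-resolution captures (the file names are hypothetical):

```python
# Minimal sketch: PSNR of each upscaler's output against a native-resolution
# reference. All three captures must be the same resolution and lossless.
import numpy as np
from PIL import Image

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher = closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

native = np.asarray(Image.open("native_4k.png").convert("RGB"))
fsr    = np.asarray(Image.open("fsr_quality_4k.png").convert("RGB"))
dlss   = np.asarray(Image.open("dlss_quality_4k.png").convert("RGB"))

print(f"FSR  vs native: {psnr(native, fsr):.2f} dB")
print(f"DLSS vs native: {psnr(native, dlss):.2f} dB")
```

PSNR is a crude metric, but it's at least a number rather than a vibe, and it directly measures consistency with the native image, which is the thing being argued about.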
Using a broken game to claim the motion issue affects all console use is a strawman. FSR works great as a free enhancement in console games, and console developers can guide the algorithm to alleviate artefacts because the hardware is fixed. DLSS is useless technology for console gaming until its licensing is on par with FSR's, but Nvidia would rather use it to justify their customers upgrading in the PC space - FYI, I use an Nvidia (RTX) card myself and have done for 20 years, so this isn't an AMD or Intel GPU customer's slanted opinion.