> Look at Fast Fusion, they're using the tiny model and you have the same crap you're getting here
it absolutely doesn't.
Fast Fusion simply looks low-res: essentially zero (or very little) reconstruction happens in motion, while still shots look clean.
that's how "tiny DLSS" works.
It's really weak at reconstructing fast-moving elements, but it doesn't AI-upscale anything; it just does a really bad job of temporally anti-aliasing the image, which creates pixelation and motion artifacts.
What we see in these videos and images is a clear sign of static AI upscaling, i.e. something that doesn't use temporal data at all.
Tiny DLSS still actually uses temporal data, just very little of it (hence it breaks in motion but looks good in slow-moving or still shots).
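To make that concrete, here's a rough per-frame sketch of what temporal reconstruction does (Python; the names and numbers like `history_weight`, `reject_threshold` and the single global motion vector are made up for illustration, and a nearest-neighbour enlarge stands in for the network -- this is not how DLSS is actually implemented): reproject last frame's output along motion vectors, keep it where it still matches, throw it away where it doesn't.

```python
import numpy as np

def reproject(history_hr, motion_px):
    # shift last frame's high-res output along the motion vector
    # (real reconstructors use per-pixel motion vectors from the engine;
    #  a single global offset keeps the sketch short)
    dy, dx = motion_px
    return np.roll(history_hr, shift=(dy, dx), axis=(0, 1))

def temporal_reconstruct(frame_lr, history_hr, motion_px,
                         history_weight=0.6, reject_threshold=0.1):
    # per-frame enlarge of the new low-res frame (nearest neighbour here,
    # standing in for the network's spatial pass)
    current_hr = frame_lr.repeat(2, axis=0).repeat(2, axis=1)
    # bring the previous high-res result into this frame's position
    reprojected = reproject(history_hr, motion_px)
    # keep history where it still matches, reject it where it doesn't
    # (fast motion, disocclusion) -- rejected pixels fall back to raw low res
    blend = np.where(np.abs(reprojected - current_hr) < reject_threshold,
                     history_weight, 0.0)
    return blend * reprojected + (1.0 - blend) * current_hr
```

The less history the model can safely keep (small blend weight, aggressive rejection), the more detail only accumulates when nothing moves, which is exactly the "clean still shots, pixelated motion" behaviour.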
What we have here looks more like what you get when you run a video file or an image through an AI scaler.
this is also why it's important to clearly distinguish between upscaling and temporal reconstruction.
It makes communication far easier when everyone understands that temporal reconstruction is not the same thing as a static upscaler.
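For contrast, a static upscaler is just a per-frame function of the current frame: no motion vectors, no history (again a made-up minimal sketch, with pixel repetition standing in for whatever neural single-image scaler is actually used):

```python
def static_upscale(frame_lr, scale=2):
    # each frame (a numpy array) is enlarged entirely on its own: no motion
    # vectors, no history. a neural single-image scaler would invent plausible
    # detail instead of repeating pixels, but it still only ever sees one frame.
    return frame_lr.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_clip(frames_lr, scale=2):
    # a clip is just a per-frame map; still or in motion makes no difference
    return [static_upscale(f, scale) for f in frames_lr]
```

Feed a clip through that and every frame is treated like a lone screenshot, which is why the result looks like running a video through an AI scaler rather than like reconstruction breaking down.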
What we're seeing right here is very likely a static upscaler and not a reconstruction method (or it's a completely broken reconstruction method).