DLSS5 will probably run fine on a variety of moderately modern cards. I wouldn't be surprised if any RTX card could run it with varying results, though they will probably lock certain functions behind 5000 series cards. I can't see them releasing a landmark product like DLSS5 and then totally locking it down to the ultra scarce 5000 series cards, with like, less than 1% of the PC gaming populace actually using a 5000 series card. Just doesn't make sense.
Remember when AMD said that FSR4 could not run on anything other than an RDNA4/9000 series card? And then... Oopsie, the files got leaked and it turns out you can totally run it on something as weak as a Steam Deck, and there's legit merit to doing so. Is it heavy? Yes. Is the tremendous boost to IQ worth it? Absolutely.
DF even said it themselves. Once this shit hits OptiScaler it's going to be on basically everything. To what degree of effectiveness? Hard to say. I wouldn't worry too much though. Furthermore, just look at how much DLSS and FSR have matured as technologies over the years.
DLSS1 used to look like shit, and its actual effectiveness was questionable. Now the newest DLSS preset looks insanely good, arguably better than native, even at very low resolutions. It's flat out magic.
FSR used to be miles behind and look like shit. Now it's actually pretty good. FSR4 is genuinely worthwhile.
Same for frame generation, which used to be smeary and unusable, especially with text involved. Now it's totally fine. You might not like it, but it is usable.
People are ripping DLSS5 to fucking SHREDS and it's still in development, not even due out for at least another 6 months or so. It reminds me of all of those morons screeching that GTA6 looked like ass when it leaked years ago. Like... no fucking shit, it's not done yet.
And to the naysayers squawking about it ruining artistic intent: if the developers choose to add DLSS5 to their game, obviously it lines up with their "artistic intent," or they just wouldn't allow it to be an option, hello? Even the slapdash demo of Starfield, endorsed by Todd Howard himself, looked amazing, graphical bugs notwithstanding. One of that game's biggest problems was how dull and flat all of the characters looked. If this technology makes games look better, and it's supported by the developers themselves, what's the problem?
DLSS and FSR are already using machine learning in the exact same way: reconstructing an image by generating pixels that didn't exist before from much lower resolution input, and then generating whole-ass new frames on top of that. It's AI all the way down, and we've been using it for years! Why is it suddenly the bogeyman now?