DenchDeckard
Moderated wildly
They can be, but it must be among the data, not the only data.
So raw raster performance plus ML frames.
In addition to the reduced power consumption, I see them leveraging this technology for throughput as a benefit. The complaints are hardly warranted, especially when the drawbacks are for fringe cases such as competitive gaming. These models/transformers upscale with incredible accuracy and precision, using much of the same scene data from the buffer too, within the fractions of a second between frames polled from user input.

One thing I don't see talked about in the discussion around generated frames is the additional CPU hit that would be required to hit those same higher frame rates with pure rasterization.
There is bottlenecking caused by older CPUs paired with higher-end new GPUs. But in most games you have a single thread, or maybe two or three, where the main processing takes place, with some additional work divided out to other cores which fly through it and then sit idle. The frame rate you can hit will still be capped by those main threads.

The existing single-frame generation we've had this last generation of video cards could take optional motion vector data to help it decide how objects were moving and optimize the generated frame. Now it can have input data as well, which will affect everything in the scene and mimic the smooth-motion benefit we get with higher frame rates.
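To make the motion-vector idea concrete, here is a minimal sketch (Python/NumPy, purely illustrative, not any vendor's actual pipeline) of producing an extra frame by pushing pixels of the last rendered frame along engine-supplied motion vectors; the array layout and the hole-filling shortcut are my own assumptions.

```python
# Minimal sketch (not NVIDIA's pipeline): extrapolate an intermediate frame
# by pushing each pixel of the last rendered frame along its motion vector.
# Assumes `frame` is an HxWx3 array and `motion` is an HxWx2 array of
# per-pixel motion in pixels/frame, as an engine's motion-vector buffer might supply.
import numpy as np

def extrapolate_frame(frame: np.ndarray, motion: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Warp `frame` forward by a fraction `t` of one frame of motion."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel is predicted to land t frames later.
    dst_x = np.clip((xs + t * motion[..., 0]).round().astype(int), 0, w - 1)
    dst_y = np.clip((ys + t * motion[..., 1]).round().astype(int), 0, h - 1)
    out = np.zeros_like(frame)
    out[dst_y, dst_x] = frame[ys, xs]  # forward splat; real systems fill the holes this leaves
    return out
```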
It will be interesting to see if the market moves in the way NVIDIA is thinking it will. I bought the original GeForce 256 at launch and it was a game changer. Being able to offload things that the CPU previously had to do suddenly made games play so much better. But will this AI rendering do the same to the market? Remix being able to replace a 3D object in a game in real time without modifying the executable leads me to think we might be seeing something along the lines of the original Talisman project Microsoft was undertaking during the extremely early days of Direct3D vs OpenGL vs GLIDE.
With raster you are pumping out all of the draw calls to form a full scene every frame. Talisman was object based: it could take a 3D object and create a 2D sprite of it, and if the next frame had the same angle, lighting, etc., it just output the saved sprite and didn't spend any time rendering it. So this isn't just taking a full image and applying a fancy Photoshop filter. You could have things closer to the camera moving faster than things farther away to get a proper parallax effect. We could be on the verge of extremely fluid motion, especially if the rendering is aware of rigging and animation, and animation was one of the things Jensen mentioned.

People forget that in the early days of 3D, 15-20 FPS was normal, and the "magic" the Voodoo 1 brought was a solid 30 FPS. Only recently has a solid 60 FPS on the PC become a given, which is what elevated it over console. In simpler e-sports games made to run on everything you can get 240 or 360 FPS. Hitting those rates in extremely complex scenes needs something more than straight brute force.
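Since the Talisman comparison is carrying the argument here, a rough sketch of the object-impostor idea it describes (hypothetical names throughout; `render_to_sprite` and `blit` are stand-ins, not a real engine API): cache the 2D sprite of an object and reuse it while its view angle and lighting stay close enough.

```python
# Rough sketch of Talisman-style impostor caching (illustrative only).
# An object's rendered 2D sprite is reused as long as its view angle and
# lighting haven't changed enough to matter, instead of re-rendering it.
from dataclasses import dataclass

@dataclass
class Impostor:
    sprite: object      # cached 2D image of the object
    view_angle: float   # camera angle the sprite was rendered from (degrees)
    light_hash: int     # coarse hash of the lighting state

cache: dict[int, Impostor] = {}

def draw_object(obj_id: int, view_angle: float, light_hash: int,
                render_to_sprite, blit, angle_tolerance: float = 2.0):
    imp = cache.get(obj_id)
    if (imp is None
            or abs(imp.view_angle - view_angle) > angle_tolerance
            or imp.light_hash != light_hash):
        # View or lighting changed too much: re-render the object once and cache it.
        imp = Impostor(render_to_sprite(obj_id, view_angle), view_angle, light_hash)
        cache[obj_id] = imp
    # Otherwise just composite the cached sprite; no 3D work for this object this frame.
    blit(imp.sprite)
```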
But what about Reflex 2, which will take your input data and use it in the "fake" generated frames? That should feel even more responsive than running at a lower frame rate with the new frame generation disabled.

Now, I'm not someone who is into conspiracy theories, but I gotta say, I've heard a worrying amount of "tech commentators" (and some of the biggest names on YouTube among them) jump on this "all frames are fake" bandwagon after Nvidia's keynote. Raster frames are now suddenly just as fake as AI-generated ones, and no one seems to even try to determine why people call raster frames real and AI ones fake. To me personally, this is very simple. If a given frame directly contributes to reducing input latency (which a raster frame does, meaning a game gets progressively more responsive to input the more fps you get), then it's a real frame, and it is the only legitimate metric in fps performance comparisons as a result, in my opinion.
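For what it's worth, the fps-vs-latency distinction in that argument can be put into numbers with a toy calculation (illustrative assumptions only, ignoring render-queue and interpolation delays): frame generation multiplies the displayed fps without changing how often new input reaches a rendered frame.

```python
# Back-of-the-envelope numbers for the "real vs fake frame" latency argument
# (illustrative assumptions, not measurements): frame generation multiplies the
# displayed frame rate, but new input is still only sampled once per *rendered*
# frame, so responsiveness tracks the render rate.
def stats(render_fps: float, generated_per_rendered: int):
    displayed_fps = render_fps * (1 + generated_per_rendered)
    input_sample_interval_ms = 1000.0 / render_fps  # input still applied per rendered frame
    return displayed_fps, input_sample_interval_ms

for render_fps, gen in [(30, 0), (120, 0), (30, 3)]:
    shown, interval = stats(render_fps, gen)
    print(f"render {render_fps:>3} fps, {gen}x gen -> shows {shown:>5.0f} fps, "
          f"new input every {interval:4.1f} ms")
# render  30 fps, 0x gen -> shows    30 fps, new input every 33.3 ms
# render 120 fps, 0x gen -> shows   120 fps, new input every  8.3 ms
# render  30 fps, 3x gen -> shows   120 fps, new input every 33.3 ms
```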
Is it OK to have:
- Native + "non AI" faux frames
- Native + AI frames
Thing is, we are talking about a manufactured solution to a problem that realistically should not exist in the first place. Besides, Reflex 2 isn't out yet and we don't have objective third-party data on how it works in the real world. If it truly holds up to the claims made by Nvidia themselves and does not exhibit some ugly artifacts, then sure, I guess it deserves a place in the "bandaid" section of graphics tech.
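As a point of reference for what "using input data in generated frames" can mean, here is a rough sketch of late, input-driven frame warp in the spirit of VR-style reprojection. This is not Nvidia's Reflex 2 implementation; all names and the simple horizontal shift are my own simplifications.

```python
# Rough sketch of input-based frame warp (the general "late reprojection" idea;
# NOT Nvidia's code, every name here is made up). The newest mouse input is
# applied to an already-rendered frame by shifting it, so the image tracks the
# camera with less latency than waiting for the next full render.
import numpy as np

def warp_frame(frame: np.ndarray, rendered_yaw: float, latest_yaw: float,
               fov_deg: float = 90.0) -> np.ndarray:
    """Shift `frame` horizontally to account for yaw input (in radians) that
    arrived after the frame was rendered. A real implementation re-projects
    per pixel and inpaints the disoccluded edge; here we just roll the image."""
    h, w, _ = frame.shape
    pixels_per_degree = w / fov_deg
    delta_deg = np.degrees(latest_yaw - rendered_yaw)
    shift = int(round(delta_deg * pixels_per_degree))
    return np.roll(frame, -shift, axis=1)  # edge wrap stands in for proper hole filling
```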
People act like they are fake or free... they aren't. There's complex physical hardware in the GPU that is producing those frames.
It’s different, but it is legitimate, and you’ll be seeing it only grow in usage and efficiency with time.
Stop being a boomer.
Yes, and the term to measure that is input latency, not fps. Stop conflating the two terms, which is what you and a number of other people keep doing. They're correlated, but no longer intrinsically tied with the advent of AI. Just as the person you're responding to stated, tensor cores exist on board the GPU; they're physical components computing the frames based on scene data, not just "software" fed raster images after the 2D projection is already done. This isn't simple Gigapixel AI upscaling or static image prediction.

If your game is at 30 fps, no matter how many FAKE frames you make, your game will still control like it's 30 fps.
Now these people are creating ways for the game to half play itself to make up for that, but that affects actual control in an FPS.
No one is arguing that AI frames are "free" (whatever that means) or that there isn't hardware in these GPUs responsible for their generation...
Just out of interest, if Nvidia released a card with identical rasterisation to the previous card, but it had exclusive DLSS features, would you buy it? Because you're essentially paying for nothing.

AI is the future. Rasterization is archaic.
I tested the Spyro Trilogy yesterday using LSFG on my 6700 XT. In 4K I'm getting 65-72 FPS with it disabled; once I enable it, I get mid 40s with high input latency and an output of approximately 83 FPS. The motion seems similar, BTW, but I can easily tell it's running at a much lower frame rate, not to mention the judder when panning the camera.

So say we remove AI upscaling, frame gen and Reflex. You're left with native
How's that performance and latency?
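Taking the rough numbers reported above (the midpoints are my own assumptions), a quick frame-time calculation shows why the output fps and the feel diverge:

```python
# Quick math on the Spyro numbers above (user-reported figures, rough averages):
# the screen shows more frames with LSFG on, but the *rendered* frame time that
# governs input response actually got worse.
def ms(fps: float) -> float:
    return 1000.0 / fps

native_fps = 68          # midpoint of the reported 65-72 FPS
lsfg_base_fps = 45       # reported "mid 40s" internal render rate with LSFG on
lsfg_output_fps = 83     # reported displayed rate with LSFG on

print(f"native render:  {ms(native_fps):.1f} ms/frame")      # ~14.7 ms
print(f"LSFG render:    {ms(lsfg_base_fps):.1f} ms/frame")    # ~22.2 ms  <- governs input feel
print(f"LSFG displayed: {ms(lsfg_output_fps):.1f} ms/frame")  # ~12.0 ms  <- motion smoothness only
```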
Unless you're Jensen and you've tested functional builds, all you're working with is hope and marketing.

But what about Reflex 2, which will take your input data and use it in the "fake" generated frames? That should feel even more responsive than running at a lower frame rate with the new frame generation disabled.