
What do we think Nvidia's next tech innovation will bring?

daninthemix

Member
For those who are bristling at my use of the word "invent" I have modified my OP to say "bring to market" instead.

The fact remains: Nvidia bring technologies to market, which become the hot technologies that rivals (AMD etc) as well as both console designers and users crave for their products a few years down the line. Nvidia dictate the technical direction of the videogame graphics industry. It's pretty much that simple. I'm sorry if that truth is aggravating for some of you. But because that is the truth, I was just wondering what their next innovation will be.
 

Hudo

Member
Man, I'm trying to make you see the light, but it's hard

"There's a difference between "testing out", "looking into", "existing in theory" and offering an actual working, stable solution for customers at a reasonable price."

Most technologies that we use today were "discovered" decades ago. The thing is, most of them weren't "invented" by the companies offering them today; rather, these companies spent millions/billions in R&D to make them cheap enough, stable enough and consumer-friendly enough to be usable by the masses.

These companies deserve as much credit as those people who "invented" those technologies. 'Cause there's a pretty stark difference between theorizing about something and actually making a product with it.

Without G-Sync we wouldn't have FreeSync, without DLSS we wouldn't have FSR, and so on. Or maybe we would have had to wait years, maybe decades, until competitors figured out how to make these techs work.
My fucking point was only that they did not invent this stuff, which is what this thread seemed to assume. That's all. I never questioned them bringing that shit (in some rudimentary form) to the consumer. Note that on the industrial side, specialized algorithms and hardware were available before that, of course.
You are arguing a point I never questioned.

Edit: We can argue about whether the way they brought it to the end consumer worked for everyone and for the market as a whole (imho, it seems like it just fucked the market up even more in the end). But that's an entirely different argument.
 

Killer8

Gold Member
Man, I'm trying to make you see the light, but it's hard

"There's a difference between "testing out", "looking into", "existing in theory" and offering an actual working, stable solution for customers at a reasonable price."

Most technologies that we use today were "discovered" decades ago. The thing is, most of them weren't "invented" by the companies offering them today; rather, these companies spent millions/billions in R&D to make them cheap enough, stable enough and consumer-friendly enough to be usable by the masses.

These companies deserve as much credit as those people who "invented" those technologies. 'Cause there's a pretty stark difference between theorizing about something and actually making a product with it.

Without G-Sync we wouldn't have FreeSync, without DLSS we wouldn't have FSR, and so on. Or maybe we would have had to wait years, maybe decades, until competitors figured out how to make these techs work.

Nvidia deserve the credit for bringing these technologies to the forefront, that much is true. However, I view what they achieved as getting a foot in the door before others open it fully. Think of the iPhone kicking off the smartphone revolution -> Android phones then normalizing it.

The reason I say this is because everything Nvidia does is extremely proprietary.

G-Sync required (and still requires, for G-Sync Ultimate) a special module in the monitor. Meanwhile VESA Adaptive Sync / AMD's FreeSync / HDMI 2.1 VRR are now so ubiquitous, present in a vast number of monitors, TVs and game consoles with no extra hardware required, that basically anyone can enjoy the feature.

DLSS is still the king in terms of upscaling quality, but it requires proprietary tensor core hardware that only their GPUs have. Meanwhile FSR, while not AI based or as good, works on basically everything via software - old GPUs, consoles, the Switch - and so is the current go-to solution for a huge number of console games.
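FSR 1's published design is a spatial upscale followed by a sharpening pass, which is why it runs on anything. The basic shape of that pipeline can be sketched in a few lines; this is a toy stand-in with a nearest-neighbour upscale and an unsharp mask, not AMD's actual EASU/RCAS kernels:

```python
import numpy as np

def upscale_nearest(img, scale):
    """Naive nearest-neighbour upscale (stand-in for a real
    edge-adaptive kernel like FSR's EASU)."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def sharpen(img, amount=0.5):
    """Simple unsharp mask (stand-in for FSR's RCAS sharpening pass)."""
    p = np.pad(img, 1, mode="edge")
    # 4-neighbour box blur via shifted views of the padded image
    blur = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# "Render" a tiny grayscale frame, then upscale 2x and sharpen
low = np.random.rand(4, 4)
high = sharpen(upscale_nearest(low, 2))
print(high.shape)  # (8, 8)
```

The point of the sketch is only that both stages are plain pixel math with no trained model, which is exactly what lets this class of upscaler ship on old GPUs and consoles.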

Same story with frame generation (which existed long before Nvidia started pimping it out) - their DLSS3 might be better with less artifacting, but it's FSR3 that is ultimately going to reach more people.
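At its crudest, frame generation means synthesizing an intermediate frame between two rendered ones. Real DLSS 3 / FSR 3 warp pixels along motion vectors and optical flow; the naive blend below only illustrates the idea, and also why ghosting is the failure mode when motion isn't accounted for:

```python
import numpy as np

def interpolate_frame(prev, next_, t=0.5):
    """Naive linear blend between two frames. Real frame generation
    warps pixels along motion vectors instead of blending in place,
    which avoids the double-image ghosting this version produces on
    moving objects."""
    return (1.0 - t) * prev + t * next_

frame_a = np.zeros((2, 2))   # dark frame
frame_b = np.ones((2, 2))    # bright frame
mid = interpolate_frame(frame_a, frame_b)
print(mid[0, 0])  # 0.5
```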

Being the innovator and 'the original' is exciting and a source of pride and profit, but it's regularly the competitor which does it nearly as well, and for far cheaper, who actually lets the masses reap the benefits of the innovation.
 
Realtime AI style transfer. For adding realism or making it more cartoony or just remastering old games, without needing to tweak any geometry or textures.
Wow, the right one looks exactly like my uncle, apart from the hair.
 

Arsic

Loves his juicy stink trail scent
I want them to fix hair in games. I really like the hair in the RE4 remake, but it still looks like it's floating on the model's head. Saga Anderson's hair in AW2 looks good but doesn't have physics to it.

Reflections and shadows are a done deal. Facial tech is on point now. Mo-cap for real animations. Hair and eyes still need improvement, but hair especially.

Otherwise just give us DLSS improvements. It's clearly the future and the standard for making games work.
 

Buggy Loop

Member
I think it's inevitable that the RT blocks get totally revamped to focus on path tracing performance and neural radiance techniques, like their Neural Radiance Caching research, which is even more accurate than ReSTIR.
 

Hugare

Gold Member
Nvidia deserve the credit for bringing these technologies to the forefront, that much is true. However, I view what they achieved as getting a foot in the door before others open it fully. Think of the iPhone kicking off the smartphone revolution -> Android phones then normalizing it.

The reason I say this is because everything Nvidia does is extremely proprietary.

G-Sync required (and still requires, for G-Sync Ultimate) a special module in the monitor. Meanwhile VESA Adaptive Sync / AMD's FreeSync / HDMI 2.1 VRR are now so ubiquitous, present in a vast number of monitors, TVs and game consoles with no extra hardware required, that basically anyone can enjoy the feature.

DLSS is still the king in terms of upscaling quality, but it requires proprietary tensor core hardware that only their GPUs have. Meanwhile FSR, while not AI based or as good, works on basically everything via software - old GPUs, consoles, the Switch - and so is the current go-to solution for a huge number of console games.

Same story with frame generation (which existed long before Nvidia started pimping it out) - their DLSS3 might be better with less artifacting, but it's FSR3 that is ultimately going to reach more people.

Being the innovator and 'the original' is exciting and a source of pride and profit, but it's regularly the competitor which does it nearly as well, and for far cheaper, who actually lets the masses reap the benefits of the innovation.
The difference from Apple is that Nvidia justifies the premium price. Apple hasn't for a while now, with other companies making better phones while Apple relies on their name and nothing more (that $999 monitor stand, for example. Just ridiculous).

You know, by getting an Nvidia card, that you'll have tech that can't be compared to anything else on the market. FSR isn't anywhere near DLSS in terms of quality, imo. Every console game that comes with it looks dreadful.

FreeSync and AMD frame gen look good, but without Nvidia they wouldn't exist. And AMD cards are still ass in terms of RT performance and features. Not to mention worse driver support.

Nvidia cards are stupidly expensive, I know, but so is their R&D to keep innovating with tech. They are greedy, but I only blame the competition for not being able to keep up with the pace.
 

Holammer

Member
I expect AI generated volumetric materials down the line.
The first render draws everything as normal, plus a layer with a simplified voxel-like representation (at variable detail) of a physics object; then AI works it over in another pass and turns it into the desired material. An AI solution for clouds, liquid simulation, hair, explosions and fog: stuff that draws a lot from the compute budget with today's technology. The sky's the limit; you could use it to render trees in the mid and far distance, rather than using billboards.

Eventually by DLSS10 the entire scene will be rendered this way, but I expect it to start in a limited fashion first.
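That two-pass split can be sketched: pass one emits a cheap simplified volume, pass two "materializes" it. Here the AI pass is faked with a box blur, purely to illustrate the data flow, nothing like a real network:

```python
import numpy as np

def pass_one_voxelize(shape=(8, 8, 8), seed=0):
    """Pass 1: cheap simplified volume, e.g. a cloud's rough density
    field produced by the normal renderer."""
    rng = np.random.default_rng(seed)
    return rng.random(shape)

def pass_two_materialize(density):
    """Pass 2: stand-in for the hypothetical AI pass. A circular box
    blur along each axis turns the noisy voxels into something
    smoother and more cloud-like."""
    out = density.copy()
    for axis in range(3):
        out = (np.roll(out, 1, axis) + out + np.roll(out, -1, axis)) / 3.0
    return out

cloud = pass_two_materialize(pass_one_voxelize())
print(cloud.shape)  # (8, 8, 8)
```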
 
I hope for some hardware form of AI texture compression to reduce VRAM usage and maybe memory bandwidth. I've heard of something like that before. Maybe in the 5000 series.
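For a sense of the stakes, the VRAM arithmetic: BC7, today's block format, is a fixed 8 bits per pixel against RGBA8's 32, and research neural codecs target lower still. The 2 bpp figure below is an assumption for illustration, not a shipped spec:

```python
def texture_bytes(width, height, bits_per_pixel):
    """VRAM footprint of a single mip level at a given bit rate."""
    return width * height * bits_per_pixel / 8

w, h = 4096, 4096
raw = texture_bytes(w, h, 32)  # uncompressed RGBA8: 32 bpp
bc7 = texture_bytes(w, h, 8)   # BC7 block compression: 8 bpp (4:1)
ntc = texture_bytes(w, h, 2)   # hypothetical neural codec: 2 bpp (assumed)

for name, b in [("RGBA8", raw), ("BC7", bc7), ("neural (assumed)", ntc)]:
    print(f"{name:>16}: {b / 2**20:.1f} MiB")
```

A single 4K texture drops from 64 MiB raw to 16 MiB with BC7; a codec at the assumed 2 bpp would cut it to 4 MiB, which is why the idea keeps coming up for VRAM-starved cards.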
 

Buggy Loop

Member
I retract my previous statement; while it's more than likely, my wish is for Nvidia to find a way to circumvent shader stuttering. I don't know how, or if it's even possible (maybe some special cache for it?), but that would be a game changer.
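Shader stutter comes from pipelines being compiled the first time they're needed mid-game; drivers and engines already mitigate it with caches keyed by a hash of the shader, so only the first run ever pays the cost. A toy version of such a cache, with a fake compile step standing in for the driver:

```python
import hashlib

compile_count = 0

def slow_compile(source):
    """Stand-in for the expensive driver compile that causes stutter."""
    global compile_count
    compile_count += 1
    return f"binary({source})"

# In a real driver this cache lives on disk, keyed per GPU and driver
# version; here an in-memory dict shows the lookup logic.
cache = {}

def get_pipeline(source):
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in cache:
        cache[key] = slow_compile(source)  # the stutter happens here, once
    return cache[key]

get_pipeline("vs_main + ps_main")
get_pipeline("vs_main + ps_main")  # cache hit, no recompile
print(compile_count)  # 1
```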
 

Wildebeest

Member
$5,000 "budget" cards that are not good enough to play the games being released, but do come with a free "Moore's Law is Dead" shirt.
 

hlm666

Member
The reason I say this is because everything Nvidia does is extremely proprietary.

G-Sync required (and still requires, for G-Sync Ultimate) a special module in the monitor. Meanwhile VESA Adaptive Sync / AMD's FreeSync / HDMI 2.1 VRR are now so ubiquitous, present in a vast number of monitors, TVs and game consoles with no extra hardware required, that basically anyone can enjoy the feature.
While true, there's a reason AMD had to demo their version on a laptop a year before release: no one would put the full VESA spec into monitor and TV controllers. And even then, for a year or so after release, FreeSync could still only manage VRR windows of something like 40-90 Hz, while the G-Sync module was letting Nvidia do 30 Hz up to the monitor's max refresh. Without Nvidia pushing it and making people understand the benefits, would the VESA spec, which was initially for laptop power-saving features, have been pushed into desktop/TV displays for VRR? With no carrot for AMD and display makers, I'm not sure we would be where we are today in regard to VRR.
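The window width matters because of low framerate compensation (LFC): when fps drops below the VRR floor, the driver shows each frame multiple times so the effective refresh lands back inside the window, which only works when the window max is at least roughly double the min. A sketch of that multiplier logic (illustrative, not any vendor's actual algorithm):

```python
def lfc_refresh(fps, vrr_min, vrr_max):
    """Refresh rate the display actually runs at. Frames below the VRR
    floor are shown n times so the effective refresh lands inside the
    [vrr_min, vrr_max] window. Assumes fps is already capped at vrr_max."""
    if fps >= vrr_min:
        return fps                  # native VRR, no frame duplication
    n = 2
    while fps * n < vrr_min:
        n += 1
    if fps * n > vrr_max:
        return None                 # window too narrow: LFC impossible
    return fps * n

print(lfc_refresh(25, 40, 90))  # 50 (each frame shown twice)
print(lfc_refresh(35, 40, 60))  # None (2x overshoots a 40-60 window)
```

This is why a monitor with a 48-144 Hz window can stay tear-free at 20 fps while an early 40-60 Hz FreeSync panel fell back to tearing or v-sync below 40.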

It's probably one of the few of Nvidia's proprietary efforts that ended up benefiting everyone in the long run.
 
Maybe raytraced sound.

Giving each material a sound value (how does this material reflect sound?) plus information about the size/shape of the room.
Waterfalls in games are so quiet.
Acoustics in games is generally total crap.
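Per-material absorption values are exactly how room acoustics is modelled offline; Sabine's formula estimates reverberation time (RT60) from room volume and summed surface absorption. A quick sketch with made-up example coefficients:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / sum(S_i * a_i),
    where surfaces is a list of (area_m2, absorption_coefficient)
    pairs for each material in the room."""
    total_absorption = sum(area * a for area, a in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 5x4x3 m room: hard concrete walls/ceiling, carpeted floor
room = rt60_sabine(
    5 * 4 * 3,
    [(2 * (5 * 3 + 4 * 3) + 5 * 4, 0.02),  # walls + ceiling, concrete
     (5 * 4, 0.30)],                       # floor, carpet
)
print(f"{room:.2f} s")  # 1.29 s
```

Swap the carpet for more bare concrete and the RT60 balloons, which is the echoey-bathroom effect a per-material system would give games for free.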

PhysX 2.0 silicon in future cards.

Big push for Voxel Based Fluid Physics.

N-Fluids.....IDK.
Smoke, water and wind could do with a lot of help.
Plants/vegetation in the wind almost always look off.
 

00_Zer0

Member
They will add a credit card scanner that connects directly to the card.
Microtransactions to unlock GPU cores and features! Sounds about right for Nvidia, but don't give AMD and Intel any ideas, eh Nvidia? I blame AMD and Intel for the mess we are in just as much as Nvidia. Playing follow-the-leader instead of truly innovating has gotten us to this point of overpriced GPUs.
 

WitchHunter

Banned
Well, the one thing that elevates sales is bitcoin. So a new cryptocurrency called GoldCoin can be mined with 12x efficiency using the new Nvidia 60000 GPU, which mines crypto while you game and also in the background, whatever you do. It also sports a built-in, non-removable microphone (otherwise the card gets bricked). So anytime you turn on the PC there is a chance that Jensen Huang will greet you himself from the NVIDIA HQ. When this happens, whatever you're doing will be put into hibernation, his face comes up on your monitor and you have to talk with him for a few minutes about your life and the experience of using the 60000 GPU. If you happen to anger him, well, he randomly deletes a few things from your SSD, plus burns his face into the lower right corner of your beloved OLED. If you happen to make him smile, your account earns GoldCoin even faster, by assigning a few hundred GPUs in the cloud to you for a few hours. Of course without full-blown internet access you won't earn shit, so you must keep the firewalls turned off.

Whenever you earn back the card's full price, you have to buy double the amount of cards you previously owned. Otherwise your card will be set into rudimentary mode and limited to upscaled 1080p, in order to meet ESG standards.

When the GoldCoin reaches dollar like status they will replace the US president and Jensen Huang becomes the king of the world.

PS: Do not try to mine GoldCoin with ASICs or AMD GPUS... if you value your silicon.
 

CGNoire

Member
Waterfalls in games are so quiet.
Acoustics in games is generally total crap.


Smoke, water and wind could do with a lot of help.
Plants/vegetation in the wind almost always look off.
FC2's wind system was already fantastic, with turbulence and everything. Just going back to that would be 95% of the way there.
 

Hudo

Member
Nvidia's next innovation should be consumer card pricing (all models) below $800. That'd be an innovation I'd genuinely praise them for.
 