
RTX 5080 | Review Thread

Hot5pur

Member
That is a very sad gen.

In this case, the 5080 is the same price (vs. the Super) with a ~10% uplift.
For the 5090: +25-30% power draw, +20-30% performance, and ~25% more cost (over the 4090).
FG is a gimmick; it's hit or miss on the 40xx, and at 4x it's probably even more problematic.
Sure, this looks good compared to the 40xx, which was a massive hike in prices (thanks, crypto and AI).

If you wanted to summarize this gen in an image, this would be it:
[image]
 

peish

Member


With the Blackwell launch we introduce the era of developer-created neural shaders, some of which will also run on prior generation GPUs. Neural Shaders are the next evolutionary step in programmable shading. Rather than writing complex shader code to describe these functions, developers train AI models to approximate the result that the shader code would have computed. Neural shaders are set to become the predominant form of shaders in games, and in the future, all gaming will use AI technology for rendering. Up until this point, NVIDIA has been using neural shading for DLSS, using CUDA to harness the Tensor Cores. With the new Cooperative Vectors API for DX12 and Vulkan, Tensor Cores can be accessed through any type of shader, including pixel and ray tracing, in a graphics application, allowing for a host of neural technologies. NVIDIA has worked with Microsoft to create the new Cooperative Vectors API. When combined with differentiable shading language features in Slang, Cooperative Vectors unlock the ability for game developers to use neural techniques in their games, including neural texture compression, which provides up to seven-to-one VRAM compression over block compressed formats, and other techniques such as RTX Neural Materials, Neural Radiance Cache, RTX Skin, and RTX Neural Faces.
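(Rough illustration of what "train AI models to approximate the result that the shader code would have computed" means in practice. This is just a toy Python/NumPy sketch, not NVIDIA's Cooperative Vectors API or Slang; the shading function and network size here are made up for illustration. The real thing evaluates a trained network like this inside a pixel or ray tracing shader on the Tensor Cores.)

```python
# Toy "neural shader": fit a tiny MLP to reproduce an analytic shading function,
# then evaluate the network at runtime instead of the original formula.
import numpy as np

rng = np.random.default_rng(0)

def reference_shader(cos_nl, roughness):
    """The 'expensive' analytic shader we approximate: diffuse + crude specular lobe."""
    spec_power = 2.0 / np.maximum(roughness, 1e-3) ** 2
    cos_nl = np.clip(cos_nl, 0.0, 1.0)
    return cos_nl + cos_nl ** spec_power

# Training data: random (cos_nl, roughness) pairs and the reference output.
X = rng.uniform([0.0, 0.05], [1.0, 1.0], size=(4096, 2))
y = reference_shader(X[:, 0], X[:, 1])[:, None]

# Tiny 2-16-1 MLP trained with plain full-batch gradient descent (ReLU hidden layer).
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.maximum(X @ W1 + b1, 0.0)          # forward pass
    pred = h @ W2 + b2
    err = pred - y                             # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def neural_shader(cos_nl, roughness):
    """Runtime replacement: evaluate the trained network instead of the shader code."""
    h = np.maximum(np.array([[cos_nl, roughness]]) @ W1 + b1, 0.0)
    return float((h @ W2 + b2)[0, 0])

print("analytic:", float(reference_shader(0.7, 0.3)), "neural:", neural_shader(0.7, 0.3))
```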

Sounds like NS are nothing more than a way to help lazy developers generate efficient code, so that their lazy-ass logic won't bog down performance... meh! Buggy Loop
 

Buggy Loop

Member
Sounds like NS are nothing more than a way to help lazy developers generate efficient code, so that their lazy-ass logic won't bog down performance... meh! Buggy Loop

You're misinterpreting





Complex shading graphs would not even run in real-time with all the optimization in the world, peish; they're for offline rendering. Game shaders have never even gotten close to "complex" shaders.

Helping lazy developers... ffs, why do I bother.
 

zeroluck

Member
Blackwell's ray tracing performance is a massive disappointment; judging by this benchmark, it may be a driver bug...

[benchmark image]
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Can't believe this card can't even beat a 4080 Super in some games when the 5070 can beat a 4090... Nvidia fucked up which card was which!

Lmao...silly nvidia.


Hahahaha.
Nvidia's marketing department needs a medal if they managed to fool anyone with that nonsense.
 
No, that's not the future of rendering

Nvidia, AMD, Intel & Qualcomm & Microsoft don't think so. I trust them.

The whole pipeline is reinvented. Shaders & geometry are completely changing from a traditional pipeline to a neural one.
You trust Nvidia and Microsoft? :messenger_tears_of_joy:

Will you at least admit that if AI becomes the sole reason to upgrade, you're essentially paying for nothing?
 

Buggy Loop

Member
UE5 is going to be the de facto engine going forward, and Stalker and Black Myth showed a 3 fps improvement. Not 3%. 3 fps total.

UE5 is still raster and a ton of software level rendering, what are you even talking about.

UE5 is transitioning from Nanite plus primary mesh shaders toward more and more mesh shaders to begin with, so accelerating that over software is pretty much in line with Nvidia's and AMD's recommendations on mesh shader usage, and RTX Mega Geometry is going to use that Nanite/mesh geometry so the geometry doesn't have to be simplified for the BVH.

The new pipeline will benefit UE5 or UE6 whenever they make the move. The API will support it and all vendors. I repeat, all vendors are in.

This is in UE5.



But sure, UE will not follow a massive movement with Nvidia/AMD/Intel/Microsoft...

Again, Slimy, see my message below. If it ends up that there are enough games showcasing it in, what, 2, 4, 6 years from now, and it's on the 6000 series or 7000 series or PlayStation 6 Pro, my argument here is not to say jump on Blackwell. I am interested in what's under the hood. It's a glimpse of the FUTURE of the graphical fidelity threads. When? Who the fuck knows.
 

Buggy Loop

Member
You trust Nvidia and Microsoft? :messenger_tears_of_joy:

Will you at least admit that if AI becomes the sole reason to upgrade, you're essentially paying for nothing?

Who says I'm saying now is the time to upgrade? Not me. How many times do I have to say it?

drive person GIF


Look

Been here since 1999. I was on the Beyond3D forums participating in GPU talk back when ATI was making the 2D Mach series cards.

I'm interested in TECH. There's something under the hood of Blackwell that zero, zero games can use. I'm not telling you to upgrade on hope and dream, I DO NOT GIVE A FUCK. Very hard concept to grasp.

I want to see the whitepaper

I want to see the first tech implementations

You don't? Well move on and don't @ me.

Because much like this forum was arguing in the early 2000s about the stupid fucking requirement to change graphics cards over programmable shaders and the first DirectX to support them, eventually this is the future of rendering, with ALL vendors in on it: Nvidia/AMD/Intel/Microsoft are on it with the HLSL team, and it'll affect everything in the coming years. Future consoles included. It could be a sneak peek at what consoles do next gen.
 
Who says I'm saying now is the time to upgrade? Not me. How many times do I have to say it?

drive person GIF


Look

Been here since 1999. I was on the Beyond3D forums participating in GPU talk back when ATI was making the 2D Mach series cards.

I'm interested in TECH. There's something under the hood of Blackwell that zero, zero games can use. I'm not telling you to upgrade on hope and dream, I DO NOT GIVE A FUCK. Very hard concept to grasp.

I want to see the whitepaper

I want to see the first tech implementations

You don't? Well move on and don't @ me.

Because much like this forum was arguing in the early 2000s about the stupid fucking requirement to change graphics cards over programmable shaders and the first DirectX to support them, eventually this is the future of rendering, with ALL vendors in on it: Nvidia/AMD/Intel/Microsoft are on it with the HLSL team, and it'll affect everything in the coming years. Future consoles included. It could be a sneak peek at what consoles do next gen.
I don't really care about your tech forum credentials.

Giant companies with a practical monopoly will always seek the path of least resistance when it comes to innovation. Intel set the CPU market back 5 years when they decided to sit on their thumbs until AMD caught them.
 

SlimySnake

Flashless at the Golden Globes
UE5 is still raster and a ton of software level rendering, what are you even talking about.

UE5 is transitioning from Nanite plus primary mesh shaders toward more and more mesh shaders to begin with, so accelerating that over software is pretty much in line with Nvidia's and AMD's recommendations on mesh shader usage, and RTX Mega Geometry is going to use that Nanite/mesh geometry so the geometry doesn't have to be simplified for the BVH.

The new pipeline will benefit UE5 or UE6 whenever they make the move. The API will support it and all vendors. I repeat, all vendors are in.

This is in UE5.



But sure, UE will not follow a massive movement with Nvidia/AMD/Intel/Microsoft...

UE5 is both software and hardware accelerated. Hardware-accelerated Lumen utilizes the hardware RT cores and is literally ray tracing. Now, Black Myth and Stalker are not using hardware Lumen, but Black Myth is using path tracing from Nvidia and still only saw an 11% increase. Alan Wake 2's PT mode has zero upgrades at 1440p and only a 10% improvement at 4K thanks to the higher bandwidth. Cyberpunk has a 1 fps improvement in ray tracing at 1440p, 3 fps total at 4K.

Gamer unboxed's 1440p RT average was literally 5% faster. I highly doubt hardware Lumen games like Silent Hill 2 and the upcoming UE5.5 hardware Lumen games this year will perform much better than actual path-traced games.

Is the future this amazing RTX AI mega geometry no one is using right now? I don't know. We will see. I don't think any dev will invest in that until next gen, and that's only if consoles get it.
 

Buggy Loop

Member
I don't really care about your tech forum credentials.

Giant companies with a practical monopoly will always seek the path of least resistance when it comes to innovation. Intel set the CPU market back 5 years when they decided to sit on their thumbs until AMD caught them.

Oh, so Nvidia will stagnate, the others won't have to put silicon in, and they'll just become kings of raster :messenger_tears_of_joy:

[image]


Sure buddy. With AMD UDNA not even aiming to outperform a 5090, two gens out.

Charlie Day Ok GIF


Is the future this amazing RTX AI mega geometry no one is using right now? I don't know. We will see. I don't think any dev will invest in that until next gen, and that's only if consoles get it.

Shouldn't have to wait long


" Alan Wake 2 is set to get a massive DLSS 4 and RTX Mega Geometry update to coincide with the GeForce RTX 50 Series launch."

Every future UE5 game supporting path tracing will probably use it. Why would you not? It'll be in the SDK; you can literally send the Nanite mesh into the RT BVH structure rather than the fallback mesh.

Neural shaders are more of an unknown because they are API dependent; DirectX is not ready, and they're only supported through NVAPI as of now.

And again, because someone will surely quote me to shitpost: I'm curious about the above tech regardless of whether it takes 2, 4, or 6 years for it to become more popular. But it is the future of rendering. Whether it's at the 6000 series or 7000 series that upgrading makes sense really isn't what's interesting for me here; you'll make your own decision 🤷‍♂️
 

Buhaj

Member
Does AMD or Nvidia make a difference in photo editing? Because non-gaming workflows seem like the only reason to pick Nvidia, apart from ray tracing.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I'm keeping my 4080s until the 5080s it seems

The RTX 5080 uses the entire GB203.
Unless they make a 5080 Ti that uses GB202, the 5080 is as big as a non-Ti 5080 is gonna get.

And honestly, with the size of GB202, it's highly unlikely we would even get a 5080 Ti.
They would use GB202 on the 5090 and 5090D.

[image]
 

GloveSlap

Member
I need a new card, but I might just wait and see what the Super variants bring later. I'd be a lot more comfortable with 18-20GB of VRAM, so hopefully they add that in the next round of cards.
 

AFBT88

Member
I wish people would stop acting like everyone is coming from a 4080. You aren’t meant to upgrade your card every gen.
If I can find a 5080 I’ll upgrade from my 3080 but not gonna try super hard
You're right. But I'm still disappointed that there's not a good generational uplift.
 
The 4090 seems like a good buy if prices drop to about $1k, but if multi frame gen is good it's a tough call. The size reduction also makes things even more complicated.
 
So far all the reviews have been negative. Here's a positive one.

They definitely improved the architecture. The RTX 5080 is able to sustain 3250MHz with ease at lower voltage compared to Ada, and with such an OC the 5080 isn't that far behind the RTX 4090.

 
Yikes, I figured 15-20% improvement was a given. It will really be something if the 5070 is weaker than a 4070s, as someone suggested.
 
The RTX 5000 series really isn't a new generation of GPUs, it's just a refresh.
We'll have to wait another 2 years, before having a true next generation of GPUs.
But on a positive note, the RTX 4000 will still be a great GPU line for quite a while.
So anyone with one, doesn't have much of a reason to spend more money.
It is a new generation; y'all are just coping because in your heads you're living in the Moore's Law days. You're not. It's over, Moore's Law is dead. Next gen will be better but more expensive, and you will still have people crying. This gen the gains are small but the cards are cheaper, and you still have people crying. You will never see a Pascal-level price/performance leap ever again as long as we still use silicon.
 

Gaiff

SBI’s Resident Gaslighter
It is a new generation; y'all are just coping because in your heads you're living in the Moore's Law days. You're not. It's over, Moore's Law is dead. Next gen will be better but more expensive, and you will still have people crying. This gen the gains are small but the cards are cheaper, and you still have people crying. You will never see a Pascal-level price/performance leap ever again as long as we still use silicon.
The 5080 having 50% of the cores of the 5090 has nothing to do with Moore's Law being dead. This isn't a 5080. It's a 5070 Ti at best.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
So far all the reviews have been negative. Here's a positive one.

They definitely improved the architecture. The RTX 5080 is able to sustain 3250MHz with ease at lower voltage compared to Ada, and with such an OC the 5080 isn't that far behind the RTX 4090.



Even with his positive spin (because FrameChaser always has to go against the grain):
That's at best ~17% gen-on-gen in games?
That is NOT good.


So literally under what most reviewers came up with on average.
Makes me wonder if he finally decided to go full shill.
Because I don't get how he can say you're getting a massive uplift at ~17% when his 4080 review was negative even though that was an uplift of 35-40%.

And:
3000mhz @ 350w

is equal in clock/voltage efficiency to

3250mhz @ 375w

So the architectural advances aren't exactly something to boast about.
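For what it's worth, running the quoted figures (taking them at face value as the sustained operating points for the two cards), the clocks-per-watt do land in roughly the same place:

```python
# Back-of-envelope check on the clocks-per-watt claim, using the figures quoted above.
ada_clock_mhz, ada_power_w = 3000, 350   # quoted Ada (4080-class) OC operating point
bw_clock_mhz, bw_power_w = 3250, 375     # quoted 5080 OC operating point

print(f"clock gain:   {bw_clock_mhz / ada_clock_mhz - 1:+.1%}")  # about +8.3%
print(f"power gain:   {bw_power_w / ada_power_w - 1:+.1%}")      # about +7.1%
print(f"MHz per watt: {ada_clock_mhz / ada_power_w:.2f} vs {bw_clock_mhz / bw_power_w:.2f}")  # 8.57 vs 8.67
```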


Then he has the audacity to say the 5080 makes the 4080S look like an xx70-class card.
Miss me with that shit:
[benchmark image]
 

dcx4610

Member
I'm still interested despite the reviews since I have a 2000 series and have been holding off upgrading for a while. Anyone buying a new power supply? Any good recommendations? I'd like to get an ATX 3.1 unit, but they seem to be pretty hard to find, with most still being 3.0. I'm leaning towards the Asus Aura Gold but open to suggestions.
 

peish

Member
They want to skip the RT phase and go straight to path tracing everything. Right now it's mostly RT reflections because AMD's efforts since RDNA 2 and the consoles are holding things back.

Not saying they'll manage to get many games to go with the new neural models but much like Turing was a departure, Blackwell is a departure from Ada/Ampere rendering techniques.



On top of new tensor cores, the programmable shaders can now also be small AI networks.

There's no whitepaper on Blackwell as of now.

But the DirectX dev blog talks a bit about it.


And it's not just a fever dream of Nvidia's. AMD & Intel are also on board and working with the HLSL team. That is the future of rendering.

Without it, I assume that neural shaders are just not happening. Turing effectively made the pipeline a compute one with mesh shaders in its day, and now Blackwell is making it a neural one. They can probably run on older RTX cards but with a performance impact, as I don't think those have the TOPS for it. You get offline-quality shader rendering in real time, and compression at a 6:1 ratio or better.

There's only a technical brief for the AI server chips as of now, and it goes more into rack-vs-rack comparisons than a deep dive into the architecture, but we can gather that the new tensor cores can also run FP4 and now support acceleration for Mixture-of-Experts (MoE) models, much like we see with OpenAI o1 & DeepSeek R1.

We have no idea what the 4th-gen RT cores bring, but they're likely tailored for neural radiance cache path tracing.



To me it is. The RTX Mega Geometry demo and neural shaders, to be more specific. It might be less flashy than the Marbles demo, but when you understand what's really going on behind the Blackwell demo, it's a game changer.



RTX Mega Geometry is the first and most interesting part imo. But also the neural shaders, and later the demo of ReSTIR + Mega Geometry @ 31:45.

Geometry has never been handled like this before. Nvidia already presented mesh shaders in the past, Alan Wake 2 being the first game to use them, but now it seems they are able to stream in clusters of meshlets in real time and change geometry without doing a super-heavy BVH recalculation for path tracing. Geometry stays in the pipeline without a whole heavy round trip from memory to CPU to GPU, etc. It avoids high-latency geometry calls from system memory and stays local to the GPU.

What does this mean? You know how games always have a dumbed down geometry to help BVH traversal speeds for path tracing or ray tracing?

That's gone.

You can literally throw the entire Nanite mesh, or a game's mesh-shader geometry, into the path tracing. It's unheard of.

Neural radiance cache path tracing was in the tech papers back in 2021, and they identified bottlenecks then whose removal would boost the rendering speed of such a solution, so I imagine Blackwell is optimized for it. Half-Life 2 RTX is said to use it, and they'll likely be pushing this tech for all path tracing collaborations in the future. It's a lot less noisy than previous path tracing solutions. Combined with Mega Geometry, which allegedly traces up to 100x more triangles than previous solutions, it means path tracing should be the norm a few years down the line.
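(To make the "no super-heavy BVH recalculation" point above concrete, here's a deliberately simplified toy sketch. It's plain Python, not DXR/Vulkan RT and not NVIDIA's actual cluster API, and the cluster sizes and two-level hierarchy are made up: the idea is just that when geometry stays grouped in clusters, a local change only means refitting the few boxes that cover it instead of touching every triangle in the scene again.)

```python
# Toy sketch of "refit the changed clusters" vs "rebuild everything".
import numpy as np

rng = np.random.default_rng(1)
NUM_CLUSTERS, TRIS_PER_CLUSTER = 1024, 128
# Each "triangle" is reduced to a single representative point to keep the sketch short.
clusters = [rng.uniform(i, i + 1.0, size=(TRIS_PER_CLUSTER, 3)) for i in range(NUM_CLUSTERS)]

def aabb(points):
    """Axis-aligned bounding box of a cluster's geometry."""
    return points.min(axis=0), points.max(axis=0)

# Two-level hierarchy: one box per cluster (a root box over these would sit on top).
cluster_boxes = [aabb(c) for c in clusters]

def full_rebuild(clusters):
    """Naive path: recompute every cluster box, touching every triangle in the scene."""
    return [aabb(c) for c in clusters], NUM_CLUSTERS * TRIS_PER_CLUSTER

def refit_dirty(cluster_boxes, clusters, dirty):
    """Cluster path: only the edited clusters get their boxes recomputed."""
    for i in dirty:
        cluster_boxes[i] = aabb(clusters[i])
    return cluster_boxes, len(dirty) * TRIS_PER_CLUSTER

# Animate a single cluster (think: one streamed-in group of meshlets moves this frame).
clusters[42] = clusters[42] + np.array([0.0, 0.25, 0.0])

_, touched_rebuild = full_rebuild(clusters)
_, touched_refit = refit_dirty(cluster_boxes, clusters, dirty=[42])
print(f"triangles touched: full rebuild = {touched_rebuild}, cluster refit = {touched_refit}")
# full rebuild = 131072, cluster refit = 128
```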


Idk mate. Nvidia hasn't shown a good Neural Shaders on/off demo to make me believe.

Mega Geometry seems to run on the RT cores, so Nanite tech is able to get faster RT out of it. The demo was clear and understandable.

But neural shaders? Sounds like they're done by the tensor cores but still need to pass through the CUDA and RT cores. We're already using the tensor cores for DLSS; will they have enough capacity? Will future RTX GPUs be all tensor cores? 🤷‍♀️

Like GPT: using AI for coding is mainly for quick results that you then fact-check. So instead of the CUDA/RT cores going through thousands of lines of lazy-ass code, the tensor cores are able to send compact code for processing. But like AI so far, there is hallucination; how will that work in real-time gaming? 🤷‍♀️

Rather than writing complex shader code to describe these functions, developers train AI models to approximate the result that the shader code would have computed. Neural shaders are set to become the predominant form of shaders in games, and in the future, all gaming will use AI technology for rendering.

The potential applications for neural shaders are not yet fully explored, which means more exciting features for faster and more realistic (or stylized) real-time rendering lie ahead.
 

xenosys

Member
Damn, this was being hyped up as some sort of 4090 competitor a month or two ago, and now it doesn't even fall within the 4090's ballpark in rasterization/RT performance benchmarks.

I can't wait for the 5070 review ... :messenger_tears_of_joy: :messenger_tears_of_joy:
 

Macattk15

Member
Probably gonna try to grab one... I usually go every 2 gens... going from a 3080 to a 5080 should be noticeable.

No rush though. I'll wait until I can get one at retail prices.
 
Even with his positive spin (because FrameChaser always has to go against the grain):
That's at best ~17% gen-on-gen in games?
That is NOT good.


So literally under what most reviewers came up with on average.
Makes me wonder if he finally decided to go full shill.
Because I don't get how he can say you're getting a massive uplift at ~17% when his 4080 review was negative even though that was an uplift of 35-40%.

And:
3000mhz @ 350w

is equal in clock/voltage efficiency to

3250mhz @ 375w

So the architectural advances aren't exactly something to boast about.


Then he has the audacity to say the 5080 makes the 4080S look like an xx70-class card.
Miss me with that shit:
[benchmark image]
A real 5080 should beat the RTX 4090, so we can say the card disappoints, but the RTX 5080 still offers a nice performance boost compared to the RTX 4080 (and especially compared to the 4070 Ti 12GB, which was supposed to be the weaker RTX 4080).

My RTX 4080S would never come close to the RTX 4090, even with an OC. The RTX 5080 is close without an OC and would probably beat the stock 4090 with a 12% OC.

[Dragon Age: The Veilguard 4K benchmark chart]
 

G-DannY

Member
A real 5080 should beat the RTX 4090, so we can say the card disappoints, but the RTX 5080 still offers a nice performance boost compared to the RTX 4080 (and especially compared to the 4070 Ti 12GB, which was supposed to be the weaker RTX 4080).

My RTX 4080S would never come close to the RTX 4090, even with an OC. The RTX 5080 is close without an OC and would probably beat the stock 4090 with a 12% OC.

[Dragon Age: The Veilguard 4K benchmark chart]

cherry picking at its finest
 

PeteBull

Member
Fun meme, but we know for sure the 5090 is solid and will fly off the shelves. Top-end PC buyers aren't price sensitive at all, and they get a roughly 35% increase over the previous best-in-slot card, the 4090, which even in January started at $2,500 for the basic, worst-cooled models ;)
The 5090 will sell every unit even at well over $3k street price, which is what it goes for even in the US; outside the US, many models of that card will be close to $4k ;X
 