
RTX 5080 | Review Thread

Gaiff

SBI’s Resident Gaslighter
Agreed, but with my 7900xt I find FSR to be good… not great, but good. If we get some level of FSR4 AI upscaling on RDNA3, then i’ll still consider it a massive win.

My use case doesn’t call for RT when my backlog is RDR2, Days Gone, Dying Light 1, Witcher 3, Deus Ex: Mankind Divided, etc. People like me would absolutely look at the XTX as a viable option if running an old GPU.
7900 XT and XTX are both fine. I was saying that it's not 7900 XTX owners specifically who should be laughing. This card is a dud. Let's all laugh at it.
 

hinch7

Member
Lol abysmal. It's no wonder they kept prices under 1K. Actually thinking of just holding on to my 4070 Ti unless AMD does something truly great with the 9070 XT.
 

Buggy Loop

Member
I still think raster vs raster comparisons are kind of pointless; raster is no longer the future of rendering. Blackwell is a whole reinvention of the pipeline to go neural. Not a single game is there yet, with only Alan Wake 2 coming soon with mega geometry.

Time to upgrade? Maybe not. When Turing released, it was a big question mark what the ML & RT cores would bring to the table, and it wasn't evident until Ampere or even Ada. Maybe by the time the 6000 or 7000 series releases we'll understand the changes under the hood of the 5000 series. But throwing a whole architecture under the bus because old-school raster is only +x% is a bit much. That's what reviewers have to work with as of now, so fine, but it's missing the big picture. It'll be fun to revisit the series later on with more games implementing neural features.

The mega geometry demo is out of this world. The day you can take the entire Nanite mesh from a UE5 game and path trace it without simplifications is when you'll see a gap. Neural shaders & neural radiance cache path tracing too, one day. Half-Life 2 RTX is poised to bring neural radiance cache path tracing. We just don't have a game so far that takes all three together.
 
Last edited:

CLW

Member
Then enjoy your "real" frames and no ray tracing at 4K 25fps.

Confused Rooster Teeth GIF by Achievement Hunter


You are either (1) a shill and/or (2) an abject moron/liar

If 1 just buy the 5070 and get that guaranteed 4090 performance. If 2 I'm sorry.
 

FireFly

Member
I still think raster vs raster comparisons are kind of pointless; raster is no longer the future of rendering. Blackwell is a whole reinvention of the pipeline to go neural. Not a single game is there yet, with only Alan Wake 2 coming soon with mega geometry.

Time to upgrade? Maybe not. When Turing released, it was a big question mark what the ML & RT cores would bring to the table, and it wasn't evident until Ampere or even Ada. Maybe by the time the 6000 or 7000 series releases we'll understand the changes under the hood of the 5000 series. But throwing a whole architecture under the bus because old-school raster is only +x% is a bit much. That's what reviewers have to work with as of now, so fine, but it's missing the big picture. It'll be fun to revisit the series later on with more games implementing neural features.

The mega geometry demo is out of this world. The day you can take the entire Nanite mesh from a UE5 game and path trace it without simplifications is when you'll see a gap. Neural shaders & neural radiance cache path tracing too, one day. Half-Life 2 RTX is poised to bring neural radiance cache path tracing. We just don't have a game so far that takes all three together.
What's strange is it seems to do worse relative to the 4080 in RT than in raster. That probably changes in PT, but from what Rich said, Nvidia seems to have embargoed PT performance.
 

Gaiff

SBI’s Resident Gaslighter
Confused Rooster Teeth GIF by Achievement Hunter


You are either (1) a shill and/or (2) an abject moron/liar

If 1 just buy the 5070 and get that guaranteed 4090 performance. If 2 I'm sorry.
Read very carefully what the post says. Slooooowly and you will understand.
 

AtomicStarving

Gold Member
Power consumption pretty much the same.


I'm still gonna upgrade from Gigabyte 4080 GAMING OC with vapor chamber. Just so I don't lose big money keeping old f hardware. 🤡
 

Buggy Loop

Member
What's strange is it seems to do worse relative to the 4080 in RT than in raster. That probably changes in PT, but from what Rich said, Nvidia seems to have embargoed PT performance.

It's possible they've already written off existing RT & PT as legacy, since future games will probably not even use them anymore. Certainly not Monte Carlo-style path tracing like early Quake 2 RTX, nor ReSTIR like Cyberpunk 2077. NRC is much faster. So if you were building a neural monster for future rendering, I can see them not caring to improve old methods that will likely never be implemented again.

I'm not trying to sell Blackwell to anyone, really. I'm not even sure I'll buy one this gen, but we're missing the picture of this architecture.

We can't compare like we did with raster-to-raster gens anymore. Blackwell is a huge departure in how the entire rendering pipeline works, but it'll only flex when you actually use heavy neural models. The shader pipeline has barely changed since programmable shaders arrived over two decades ago. Now the pipeline can be used entirely as a neural network, something not a single game takes advantage of as of this date. Hell, the API is Nvidia-only for now; DirectX support is coming but still not there.

I'm not sure Ada will age well with RTX mega geometry, neural radiance cache, and neural shaders, for example. But by the time games really flex these technologies, the 6000 series will probably be out for those Ada owners. A bit like Turing being the start of RT & ML upscaling, but we didn't see good implementations until years later.
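The "Monte Carlo-style path tracing is noisy" point can be sketched with a toy estimator. This is purely illustrative (nothing from Nvidia's stack): error only shrinks with the square root of the sample count, which is why 1-sample-per-pixel path tracers like early Quake 2 RTX lean so hard on denoising, and why caching approaches such as NRC pay off.

```python
import math
import random

def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def brightness(x):
    # Stand-in for the light arriving along one random path.
    return math.sin(math.pi * x)  # true integral over [0, 1] is 2/pi

rng = random.Random(42)
for n in (1, 16, 256, 4096):
    # Repeat the estimate 200 times to measure its spread (the "noise").
    runs = [mc_estimate(brightness, n, rng) for _ in range(200)]
    mean = sum(runs) / len(runs)
    std = math.sqrt(sum((r - mean) ** 2 for r in runs) / len(runs))
    print(f"{n:5d} samples/pixel -> noise (std dev) {std:.4f}")
```

Quadrupling the sample count only halves the noise, so brute-force convergence is brutally expensive, and anything that reuses earlier work (caches, resampling) is a big win.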
 
Last edited:

Seeing the latency increase numbers there for MFG, and even the old 2x mode, all I can say is that Nvidia can seriously fuck off with their "FG = performance" marketing.
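Why frame generation raises latency can be sketched with a deliberately simplified model (numbers are illustrative, not Nvidia's measured figures): an interpolator needs the *next* rendered frame before it can emit the in-between ones, so the newest real frame is held back roughly one base frame interval, no matter how many frames get inserted.

```python
def frame_gen_model(base_fps, gen_factor):
    """Toy model of interpolation-based frame generation.

    Displayed frame rate multiplies by gen_factor, but input is still
    sampled at the base rate and the newest real frame is delayed by
    about one base frame interval so in-betweens can be generated.
    """
    base_frame_ms = 1000.0 / base_fps
    displayed_fps = base_fps * gen_factor
    added_latency_ms = base_frame_ms  # the hold-back does not shrink with MFG
    return displayed_fps, added_latency_ms

fps, extra = frame_gen_model(60, 4)  # 4x MFG on a 60 fps base
print(f"{fps:.0f} fps shown, ~{extra:.1f} ms added latency")
```

The model makes the marketing complaint concrete: the fps counter quadruples while responsiveness gets slightly worse, which is why "FG = performance" is misleading.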
 
Last edited:

rm082e

Member
You believe you'll be able to acquire a 5080 for $1,200 anytime soon?

No, it's going to be summer at least before stock is plentiful. But again, someone who was looking to spend that much wasn't open to spending twice that.

People like myself who are open to upgrading are much more likely to stick to the top end of their budget and wait. We saw this with the 2000 series of cards when they were also a minor performance bump over the 1000 series. The difference this time is we're waiting two generations instead of one.
 

peish

Member
It's possible they've already written off existing RT & PT as legacy, since future games will probably not even use them anymore. Certainly not Monte Carlo-style path tracing like early Quake 2 RTX, nor ReSTIR like Cyberpunk 2077. NRC is much faster. So if you were building a neural monster for future rendering, I can see them not caring to improve old methods that will likely never be implemented again.

I'm not trying to sell Blackwell to anyone, really. I'm not even sure I'll buy one this gen, but we're missing the picture of this architecture.

We can't compare like we did with raster-to-raster gens anymore. Blackwell is a huge departure in how the entire rendering pipeline works, but it'll only flex when you actually use heavy neural models. The shader pipeline has barely changed since programmable shaders arrived over two decades ago. Now the pipeline can be used entirely as a neural network, something not a single game takes advantage of as of this date. Hell, the API is Nvidia-only for now; DirectX support is coming but still not there.

I'm not sure Ada will age well with RTX mega geometry, neural radiance cache, and neural shaders, for example. But by the time games really flex these technologies, the 6000 series will probably be out for those Ada owners. A bit like Turing being the start of RT & ML upscaling, but we didn't see good implementations until years later.

Seems too quick for Nvidia to put RT under legacy... we haven't even touched PT. RT so far has mostly been reflections.

How does the neural pipeline work? I am scratching my head.

Their new 5090 demos didn't look as nice as their cars or Marbles demos, imo.

I was hoping the 5090 would follow up with a new Marbles demo in full PT.


 

FingerBang

Member
You believe you'll be able to acquire a 5080 for $1,200 anytime soon?
I seriously think it will be widely available. Who will fight for a card with the same price and performance as before? The stock might be limited for a month or two, but this card will be widely available at MSRP-ish until Nvidia decides to stop producing it.

The 5090 will be a completely different story.
 

Gaiff

SBI’s Resident Gaslighter
Wow, terrible gen. Makes Turing look good. There had to be some huge miss here; it even clocks slower than the 4080. What happened?
The only Turing card that aged well is the 2080 Ti. Those 11GB, DLSS, and decent RT performance are still good to this day. It just took a while to take off. Everyone else with their 8GB is dying a horrible death.
 

Buggy Loop

Member
Seems too quick for Nvidia to put RT under legacy... we haven't even touched PT. RT so far has mostly been reflections.

They want to skip the RT phase and go straight to path tracing everything. It's mostly RT reflections today because AMD's efforts since RDNA 2, and the consoles, are holding things back.

Not saying they'll manage to get many games to go with the new neural models but much like Turing was a departure, Blackwell is a departure from Ada/Ampere rendering techniques.

How does the neural pipeline work? I am scratching my head.

On top of new tensor cores, the programmable shaders can now also be small AI networks.
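That idea, a shader that is itself a small neural network, can be sketched as a tiny MLP mapping shading inputs (UVs, view direction, etc.) to a color. The weights below are random placeholders, purely for illustration; a real neural material would ship trained weights and run on tensor hardware, not in Python.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """Random placeholder weights; a trained neural material would ship real ones."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def forward(layer, x):
    w, b = layer
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def relu(v):
    return [max(0.0, x) for x in v]

# Two hidden layers of 16 units: the kind of "small AI network" a shader
# core could evaluate per-pixel instead of a hand-written BRDF.
l1 = make_layer(5, 16)
l2 = make_layer(16, 16)
l3 = make_layer(16, 3)

def neural_material(uv, view_z):
    x = [uv[0], uv[1], view_z, uv[0] * uv[1], 1.0]  # shading inputs + features
    h = relu(forward(l1, x))
    h = relu(forward(l2, h))
    r, g, b = forward(l3, h)
    # Squash to [0, 1] like a color output.
    return tuple(1.0 / (1.0 + math.exp(-c)) for c in (r, g, b))

print(neural_material((0.3, 0.7), 0.5))
```

The appeal is that a network like this can approximate a filmed or offline-rendered material far more compactly than baked texture layers, which is where the compression claims come from.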

There's no white paper on Blackwell as of now, but the DirectX devblog talks a bit about it.


And it's not just a fever dream of Nvidia's. AMD & Intel are also on board and working with the HLSL team. That is the future of rendering.

Without it, I assume neural shaders just aren't happening. Turing effectively made the pipeline a compute one with mesh shaders in its day, and now Blackwell is making it a neural one. It can probably run on older RTX cards, but with a performance impact, as I don't think they have the TOPS for it. You get offline-quality shading in real time, with compression at a 6:1 ratio or better.

There's only a technical brief for the AI server chips as of now, and it goes more into rack-vs-rack comparisons than a deep dive into the architecture, but we can gather that the new tensor cores can also run FP4 and now support acceleration for Mixture-of-Experts (MoE) models, much like we see with OpenAI o1 & DeepSeek R1.

We have no idea what the 4th gen RT cores bring, but they're likely tailored for neural radiance cache path tracing.

Their new 5090 demos didn't look as nice as their cars or Marbles demos, imo.

To me they do. The RTX mega geometry demo and the neural shader demo, to be more specific. They might be less flashy than the Marbles demo, but when you understand what's really going on behind the Blackwell demo, it's a game changer.



RTX mega geometry is the first and most interesting part, imo. But also the neural shader demo, and later the demo of ReSTIR + mega geometry @ 31:45.

Geometry has never been handled like this before. Nvidia already presented mesh shaders in the past, Alan Wake 2 being the first game to use them, but now it seems they can stream in clusters of meshlets in real time and change geometry without a super heavy BVH recalculation for path tracing. Geometry stays in the pipeline without a whole heavy round trip from memory to CPU to GPU, etc. It avoids high-latency geometry calls to system memory and stays local to the GPU.

What does this mean? You know how games always have dumbed-down geometry to help BVH traversal speeds for path tracing or ray tracing?

That's gone.

You can literally throw the entire Nanite mesh, or a game's mesh-shader geometry, into the path tracer. It's unheard of.

Neural radiance cache path tracing was in tech papers back in 2021, and they identified bottlenecks back then whose removal would boost the rendering speed of such a solution, so I imagine Blackwell is optimized for it. Half-Life 2 RTX is said to use it, and they'll likely push this tech in all future path tracing collaborations. It's a lot less noisy than previous path tracing solutions. Combined with mega geometry, which allegedly traces up to 100x more triangles than previous solutions, path tracing should be the norm a few years down the line.
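The "change geometry without a heavy BVH rebuild" idea can be sketched with a toy two-level acceleration structure: per-cluster bounding boxes (BLAS-like) under one top-level box (TLAS-like). When one cluster moves, only its box and the top-level bounds are recomputed rather than rebuilding everything. The names and structure here are illustrative, not Nvidia's actual API.

```python
def aabb(points):
    """Axis-aligned bounding box of a set of 2D points."""
    xs, ys = zip(*points)
    return (min(xs), min(ys)), (max(xs), max(ys))

def union(a, b):
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return (min(ax0, bx0), min(ay0, by0)), (max(ax1, bx1), max(ay1, by1))

# Two geometry clusters, each with its own bottom-level box.
clusters = {
    "wall": [(0, 0), (4, 3)],
    "crate": [(5, 0), (6, 1)],
}
blas = {name: aabb(pts) for name, pts in clusters.items()}

def refit(changed, new_points):
    """Update only the changed cluster's box, then re-merge the top level.

    Cost is proportional to the edited cluster, not the whole scene --
    the untouched clusters' boxes are reused as-is.
    """
    blas[changed] = aabb(new_points)
    top = None
    for box in blas.values():
        top = box if top is None else union(top, box)
    return top

print(refit("crate", [(5, 0), (7, 2)]))  # top-level box grows to cover the move
```

A real ray tracer's BVH is a deep tree of such boxes in 3D, but the principle is the same: localized refits keep per-frame geometry updates cheap enough to trace full-detail meshes.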
 
Last edited:
As predicted, AI features have now superseded actual rasterisation. Nvidia can now sell the same card for 4 years instead of 2.

AMD can actually gain some market share by pricing the 9070 at the right level, so I assume they won't.
 
Enjoy with PT!




AMD said they're gonna try it. As we know, they will do it! Same as it was with AFMF, which was RDNA3-only.
With very high texture settings instead of supreme, DLSS Performance (which still looks better than native TAA), and DLSS FG, the game would run at around 90-100fps. Perfectly playable on a gamepad and a decent experience on M+KB too.

But the 4080S and 5080 run PT much better, and at 1440p. If I used PT medium instead of high, I would get 120-150fps. At the moment no AMD card can offer a similar experience in Black Myth: Wukong. The RX 7900 XTX has similar performance, but with Lumen.
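For context on what "DLSS Performance at 1440p" actually renders, here is the resolution arithmetic. The per-axis scale factors below are the commonly cited defaults, not guaranteed for every title:

```python
# Commonly cited DLSS per-axis render scales (approximate; titles can
# override these, so treat the table as an assumption).
DLSS_SCALES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w, out_h, mode):
    """Internal render resolution the GPU actually shades before upscaling."""
    s = DLSS_SCALES[mode]
    return round(out_w * s), round(out_h * s)

print(internal_resolution(2560, 1440, "Performance"))  # (1280, 720)
```

So "DLSS Performance at 1440p" shades only a quarter of the output pixels, which is why path tracing becomes playable at all at these settings.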

PeWu39i.jpeg
 

MikeM

Member
I still think that raster vs raster is kind of pointless, it is no longer the future of rendering. Blackwell is a shift in a whole reinvention of the pipeline to go neural. Not a single game is there yet, with only Alan Wake 2 coming soon with mega geometry.

Time to upgrade? Maybe not, just like when Turing released it was a big question mark with ML & RT cores bring to the table and it was not evident until Ampere and Ada even, maybe by the time 6000 or 7000 series release we'll understand the changes under the hood of the 5000 series. But throwing a whole architecture under the bus because old ass raster is +x% is a bit too much. That's what they have to work with as of now so ok, but its missing the big picture. It'll be fun to revisit the series later on with more games implementing neural stuffs.

The mega geometry demo is out of this world. The day you can take the entire nanite mesh from UE5 game and path trace it without simplifications is when you'll see a gap. Neural shaders & neural radiance cache path tracing too one day. Half-Life 2 RTX is poised to bring neural cache radiance path tracing in. We just don't have a game so far that takes all 3 together.
Raster still forms the baseline FPS. It will absolutely matter for the foreseeable future.
 

PeteBull

Member
Interestingly, Brian from Tech Yes City got more favorable performance uplifts.

I love the guy, but his suite of games is heavily cherry-picked, with only one game showing a poor fps increase (6% in Marvel Rivals).
Best to trust the 11% at 4K that HUB claimed vs the 4080S.
Now if we at least got 24GB of VRAM... but no, still a measly 16GB, which will not be enough by the time the next-gen consoles land in ~3 years, at least at 4K, which $1K (MSRP; the street price will be higher, close to $1,200) should provide.
 

Buggy Loop

Member
Raster still forms the baseline FPS. It will absolutely matter for the foreseeable future.

No, that's not the future of rendering.

Nvidia, AMD, Intel, Qualcomm & Microsoft don't think so. I trust them.

The whole pipeline is being reinvented. Shaders & geometry are completely changing from the traditional pipeline to a neural one.
 
Last edited:

MikeM

Member
No, that's not the future of rendering.

Nvidia, AMD, Intel, Qualcomm & Microsoft don't think so. I trust them.

The whole pipeline is being reinvented. Shaders & geometry are completely changing from the traditional pipeline to a neural one.
Keyword there was foreseeable future. Long term, who knows.
 

Buggy Loop

Member
Keyword there was foreseeable future. Long term, who knows.

The old pipeline, programmable shaders, is effectively 24 years old. I think the new pipeline will hold up for quite a long time when all the vendors and APIs are moving there. These things don't change often.
 
Wow, terrible offering. In Europe, people who bought a 4070 Super, 4070 Ti Super, or 4080S months or half a year ago actually got better performance per euro than what people will get from the 5000 series in the coming months, lmao.

SKIP SKIP SKIP!
 

dave_d

Member
This card is legit for NOBODY. If you need to upgrade from an older card just get the 5070ti, or simply wait for the 6000 series.
Admittedly I'm in this camp, since I'm coming from a 3070 and was considering a 4070 Ti Super. (But then I lost my job, which nixed that. Not unemployed anymore, though.) So while not great, the 5070 Ti is pretty much a minor boost over that for slightly cheaper. (We'll see, though.)
 

FingerBang

Member
Did we get any node upgrade post pandemic? Because it seems like we didn't.
We did! They went from Samsung 8nm to TSMC 4nm. The jump was massive... except they pulled a trick with their naming, and, apart from the 4090, everybody got a shittier chip than expected.

The 4080 should have been closer to the 4090. The 4070ti should have been called 4070. The 4070 should have been a 4060, and so on.

EDIT: It's also important to note that AMD will go from 5/6nm to 4nm, so they definitely have the chance of a bigger gain. The issue is going to be pricing.
 
Last edited: