
DF: PC version's dormant commands from consoles remove CPU burden

I don't know why they rushed to release this game so fast. Maybe it was pressure from EA, but it's not like gamers were demanding a sequel right this second. They could've taken more time and launched it in a much better state.

The Star Wars license they held expired on October 14, 2024.

They released the game in April 2023 and dedicated the remaining year to the old-gen version.
 
It is, but that's more because games are overwhelmingly a GPU affair. I have already provided my basis for the statement.

The 9800X3D's single-threaded performance is around 20% better than the 12900K's. And with a 9800X3D there is no chance that important work gets sent to an anemic E-core.

We are talking about games. And the 12900K is over 60% slower than a 9800X3D in Jedi Survivor.

[Jedi Survivor CPU benchmark chart]


I didn't make a statement to the contrary.

Nevertheless, compared with a 5090 it's very unimpressive. A 5090 being bottlenecked by it in a game that heavily uses the CPU is not notable at all by itself.

Prove to me that the 9800X3D is bottlenecking a 5090.

Around half of PC gamers have 8GB or less. It was even more back then.

And those can run the game with this cvar off. The rest of us could very well run with it on and benefit from the better camera smoothness.
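For anyone who wants to experiment, here's a minimal sketch of the usual way UE4 games accept forced cvars: a [SystemSettings] entry in the per-user Engine.ini. Both the config folder name and the cvar name below are placeholders I'm assuming for illustration, not the actual command from the video.

```python
# Minimal sketch: force a UE4 cvar via Engine.ini's [SystemSettings] section.
# ASSUMPTIONS: the "SwGame" config folder and the cvar name are placeholders;
# substitute the real command from the DF video before trying this.
from pathlib import Path

config = Path.home() / "AppData/Local/SwGame/Saved/Config/WindowsNoEditor/Engine.ini"
cvar_line = "r.SomeDormantConsoleCVar=1"  # placeholder, not the real command

# Appending a [SystemSettings] block at the end is usually enough for a quick
# test; if the section already exists, add the line under that header instead.
with config.open("a", encoding="utf-8") as f:
    f.write(f"\n[SystemSettings]\n{cvar_line}\n")
print(f"Appended {cvar_line!r} to {config}")
```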

The 7800X3D is tied with Raptor Lake.

The 7800X3D beats the 14900K in this game by 26%.
 
Interesting find, but I kinda wonder if you're better off just enabling (M)FG on PC instead. Same animation interpolation, lower latency hit, and more benefits with even lower CPU cost.
 
Prove to me that the 9800X3D is bottlenecking a 5090.


The 7800X3D beats the 14900K in this game by 26%.
You made a general statement that it's the second fastest CPU for gaming, not for this specific game. People are memory-holing that RPL = Zen 4 (3D) and ADL = Zen 3 (3D) was the state of the competition before Zen 5 (3D) blew RPL away and ARL ate glue.

Anyway, I've said my part.
 


That is a 5800X3D.

You made a general statement that it's the second fastest CPU for gaming, not for this specific game. People are memory-holing that RPL = Zen 4 (3D) and ADL = Zen 3 (3D) was the state of the competition before Zen 5 (3D) blew RPL away and ARL ate glue.

Anyway, I've said my part.

Overclocking doesn't matter, as that is only for a small minority of enthusiasts.

And even across many games, the 7800X3D is still the second fastest CPU on the market for gaming.
This and many other reviews have already proved it.
And you have only said nonsense that is easily disproven.

[CPU gaming average benchmark chart]
 
It is, but that's more because games are overwhelmingly a GPU affair.

Games are, but with RT on, this game hammers even the most powerful CPUs in the open-world Koboh area.

This game runs on an engine from 2014; if it were optimized, it shouldn't run like crap on 202x CPUs.
 
That is a 5800X3D.
You didn't watch it. The video at the timestamp showed 7800X3D/9800X3D bottlenecking it.

But they actually made a follow-up with a more obvious example.

Here is the 9800X3D bottlenecking at 4K with RT in SM1/Hogwarts.

[SM1/Hogwarts 4K RT benchmark screenshot]

Overclocking doesn't matter, as that is only for a small minority of enthusiasts.
They are equal at stock (PCGH). Sure, if you OC the memory, the 7800X3D has a small lead.

And even across many games, the 7800X3D is still the second fastest CPU on the market for gaming.
Per HUB. I don't use them. But fair if you do.
 
You didn't watch it. The video at the timestamp showed 7800X3D/9800X3D bottlenecking it.

But they actually made a follow-up with a more obvious example.

Here is the 9800X3D bottlenecking at 4K with RT in SM1/Hogwarts.

[SM1/Hogwarts 4K RT benchmark screenshot]

LOL, that is a GPU bottleneck.
How could you make such an obvious mistake?
And why are you showing a graph of average GPU utilization when we are discussing CPUs?

They are equal at stock (PCGH). Sure, if you OC the memory, the 7800X3D has a small lead.

No they are not. In games, at stock, the 7800X3D always wins.

Per HUB. I don't use them. But fair if you do.

You are free to show any other reputable source that places the 7800X3D under the 12900K in gaming.
 
80% GPU utilization in SM1 at 4K with RT.
You think that's a GPU bottleneck.

Have a nice day.

A PCI-e bottleneck. Which that game has a lot of. In fact, Spider-Man on PC is unique, because the BVH processing is done on the CPU and then sent over the PCI-e bus to the GPU, while in all other games it is done in the RT cores, which include dedicated hardware for BVH traversal calculations.
Besides, in case you haven't noticed, we are talking about Jedi Survivor, not about Spider-Man.
So please, feel free to show Jedi Survivor being bottlenecked by a 9800X3D. Maybe we'll have to wait for a 6900 Ultra from Nvidia a couple of years from now. At 1080p.
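To put a rough number on the PCI-e angle, here's a back-of-the-envelope sketch of how per-frame uploads alone can cap the frame rate. The payload size and bus efficiency are assumptions for illustration, not measured figures for Spider-Man.

```python
# Back-of-the-envelope: frame-rate ceiling from bus transfers alone.
# All numbers below are ASSUMED for illustration, not measurements.
PCIE4_X16_BYTES_PER_SEC = 32e9   # ~32 GB/s theoretical for PCIe 4.0 x16
EFFECTIVE_FRACTION = 0.7         # assume ~70% of theoretical is achievable
payload_bytes = 200e6            # assumed 200 MB of CPU-built RT data per frame

transfer_s = payload_bytes / (PCIE4_X16_BYTES_PER_SEC * EFFECTIVE_FRACTION)
print(f"Transfer alone: {transfer_s * 1000:.1f} ms "
      f"-> ~{1 / transfer_s:.0f} fps ceiling before any CPU or GPU work")
# ~8.9 ms of bus time -> ~112 fps ceiling, which is how a game can be
# bus-limited without any single core or the GPU sitting at 100%.
```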
 
A PCI-e bottleneck. Which that game has a lot of. In fact, Spider-Man on PC is unique, because the BVH processing is done on the CPU and then sent over the PCI-e bus to the GPU, while in all other games it is done in the RT cores, which include dedicated hardware for BVH traversal calculations.
Besides, in case you haven't noticed, we are talking about Jedi Survivor, not about Spider-Man.
So please, feel free to show Jedi Survivor being bottlenecked by a 9800X3D. Maybe we'll have to wait for a 6900 Ultra from Nvidia a couple of years from now. At 1080p.
Someone claims that a 9800X3D bottlenecking a 5090 in Jedi Survivor is out of the ordinary.

I claim, and show, that a 9800X3D bottlenecks a 5090 repeatedly in RT games.

You demand I show what the other guy claimed.

(Note: BVH traversal and BVH building are not the same thing.)

Let's just cut this off here. Have a nice day. Thanks for the discussion.
 
Someone claims that a 9800X3D bottlenecking a 5090 in Jedi Survivor is out of the ordinary.

I claim, and show, that a 9800X3D bottlenecks a 5090 repeatedly in RT games.

You demand I show what the other guy claimed.

Let's just cut this off here. Have a nice day. Thanks for the discussion.

Don't run away with excuses. We were always talking about Jedi Survivor. The reality is that the 9800X3D does not bottleneck this game.
And even with RT, the bottleneck is always on the GPU. Except with Spider-Man, which has issues with its BVH implementation.

Also, you never showed reputable benchmarks with the 12900K beating the 7800X3D.
So I got the PCGH benchmarks. Here it is: the 7800X3D beating it by 28% in gaming, a bigger difference than what Hardware Unboxed got.

[PCGH CPU gaming index chart]
 
The 9800X3D slightly bottlenecked the 5090 in Spider-Man 1, but the average frame rate was still a very nice 155 fps (even without FG).


[Spider-Man 1 4K RT screenshot with frame rate and core usage overlay]




Look at core usage. There is not one single thread using more than 84%.
There is something else bottlenecking that game. It's not the CPU. It's probably the PCI-e bus.
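If anyone wants to check that kind of per-thread pattern themselves, here's a small sketch using psutil that samples a running game process twice and prints per-thread load. The executable name is an assumption; change it to whatever the game's process is actually called.

```python
# Sketch: print per-thread CPU load of a running game over a short window,
# to see whether one main/render thread is pegged while the others idle.
import time
import psutil

TARGET = "JediSurvivor.exe"  # ASSUMED process name; adjust for your game
INTERVAL = 5.0               # seconds between the two samples

proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == TARGET)

def cpu_seconds_per_thread(p):
    # psutil reports cumulative user+system time per thread id
    return {t.id: t.user_time + t.system_time for t in p.threads()}

before = cpu_seconds_per_thread(proc)
time.sleep(INTERVAL)
after = cpu_seconds_per_thread(proc)

loads = {tid: (after.get(tid, used) - used) / INTERVAL * 100
         for tid, used in before.items()}
for tid, pct in sorted(loads.items(), key=lambda kv: -kv[1])[:8]:
    print(f"thread {tid}: {pct:5.1f}% of one core")
```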
 
CPUs don't work like GPUs, as they have to stall when they have to wait for data.

Yes, 84% CPU usage is pretty much the CPU at its limit.

If the CPU has to stall for data, then it's a memory bottleneck. Or an API bottleneck. Or a PCI-e bottleneck.
Even UE4, which can cause big CPU bottlenecks, will have one or two main threads close to 100% utilization while the rest of the threads sit at much lower utilization.
But even in that case it's a matter of bad optimization, as UE4 has poor multi-threading capabilities.

Have a nice day.

The 12900K losing by 28% is not a tie. It's a clear defeat.
 
Prove to me that the 9800X3D is bottlenecking a 5090.
The whole argument is kinda pointless. You put the fastest GPU with the fastest CPU, and then either the fastest GPU available is getting framemogged by the CPU or the fastest CPU available is getting framemogged by the GPU.
 
The whole argument is kinda pointless. You put the fastest GPU with the fastest CPU, and then either the fastest GPU available is getting framemogged by the CPU or the fastest CPU available is getting framemogged by the GPU.

The argument he makes is that using this command that Alex found is not feasible in this game, because even a 9800X3D bottlenecks it, so it's pointless to use.
But the 9800X3D can produce over 200 fps in this game at 1080p. And of course, no one will play this game at 1080p on a 5090, so the GPU will always be the component that is more fully used.
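Put differently, the delivered frame rate is roughly the lower of the CPU-limited rate and the GPU-limited rate at the chosen resolution. A tiny sketch with made-up illustrative numbers (not benchmark results):

```python
# Toy bottleneck model: delivered fps ~= min(CPU-limited fps, GPU-limited fps).
# All numbers are ILLUSTRATIVE assumptions, not benchmark results.
cpu_fps = 200                                        # what the CPU can feed
gpu_fps = {"1080p": 280, "1440p": 190, "4K": 100}    # GPU throughput by resolution

for res, fps in gpu_fps.items():
    delivered = min(cpu_fps, fps)
    limiter = "CPU" if cpu_fps < fps else "GPU"
    print(f"{res}: ~{delivered} fps ({limiter}-limited)")
# Only at low resolutions does the CPU become the limiter; at the resolutions
# a 5090 owner actually plays at, the GPU is the wall.
```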
 
The 12900K is not Raptor Lake. That was my point that flew over you twice.

We were talking about the 12900K. You are the one that changed it to Raptor Lake plus a memory OC, because you don't like that the 7800X3D is the second fastest CPU for gaming.
So you have to make up scenarios where the RPL CPU has significantly faster memory than the 7800X3D, just to be able to catch up.
 
Maybe it got renewed because Respawn already said they are doing a third game.

Before the agreement expired, Disney created new deals with other developers (besides EA) to make games in the franchise, so we got other games like Star Wars Outlaws, Galactic Racer, Old Republic, etc.
 