
[NXGamer] Dragon's Dogma 2 - Patch 2.03: Is this really a PS5 Pro-focused update?

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
PCMR are not going to like this

He is using an RX 6800... once he mentions that in the video, they will log off.

Hell, I didn't even bother to watch the video because the summary says he is using a GPU that barely exists.
 

Zathalus

Member
Looks like the reason the 5600X performs relatively poorly in this game is that it really loves more threads. The 3700X outperforms the 5600X by quite a bit in the CPU-limited areas, which is not something you see often. The 3700X can hold mid-50s to 60 fps in the city area, with a 5800X being in the mid-60s and up.
 

Gaiff

SBI’s Resident Gaslighter
I think optimisation and performance profiles are a lot more involved on console than people think. People tend to think of it too much like a PC and get too hung up on single components and their bottlenecks, when it is a far more delicate balancing of frametime on console. Cerny did a talk about optimising to power consumption/power budgets on the base PS5.

boost.png
Obviously, but then the game would also run much better on the regular PS5 if it was simply down to optimization. Either Capcom didn’t give a fuck and didn’t fix the PS5 version or there are bottlenecks that can be overcome with the Pro’s better hardware.
 

jm89

Member
He is using an RX 6800... once he mentions that in the video, they will log off.

Hell, I didn't even bother to watch the video because the summary says he is using a GPU that barely exists.
The 6800 was being recommended by IGN as an equivalent to the PS5 Pro GPU. It's useful to show this isn't really a good GPU to go with if you want to match a PS5 Pro.

 

Gaiff

SBI’s Resident Gaslighter
The 6800 was being recommended by IGN as an equivalent to the PS5 Pro. It's useful to show this isn't really a good GPU to go with if you want to match a PS5 Pro.

He says in the very video that the game is very CPU-bound, at times showing a GPU utilization of only 70%.

As Zathalus pointed out, it could just be that the game likes cores, since the 3700X beats the 5600X. The game also features hardware-accelerated ray tracing, so no doubt the Pro would easily be faster if it leverages it somehow.


CPU Benchmark Dragon's Dogma 2 (1080p, High) GamersNexus.png
 

Three

Gold Member
Obviously, but then the game would also run much better on the regular PS5 if it was simply down to optimization. Either Capcom didn’t give a fuck and didn’t fix the PS5 version or there are bottlenecks that can be overcome with the Pro’s better hardware.
I was agreeing with you regarding the overall system, but that's not what I meant by optimisation; I don't mean fixes. I meant that console optimisation is different, and it's not so much about a single bottleneck holding back your framerate on console. It's a far closer balance, with the engines mostly designed for them. I was just pointing out that the base PS5 is not like a PC, where some hardware might sit idle because the engine wasn't designed around its particular specs and doesn't scale well on them.

Console optimisation is more of a balance, and the PS5 was designed to give more power to the CPU or GPU depending on what sort of performance profile you need for your game/engine. Optimisation is a little more involved in targeting the "whole system", with things like power usage and the TDP. It might mean a less taxed GPU in the Pro could lead to better CPU performance within that TDP. We don't know how much more CPU performance they're getting, because we don't know how they balanced getting the most out of the SoC on the base PS5 when optimising their game for it.
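As a toy illustration of that shared budget (my own sketch with made-up numbers and function names; the real console power-management logic is far more involved):

```python
# Toy model (my own illustration, not Sony's actual algorithm) of a shared
# power budget: within a fixed SoC TDP, power not drawn by the GPU can be
# spent on the CPU, so a lighter GPU load can mean more CPU headroom.
def allocate_power(tdp_watts, gpu_demand_watts, cpu_max_watts):
    # GPU takes what it demands, capped at the total budget.
    gpu_power = min(gpu_demand_watts, tdp_watts)
    # CPU gets the remainder, capped at what the CPU can physically use.
    cpu_power = min(tdp_watts - gpu_power, cpu_max_watts)
    return gpu_power, cpu_power

# A lightly loaded GPU leaves more of the budget for the CPU.
print(allocate_power(200, 150, 60))  # (150, 50)
print(allocate_power(200, 120, 60))  # (120, 60)
```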
 

PandaOk

Banned
Looking at the CPU he used, some of the delta is probably down to the thread difference between the PS5 CPU and a 5600X (which has fewer), but not much.

A completely reworked memory interface, +28% bandwidth, the Tempest engine being doubled in power (remember audio devs complaining that other departments wanted to use it for non-audio tasks like AI, lol), and last of all the CPU boost mode? What the end user considers CPU-dependent will naturally be more performant on the PS5 Pro.

And in some very, very select circumstances, the stronger GPU can help the CPU, via GPGPU algorithms of course.

Granted there are degrees of bottlenecks both with and beyond CPU and GPU that aren’t so binary, but still.
 

Skifi28

Member
So from barely matching the 3600, the CPU can now potentially beat the 5600 in CPU-limited scenarios. Not bad, I'd say.
 

Bojji

Member
So from barely matching the 3600, the CPU can now potentially beat the 5600 in CPU-limited scenarios. Not bad, I'd say.

You can see on the chart above that the 5600 should be close to the 3600 in this game. This game is a mess.

I think it's the most CPU-bottlenecked game in recent times.
 

PandaOk

Banned
You can see on the chart above that the 5600 should be close to the 3600 in this game. This game is a mess.

I think it's the most CPU-bottlenecked game in recent times.
It's the cores/threads in this case. The 3700X rarely beats a 5600X, and here it's down to the 5600X being a 6-core, 12-thread design up against the weaker 8-core, 16-thread 3700X.

Thanks to PC, we have a pretty good idea of how Dragon's Dogma 2 behaves and what it requires, owing to its horrible code.
 

Bojji

Member
It's the cores/threads in this case. The 3700X rarely beats a 5600X, and it's down to the 5600X being a 6-core design.

Thanks to PC, we have a pretty good idea of how Dragon's Dogma 2 behaves and what it requires, owing to its horrible code.

I think it's the first game where I've seen the 3700X beating the 5600. I remember in 2019 people saying, "Get the 3700X (vs. the 3600), games will soon need 8 cores!" They were right, but only for one game, 4 years later...

At the other end of the spectrum we have GoW Ragnarok, which performs like total shit on Zen 2 CPUs.
 

Skifi28

Member
It's the cores/threads in this case. The 3700X rarely beats a 5600X, and it's down to the 5600X being a 6-core design.

Thanks to PC, we have a pretty good idea of how Dragon's Dogma 2 behaves and what it requires, owing to its horrible code.

Maybe I'm crazy, but a game running better on more threads doesn't sound like "horrible code" to me. If anything, shouldn't it be the norm that saturating more threads leads to better performance in CPU-limited scenarios? Otherwise we're back to games being bottlenecked by mostly using one or two main threads, with only IPC making a difference. The latter sounds like the worse code to me in the era of 8 cores and 16 threads.
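That norm can be sketched in a toy job-system style: split a frame's per-entity work into chunks and fan them out to workers. This is a hypothetical illustration with made-up names (`simulate_chunk`, `simulate_frame`), not anything from DD2's engine; note that in CPython the GIL limits CPU-bound thread scaling, so this shows the work-splitting pattern rather than a real speedup you'd get from native engine threads.

```python
# Hypothetical sketch of a thread-saturating update loop.
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(entity_ids):
    # Stand-in for per-entity game logic (AI, physics) on one worker.
    return sum(i * i % 7 for i in entity_ids)

def simulate_frame(num_entities, num_workers):
    # Strided split: worker w handles entities w, w + k, w + 2k, ...
    chunks = [range(w, num_entities, num_workers) for w in range(num_workers)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return sum(pool.map(simulate_chunk, chunks))

# Same answer for any worker count; with native threads (no GIL), wall-clock
# time shrinks roughly with core count until the work can't be split further.
print(simulate_frame(100_000, 8) == simulate_chunk(range(100_000)))  # True
```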
 

Bojji

Member
He is using an RX 6800... once he mentions that in the video, they will log off.

Hell, I didn't even bother to watch the video because the summary says he is using a GPU that barely exists.

In raw power, the 6800 is a match for the Pro, but only in raster. He was testing with ray tracing, I think.

Other than that, without GPU usage data we can't tell if his run was more CPU- or GPU-limited.
 

DenchDeckard

Moderated wildly
I'm sorry to say, but after this guy's bouts of completely wrong PC benchmarks, using a 2700X with some strange fuckery in his performance figures that he ignored all suggestions from the PC community to fix, he is the most untrustworthy analysis guy on the Internet.

He heavily leans Sony and has had care packages from them, including free consoles, PSVR, and more.

I can see why the Sony camp loves him so much, though.
 

Ronin_7

Member
I think optimisation and performance profiles are a lot more involved on console than people think. People tend to think of it too much like a PC and get too hung up on single components and their bottlenecks, when it is a far more delicate balancing of frametime on console. Cerny did a talk about optimising to power consumption/power budgets on the base PS5.

boost.png
Do we have anything like this for the Pro?
 

PandaOk

Banned
I think it's the first game where I've seen the 3700X beating the 5600. I remember in 2019 people saying, "Get the 3700X (vs. the 3600), games will soon need 8 cores!" They were right, but only for one game, 4 years later...

At the other end of the spectrum we have GoW Ragnarok, which performs like total shit on Zen 2 CPUs.
If you're looking for another PC port oddity: The Last of Us on PC performs way higher than it should on the 6700 XT. With the same settings (+OC, SAM/ReBAR), it matches the 6950 XT and RTX 4070 at 1080p and 1440p.
 

Bojji

Member
If you're looking for another PC port oddity: The Last of Us on PC performs way higher than it should on the 6700 XT. With the same settings (+OC, SAM/ReBAR), it matches the 6950 XT and RTX 4070 at 1080p and 1440p.



U373Dad.jpeg


It probably thinks that the 6700/6700 XT is a PS5 GPU, lol.

This only shows how broken this game was (and still is).
 

PandaOk

Banned
Maybe I'm crazy, but a game running better on more threads doesn't sound like "horrible code" to me. If anything, shouldn't it be the norm that saturating more threads leads to better performance in CPU limited scenarios? Otherwise we're back to games being bottlenecked by mostly using one or two main threads and only IPC making a difference. Sounds like the latter is the worse code to me in the era of 8 cores and 16 threads.

That’s a fair reaction; I was unintentionally misleading.

First off, you are correct in an apples-to-apples vacuum. But consider this: keeping your CPU threads fed is often more important than how powerful those threads are. Every nanosecond of wasted time is wasted performance.

Though the 3700/X has two more cores, it is the older Zen 2 architecture, with its L3 cache split between two four-core CCXes. The 5600/X, meanwhile, is built on the Zen 3 architecture, with a single unified L3 shared by all six cores and a healthy IPC uplift on the same 7nm node.

Why do you want more cache on the chip? Because then the CPU can keep more of the data it needs "on hand" instead of fetching it from main memory.

A big penalty for games, due to their dynamic nature versus regular old applications, is that what the CPU (and the whole system) needs can change a lot per frame.

Back to split versus unified L3: this change reduced data contention and waiting, since any core can now reach the whole cache. Zen 3 made many other improvements, but the given example is meant to be illustrative rather than exhaustive.

(If the term "L3" sounds familiar, it's because that's the cache type stacked atop the X3D chips! Cache-reliant games can scale wildly on those parts, though the gains can evaporate if the working set exceeds even the X3D cache amount. See Factorio.)

Because, god help you, if you are constantly going to main memory to fetch the data the CPU needs, you pay hundreds of cycles per trip, especially if you end up evicting data the CPU needs in the very next frame (or sooner).
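That eviction worry can be made concrete with a toy LRU model (my own illustration; `hit_rate` is a made-up helper, and real CPU caches are set-associative rather than pure LRU): once a frame's working set outgrows the cache, every access can miss.

```python
# Toy LRU "cache" serving per-frame data. When the working set exceeds
# capacity, each access evicts data needed again next frame: hit rate collapses.
from collections import OrderedDict

def hit_rate(accesses, capacity):
    cache = OrderedDict()
    hits = 0
    for key in accesses:
        if key in cache:
            hits += 1
            cache.move_to_end(key)  # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = True
    return hits / len(accesses)

# Two "frames" touching the same 8 objects in order.
frames = list(range(8)) * 2
print(hit_rate(frames, capacity=16))  # 0.5: second frame is all hits
print(hit_rate(frames, capacity=4))   # 0.0: LRU keeps evicting next frame's data
```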

Sorry, this is just a long-winded way of saying that the architectural improvements of Zen 3 almost always outweighed Zen 2's advantage of two extra cores.

I'd go on about Dragon's Dogma 2 being a hot mess, but I'd just be reposting things from people more knowledgeable on the subject. Plus, I need to take my post-surgery medication (woo!). You're just going to have to trust the internet on that one. Consider that DD2 might be so thread-hungry because it does such an inefficient job of utilizing each thread to begin with.
 

PandaOk

Banned


U373Dad.jpeg


It probably thinks that the 6700/6700 XT is a PS5 GPU, lol.

This only shows how broken this game was (and still is).

Probably, lol. PS5 PC ports tend to software-lock VRAM usage to PS5 amounts as well, don't they? Or has that been addressed?

It's an interesting happenstance. You can look at it simultaneously as "wow, look at the gains from optimizing for a given architecture" and "this port needs a lot of work".
 