It's really weird, I don't recall UE4 games being this bad a few years ago, same for Frostbite. What has changed in the last couple of years that an otherwise reliable engine like UE4 is producing shit ports on PC.
DX12? That's when all this shader compilation stutter started I think? Something to do with it being "closer to the metal" than earlier DX versions I believe, which in theory means better performance but comes with downsides like this unless you do extra work to get around it.
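For context on the "extra work": with DX12 the driver no longer quietly compiles shader state at draw time; the engine has to bake shaders into pipeline state objects (PSOs) itself, and if it only does that the first time a material is drawn, that frame hitches. Here's a toy sketch of the caching pattern; the sleep stands in for the real driver compile (in actual D3D12 code that cost sits inside CreateGraphicsPipelineState):

```cpp
#include <chrono>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

struct PipelineState { std::string key; };

// Stand-in for the real driver work (ID3D12Device::CreateGraphicsPipelineState),
// which can take tens of milliseconds per unique shader/state combination.
PipelineState CompilePso(const std::string& key)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(30)); // fake compile cost
    return PipelineState{key};
}

std::unordered_map<std::string, PipelineState> g_psoCache;

// The stutter pattern: the first draw that needs a new PSO pays the compile
// cost mid-frame. A 30 ms stall at 60 fps is two dropped frames.
const PipelineState& GetPsoLazy(const std::string& key)
{
    auto it = g_psoCache.find(key);
    if (it == g_psoCache.end())
        it = g_psoCache.emplace(key, CompilePso(key)).first; // hitch happens here
    return it->second;
}

// The "extra work": enumerate every shader/state combination the game will
// actually use and compile it behind a loading screen, so gameplay draws
// only ever hit the cache.
void PrecompileDuringLoad(const std::vector<std::string>& allKeys)
{
    for (const auto& key : allKeys)
        if (g_psoCache.find(key) == g_psoCache.end())
            g_psoCache.emplace(key, CompilePso(key));
}
```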
It's incredible how many of these recent ports are CPU bound. It's really important to invest in a good CPU for PC gaming.
> I don't know anything about CPUs and threads, but if it's hard for gaming, what kind of software would get an automatic kind of boost from adding more threads? If I do giant spreadsheets all day, would that kind of program be a good example of getting boosts without the programmers reworking stuff?

Usually mass-calculation stuff that doesn't have many dependencies. So yes, spreadsheets, sure. But tools from Adobe also scale up really well on multi-threaded systems.
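To make that concrete, here's a toy illustration of why "no dependencies" work scales: each thread sums its own slice of a big array, and nothing waits on anything else until the final combine. (Illustrative only, not benchmark-grade code.)

```cpp
#include <cstdint>
#include <numeric>
#include <thread>
#include <vector>

// Sums a large array by giving each thread an independent slice.
// No slice depends on another, so adding threads scales almost linearly,
// which is what spreadsheet-style recalculation tends to look like.
uint64_t ParallelSum(const std::vector<uint32_t>& data, unsigned numThreads)
{
    std::vector<uint64_t> partial(numThreads, 0);
    std::vector<std::thread> workers;
    const size_t chunk = data.size() / numThreads;

    for (unsigned t = 0; t < numThreads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == numThreads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, uint64_t{0});
        });
    }
    for (auto& w : workers) w.join();

    // The only serial step: combining a handful of partial results.
    return std::accumulate(partial.begin(), partial.end(), uint64_t{0});
}
```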
> It's really weird, I don't recall UE4 games being this bad a few years ago, same for Frostbite.

I fear what CD Projekt Red will do with UE5... Very sad, because their internal engine in Cyberpunk 2077 with path tracing is amazing. Performance is solid, no shader compilation at start, no stutter or very little of it, and it looks insane. UE5 will be a regression, except for Nanite.

> I fear what CD Projekt Red will do with UE5...

Yeah, I really don't get the switch to UE, especially with the Cyberpunk sequel, where they already have asset and development pipelines set up and could build on what they stood up for the first game.

> Yeah, I really don't get the switch to UE, especially with the Cyberpunk sequel...

They, and everybody else, were hyped by the Matrix demo. That's what happened.
I am sure there are reasons though. It's just strange.
> They, and everybody else, were hyped by the Matrix demo. That's what happened.

Jumping ship after being impressed by a presentation would be highly unlikely; such a major decision would need to go through a thorough evaluation by appropriate senior technical personnel.
If I recall correctly, the whole group of Xbox Game Studios devs announced that they would swap to UE5. It seemed awesome at first. Now it seems absolutely terrible. id Software, Bethesda Maryland, Turn 10 and Playground are probably the only studios that won't have crap performance and shader stutter issues going forward.
> Jumping ship after being impressed by a presentation would be highly unlikely...

The Coalition went in deep, evaluated the engine, and had a long talk about it; they didn't reach their targets, IIRC, but thought they could reach them with further tweaking and engine updates. Then Xbox Game Studios made the switch.
Now, maybe it's a combo of easier pipeline, less costly development (on the surface at least), or something else, who knows. Results haven't been encouraging though.
> The Coalition went in deep, evaluated the engine... Then Xbox Game Studios made the switch.
To be honest, it wouldn't surprise me one bit if the reason Microsoft's games output is so terrible is that almost everybody is working with UE5 and the engine just isn't ready yet.
Hmmmmmmm. I'm reminded of the problems devs had getting their UE3 games up to speed. Denis Dyack (of Too Human fame) sued Epic for delivering a non-working game engine that caused lots of delays and forced them to develop their own tech. The main argument was that Epic was selling UE3 even though the engine wasn't fully functional and wasn't performing well. They didn't get enough support from Epic, because Epic themselves were busy developing the first Gears of War in tandem with UE3.
> It's really important to invest in a good CPU for PC gaming.

Brute forcing only helps to a certain extent. A badly optimized game can still be CPU bound if all the game logic is running on just a handful of cores while the other cores sit underutilized.
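That's Amdahl's law in a nutshell: if some fraction of the frame's CPU work is stuck on one thread, extra cores stop helping very quickly. A quick back-of-the-envelope sketch (the 60% serial share is an illustrative number, not a measurement of any particular game):

```cpp
#include <cstdio>

// Amdahl's law: with serial fraction s of the work, the best possible
// speedup on n cores is 1 / (s + (1 - s) / n).
double AmdahlSpeedup(double serialFraction, int cores)
{
    return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
}

int main()
{
    const double s = 0.60; // illustrative: 60% of frame time on one thread
    for (int n : {2, 4, 8, 16, 32})
        std::printf("%2d cores -> %.2fx speedup\n", n, AmdahlSpeedup(s, n));
    // Prints roughly 1.25x, 1.43x, 1.54x, 1.60x, 1.63x:
    // past ~8 cores the extra hardware is nearly useless.
}
```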
Watch this section of the DF video, which shows that UE4 games are CPU limited even on high-end PCs. If you've got a 12900K or 13900K you can even make Redfall run better by turning off cores!
> I saw that but didn't quite understand why. Is it the case that with fewer cores active each gets a larger share of the total power budget (i.e. runs faster)?

Probably. I recently got into an interesting discussion on YouTube, and supposedly the whole decal system in UE not only runs on one thread, it's also written in an extremely inefficient way; the Atomic Heart devs, for example, had to rewrite it to make it run well (Atomic Heart is the only open-world UE game I know of that wasn't a total technical failure at launch).
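No idea what Atomic Heart's devs actually changed, but the general shape of that kind of fix is easy to sketch: take a per-frame loop that walks every decal on one thread and spread it across the worker pool. Purely hypothetical code, not UE's real decal system:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Decal {
    float fadeTimer = 0.0f;
    bool  visible   = true;
    void Update(float dt) { fadeTimer -= dt; visible = fadeTimer > 0.0f; }
};

// Before: the whole decal list walks one thread, so a few thousand decals
// can dominate that thread's frame budget while other cores sit idle.
void UpdateDecalsSerial(std::vector<Decal>& decals, float dt)
{
    for (auto& d : decals) d.Update(dt);
}

// After: each decal update is independent of the others, so the loop
// parallelizes trivially (C++17 parallel algorithms here; a real engine
// would push this through its own job system instead).
void UpdateDecalsParallel(std::vector<Decal>& decals, float dt)
{
    std::for_each(std::execution::par_unseq, decals.begin(), decals.end(),
                  [dt](Decal& d) { d.Update(dt); });
}
```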
> Is it the case that with fewer cores active each gets a larger share of the total power budget (i.e. runs faster)?

I have no real knowledge of it, but I think that fewer cores means no hyperthreading, which makes each physical core a little better. So the cores that are overutilized have more performance to give.
Unreal Engine is the new Unity.
> Is it the case that with fewer cores active each gets a larger share of the total power budget (i.e. runs faster)?

No, it depends on whether the work is properly distributed across cores and threads. It's totally code dependent.
In SOME cases you can disable the extra hardware threads (SMT) so all the attention goes to the physical cores. This is useful in some emulation scenarios.
> But why would more cores make the game perform worse in that case, if each core still has the same performance? I would understand if it didn't improve the performance, but why does it get WORSE?

I'm guessing that would be a hyperthreading problem. This can be partly solved on the user's end (by disabling hyperthreading in the BIOS, advisable only for advanced users, since other games might benefit from keeping it on), but it should be properly fixed on the developer's end.
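For what it's worth, the developer-side version of "turn off hyperthreading" doesn't require a BIOS trip: an engine can pin its heavy worker threads to one logical processor per physical core. A minimal Win32 sketch, assuming HT siblings are adjacent logical processor indices (0/1, 2/3, ...), which is the usual Intel layout but should really be verified with GetLogicalProcessorInformationEx:

```cpp
#include <windows.h>
#include <thread>

// Pins the calling thread to even-numbered logical processors only, i.e.
// one logical processor per physical core under the assumed 0/1, 2/3, ...
// sibling pairing. This keeps two busy game threads from fighting over
// the execution resources of a single physical core.
void PinToPhysicalCores()
{
    const unsigned logical = std::thread::hardware_concurrency();
    DWORD_PTR mask = 0;
    for (unsigned i = 0; i < logical && i < sizeof(mask) * 8; i += 2)
        mask |= DWORD_PTR{1} << i;
    if (mask != 0)
        SetThreadAffinityMask(GetCurrentThread(), mask);
}
```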
> For those keeping track:
> 1) Gotham Knights
> 2) Callisto Protocol
> 3) Dead Space (Frostbite)
> 4) Hogwarts
> 5) Star Wars
> 6) Redfall
> Guess what all these games have in common? UE4. All of a sudden, I'm not too thrilled about UE5. The traversal stutter is present in Fortnite, and the Matrix demo was CPU bound as well, just like all these games.

Unreal Engine is a quick-fix solution. Dev studios don't have the programming talent they once had. They rely on these engines, where they can easily rotate people in and out since everyone "knows" UE. I was always wary of the UE4 craze. My fears came true. The ease of use means they commit terrible mistakes from the ground up.
> If UE is so shitty, why are developers still using it? Is it because they're sluggish?

Because UE is widely taught and easy to use, and studios don't need to rely on specific people to carry their coding. It's just business. All in the name of efficiency.
> If UE is so shitty, why are developers still using it?

Easy to use, lots of features, and well documented, which especially matters for Japanese devs whose previous engines had little to no documentation.

> If you've got a 12900K or 13900K you can even make Redfall run better by turning off cores!

That's not true; you can probably get better performance by turning off hyperthreading, but not with fewer cores.
Hyperthreading can often hurt performance. That's normal.
> How is a video about a 5th gen Intel CPU released in 2014 relevant in a comparison test featuring a 12th gen Intel CPU that has not just HT but two kinds of processor cores: P-Cores and E-Cores?

It's relevant because HT is the culprit in reducing the performance in these games. Notice how DF's test specifically compares P-CORES vs. Full, which is P-CORES + E-CORES + HT.
> Perhaps we were too harsh on Renderware.

Hey now, RW was pretty serviceable in the PS2 era. It did fail to make the jump beyond pretty badly, but I prefer to blame EA for that, since they were there.
HT is the culprit here. The comparison is really 4 P-cores vs. 6 P-cores vs. 8 P-cores vs. 8 P-cores with HT. If you had a 10 P-core, HT-off configuration in that comparison, these games 100% would not perform worse on it. Increasing the number of cores doesn't lower performance; HT enabled on the cores you're actually using does.
The Core i9-12900K has eight Performance- or P-cores with a base frequency of 3.2GHz and a maximum Turbo Boost of 5.1GHz. It also has eight of the Efficient- or E-cores, based on the “Gracemont” architecture; they run at a lower base frequency of 2.4GHz and scale to a maximum turbo boost of 3.9GHz.
> Why are you constantly disregarding the lower-performance E-cores on Intel 12th/13th gen CPUs as a factor for the lower performance?

Because those aren't a factor; extra physical cores would not lower performance. That's kind of my point. The question is why you are disregarding hyperthreading and blaming extra cores for lowered performance, when HT is known to lower it depending on the task.
> If a game engine that's already poorly optimized for multi-core CPUs can't tell the difference between P-Cores and E-Cores, there's gonna be a performance hit if tasks that need high CPU frequencies are delegated to lower-clocked E-Cores.

Games can't tell the difference between P-cores and E-cores; they have no knowledge of them. The Intel Thread Director (working with the Windows scheduler) does, and it wouldn't choose low-performing E-cores if that weren't best. In theory it would choose the P-cores, which is why it's really an 8 P-core with HT comparison. The E-cores actually help with background OS tasks, but HT lowers performance on those 8 P-cores used by the game: it lowers the single-thread performance of those 8 P-cores, and that can result in lower game performance. This is the case with many games.
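Strictly speaking, Windows does expose the P/E distinction to any program that asks: since Windows 10, every logical CPU reports an EfficiencyClass value, and on hybrid chips like Alder Lake the faster cores report the higher value. Most games simply never query it and leave placement to the scheduler, which is the point above. A minimal Win32 sketch (assumes Windows 10 or later):

```cpp
#include <windows.h>
#include <cstdio>
#include <vector>

// Lists each logical CPU's EfficiencyClass. On Alder Lake, P-cores report
// a higher efficiency class than E-cores, so an engine *can* tell them
// apart; most just leave placement to the OS scheduler / Thread Director.
int main()
{
    ULONG bytes = 0;
    GetSystemCpuSetInformation(nullptr, 0, &bytes, nullptr, 0); // query size
    std::vector<unsigned char> buffer(bytes);
    auto* info = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buffer.data());
    if (!GetSystemCpuSetInformation(info, bytes, &bytes, nullptr, 0))
        return 1;

    // Entries are variable-sized; walk them by each entry's Size field.
    for (ULONG offset = 0; offset < bytes;) {
        auto* entry = reinterpret_cast<PSYSTEM_CPU_SET_INFORMATION>(buffer.data() + offset);
        if (entry->Type == CpuSetInformation)
            std::printf("logical CPU %u: efficiency class %u\n",
                        (unsigned)entry->CpuSet.LogicalProcessorIndex,
                        (unsigned)entry->CpuSet.EfficiencyClass);
        offset += entry->Size;
    }
}
```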