> The resolve of the PS5 version seems to be quite a bit higher than the perceived "base 720p" due to whatever 'magic' DF somehow managed to miss. I think at this point they should just contact the developers and ask about the matter instead of throwing out layman's guesses, if they want to stop looking somewhat inept.
Of course the resolve is higher; did you miss that the entire point of the video was that the game is using FSR to reconstruct a 4K image? Still images look great (again, pointed out in the video); the issue is that when movement occurs there is temporal instability and bad image-quality breakup (you guessed it, also pointed out in the video).
> Hmm... you seem right... I picked each screen at 4K from this video and all are
> Look at the hand on the bottom right.
> At least, maybe it's DRS, or 1366x768 native resolution on PS5 (which is not great anyway), but that would explain the framerate difference.
Yeah, PS5 definitely looks way sharper at the same resolution. I'm trying really hard to look for sharpening artifacts like white halo rings; the only one I think I've found so far is on the hand in the first picture, and maybe the trees in the last one and the straight lines in the stones/rocks in the first picture. Is the AO better on PS5 too? Shadowed areas look a bit punchier in some pics.
> Has Digital Foundry said anything about the difference in sharpening we can see in their video? To me it's super weird that they say several times in the video that the PS5 and the XSX look exactly the same when there is an obvious difference in image detail even at YouTube quality. Is the difference real, or is it due to a difference in how they capture the videos?
They added a paragraph to the article on their website stating that yes, PS5 is sharper, and no, it's still 720p.
> So it seems like UE5's Nanite and Virtual Texturing systems are sensitive to data I/O bandwidth.
> Lots of interesting nuggets in this article.
> Inside Immortals of Aveum: the Digital Foundry tech interview
> Ahead of the launch of Immortals of Aveum, Alex Battaglia and Tom Morgan spoke with Ascendant Studios' Mark Maratea, Julia Lichtblau and Joe Hall about the game (www.eurogamer.net)
> "Unreal Engine 5 is Pushed Hard"
Tim Sweeney redeemed.
> You can access all UE5 features, or just some of its features. "UE5 pushed hard" means they're using Nanite, Lumen and other intensive features that are not always active.
The results are not impressive in this game, however.
> You have to realize and remember that Rofif believes Forspoken looks like a next-gen godsend. So any word or explanation coming from him about visuals, technical talk and graphical prowess is not meant to be taken seriously.
Couldn't you pick a smaller screenshot?
> Interesting:
> "Mark Maratea: Despite [having performance] parity, Series X and PS5 handle things differently. Async compute works really well on one but not as well on the other, which changes the GPU burden."
Which console has a problem with async?
> Which console has a problem with async?
They didn't state it specifically (NDA), but that's an easy guess.
> I'm guessing the PS5, due to having fewer compute cores. Unless there's something I'm not understanding here.
You are reading it very wrong; it has nothing to do with the number of CUs. Ask yourself two important questions: which console had the biggest focus on, and customizations for, async compute the previous generation? And which console uses DirectX? You'll have your answer.
> Interesting:
> "Mark Maratea: Despite [having performance] parity, Series X and PS5 handle things differently. Async compute works really well on one but not as well on the other, which changes the GPU burden."
I'm guessing it doesn't work well on PS5? Supposedly they run at the same settings and resolution, but PS5's post-processing is clearly better, as it results in a cleaner image. I'm unsure what impact whatever is being used has on the GPU, but Series X is said to perform significantly better.
Also, why does it work really well on one but not the other? It doesn't seem like this is a very polished game...
So why is there performance parity then? Really odd.
> You are reading it very wrong; it has nothing to do with the number of CUs. [...]
Additionally: which system has the fastest ACEs/schedulers? Which system has more async resources per CU?
> Maybe; that depends on what you consider impressive. UE5 running all these cool effects at 60fps is an impressive feat on its own. You can't have your cake and eat it too; the die-hard 60fps crowd say they'll take 60fps any day and will accept the compromises. Well, here it is.
> As to the point of my reply, "Unreal Engine pushed hard" does seem like a strange statement to make, but since the engine is scalable, it does make sense.
I would say Fortnite is a better representation of UE5 on these consoles.
> There is no need to guess. They are referring to Xbox doing better with async compute due to some interaction with DirectStorage. It's literally right there in the article. If you disagree, take it up with them.
Where?... This is the entirety of the passage mentioning async compute:
"Mark Maratea: Despite [having performance] parity, Series X and PS5 handle things differently. Async compute works really well on one but not as well on the other, which changes the GPU burden. Part of the console tuning [process] caused us to build the performance tool we have on PC. We charted out every single rendering variable that exists in the Unreal tuning system, all of the possible ranges, and we ran the game [with every combination of settings], 17,000 times. And we understood the performance and visual trade-off of all of these things. Then we sat down with the art department and got into a happy medium where we have what I consider to be one of the best-looking console games ever created that runs at a very very good frame-rate."
> Interesting:
> "Mark Maratea: Despite [having performance] parity, Series X and PS5 handle things differently. Async compute works really well on one but not as well on the other, which changes the GPU burden."
I want to say it's the Series X, but the PS4 invested heavily in asynchronous compute, and if the Pro really did double the PS4 it should have 16 asynchronous compute engines with 8 queues per engine, for 64 queues on the PS4 and 128 on the Pro.
> Where?... This is the entirety of the passage mentioning async compute: [...]
Right, and from that you would have no way of guessing which one they are referring to. But earlier in the same article they kind of let it slip:
"You did mention DirectStorage - are you using this on Xbox Series consoles or PC?"
"Mark Maratea: Yes, if [DirectStorage] is there, Unreal will automatically try to leverage it. We also leverage async compute, which is a wonderful speed boost when coupled with DirectStorage as it allows us to load compute shaders directly on the GPU and then do magical math and make the game look better and be more awesome. It allows us to run GPU particles off of Niagara without causing an async load on the CPU and doesn't cause the game thread to hit."
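To illustrate the "doesn't cause the game thread to hit" point, here is a minimal CPU-side analogy in Python; the asset names and timings are invented, and real DirectStorage requests land on GPU queues rather than a thread pool, so treat this as a sketch of the scheduling idea only:
```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for work that would run on an async compute queue
# after a DirectStorage read lands (decompression, particle setup, etc.).
def gpu_side_work(asset: str) -> str:
    time.sleep(0.05)
    return f"{asset} ready"

pool = ThreadPoolExecutor(max_workers=4)
pending = [pool.submit(gpu_side_work, a) for a in ("rocks", "trees", "vfx")]

# The "game thread" keeps ticking at its frame budget instead of
# stalling on the loads -- the hitch avoidance described above.
for frame in range(5):
    done = [f.result() for f in pending if f.done()]
    if done:
        print(f"frame {frame}: streamed in {done}")
    pending = [f for f in pending if not f.done()]
    time.sleep(0.016)  # ~60 fps frame budget
```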
> I presume on PS5 we aren't seeing a sharpening filter; it's actually higher-quality Nanite due to the I/O, and Series X is running at 60 more consistently because of asynchronous compute. In both cases the devs can probably improve the code with patches. DF should've said that in the video; for many years now they put what's really going on in the written article.
It's almost certainly a sharpening filter (likely CAS), as the entire image is sharper, not individual elements.
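For anyone curious what CAS-style sharpening actually does, here is a simplified contrast-adaptive sharpen in numpy; it follows the spirit of AMD's CAS (less extra sharpening where local contrast is already high, which is what keeps halos down), but the weighting formula here is an approximation, not AMD's exact shader math:
```python
import numpy as np

def cas_like_sharpen(luma: np.ndarray, sharpness: float = 0.5) -> np.ndarray:
    """Sharpen a [0,1] luma image, backing off near high local contrast.

    Approximation in the spirit of AMD CAS, not the exact shader.
    """
    p = np.pad(luma, 1, mode="edge")
    up, down = p[:-2, 1:-1], p[2:, 1:-1]      # vertical neighbours
    left, right = p[1:-1, :-2], p[1:-1, 2:]   # horizontal neighbours
    cross = np.stack([luma, up, down, left, right])
    mn, mx = cross.min(axis=0), cross.max(axis=0)
    # Headroom: distance of the neighbourhood from clipping at 0 or 1.
    # Bright, high-contrast edges have little headroom, so they receive
    # less extra sharpening -- which is what suppresses white halo rings.
    headroom = np.minimum(mn, 1.0 - mx)
    amount = sharpness * np.sqrt(np.clip(headroom / (mx + 1e-6), 0.0, 1.0))
    # Negative-lobe cross kernel, weighted per pixel by `amount`.
    out = luma + amount * (luma - (up + down + left + right) / 4)
    return np.clip(out, 0.0, 1.0)
```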
> There is no need to guess. They are referring to Xbox doing better with async compute due to some interaction with DirectStorage. It's literally right there in the article. If you disagree, take it up with them.
The PS5 doesn't need to use compute for decompression since it has bespoke hardware. Also, the PS5 has to be backwards compatible with the PS4.
> So why is there performance parity then? Really odd.
I don't even think he'd know. I doubt they run side-by-side benchmarks and check whether one is faster than the other; they likely run tests to see if the performance is up to par and call it a day. DF did run side-by-side, and the SX outperforms the PS5 by up to 20% in some instances. Couple that with their saying that DirectStorage paired with async compute offers a "wonderful speed boost", and it's quite clear they leverage async compute on SX/SS but not quite as well on PS5.
> Despite [having performance] parity, Series X and PS5 handle things differently. Async compute works really well on one but not as well on the other.
Series X is said first and PS5 second, and he then says "one" (that works really well) and "the other" (that presumably doesn't work as well). SX would be "one" (said first) and PS5 would be "the other" (said second). It seems to me the SX is leveraging async compute better than the PS5 in this particular game.
> I don't even think he'd know. I doubt they run side-by-side benchmarks and check whether one is faster than the other. [...]
We could debate this, but the nature of the Series consoles is that they work better with async: more CUs to work in tandem. The PS5 has a higher-clocked GPU, which runs older engines faster, as they rely less on async; it's just what it is. You see it in most games released until now. But the PS5 will always be the baseline, so I don't think we will ever see the real difference. Maybe first-party games. And to add, it also has to do with the skill of the devs, time and money, so there is that.
> Let me remind the audience of PS4's hardware async compute customizations, which should at least partially have carried over to PS5, even if only for BC reasons (in a similar vein to PS4 Pro's hardware ID buffer):
> - An additional dedicated 20 GB/s bus that bypasses the L1 and L2 GPU caches for direct system memory access, reducing synchronisation challenges when performing fine-grained GPGPU compute tasks.
> - L2 cache support for simultaneous graphical and asynchronous compute tasks through the addition of a 'volatile' bit tag, providing control over cache invalidation and reducing the impact of simultaneous graphical and general-purpose compute operations.
> - An upgrade from 2 to 64 sources for compute commands, improving compute parallelism and execution priority control. This enables finer-grained control over load-balancing of compute commands, enabling superior integration with existing game engines.
But one has 52 CUs and the other 36. Async benefits from more CUs, even if they run slower, as you can run more in parallel.
> What about specific resolution targets on consoles - is it FSR 2 on ultra performance targeting 4K, or is it a higher setting?
> "Mark Maratea: On consoles only, it does an adaptive upscale - so we look at what you connected from a monitor/TV standpoint... and there's a slot in the logic that says if a PS5 Pro comes out, it'll actually upscale to different quality levels - it'll be FSR 2 quality rather than standard FSR 2 performance."
> Welp, there it is, a soft admission from a dev.
That is a smart implementation. Future-proof.
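Putting numbers on that: with AMD's published FSR 2 scale factors, the mode swap he describes is the difference between a 1920x1080 and a 2560x1440 internal render for a 4K output. A minimal sketch of the logic (`pick_preset` and the Pro check are hypothetical, just mirroring what he describes):
```python
# AMD's published per-axis scale factors for the FSR 2 presets.
FSR2_SCALE = {"quality": 1.5, "balanced": 1.7,
              "performance": 2.0, "ultra_performance": 3.0}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    s = FSR2_SCALE[preset]
    return round(out_w / s), round(out_h / s)

def pick_preset(is_pro_console: bool) -> str:
    # The slot-in logic described above: a stronger console gets bumped
    # from Performance to Quality. Hypothetical, mirroring the interview.
    return "quality" if is_pro_console else "performance"

print(internal_resolution(3840, 2160, pick_preset(False)))  # (1920, 1080)
print(internal_resolution(3840, 2160, pick_preset(True)))   # (2560, 1440)
```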
> Let me remind the audience of PS4's hardware async compute customizations, which should at least partially have carried over to PS5 [...]
Wouldn't that benefit from the cache scrubbers the PS5 has?
"Is it even possible to consider a PS4 or Xbox One version of the game, in light of Jedi: Survivor being announced for last-gen consoles?"
"Absolutely not. There's not a version of Lumen that works on last-gen, even in software. If someone drove up with a dump truck full of cash [and said] we want you to rip apart all of your levels, and make it work with baked lighting, and dumb down all of your textures so that you can fit in the last-gen console memory footprint... that would [have to] be a big dump truck. And this is after Joe and Julie and I went on our tropical vacation for six months; then we would come back and I would make your last-gen port. I mean, you're essentially asking can we rebuild the entire game, turning off a bunch of key features and cutting our art budget down to a quarter?"
> Woah ok - nice! I was tilting towards getting this on my Series X, but now I think I'll wait to get it on the PS5 Pro.
Yup, I hope more devs do this; it's really smart.
> But one has 52 CUs and the other 36. Async benefits from more CUs, even if they run slower, as you can run more in parallel.
52 CUs, with each one of them hampered by a lower L1 cache bandwidth/amount that also affects async throughput directly or indirectly. Both have the same number of ACEs, HWS and GCPs going by the standard RDNA/2 design, being 4+1+1. Every single one of these components is 20% faster on PS5. Now divide those resources by the number of CUs. Which system has more of those resources available per CU? Which system can run more processes asynchronously per CU, and therefore has more to gain from an application using async compute extensively, in the context of CU saturation/compute efficiency?
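Here is that per-CU framing in numbers; 4 ACEs plus 1 HWS and 1 graphics command processor is the standard RDNA 2 front end on both machines, but treating scheduler throughput as scaling purely with clock and dividing it per CU is the poster's assumption, not a measured quantity:
```python
# Front-end resources are fixed (4 ACEs on both consoles), so the claim
# is that per-CU scheduling headroom favours the narrower, faster GPU.
# This codifies the argument above; it is not a benchmark.
ACES = 4
CONSOLES = {"PS5": (36, 2.23), "Series X": (52, 1.825)}  # CUs, GPU GHz

for name, (cus, clk) in CONSOLES.items():
    print(f"{name}: {ACES * clk / cus:.3f} ACE-GHz per CU")
# PS5: 0.248 ACE-GHz per CU
# Series X: 0.140 ACE-GHz per CU
```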
> "Mark Maratea: On consoles only, it does an adaptive upscale... there's a slot in the logic that says if a PS5 Pro comes out, it'll actually upscale to different quality levels..."
> Welp, there it is, a soft admission from a dev.
If. He's just speculating.
> So, dynamic resolution after all, as confirmed by the devs? Hmmm.
Don't think so? They confirmed adaptive upscaling, which we already knew.
> 52 CUs, with each being hampered by lower L1 cache bandwidth/amount to begin with, also affecting async directly or indirectly. Both have the same number of ACEs, schedulers and GCPs going by the standard RDNA/2 design, being 4+1+1. Every one of these components is 20% faster on PS5. Now divide those resources by the number of CUs. Which system has more of those resources per CU? Which system can run more processes asynchronously per CU, and therefore has more to gain from an app using async compute extensively, in the context of CU saturation/compute efficiency?
And yet the developer is pretty cut and dried in the statement he made. You should email him to tell him he is wrong.