
DF: The Matrix Awakens: Demo vs UHD Blu-ray Movie, Series S Cutbacks, SSD Speed Tests

Darsxx82

Member
If you go through the intro sequence, do car windows have proper RT reflections or are they still stuck with screen space only?

I was referring to the glass of buildings. Car windows are still SSR. The demo is like this at the source: car window reflections are produced by SSR on all platforms.

As Alex from DF said, it is a point with "debatable results", especially when you position yourself at certain angles and the GI creates that kind of "strange reflection".
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
The black noise is a bug, confirmed.
[screenshots attached]


The only thing I've done here is go back to the main menu and choose Explore City.

That triggers the black noise.

If you go back to the main menu and go through the intro with the shooting it’ll be gone.

I'm guessing it's that denoiser thing that doesn't get set up properly when you choose Explore City, a messed-up settings file or something.

Edit: This only shows how easy it is for misinformation to spread when the consoles are this close; a bug is all it takes for videos and picture comparisons to flood the discussions and give birth to all kinds of tech explanations.
I’ve edited my previous posts. I’ll look into the other differences now. 👍



Hopefully this comment will stop people from acting like it's not just a bug.
 

Shmunter

Member
You are somewhat confused about data management and rendering. The PS5 I/O system doesn't do anything to improve rendering itself (which is entirely done by the GPU). What it allows is fast movement of the most detailed assets possible. It's simple to understand: imagine the best possible graphical quality the PS5 could render in the most detailed, closed-space, on-rails game possible (so, all the GPU resources used to render maximum-quality assets). In past generations, the most detailed assets were too heavy to be used in an open world: they fill RAM with only a few detailed assets, and the streaming capabilities of the previous generation couldn't load those assets quickly enough to move through an open world at a reasonable speed.
In this generation, the I/O didn't simply grow in proportion to GPU power; it grew ten times more. It's now possible to stream the most detailed assets and geometry on the fly (even ZBrush-level ones, if you want), allowing the maximum graphical quality of closed, on-rails games in open worlds too. In theory, that is, because an entire open world with super-detailed assets would end up weighing terabytes, so trade-offs will always be necessary. The point is that data-management speed is no longer a limiting factor on the level of detail. And that's only one possibility: you can seamlessly link different kinds of gameplay (for example, maximum-detail flight combat going seamlessly into on-foot ground combat, or looking down on an entire 3D map like Skyrim's and diving straight onto a spot, playable from the sky to the ground), or free game design from elevators, door squeezes, and other tricks needed to hide the loading of the next area (which also forces gameplay to be limited to the area present in RAM, a thing that made me furious when I discovered that GoW Ragnarök would come out on PS4 as well).
Rendering detail is no longer the limit now (as some wonderful games and the Matrix demo have already shown). This generation's limit is mass storage. But if developers want, they can create games with virtually any level of detail. Graphical quality can approach photorealism regardless.
That's the crux of it.
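The streaming argument in the quote above can be put into rough numbers. A minimal back-of-envelope sketch; every figure here (drive speeds, traversal speed, data density) is an illustrative assumption, not a measured value:

```python
# Back-of-envelope: how much unique asset data a streamer can pull in while
# the player traverses a world. All figures are illustrative assumptions.

HDD_MBPS = 80        # typical last-gen mechanical drive, seek-bound (assumed)
PS5_SSD_MBPS = 5500  # PS5 raw SSD throughput before compression (assumed)

PLAYER_SPEED_MPS = 20  # fast traversal, metres per second (assumed)

def max_density_supported(bandwidth_mbps, speed_mps):
    """Highest unique-data density (MB per metre of travel) a drive can sustain."""
    return bandwidth_mbps / speed_mps

print(max_density_supported(HDD_MBPS, PLAYER_SPEED_MPS))      # 4.0 MB per metre
print(max_density_supported(PS5_SSD_MBPS, PLAYER_SPEED_MPS))  # 275.0 MB per metre
```

Under these assumed numbers the SSD supports roughly 70x more unique detail per metre of travel, which is the sense in which streaming speed stops being the limiting factor.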
 

SlimySnake

Flashless at the Golden Globes
You are somewhat confused about data management and rendering. The PS5 I/O system doesn't do anything to improve rendering itself (which is entirely done by the GPU). What it allows is fast movement of the most detailed assets possible. It's simple to understand: imagine the best possible graphical quality the PS5 could render in the most detailed, closed-space, on-rails game possible (so, all the GPU resources used to render maximum-quality assets). In past generations, the most detailed assets were too heavy to be used in an open world: they fill RAM with only a few detailed assets, and the streaming capabilities of the previous generation couldn't load those assets quickly enough to move through an open world at a reasonable speed.
In this generation, the I/O didn't simply grow in proportion to GPU power; it grew ten times more. It's now possible to stream the most detailed assets and geometry on the fly (even ZBrush-level ones, if you want), allowing the maximum graphical quality of closed, on-rails games in open worlds too. In theory, that is, because an entire open world with super-detailed assets would end up weighing terabytes, so trade-offs will always be necessary. The point is that data-management speed is no longer a limiting factor on the level of detail. And that's only one possibility: you can seamlessly link different kinds of gameplay (for example, maximum-detail flight combat going seamlessly into on-foot ground combat, or looking down on an entire 3D map like Skyrim's and diving straight onto a spot, playable from the sky to the ground), or free game design from elevators, door squeezes, and other tricks needed to hide the loading of the next area (which also forces gameplay to be limited to the area present in RAM, a thing that made me furious when I discovered that GoW Ragnarök would come out on PS4 as well).
Rendering detail is no longer the limit now (as some wonderful games and the Matrix demo have already shown). This generation's limit is mass storage. But if developers want, they can create games with virtually any level of detail. Graphical quality can approach photorealism regardless.
But the Matrix demo shows there IS a limit to what you can render before you start to drop frames, even at a low resolution like 1080p. So the GPU here is the bottleneck.

Yes, it's great that the PS5 and XSX SSDs and I/O can pull in higher-resolution textures on the fly, but you still need the GPU power to render those high-res textures. If you didn't, the Series S would be running 4K textures, no?

I was actually just watching Final Fantasy 7 Remake's tech review on DF. They talk about the infamous door textures not loading and other pop-in that plagued the city hubs in that game. With the PS5 I/O and SSD, those issues are a thing of the past, but if the PS4 Pro had an SSD, it would need its GPU to render those assets within the game's frame budget. Except those extra high-res textures, NPCs and other LODs would have stressed the GPU to the point where it would start dropping frames left and right, because all of a sudden you're asking it to render highly detailed objects it previously didn't have to.

Now, I'm not discounting that the PS5 I/O (not the SSD) handles the symphony between memory, CPU and GPU in a more elegant and efficient way than previous gens or even the XSX. It's definitely a possibility, which is why I'd love to see DF or NXGamer test FF7 with a 6600 XT or even a 5700 XT to see just how much the PS5's I/O helps with GPU performance. The best/worst thing about this PC port is that it's virtually identical to the PS5 version, with literally no new graphics settings, so testing this should be pretty straightforward. If the 10.6 TFLOPS 6600 XT paired with a CPU like the 2700X fails to match PS5 performance (1512p 60 fps), then we can safely say the PS5 I/O is indeed helping the PS5 GPU perform better than its TFLOPS rating suggests.
 

Stooky

Member
But the Matrix demo shows there IS a limit to what you can render before you start to drop frames, even at a low resolution like 1080p. So the GPU here is the bottleneck.

Yes, it's great that the PS5 and XSX SSDs and I/O can pull in higher-resolution textures on the fly, but you still need the GPU power to render those high-res textures. If you didn't, the Series S would be running 4K textures, no?

I was actually just watching Final Fantasy 7 Remake's tech review on DF. They talk about the infamous door textures not loading and other pop-in that plagued the city hubs in that game. With the PS5 I/O and SSD, those issues are a thing of the past, but if the PS4 Pro had an SSD, it would need its GPU to render those assets within the game's frame budget. Except those extra high-res textures, NPCs and other LODs would have stressed the GPU to the point where it would start dropping frames left and right, because all of a sudden you're asking it to render highly detailed objects it previously didn't have to.

Now, I'm not discounting that the PS5 I/O (not the SSD) handles the symphony between memory, CPU and GPU in a more elegant and efficient way than previous gens or even the XSX. It's definitely a possibility, which is why I'd love to see DF or NXGamer test FF7 with a 6600 XT or even a 5700 XT to see just how much the PS5's I/O helps with GPU performance. The best/worst thing about this PC port is that it's virtually identical to the PS5 version, with literally no new graphics settings, so testing this should be pretty straightforward. If the 10.6 TFLOPS 6600 XT paired with a CPU like the 2700X fails to match PS5 performance (1512p 60 fps), then we can safely say the PS5 I/O is indeed helping the PS5 GPU perform better than its TFLOPS rating suggests.
Yes, this is correct in some respects. I wouldn't base UE5 performance on a demo; UE5 hasn't shipped yet, it's still being worked on and will continue to be upgraded after release. Devs will use UE5 and integrate their own solutions; that's how devs use these engines. The demo is just to show possibilities. I would argue a good dev with time could get rid of most of these frame drops. Devs know the GPU budget, but when it comes to data management, that's a big hurdle. I/O like the PS5's makes that data-management mountain a molehill. Not my opinion, but the opinion of the programmers and scripters I work with. I know you guys may not understand how this I/O helps; it's more of a huge quality-of-life upgrade for devs, and we're all saying hallelujah! You guys get faster load times, reduced pop-in, and there are some cool effects you can do with that I/O.
 
Last edited:

Heisenberg007

Gold Journalism
But the Matrix demo shows there IS a limit to what you can render before you start to drop frames, even at a low resolution like 1080p. So the GPU here is the bottleneck.

Yes, it's great that the PS5 and XSX SSDs and I/O can pull in higher-resolution textures on the fly, but you still need the GPU power to render those high-res textures. If you didn't, the Series S would be running 4K textures, no?
This is correct. And before the PS5/XSX launched, I had this thought that one of the two companies had undershot or overshot their specs -- for this very reason.
  • PS5 apparently had a "less powerful" GPU, but it could stream 2x more data. So was PS5's GPU powerful enough to use all that data? Did PlayStation overshoot the SSD and undershoot the GPU?
  • OTOH, XSX apparently had a "more powerful" GPU, but it could only stream half the data the PS5 can. What will that powerful GPU do if it doesn't have enough data to render? So did Xbox overshoot the GPU and undershoot the SSD?
I still don't have a concrete answer, of course. However, the UE5 2020 PS5 Demo eliminated my concerns for the PS5.

That demo had a lot of data, trillions of polygons, and the PS5 was able to render it all at 1440p and 40-45-ish frames per second. That's more than good enough for that kind of visual fidelity. Imagine well-optimized games like that in a couple of years (with that level of poly count, Lumen + Chaos + Niagara + overall fidelity) at 1440p 60 FPS on PS5. I don't think anybody would complain.

GPUs will always be bottlenecks -- there is always room for more resolution and higher frame rates. But all these features have to be viewed in the context of the console's price. I'm happy with the current trade-offs at $399.

Coming back to the point, the Matrix demo's performance does come down to optimization. It is not as impressive as the UE5 demo was. And if that demo could run on the PS5 at 1440p 40 FPS, then the hardware has the power. The Matrix demo, with good optimization, should run at least at a stable 1440p 30 (after all, it has been 18 months, and devs must be more experienced with the engine and consoles by now).
 

ABnormal

Member
But the Matrix demo shows there IS a limit to what you can render before you start to drop frames, even at a low resolution like 1080p. So the GPU here is the bottleneck.

Yes, it's great that the PS5 and XSX SSDs and I/O can pull in higher-resolution textures on the fly, but you still need the GPU power to render those high-res textures. If you didn't, the Series S would be running 4K textures, no?
Of course there's a limit, but it's not due to the weight of assets. That's the reason I wrote "the maximum POSSIBLE graphical level" (which depends on the GPU). What I wrote is that the I/O allows the same level of graphical quality in closed, small-space games and in open-world games (if the latter could benefit from max-quality assets used across the entire world, which would be hugely heavy on storage).
The present UE5 version just shows how much it's possible to achieve on the rendering side of things right now. But, for example, when you fly and see many simpler polygonal structures on roofs, lower-resolution textures and so on, you already know that, by making the file size much bigger, you could have maximum detail everywhere, potentially using all the graphics-allocated RAM for every turn of the camera. The first UE5 reveal showed exactly that. For example, the ZBrush-level statue in that demo demonstrated the huge number of triangles Nanite can handle, but it didn't show a huge quantity of streamed data (since the same model was reused for all the other statues). Without changing the number of triangles the GPU can render, if the maxed-out I/O were also used in that demo, it would be possible to load and render many different ZBrush models, all unique, with the number of variations depending on available RAM, all at the same GPU cost as using the same statue everywhere. The same applies to games, whether closed and on-rails or open-world (always keeping in mind that the final file size will keep developers from using highest-quality assets across entire open worlds, aside from some points of interest). If the next Horizon turns out to be very heavy on storage, it could be an indication of the limit on asset detail and variation relative to the size of the world. Developers will probably set limits on average detail and variation to stay within some boundaries. With an imaginary unlimited SSD, they could cram extremely high-quality assets everywhere, leaving the I/O the task of loading and unloading them just by moving a few metres or turning around, using different LOD assets, or one highest-quality asset with its geometry handled by Nanite in UE5.
And even in UE5 it's possible to use lower-LOD meshes at a distance to save RAM, if the developer wants to. Textures weigh on RAM rather than on the GPU. In UE5, resolution and frame rate fall mainly due to lighting and applied effects, while geometry management stays stable thanks to Nanite (the engine automatically calculates and shades at most about one triangle per pixel, so it's never overwhelmed by unseen triangles).
Red Dead came very close to having a graphical level on par with the best on-rails games, thanks to extremely optimized and clever data streaming. But in this generation it's easily possible to have the maximum graphical level regardless of the type of game (always within the mass-storage limit).
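The "about one triangle per pixel" idea can be sketched as a toy LOD picker: given triangle sizes per LOD, choose the coarsest level whose triangles still project to roughly a pixel. This is a simplified illustration of the principle, not Nanite's actual algorithm; the projection model and all numbers are assumptions:

```python
import math

# Toy LOD picker: pick the coarsest detail level whose triangles still land
# at about one pixel on screen. Simplified illustration, not Nanite's code.

def projected_size_px(world_size_m, distance_m, fov_deg=90.0, screen_w_px=1920):
    """Approximate on-screen width in pixels of a feature of a given world size."""
    half_fov = math.radians(fov_deg) / 2.0
    view_width_at_d = 2.0 * distance_m * math.tan(half_fov)  # visible width at that depth
    return world_size_m / view_width_at_d * screen_w_px

def pick_lod(tri_edge_by_lod_m, distance_m):
    """Return the coarsest LOD whose triangle edges project to <= ~1 pixel.
    tri_edge_by_lod_m lists triangle edge lengths per LOD, coarsest first."""
    for lod, edge in enumerate(tri_edge_by_lod_m):
        if projected_size_px(edge, distance_m) <= 1.0:
            return lod
    return len(tri_edge_by_lod_m) - 1  # up close, even the finest LOD is coarse

edges = [0.4, 0.1, 0.025, 0.006]  # assumed edge lengths in metres, coarsest first
print(pick_lod(edges, 500.0))  # 0: far away, the coarsest LOD is already sub-pixel
print(pick_lod(edges, 50.0))   # 2: closer in, a finer LOD is required
```

The point the post makes falls out of this: the GPU's triangle cost per pixel stays roughly constant, and what changes with distance (or with I/O budget) is only which data has to be resident.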
 
Last edited:

Kataploom

Gold Member
It's a fundamental thing, because one of the major reasons UE5 stuff looks as good as it does is that the engine is capable of taking huge assets and intelligently chopping them down into chunks such that only the stuff that matters ends up being drawn.

It's not creating detail; it's simply preserving it far more efficiently along the render/rasterization path than older technologies.

What this means is that it's essentially a data solution as much as anything else, and so the I/O side is under far more stress than usual.
What I don't get is why this is different from traditional occlusion culling. It's been done all the time, AFAIK.
 

SlimySnake

Flashless at the Golden Globes
Not my opinion, but the opinion of the programmers and scripters I work with. I know you guys may not understand how this I/O helps; it's more of a huge quality-of-life upgrade for devs, and we're all saying hallelujah! You guys get faster load times, reduced pop-in, and there are some cool effects you can do with that I/O.
Oh yeah, I remember that from the dev reactions to the Road to PS5 show, when all the forums were melting down over that snoozefest of a conference and the disappointment of 10 TFLOPS. Meanwhile the devs were like, wait a second, this thing is kind of amazing. I found these in the next-gen thread I was browsing last night, from the immediate aftermath of the Cerny presentation. I originally dismissed this as copium, but at the very least the PS5 is able to keep up with the XSX in many games, so some of this has already been proven right.

[screenshots of dev tweets reacting to the Cerny presentation]

And before people dismiss Jason, he knew the PS5 TFLOPS number beforehand, and even that the conference was going to be dry and boring.

 
Last edited:

SlimySnake

Flashless at the Golden Globes
Coming back to the point, the Matrix demo's performance does come down to optimization. It is not as impressive as the UE5 demo was. And if that demo could run on the PS5 at 1440p 40 FPS, then the hardware has the power. The Matrix demo, with good optimization, should run at least at a stable 1440p 30 (after all, it has been 18 months, and devs must be more experienced with the engine and consoles by now).
I don't disagree that with a few years and more optimization, UE5 could run at a solid 30 fps instead of the 20-24 fps range while driving through the streets. But that's like getting 20-30% more performance while still rendering at 1080p. I don't see how they can raise the resolution by 70% AND improve the framerate by 30%. Optimization can very rarely offer 2x or 100% more performance.
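The scaling claim above is easy to check with arithmetic: pixel throughput scales with resolution times framerate, so the jump being asked for is over 2x, well beyond typical optimization wins. A quick sketch (the 24 fps figure is an approximation of the demo's heaviest driving sections):

```python
# Rough pixel-throughput check: resolution x framerate gives the relative
# rendering load, ignoring per-frame fixed costs. Figures are approximate.

def pixel_rate(width, height, fps):
    return width * height * fps

current = pixel_rate(1920, 1080, 24)  # roughly where the demo sits while driving
target = pixel_rate(2560, 1440, 30)   # the hoped-for stable 1440p30

print(f"required speedup: {target / current:.2f}x")  # required speedup: 2.22x
```

This simple model ignores that some costs (streaming, animation, simulation) don't scale with resolution, but it shows why 1440p30 is a much bigger ask than a stable 1080p30.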

As for the first UE5 demo, that along with the Valley of the Ancients demo used software Lumen, which targets 1440p 30 fps on next-gen consoles according to Epic. The Matrix demo uses hardware-accelerated Lumen, which targeted 1080p 30 fps in the Valley of the Ancients demo. It also uses ray-traced reflections, which the first UE5 demo did not.

I think the games that NEED hardware-accelerated Lumen and ray tracing will likely be GTA6 and other games set in urban environments. For games like Horizon, I don't see the need for hardware-accelerated Lumen or ray-traced reflections. Those could be your 1440p 30 fps titles.

So 1440p 30 fps:

SE7CzsQ.gif


1080p 30 fps:

VlQchtR.gif
 

SlimySnake

Flashless at the Golden Globes
And even in UE5 it's possible to use lower-LOD meshes at a distance to save RAM, if the developer wants to. Textures weigh on RAM rather than on the GPU. In UE5, resolution and frame rate fall mainly due to lighting and applied effects, while geometry management stays stable thanks to Nanite (the engine automatically calculates and shades at most about one triangle per pixel, so it's never overwhelmed by unseen triangles).
Red Dead came very close to having a graphical level on par with the best on-rails games, thanks to extremely optimized and clever data streaming. But in this generation it's easily possible to have the maximum graphical level regardless of the type of game (always within the mass-storage limit).
If I'm understanding you correctly, you're talking about something this ND dev alluded to, but I just haven't seen it in action so far, so I'm still a bit skeptical.



He's attributing the massive leap in graphical fidelity between U1 and TLOU to better data management, which is what the PS5 I/O does best. That's great to hear. I just need to see it, since we have yet to see any cross-gen game show this kind of leap in graphical rendering between last-gen and next-gen consoles. Maybe Horizon will be the first cross-gen game to utilize the PS5 I/O.
 

MonarchJT

Banned
You are somewhat confused about data management and rendering. The PS5 I/O system doesn't do anything to improve rendering itself (which is entirely done by the GPU). What it allows is fast movement of the most detailed assets possible. It's simple to understand: imagine the best possible graphical quality the PS5 could render in the most detailed, closed-space, on-rails game possible (so, all the GPU resources used to render maximum-quality assets). In past generations, the most detailed assets were too heavy to be used in an open world: they fill RAM with only a few detailed assets, and the streaming capabilities of the previous generation couldn't load those assets quickly enough to move through an open world at a reasonable speed.
In this generation, the I/O didn't simply grow in proportion to GPU power; it grew ten times more. It's now possible to stream the most detailed assets and geometry on the fly (even ZBrush-level ones, if you want), allowing the maximum graphical quality of closed, on-rails games in open worlds too. In theory, that is, because an entire open world with super-detailed assets would end up weighing terabytes, so trade-offs will always be necessary. The point is that data-management speed is no longer a limiting factor on the level of detail. And that's only one possibility: you can seamlessly link different kinds of gameplay (for example, maximum-detail flight combat going seamlessly into on-foot ground combat, or looking down on an entire 3D map like Skyrim's and diving straight onto a spot, playable from the sky to the ground), or free game design from elevators, door squeezes, and other tricks needed to hide the loading of the next area (which also forces gameplay to be limited to the area present in RAM, a thing that made me furious when I discovered that GoW Ragnarök would come out on PS4 as well).
Rendering detail is no longer the limit now (as some wonderful games and the Matrix demo have already shown). This generation's limit is mass storage. But if developers want, they can create games with virtually any level of detail. Graphical quality can approach photorealism regardless.
I know exactly how the rendering pipeline works, and if you had read my previous posts, you'd see I also agree that in general a faster I/O is always preferable to a slower one if it's matched with a GPU that can keep up. That said, we were talking specifically about how the demo looks and whether doubling the throughput of the I/O may or may not bring graphical benefits. The problem with this UE5 demo in general, and with the PS5 GPU, is that the engine, as already demonstrated in tests of the PC demo and now confirmed by this demo on both top consoles, does not use or need very large amounts of streamed data. Running at full speed around the city at maximum quality, it doesn't even need a third of the I/O the PS5 is capable of. That would seem positive, because it's easy to think "then we can increase the quality and number of polygons and render more!", but unfortunately that's not how it works: this demo's kind of data stream already brings those GPUs to their knees and drops the framerate to a measly 21 fps at just 1080p. So what would be the only way to render on the fly and make the most of that huge I/O capability? It's clear: render less, and then go (for what it's worth) faster. Does that have an advantage? Sure, in more linear games that aren't as detailed as this demo it will work wonders, and we already had an example in Ratchet, where developers can indulge themselves with new ideas.
But the idea that better I/O can improve the quality of the rendered scene without first taking the GPU's capabilities into account is ridiculous in itself, and this demo (and UE5 in general) proved it and brought everyone back down to earth. The PS5's GPU is the bottleneck even before taking into account the extra data it COULD transfer; it's already the bottleneck with the "slow" data stream UE5 uses to work at its best. Good luck to the first parties trying to optimize their engines to render more on that GPU. The generation is long, and I'm curious to see how it goes.

I reread what you said and it's all correct, but the mistake (if we really want to call it that) is that once again you seem not to have to deal with the GPU. If the GPU had headroom to work with, paired with the incredible difference in I/O, we would have seen better performance (fps) on the PS5 than on the XS, so it's evident the problem is the GPU. We should understand the (low) limit the GPU has before talking about imaginary situations where that same limiting GPU can render things others cannot thanks to the extra per-second data brought by the I/O difference. Sorry, but I have very big doubts, and I'm starting to really think the PS5's I/O is disproportionate to its GPU. But as I said before, I'm here and I'll be happy to be proved wrong.
 
Last edited:

Shmunter

Member
I don't disagree that with a few years and more optimization, UE5 could run at a solid 30 fps instead of the 20-24 fps range while driving through the streets. But that's like getting 20-30% more performance while still rendering at 1080p. I don't see how they can raise the resolution by 70% AND improve the framerate by 30%. Optimization can very rarely offer 2x or 100% more performance.

As for the first UE5 demo, that along with the Valley of the Ancients demo used software Lumen, which targets 1440p 30 fps on next-gen consoles according to Epic. The Matrix demo uses hardware-accelerated Lumen, which targeted 1080p 30 fps in the Valley of the Ancients demo. It also uses ray-traced reflections, which the first UE5 demo did not.

I think the games that NEED hardware-accelerated Lumen and ray tracing will likely be GTA6 and other games set in urban environments. For games like Horizon, I don't see the need for hardware-accelerated Lumen or ray-traced reflections. Those could be your 1440p 30 fps titles.

So 1440p 30 fps:

SE7CzsQ.gif


1080p 30 fps:

VlQchtR.gif
We need to keep in mind that the UE5 Matrix demo is heavy on resources for a variety of reasons: it uses all the latest rendering fruit available, ray tracing, GI, physics, etc. It is nowhere near bottlenecked by a 20 GB/s SSD stream, which I'm sure you agree.

As ABnormal rightfully states, fast I/O is about the GPU having the right assets available at the right time, without the traditional limits and constraints of limited RAM. In practice, the GPU should have access to the best-quality asset to render at all times WITHIN its computational boundary, not just what's already been loaded. It puts the ability in the devs' hands, as opposed to tying them up.

It is also not about a sustained 20 GB/s. GB/s is a measure of speed, and the peak can be needed for just a split second here and there.
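The burst point can be illustrated with one line of arithmetic: a fast camera turn can demand a large amount of new data in a fraction of a second, so peak GB/s matters even when the sustained average is low. Both figures below are made-up assumptions for illustration:

```python
# Burst vs. sustained bandwidth: a quick camera flip needs its data within
# the turn time, even if the per-minute average stays tiny. Assumed figures.

TURN_TIME_S = 0.25      # how fast the player can flip the camera 180 degrees
NEW_VIEW_DATA_GB = 1.2  # unique assets behind the player, not yet resident

required_burst = NEW_VIEW_DATA_GB / TURN_TIME_S
print(f"burst needed: {required_burst:.1f} GB/s")  # burst needed: 4.8 GB/s
```

A drive averaging a few hundred MB/s over a minute could still fail this moment; what matters is whether it can deliver the burst inside the quarter second.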
 
Last edited:

Fess

Member
Hopefully this comment will stop people from acting like it's not just a bug.
It's triggered every time by the Explore City main-menu choice. Hopefully they fix it, since from what I can tell it can currently only be cleared by going through the intro chase sequence, which is a few minutes long. Haven't looked into the other differences yet.
 

MistBreeze

Member
Do people really expect all UE5 games to be like the demo? LOL

The choice is always in the hands of developers to implement the engine's features as they see fit.

As for both systems (PS5 and Series X), each has its advantages and disadvantages,

but all in all they are practically the same.

I think that with most games being cross-gen now, we can't yet see in practice what these next-gen machines are really capable of.

And yes, a demo presentation doesn't necessarily translate into a real-world game.

I advise waiting for a real next-gen UE5 flagship game before drawing any conclusions.
 

Clear

CliffyB's Cock Holster
What I don't get is why this is different from traditional occlusion culling. It's been done all the time, AFAIK.

I suspect it's something to do with how UE5 manages to handle these immensely complicated meshes. When that first demo was shown and they had all those insanely high-poly statues, I suspect people didn't quite get their heads around what that meant in practice; specifically, that to the engine it's just data, and so in theory those statue models could equally be dense buildings, or cars, or characters.

My point is that you can't do this with conventional occlusion culling, so it has to be doing some really clever stuff to selectively extract the visual detail that's appropriate to show within the frustum, then decimate it down into an efficient mesh while somehow preserving appropriate texture detail. That's just Nanite; they also have their Lumen lighting system, which needs to work in conjunction with that, and which I have to assume uses some sort of ray-marching technique to produce such naturalistic GI.

All just speculation on my part, but what it's doing evidently involves a lot of reconstruction, because otherwise they'd never be able to fit everything in memory.
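The cluster-based idea being speculated about can be sketched abstractly: split geometry into a hierarchy of bounded clusters and reject whole subtrees before touching any triangles, which is what lets per-triangle work stay proportional to what is actually visible. A deliberately tiny 1-D toy, purely illustrative and not UE5's implementation:

```python
from dataclasses import dataclass, field

# Toy hierarchical cluster culling: each node has a bound; if the bound fails
# the visibility test, its whole subtree is skipped. 1-D to keep it tiny.

@dataclass
class Node:
    center: float  # 1-D "position" standing in for a 3-D bounding sphere
    radius: float
    children: list = field(default_factory=list)  # empty list => leaf cluster

def visible(node, view_min, view_max):
    """Interval-overlap test standing in for frustum/occlusion tests."""
    return node.center + node.radius >= view_min and node.center - node.radius <= view_max

def collect_clusters(node, view_min, view_max, out):
    if not visible(node, view_min, view_max):
        return              # reject the whole subtree in one test
    if not node.children:
        out.append(node)    # leaf cluster survives culling and would be drawn
        return
    for child in node.children:
        collect_clusters(child, view_min, view_max, out)

root = Node(50.0, 50.0, [Node(10.0, 5.0), Node(90.0, 5.0)])
survivors = []
collect_clusters(root, 0.0, 20.0, survivors)
print([c.center for c in survivors])  # [10.0]: the far cluster never got tested per-triangle
```

The difference from classic per-object occlusion culling is mainly granularity: clusters are small and uniform, so even a single huge statue mesh can be mostly rejected.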
 

sinnergy

Member
No one said that. Stop making shit up. Some of you were downplaying SSD benefits from the beginning, and claiming a 2 TF difference would make up for more than a 40 fps difference.
Some said that, including me. The concept is very simple: the GPU renders the images on screen. We totally got burned. You could put an SSD twice as fast as the PS5's into the PS5, but the other components need to be just as fast; otherwise it's diminished by the slowest factor(s).

That's why you see the slower SSD of the Xbox keeping up in most cases, even without Kraken, for example.
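The "slowest factor" point is just a min() over the pipeline stages: the chain runs at the rate of its slowest link, so doubling the drive alone changes nothing once another stage is the limit. All stage throughputs below are illustrative assumptions:

```python
# Toy "slowest component wins" model of an asset pipeline. Throughputs are
# made-up MB/s figures for illustration, not real console measurements.

stages = {
    "ssd_stream": 5500,   # what the drive can deliver
    "decompress": 8000,   # what the decompression block can unpack
    "gpu_consume": 3000,  # new asset data the GPU can actually make use of
}

bottleneck = min(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # gpu_consume 3000
```

Under these assumed numbers, swapping in a 11,000 MB/s drive leaves the effective rate at 3000 MB/s, which is the poster's argument in one line.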
 
Last edited:

Arioco

Member
Yes, he found out by running the PC demo, which is actually heavier than the PS5 demo. It never used much bandwidth.

https://www.neogaf.com/threads/ps5s...ording-to-epic-games-ceo-tim-sweeney.1541181/

Sweeney is a grade A bullshitter, I'll give him that.


Dude, that's just not true. Sweeney said the PS5's I/O system was ahead of what you can find on PC (which was and continues to be true), and he even explained the reasons why. But he NEVER said the PS5 SSD or I/O system was needed to run UE5, as you imply. If that's what some of you understood, you definitely weren't paying attention. We're talking about Unreal Engine, the most popular multiplatform engine in the world; it HAS to scale well across a range of devices, some more powerful than others. How could it possibly require something present on only one platform? That was off the table from the very beginning. Epic just can't focus on one system; it doesn't make sense at all.
 
Dude, that's just not true. Sweeney said the PS5's I/O system was ahead of what you can find on PC (which was and continues to be true), and he even explained the reasons why. But he NEVER said the PS5 SSD or I/O system was needed to run UE5, as you imply. If that's what some of you understood, you definitely weren't paying attention. We're talking about Unreal Engine, the most popular multiplatform engine in the world; it HAS to scale well across a range of devices, some more powerful than others. How could it possibly require something present on only one platform? That was off the table from the very beginning. Epic just can't focus on one system; it doesn't make sense at all.
He never said it, but he heavily implied it. That's why many people believed it, some even to this day.
 

Darsxx82

Member
I don't disagree that with a few years and more optimization, UE5 could run at a solid 30 fps instead of the 20-24 fps range while driving through the streets. But that's like getting 20-30% more performance while still rendering at 1080p. I don't see how they can raise the resolution by 70% AND improve the framerate by 30%. Optimizations can very rarely offer 2x or 100% more performance.

As for the first UE5 demo: that, along with the Valley of the Ancient demo, was using software Lumen, which targets 1440p 30 fps on next-gen consoles according to Epic. The Matrix demo uses hardware-accelerated Lumen, which targeted 1080p 30 fps in the Valley of the Ancient demo. It also uses ray-traced reflections, which the first UE5 demo did not.

I think the games that NEED hardware-accelerated Lumen and ray tracing will likely be GTA6 and other games set in urban environments. For games like Horizon, I don't see the need for hardware-accelerated Lumen or ray-traced reflections. Those could be your 1440p 30 fps titles.

So 1440p 30 fps:

That demo ran on PS5 and XSX at 1080p with TSR, just like the Matrix demo; DF's video about it indicates as much.

The aspect that differentiates them, and that has been celebrated, is that the Matrix demo uses the consoles' RT hardware, which has brought a great improvement in performance and optimization when implementing Lumen + RT, something that several months ago seemed impossible.

In the last interview with Epic and The Coalition they said they were excited about the advances and optimization improvements of UE5 on consoles, but it is clear that their target for large visual productions will be 1080p with TSR on XSX and PS5. Maybe I'm being a little pessimistic, but we will probably have to wait for a PS5 Pro and an XSeries Z to enjoy UE5 games of high visual ambition at higher resolutions.
 
Last edited:

Darius87

Member
Dude, that's just not true. Sweeney said the PS5 I/O system was ahead of what you can find on PC (which was, and continues to be, true), and he even explained the reasons why. But he NEVER said the PS5 SSD or I/O system was needed to run UE5, as you imply. If that's what some of you understood, you definitely weren't paying attention. We are talking about Unreal Engine, the most popular multiplatform engine in the world; it HAS to scale well across a range of different devices, some more powerful than others. How could it possibly require something that's only present on one platform? That was off the table from the very beginning. Epic just can't focus on one system; it doesn't make sense at all.
i only have problem with this quote:
“[The Unreal Engine 5 tech demo] would absolutely not be possible at any scale without these breakthroughs that Sony’s made.” - Tim Sweeney
Maybe it's just semantics, but it sounds like the Unreal Engine 5 tech demo was only possible on PS5.
 

oldergamer

Member
Depending on how deeply you design it into your system, it can improve performance, since it decreases the impact of data missing at one level of the hierarchy as data is fetched or prefetched at the lower levels.

Games are massive data-moving machines; layout and efficient manipulation of the data flowing through your pipeline are a big obstacle to performance. You are asking a very broad question: what could be better if getting data into memory were faster? Well, with SSD speed and latency approaching RAM, you effectively add extra RAM; anywhere between HDD and RAM speed, you are at least able to keep the streaming buffers smaller and make more effective use of the available RAM.
You have mentioned the speed/latency argument in the past, before the PS5 launch. After launch, however, I'm not seeing a huge difference between the two approaches, and the end consumer isn't seeing the difference either. This is what I suspected before both systems launched: that they would be closer to each other in real-world scenarios regarding the impact the SSDs make. My question was broad, and you provided an additional usage (I'm not sure it applies in this case, but it's still a valid usage). Perhaps the real question is: how quickly do you need to empty and refill RAM, and how does that translate to what end users see in a game?

I suspect a few things here:
1. MS in general has performed a lot more research into hard drive & SSD performance/cost/benefit & I/O than people are likely giving them credit for. They honestly deal with a lot of different hardware out there from the OS perspective.

2. I think Sony may have over-engineered this aspect of the console to a small extent, and could have gone with a slower SSD that would have been more cost effective. Before people jump on that comment, we should consider some past PlayStation hardware that had over-engineered components.

Jack Tretton even admitted the PS3 was over-engineered when it came to Cell. I know saying that will spark reply posts, but it's a valid counter when people say "do you think Sony are stupid??" or "do you think Sony doesn't know what they are doing??". I'd answer "no" to both, but people seem to assume that MS, a software company that works with all the hardware manufacturers and develops numerous graphics and hardware patents, is somehow "stupid" or "doesn't know what they are doing".
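The streaming-buffer argument quoted above ("keep the streaming buffers smaller") can be sketched with invented numbers; none of these figures come from either console:

```python
# Sketch of why lower storage latency frees RAM: the in-memory streaming
# buffer must cover worst-case fetch latency at the game's consumption rate.
# All numbers are assumptions for illustration, not real hardware figures.

def buffer_size_mb(consume_mb_per_s, fetch_latency_s):
    """RAM needed to hide storage latency: consumption rate x latency."""
    return consume_mb_per_s * fetch_latency_s

consume = 500.0  # MB/s of new assets needed while traversing a world (hypothetical)

hdd_buffer = buffer_size_mb(consume, fetch_latency_s=2.0)   # seek-heavy HDD
ssd_buffer = buffer_size_mb(consume, fetch_latency_s=0.05)  # fast NVMe

print(hdd_buffer, ssd_buffer)  # 1000.0 vs 25.0: hundreds of MB of RAM freed
```

Whether end users ever notice that freed RAM is exactly the open question debated in this thread.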
 

Arioco

Member
i only have problem with this quote:

maybe it's just semantics but it sounds like Unreal Engine 5 tech demo only possible on PS5.

I think that's taking a sentence out of context.

In the same interview with IGN he also said:

While Epic wouldn’t comment on any potential performance differences between the PS5 and Xbox Series X, Sweeney confirmed that the features shown today, like real-time global illumination and virtualized geometry, are “going to work on all the next-generation consoles.”

How is that even possible if, according to some of you, he was implying the demo could only run on PS5 thanks to its SSD?


https://webcache.googleusercontent....h-beats-high-end-pc+&cd=5&hl=es&ct=clnk&gl=es
 

oldergamer

Member
i only have problem with this quote:

maybe it's just semantics but it sounds like Unreal Engine 5 tech demo only possible on PS5.
Fully agree with you. Certain statements Sweeney made were intentionally leading people to think that. Reasonable people saw through it at the time.
 

MonarchJT

Banned
This is correct. And before the PS5/XSX launched, I had this thought that one of the two companies had undershot/overshot their specs -- for this very reason.
  • PS5 apparently had a "less powerful" GPU, but it could stream 2x more data. So was PS5's GPU powerful enough to keep up with all that data? Did PlayStation overshoot the SSD and undershoot the GPU?
  • OTOH, XSX apparently had a "more powerful" GPU, but it could stream 0.5x the data the PS5 can. What will that powerful GPU do if it doesn't have enough data to render? So did Xbox overshoot the GPU and undershoot the SSD?
I still don't have a concrete answer, of course. However, the UE5 2020 PS5 demo eliminated my concerns for the PS5.

That demo had a lot of data, trillions of polygons, and the PS5 was able to render all of that at 1440p and 40-45-ish frames per second. That's more than good enough for that kind of visual fidelity. Imagine well-optimized games like that in a couple of years (with that level of poly count, Lumen + Chaos + Niagara + overall fidelity) at 1440p 60 FPS on PS5. I don't think anybody would complain.

The GPU will always be the bottleneck -- there is always room for more resolution and higher frame rates. But all these features have to be viewed in the context of the console's price. I'm happy with the current trade-offs at $399.

Coming back to the point, the Matrix demo's performance does come down to optimization. It is not as impressive as the first UE5 demo was. And if that demo could run on the PS5 at 1440p 40 FPS, then the hardware has the power. The Matrix demo, with good optimization, should run at least at a stable 1440p 30 (after all, it has been 18 months, and devs must be more experienced with the engine and the consoles by now).


sorry wrong translation )
 
Last edited:

Darius87

Member
I think that's taking a sentence out of context.

In the same interview with IGN he also said:



How is that even possible if, according to some of you, he was implying the demo could only run on PS5 thanks to its SSD?


https://webcache.googleusercontent....h-beats-high-end-pc+&cd=5&hl=es&ct=clnk&gl=es
It's not out of context; to me it sounds like he's contradicting himself.
What breakthroughs is he talking about? UE5 runs on XSX and PC at the same scale without any of those breakthroughs.

https://www.ign.com/articles/ps5-ssd-breakthrough-beats-high-end-pc
 

Boglin

Member
Jack Tretton even admitted PS3 was over engineered when it came to Cell. I know saying that is sparking reply posts, but its a valid counter when people say " do you think sony are stupid??" or " do you think sony doesn't know what they are doing??". I'd answer "no" to both, but people seem to assume that MS, a software company that works with all the hardware manufacturers, develops numerous graphics and hardware patents is somehow "stupid" or "doesn't know what they are doing".

I'm not going to try to say conclusively that the PS5 isn't over engineered because I don't know. But I don't think it's fair to bring up the work of the design team from over 15 years ago who don't even work there anymore. Sony was incredibly arrogant at the time and the CEO said they deliberately wanted to make a convoluted machine.

It's like saying Gamepass is bad because look at what Microsoft did with Games for Windows Live.

I'm not sure I've seen a prevailing attitude from Sony fanboys implying that Microsoft is stupid or has designed useless features the same way that Xbox fanboys have done towards the PS5 since before launch, but I obviously don't see everything.
I think the closest I've seen is when some people argue that hardware VRS isn't a massive benefit over software VRS.

In my opinion, it's really easy to justify and see what a well designed APU the Xbox has. Microsoft designed the chip to be useful beyond just the home consoles and they are even able to salvage chips that fall below the XSX spec. The Series X has more industry standard features in their hardware, more memory bandwidth, and has 20% more compute all while being in a smaller, quieter box that uses less energy. I could not be more impressed with how well Microsoft achieved their own goals and vision.

I hope that when I personally make arguments about why I think Sony designed a useful feature with their I/O that it doesn't come off as me denigrating Xbox or its architects.
 
Last edited:
Coming back to the point, the Matrix demo's performance does come down to optimization. It is not as impressive as the first UE5 demo was. And if that demo could run on the PS5 at 1440p 40 FPS, then the hardware has the power. The Matrix demo, with good optimization, should run at least at a stable 1440p 30 (after all, it has been 18 months, and devs must be more experienced with the engine and the consoles by now).

The fact that Epic made a point of telling us that this new demo had enough CPU overhead to run a real game in this environment makes me believe the first demo did not have this headroom. Plus, the streaming pool was not very large for the first demo either.

I think we will see good things out of the engine, so, that's a good thing for us all.
 

winjer

Gold Member
Dude, that's just not true. Sweeney said the PS5 I/O system was ahead of what you can find on PC (which was, and continues to be, true), and he even explained the reasons why. But he NEVER said the PS5 SSD or I/O system was needed to run UE5, as you imply. If that's what some of you understood, you definitely weren't paying attention. We are talking about Unreal Engine, the most popular multiplatform engine in the world; it HAS to scale well across a range of different devices, some more powerful than others. How could it possibly require something that's only present on one platform? That was off the table from the very beginning. Epic just can't focus on one system; it doesn't make sense at all.

That's only partly right.
The SSD in the PS5 is already behind in some metrics, such as maximum read speed and IOPS.
But it's ahead in others, such as hardware compression and the file system.
On PC, that compression could be done on the GPU, either in shaders or, eventually, by adding custom units.
Then there's the issue of the Windows file system, which has been basically the same for several decades now and doesn't scale with NVMe SSDs.
And since MS is more concerned with the new UI and other useless crap, we are still waiting for DirectStorage.
Soon we'll have PCIe Gen 5 with even more bandwidth, and NVMe controllers with more capabilities, but we'll probably still be waiting for MS to stop slacking off.
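As a rough sketch of why hardware compression matters (all figures here are invented, not vendor specs): the effective streaming rate is roughly the raw read speed times the compression ratio, capped by how fast the decompressor can run:

```python
# Illustrative model of compressed asset streaming. Compressed data expands
# after reading, but only up to what the decompression path can sustain.
# All numbers are assumptions for illustration, not real hardware specs.

def effective_read_gbps(raw_gbps, compression_ratio, decompressor_gbps):
    """Effective rate = raw read speed x compression ratio,
    capped by decompressor throughput."""
    return min(raw_gbps * compression_ratio, decompressor_gbps)

# A slower raw SSD with a strong hardware decompressor can rival a faster
# drive whose decompression runs on the CPU at a lower sustained rate.
print(effective_read_gbps(2.4, 2.0, 6.0))   # 4.8
print(effective_read_gbps(5.5, 1.6, 22.0))  # 8.8
print(effective_read_gbps(7.0, 2.0, 3.0))   # 3.0: decompression-bound
```

This is why raw SSD numbers alone say little; the whole path (read, decompress, file system overhead) sets what games actually see.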
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
You have mentioned the speed/latency argument in the past, before the PS5 launch. After launch, however, I'm not seeing a huge difference between the two approaches, and the end consumer isn't seeing the difference either. This is what I suspected before both systems launched: that they would be closer to each other in real-world scenarios regarding the impact the SSDs make. My question was broad, and you provided an additional usage (I'm not sure it applies in this case, but it's still a valid usage). Perhaps the real question is: how quickly do you need to empty and refill RAM, and how does that translate to what end users see in a game?

I suspect a few things here:
1. MS in general has performed a lot more research into hard drive & SSD performance/cost/benefit & I/O than people are likely giving them credit for. They honestly deal with a lot of different hardware out there from the OS perspective.

2. I think Sony may have over-engineered this aspect of the console to a small extent, and could have gone with a slower SSD that would have been more cost effective. Before people jump on that comment, we should consider some past PlayStation hardware that had over-engineered components.

Jack Tretton even admitted the PS3 was over-engineered when it came to Cell. I know saying that will spark reply posts, but it's a valid counter when people say "do you think Sony are stupid??" or "do you think Sony doesn't know what they are doing??". I'd answer "no" to both, but people seem to assume that MS, a software company that works with all the hardware manufacturers and develops numerous graphics and hardware patents, is somehow "stupid" or "doesn't know what they are doing".

Don't you feel a little silly saying the bolded, though, this early into the generation's lifecycle? I'm 100% sure no AAA developer thinks this, considering most of them haven't even put out their first game on these next-gen consoles.
 

oldergamer

Member
I'm not going to try to say conclusively that the PS5 isn't over engineered because I don't know. But I don't think it's fair to bring up the work of the design team from over 15 years ago who don't even work there anymore. Sony was incredibly arrogant at the time and the CEO said they deliberately wanted to make a convoluted machine.

It's like saying Gamepass is bad because look at what Microsoft did with Games for Windows Live.

I'm not sure I've seen a prevailing attitude from Sony fanboys implying that Microsoft is stupid or has designed useless features the same way that Xbox fanboys have done towards the PS5 since before launch, but I obviously don't see everything.
I think the closest I've seen is when some people argue that hardware VRS isn't a massive benefit over software VRS.

In my opinion, it's really easy to justify and see what a well designed APU the Xbox has. Microsoft designed the chip to be useful beyond just the home consoles and they are even able to salvage chips that fall below the XSX spec. The Series X has more industry standard features in their hardware, more memory bandwidth, and has 20% more compute all while being in a smaller, quieter box that uses less energy. I could not be more impressed with how well Microsoft achieved their own goals and vision.

I hope that when I personally make arguments about why I think Sony designed a useful feature with their I/O that it doesn't come off as me denigrating Xbox or its architects.
I'm not disagreeing with you. Sony didn't do this with the PS4, as it had less custom work than the consoles that came before or after it. With the PS5, however, they did put a lot of engineering into the audio chip and the I/O, both places where end users may not see or hear the benefit. Offloading the CPU makes sense anywhere possible, but at least on the audio side they may have been solving problems that were already handled by other standards. All IMO, though. I get excited for specs as well, but the real test is always real-world usage.
 

oldergamer

Member
Don't you feel a little silly saying the bolded, though, this early into the generation's lifecycle? I'm 100% sure no AAA developer thinks this, considering most of them haven't even put out their first game on these next-gen consoles.
Silly? No. I know it's a hot take, but I can take the heat. Epic, a AAA developer, just released a AAA tech demo we can compare between both consoles.
 

Boglin

Member
I get excited for specs as well, but the real test is always real-world usage.
I agree wholeheartedly. Even if I'm right about some of the hypothetical scenarios I conjure up where the faster I/O could be useful, it really doesn't matter if games in reality don't run into those types of scenarios. For consumers, practice is far more important than theory.

Also, if Sony's first party studios don't eventually show compelling examples of how they've benefitted from the I/O then it would prove that it's a feature that no developers asked for and I'll be eating tear-soaked crow.
 

Dodkrake

Banned
KTG is exactly like mosterxmedia; there is nothing more to say. I would not read even two words lined up by that person, and bringing it as proof of anything is an insult to the seriousness of the discussion.

Fortunately you don't need to read anything, or even listen. Just look at the images.
 

RaySoft

Member
SSD speed, no matter the system, always only results in less waiting time. Think about it: what it's really doing is allowing something to be loaded into memory faster.

What other advantage could you think of?
Sorry, your reply slipped through the cracks here :-/
I would rather say that any aspect of a given piece of hardware that goes beyond an incremental upgrade usually opens doors to taking advantage of it in ways other than simply following the same road as before (doing the same thing, but faster/better, etc.). I can't tell you what it ends up enabling, since I'm no game developer, but what I do know is that this kind of disruptive hardware upgrade will be harnessed and exploited in the end. To what purpose, time will tell. In the right hands, it will disrupt the process and make others follow and copy.
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Silly? No. I know it's a hot take, but I can take the heat. Epic, a AAA developer, just released a AAA tech demo we can compare between both consoles.

But Epic's main job, at the end of the day, is to make a game engine that scales well across many different types of hardware. So I don't think they'll even care to make or release a demo that takes best advantage of either console.
 
Last edited:

sircaw

Banned
I just watched the KTG demo/analysis video. I must admit there are some very noticeable differences going on between the two consoles' versions of the demo.

I would have thought that the Xbox version would be pretty much perfect, as they had one of Microsoft's premier studios, The Coalition I think, helping out on the demo.

They are a top-tier studio, so I was really surprised to see so much pop-in on the big buildings, as I thought that would have been something they would have addressed immediately.

There are noticeable differences in image quality and reflections in his video too.

I am not tech-savvy enough to know the reasons behind it, but I do find it rather perplexing that there is not at least parity, as the Xbox is meant to be, on paper, the more powerful machine.
 
Last edited:
Just curious, but what big differences were people expecting from this between the premium systems?
Both systems came out within days of each other, with the same CPU and GPU made by the same manufacturer. There were never going to be significant differences between the platforms, especially when 3rd parties prefer parity.

The big thing is the XSS doing a fantastic job on this demo, when many declared it would be canceled soon after release, predicted its complete failure, and even stated the X1X and PS4 Pro were more powerful. To see it running this demo with ray tracing for only $300 is quite impressive. And it's only one year into the generation. Devs will only get more familiar with the hardware going forward.
 