What about this guy?
Source?
Some of you will remember, I know it.
> What about this guy?
> Source?
> Some of you will remember, I know it.
Lmao
> Guys, does anyone here believe in what @SenjutsuSage is saying regarding SFS?
lol ofc no
> Completely forgot about the extra DRAM onboard the I/O complex, thanks for adding it in.
The onboard DRAM is next to the flash controller, and the I/O complex within the APU has SRAM in it.
> This is exactly what Cerny spoke of in the Road to PS5: essentially doing what Horizon Zero Dawn did in its engine, just faster and more detailed.
Horizon ZD and tons of other old games use view-frustum culling for rendering what's in view, etc. But it's all still in RAM on PS4.
> Lmao
> you know where I can find that original tweet?
I have no idea, sorry.
> This is exactly what Cerny spoke of in the Road to PS5: essentially doing what Horizon Zero Dawn did in its engine, just faster and more detailed.
Horizon was using a technique called "frustum culling", which culls geometry and textures outside of the player's view frustum; here's an illustration to show you what's going on.
Horizon ZD and tons of other old games use view-frustum culling for rendering what's in view, etc. But it's all still in RAM on PS4.
On PS5 tho, they can move that data not being rendered out of RAM, as Cerny was saying, and then load it back into RAM as the player turns the camera. It's a fucking massive difference. The PS5 I/O is like a multiplier on RAM capacity, in a way.
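The culling idea being discussed here can be sketched with a plain view-frustum test. This is an illustrative toy, not engine code; the plane layout and numbers are invented for the example.

```python
# Minimal sketch of view-frustum culling: skip an object when its bounding
# sphere lies fully outside any frustum plane. Planes are ((nx, ny, nz), d)
# with normals pointing into the frustum, so "outside" is a negative distance.

def sphere_in_frustum(center, radius, planes):
    for (nx, ny, nz), d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:          # entirely outside this plane: cull it
            return False
    return True                     # inside or intersecting every plane

# Toy frustum with only a near plane (z >= 1) and a far plane (z <= 100).
planes = [((0.0, 0.0, 1.0), -1.0),    # near
          ((0.0, 0.0, -1.0), 100.0)]  # far

print(sphere_in_frustum((0.0, 0.0, 50.0), 2.0, planes))    # True: keep
print(sphere_in_frustum((0.0, 0.0, 200.0), 2.0, planes))   # False: cull
```

The streaming point in the post is what happens *after* this test: on a fast SSD the culled objects' data can also leave RAM and come back when the camera turns, instead of sitting resident the whole time.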
> If half the devs are complaining about it (i.e. Remedy, id Tech, etc.), then that's not a good thing.
The interesting thing about Remedy's statements is that they said it should get a lot better once we are past cross-gen.
The people pulling up Series S being called out by devs don't like it when there are an equal amount of devs saying Series S won't be a problem. Everyone just wants to stick to their own narratives.
> What about this guy?
Another FUDeer lol
> Eh? J. Stevenson is the community manager who works at Insomniac.
> Are you confusing him with J. Schreier?
lol, both are reliable.
> Horizon ZD and tons of other old games use view-frustum culling for rendering what's in view, etc. But it's all still in RAM on PS4.
> On PS5 tho, they can move that data not being rendered out of RAM, as Cerny was saying, and then load it back into RAM as the player turns the camera. It's a massive difference. The PS5 I/O is like a multiplier on RAM capacity, in a way.
Yep. There is a lot of data just sitting there in VRAM taking up space because it simply needs to be available in case the player decides to go in a different direction. Not a problem in linear games, but in something like Spider-Man, they need to load far more than what's in front of the player. That comes out to be the next 30 seconds of gameplay the player may or may not traverse through.
> Did you even read that post dude?
> Seriously.. what is the point of letting people like you post in here? You don't read posts.. you don't actually respond to anything.. just act like a total douche about everything.
> But thanks for so quickly letting me know how pointless it is to respond to anything you say.
I actually like that SenjutsuSage posts, because I enjoy and learn from the responses like yours. So even if he doesn't read them, I do.
> Quick Resume was my most requested next-gen feature. I remember some telling me in speculation threads it was either not possible or would be hard. Very few said it was a possibility.
> And I lost it when MS announced it, so happy it was confirmed. lol. I agree when folks say that feels next gen to them, I just haven't used it yet. Maybe I'll turn on the XSX today.
> I want Sony to do it too. If Sony can't do QR, so be it. If it takes away from what Sony's goals are with the PS5, then don't do it.
> It will just go on the list of pros and cons for the consoles, that's it.
Yeah. If I wasn't clear enough as well, I DO think it's a cool feature. Just not something I personally feel is a necessity, since these systems load games so quickly anyway. Still very cool though!
PS5 footage of Metro.
> According to Hermen Hulst, a Guerrilla cofounder whom Jim Ryan tapped to lead PlayStation Studios in 2019, the group has more than 25 titles in development for the PS5—nearly half of which are entirely new IP. "There's an incredible amount of variety originating from different regions," Hulst says. "Big, small, different genres." https://www.wired.com/story/playstation-5-six-months-later/
Just give me MotorStorm and I'll die happy; can't believe I waited the whole last generation in vain for it! Would be great to see WipeOut and DriveClub make a return too.
> Why do I get the impression almost no one here has played Housemarque's Returnal hands-on... The teleportation stuff through gates and portals, loading in just a second when dying, skipping cutscenes immediately... These are all live proofs in the hands of gamers... Then we have DirectX diary videos with demo screenshots of some alphabet word-mash stuff like Velocity SFS UltraDeluxe Smart Plus XXX Intelligent Max, etc. Show the games, pal; as some very hated suit once said, "let the games do the talking".
I mentioned this about Returnal, lol.
> Horizon was using a technique called "frustum culling", which culls geometry and textures outside of the player's view frustum; here's an illustration to show you what's going on.
> This method saved a lot of performance and memory on PS4. However, these kinds of techniques were extremely held back, mainly by two things. The first was the slow HDDs, which meant that geometry and textures would take too long to stream in and out of the view frustum, which led to a lot of work by developers to ensure it functioned properly. The second was the extreme lack of programmatic control over triangles and polygons on previous-gen hardware; this led to triangles being culled late in the pipeline with no fine-grained level of control, which cost a lot of performance. Apparently, this was one of the reasons why God of War (2018) ran so hot on the PS4 Pro.
> Now, thanks to features like the PS5's Geometry Engine and Primitive and Mesh Shaders, developers have full control over which triangles/polygons are rendered or culled, which means the GPU won't be stressed with unnecessary workloads such as processing vertices for culled triangles. The fast SSD also ensures that data like geometry and textures will arrive on time as the player's frustum moves and turns, which can allow for significant increases in performance, fidelity and efficiency.
The PS5 culls triangles before they reach the geometry pipeline.
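The "cull triangles before the geometry pipeline" point can be illustrated with the classic per-triangle test such a stage can run: a back-face check. This is a hypothetical sketch of the technique, not any console's actual shader code.

```python
# Illustrative per-triangle culling (the kind of early test a primitive/mesh
# shader can do): drop back-facing triangles so later pipeline stages never
# process their vertices.

def cull_backfaces(triangles, view_dir):
    kept = []
    for a, b, c in triangles:
        u = [b[i] - a[i] for i in range(3)]          # edge 1
        v = [c[i] - a[i] for i in range(3)]          # edge 2
        n = (u[1] * v[2] - u[2] * v[1],              # face normal = u x v
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        facing = sum(n[i] * view_dir[i] for i in range(3))
        if facing < 0:                               # normal points at viewer
            kept.append((a, b, c))
    return kept

view_dir = (0.0, 0.0, -1.0)                          # camera looks down -z
front = ((0, 0, 0), (1, 0, 0), (0, 1, 0))            # counter-clockwise: kept
back  = ((0, 0, 0), (0, 1, 0), (1, 0, 0))            # clockwise: culled

print(len(cull_backfaces([front, back], view_dir)))  # 1
```

Running this kind of test early, with fine-grained control, is exactly the improvement the post attributes to mesh/primitive shaders over late fixed-function culling.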
> The people pulling up Series S being called out by devs don't like it when there are an equal amount of devs saying Series S won't be a problem. Everyone just wants to stick to their own narratives.
The point is that it gets called out at all. That shit doesn't happen, especially with a new console. For limitations to be pointed out from release speaks volumes.
> For some reason I thought Rage was a PS4/Xbox One game (running off hard drive). I'm aware it's basically the first use of the tech with its megatextures.
First high-profile one. Also, I guess just calling it a PRT example is underselling what it was doing a bit: they had the entire world as a virtual texture and implemented a completely new content pipeline for it, including a custom compression scheme (arguably superior to what SX/PS5 have in hardware in terms of compression ratios) and a PRT scheme on GPUs that had limited hardware support for it.
> But again, was it really pulling textures as you turned your head?
You can see in the video that occasionally textures load in with a slight delay when the camera turns. It's really easy to demonstrate in the initial area, since it's flipping between vista and near-field detail. The engine was intelligently caching data to the HDD as well, so the optical drive wouldn't be thrashed by repeat access, but that's a method many streaming engines of the era utilized already.
> My point being they only had so much RAM available, which is hardly irrelevant because it drove the fidelity decisions.
What I meant is RAM didn't really drive the fidelity decisions here: Rage was ultimately limited by available storage size, I/O bandwidth, and GPU throughput.
> Why do I get the impression almost no one here has played Housemarque's Returnal hands-on... The teleportation stuff through gates and portals, loading in just a second when dying, skipping cutscenes immediately... These are all live proofs in the hands of gamers... Then we have DirectX diary videos with demo screenshots of some alphabet word-mash stuff like Velocity SFS UltraDeluxe Smart Plus XXX Intelligent Max, etc. Show the games, pal; as some very hated suit once said, "let the games do the talking".
Seriously, that's why I hate "discussing" these things now. I'd rather just agree to disagree and move on. Before the consoles launched, we had all these debates, but now these consoles have launched and it's time to show, not talk. PS5 exclusives have made a case for how fast the PS5 SSD and I/O are. These games launch very fast, loading in 1.5 seconds. Third-party games have shown PS5 to be every bit as capable as Series X. Others are busy arguing about acronyms.
> The point is that it gets called out at all. That shit doesn't happen, especially with a new console. For limitations to be pointed out from release speaks volumes.
That's because it has a direct sister console that is more powerful / on par with the competition in terms of capability. No other console launch has had two tiers.
Series X does the exact same thing that Tim Sweeney is suggesting the PS5 does. Notice he says "without CPU decompression" and without the driver extraction overhead; Series X does the exact same thing to the letter. The key differences are the SSD's raw speed, and that to get the full benefit developers would need to design their games around Sampler Feedback Streaming. But they already get exactly the same thing as what's mentioned for PS5 before Sampler Feedback Streaming.
Sampler Feedback Streaming being in the mix would only make things that much faster on the Series X side, because the burden of transferring unneeded data into video memory is removed entirely.
> When math is not your forte: if you need 4 GB, you load 4 GB. It won't "feel" like 12 by any means. It's math.
He can just spread the PR without thinking about it.
> He can just spread the PR without thinking about it.
He just said that the scene would require 12 GB by loading the whole texture package without any savings from SFS.
> How the fuck can you render 20M polygons per 1-0.5 millisecond if you're not being fed that to begin with?
That's an easy one, Bo: having them in RAM. What do you think games do when they load a level? Fetch the data they need to render into RAM, and then render from there.
> When math is not your forte: if you need 4 GB, you load 4 GB. It won't "feel" like 12 by any means. It's math.
Plz go tell the XSX technical engineer how things are in reality. He might learn a lesson from you.
No, it does not. It's a powerful feature and has its own merit. But it is NOT the same as the PS5 I/O solution.
Please, just drop it. XSX/S are NOT the same as the PS5: Microsoft's architecture has its own merit.
What I meant is RAM didn't really drive the fidelity decisions here: Rage was ultimately limited by available storage size, I/O bandwidth, and GPU throughput.
That's an easy one, Bo: having them in RAM. What do you think games do when they load a level? Fetch the data they need to render into RAM, and then render from there.
Notice that you are talking about rendering 20M polygons, not 20M unique polygons per frame. That means you could render a 1M-polygon statue 20 times while only fetching the data once. That's why I'm saying that the I/O speed does not necessarily affect the polygon budget you are able to render, and that's why I say your assumption regarding Xbox is completely wrong.
It might only be able to load 4-5M polygons per frame into RAM, but that doesn't mean it's limited to rendering 4-5M polygons, because once the meshes are in memory you can render them as many times as the GPU is able to.
Another thing to note is that you are talking about 4-5M polygons as if you knew how much data a polygon needs. Spoiler: it depends entirely on the game.
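The distinction being made here, unique polygons resident in memory versus polygons rendered via instancing, is just arithmetic. Here is a toy illustration; all mesh names and numbers are invented.

```python
# Toy numbers for the instancing point above: memory holds each unique mesh
# once, while the rendered polygon count scales with per-frame instance counts.

meshes = {"statue": 1_000_000, "rock": 50_000}    # unique polygons in RAM
draw_calls = [("statue", 20), ("rock", 300)]      # (mesh, instances per frame)

resident_polys = sum(meshes.values())
rendered_polys = sum(meshes[m] * n for m, n in draw_calls)

print(resident_polys)   # 1050000 polygons fetched from storage once
print(rendered_polys)   # 35000000 polygons rendered in the frame
```

So roughly 1M polygons of I/O can back 35M rendered polygons, which is why I/O speed alone does not cap the rendered polygon budget.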
> That's because it has a direct sister console that is more powerful / on par with the competition in terms of capability. No other console launch has had two tiers.
And? They weren't making excuses for it. They straight up called it out as an issue that would hold back the entire gen. They didn't have to say anything... but they did.
> He just said that the scene would require 12 GB by loading the whole texture package without any savings from SFS.
Ok, let's clear something here.
> In this case, yes, he would. He wrote something inexact: loading 4 GB will NOT feel like loading 12 GB. But he is right about the advantage of loading just the little piece of data that is needed, instead of the whole humongous file.
> Cerny said a similar thing: with the SSD you don't duplicate data and there is no seek time; you just load what you need. Microsoft is probably embedding this "feature" within the OS: you map the big file and ask the OS to load just a piece of it. The feature is good, but it will not "feel like loading 12GB"; you are just not wasting resources loading 12 GB and then discarding 8 GB because you only needed 4.
> Once again: the feature is good, the statement is wrong.
Well, he was trying to explain something in a way more people would understand how it works; poor choice of words, maybe.
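The "map the big file, load just a piece" idea in the quoted post boils down to reading a byte range instead of the whole file. A minimal sketch, with a made-up package file and a hypothetical 64 KiB tile size:

```python
# Sketch of "load only the piece you need" from a big asset package: seek to
# the byte range of one tile instead of reading the whole file.
import os
import tempfile

TILE = 64 * 1024  # hypothetical tile size

# Fake 16 MiB package as a stand-in for a multi-GB one.
path = os.path.join(tempfile.mkdtemp(), "package.bin")
with open(path, "wb") as f:
    f.write(os.urandom(256 * TILE))

def read_tile(path, index):
    with open(path, "rb") as f:
        f.seek(index * TILE)   # jump straight to the tile's offset
        return f.read(TILE)    # read 64 KiB, not the whole package

tile = read_tile(path, 37)
print(len(tile))               # 65536
```

The saving is exactly as the post says: you never spend I/O on the bytes you were going to discard, but the 64 KiB you do need is still only 64 KiB.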
I agree with everything you said except this last point. You are making the same wrong assumption as Bo.
Memory occupation is not directly related to the polygon count being rendered. Having about 25% less RAM for what's in view means 25% fewer unique polygons (i.e., different meshes), not rendered polygons. You can have a single mesh in memory and render it a million times. The number of polygons in memory does not limit the number of rendered polygons.
> Ok, let's clear something here.
> Almost all games today use PRT to load just the part of the texture they need.
> Plus, they use mipmaps based on the distance you view the texture from.
> You won't load 12 GB of textures for a scene.
> PRT used to be done via software only, but by now almost all GPU hardware has added hardware support for it.
> So why were SFS, and nVidia's PRT+, created?
> Because the PRT present in most if not all GPU hardware estimated which parts of the texture it needed to load, and that estimate had a high probability of failure: you could load a part of a texture you didn't need and get artifacts, or discard it and have to load it again.
> So nVidia created PRT+, which added the ability to read back info about which part of the texture needs to be applied to each pixel.
> So it basically fixes the main failure of PRT, which resulted in artifacts and/or wasted memory.
> So what's the difference between MS's SFS and nVidia's PRT+ implementation?
> Very few, to be fair. They are not different in use or result, but MS added caches dedicated to the sampling process, in their words for better storage and faster loads.
> In terms of capabilities there is no difference, but the MS implementation seems to be a bit more precise.
> Outside that, there are the filters used for the sampling. On GPUs you have a series of filters that can be used, but none is hardware-optimized, while on Series X MS added units that work with specialized filters (so they are faster than the common filters used in PC GPUs).
> The gains are nowhere near the amount of data that Twitter claims.
> In fact the gain between PRT and SFS is basically none; it is just that SFS and PRT+ are more accurate and have far fewer failures (artifacts) compared with PRT.
> Most if not all games today use PRT.
> No: 4 GB texture loading won't feel like 12 GB texture loading, even with SFS.
Thanks for your explanation.
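The arithmetic behind "you won't load 12 GB of textures for a scene" can be made concrete. Tile size and the number of sampled tiles below are invented purely for illustration:

```python
# A "12 GB" texture package split into 64 KiB tiles: with PRT, only tiles the
# frame actually samples get loaded, so resident memory is a small fraction
# of the package size.

TILE_BYTES = 64 * 1024
package_bytes = 12 * 1024**3                 # the whole 12 GB texture set
total_tiles = package_bytes // TILE_BYTES    # tiles available on disk

# Suppose a frame samples a 64x64 grid of tiles (toy number).
sampled = {(x, y) for x in range(64) for y in range(64)}
loaded_bytes = len(sampled) * TILE_BYTES

print(total_tiles)                     # 196608 tiles in the package
print(loaded_bytes // 1024**2)         # 256 MiB actually loaded, not 12 GB
```

Which is the post's point: PRT already avoids the 12 GB load, so SFS's saving is in accuracy (fewer wrong tiles, fewer artifacts), not in skipping some hypothetical full-package load.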
> Thanks for your explanation.
It is hard to say the differences, because the functions and results should be the same.
Actually, PRT has a hard time predicting (if at all) which texture to load next, since the player can be quite unpredictable (no fixed camera).
SF was created to counter that, by sampling and guessing the next mip to load.
SFS, or PRT+, does both of the above, while having a specialized filter to temporarily replace mips that have yet to load with lower-quality ones (at least on Xbox).
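The fallback behavior described here can be sketched in a few lines: if the wanted mip of a tile isn't resident, sample the best resident coarser mip and record feedback so the streamer can fetch the sharp one. All names and the residency set are hypothetical.

```python
# Sketch of sampler-feedback-style mip fallback: serve the best resident
# (coarser) mip now, and log a request for the mip that was actually wanted.

resident = {("tile7", 3), ("tile7", 4)}   # (tile, mip) pairs already in memory
feedback = []                             # requests for the streaming system

def sample(tile, wanted_mip, coarsest_mip=4):
    for mip in range(wanted_mip, coarsest_mip + 1):  # higher index = blurrier
        if (tile, mip) in resident:
            if mip != wanted_mip:
                feedback.append((tile, wanted_mip))  # ask for the sharp mip
            return mip
    return None                                      # nothing resident at all

print(sample("tile7", 1))   # 3: falls back to the coarser resident mip
print(feedback)             # [('tile7', 1)]: streamer now knows what to load
```

The visible effect is a briefly blurrier texture instead of a missing one, which is the artifact-avoidance both SFS and PRT+ aim for.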
> When math is not your forte: if you need 4 GB, you load 4 GB. It won't "feel" like 12 by any means. It's math.
Now we know where that 12 GB/s number was coming from...
> Almost all games today use PRT to load just the part of the texture they need.
They don't, actually.
> Plz go tell the XSX technical engineer how things are in reality. He might learn a lesson from you.
The same engineers who told their fans that Series X was a next-generation machine capable of native 4K at 60 FPS, as Series S was for 1440p?
> It is hard to say the differences, because the functions and results should be the same.
The thing is that most engines were designed with HDDs in mind as well, so while PRT has been around for years, no one was using it effectively.
It's just that the internal implementation differs for each vendor.
It is the same case with VRS: you can do your own implementation via software; then nVidia added it to hardware with its own implementation exposed through API extensions; then MS added it to DirectX with its own implementation using nVidia/AMD hardware, etc.
How do these different implementations affect performance and the overall result? Well, it is hard to predict or even test, because I believe each implementation works better for a particular code path, so one engine can get better results with MS's implementation while another engine will do better with nVidia's.
SFS and PRT+ are the same.
Plus, the MS hardware features like the additional caches and fixed texture-filter units found on Series won't be present on PC GPUs.
So on PC, SFS will probably be very, very similar to PRT+; at least more similar than on Series, where you have added hardware to speed things up.
But the point I was trying to explain is that the savings versus a traditional full-texture load won't exist, because PRT is already a thing in most if not all games, APIs, engines, etc. SFS will be more reliable (fewer artifacts, less wasted memory) and faster (due to the specific hardware features on Series).
> They don't, actually.
Maybe on Xbox One?
That's what people are missing; it's right in MS's presentation, actually. Their version of PRT (tiled resources) was not heavily used on Xbox One. That's why they aren't comparing against PRT.
I don't think they are BS'ing either: id was one of the big users of it, and they had major pop-in problems and abandoned the technique later in the generation.
> The same engineers who said to their fans that Series X was a native 4K 60 FPS machine, as Series S was for 1440p?
Nice moves! I can do the same with the 8K logo on the front of the PS5 box. Not a really good argument on your side.
> Nice moves! I can do the same with the 8K logo on the front of the PS5 box. Not a really good argument on your side.
Supporting 8K screens on the box is like saying in a presentation that the hardware is next-generation capable of native 4K at 60 FPS? For real? How dense can you be to say such idiocy?
> Maybe on Xbox One?
Xbox One had the feature too... I don't know why it would be underutilized there but not on PS4.
But it was on PC and PS4.
It is one of the new features added to PS4 over the PS3.
Slide 25.