This really isn’t a good look for you...
You may do like many of us did and spend a good amount of time playing Astro's Playroom. I still haven't played a full VR game yet on PSVR, still on the demos, lol.
Enjoy that, and get ready to be blown away by some PS5 games, and PS5 versions of games.
From the Eurogamer article:
Oh dear... Always do your research.
No, you twat, it isn't. Some people may feel it's underrated while others may think it's been overhyped.
Looking silly would be "I loves my haptics. It's so awesome! If you don't agree with me, you're just downplaying it and looking silly. Hur dur."
Grow up, and discuss the things you like about it, but don't pretend you're so important as to be any sort of authority figure on... well, anything really.
Xbox Series X’s BCPack Texture Compression Technique 'might be' better than the PS5’s Kraken
Isn't it 30.2GB on Xbox? 35GB according to some posts I've seen. So yeah, on both consoles it's small. Just smaller on PS5. EDIT: ok... it does say 30 on the XSX. (www.neogaf.com)
You know that during the Eurogamer interview, Xbox architect Andrew Goossen talked about SFS, BCPack and whatnot regarding the SSD. If Andrew Goossen said "over 6", surely he was careful with words. "Over 6" can mean 6.1, 6.2, 6.3... If it were 6.8 or 6.9, surely he would have said "closer to 7". If it were 7, he would say 7, and so on. And that's why Digital Foundry, immediately after the interview in their video about the XSX, mentioned 6 GB/s as the highest number for the XSX SSD. No need to spin otherwise.
Btw, I'm banned from that thread. Looks like spreading crap is allowed.
Not a hard number either, so it is safe to assume the max rate is between 6 and 6.5 GB/s. But that is still an ideal-situation figure, so the average is still 4.8 GB/s.
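To put actual numbers on the argument, here's the arithmetic as a quick sketch. The 2.4 GB/s raw figure is the published XSX drive spec; the compression ratios are just what the averages and peaks quoted in this thread would imply, nothing official:

```cpp
#include <cstdio>

int main() {
    // Figures as quoted in this thread; the ratios are implied, not official.
    const double raw = 2.4;              // raw XSX drive throughput, GB/s
    const double avg_ratio = 4.8 / raw;  // implied average compression ratio (2.0x)
    const double peak_lo   = 6.0 / raw;  // implied ratio if the peak is 6.0 GB/s (2.5x)
    const double peak_hi   = 6.5 / raw;  // implied ratio if the peak is 6.5 GB/s (~2.7x)

    std::printf("average effective rate: %.1f GB/s (%.2fx)\n", raw * avg_ratio, avg_ratio);
    std::printf("peak effective rate:    %.1f-%.1f GB/s (%.2fx-%.2fx)\n",
                raw * peak_lo, raw * peak_hi, peak_lo, peak_hi);
    return 0;
}
```

Given those figures, "over 6" only requires roughly a 2.5x ratio on ideal (texture-heavy) data, while the 4.8 GB/s average corresponds to ~2x across everything.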
Probably said Pixar looks better. lol
Got good news now: the good old dev console. Got bad news tho: Quality mode is 30fps and Performance has screen tearing… NOOOOOOO! (Darth Vader Noooo)

They are possibly using better compression, but something tells me there was plenty of fat that couldn't be trimmed on PS4 (data duplication) and a lack of incentive (storage was a lot cheaper on PS4).
Some warrior bullshit. Rich does zero research, just farming the views; he has an absolutely worthless channel.
What did this buffoon say?
Makes sense, but I fail to see how the Series I/O is faster. The Sage guy tried to answer that question, but he appears to ignore the physical limitations of the drive. Maybe I'm just struggling to understand how it's possible after the comparisons that we had.

He gets his math wrong after being spun by Microsoft's PR buzzword-multiplier bullshit. The actual math is a different story.
There was a random user on Era saying that the game would run the 60fps mode at 1080p with no RT. A dev from Insomniac quoted that user's post with a "yikes" emoticon. The dude went ahead and made a whole video about that "leak".
I'd like to know how he has so many subscribers.
That I/O unit alone is equivalent to 12 Zen 2 cores, according to Cerny in Road to PS5. It's amazing how Cerny incorporated Kraken decompression, it being a lossless decompressor; otherwise a lossy decompression route would have been the alternative, which would have been so problematic.
Didn't Cerny say it was equivalent to 9 Zen 2 cores?
I wonder if devs can tap into the power of the IO just as Cerny said they can use the Tempest Engine for additional compute.
No, they say developers don't have to author LOD levels. That doesn't mean LOD levels aren't used; it means developers don't have to pre-create them. UE5 will automagically create discrete LOD levels from a single mesh and use those when needed. Epic's words are that LOD levels won't be authored anymore, not that they won't be used. Thinking LOD levels or geometry refinement won't be used is absurd. Do you think a 1M-polygon mesh that is projected to a single pixel will be rendered at full precision?

I've looked into your much earlier suggestion about Nanite probably using signed distance functions (fields volumetric rendering), and it does seem plausible IMO, now that I think I understand it.
From what I understand, SDFs are very much not using polygons for rendering either (unless my understanding of the topic is way off), so the 1M asset encoded as a combination of many SDFs would actually be rendered in a fragment shader, AFAIK, and would only ray march a small number of rays per pixel. The "4 triangles per pixel" that Epic mention are presumably the SDF primitive unit they've procedurally used to encode the megascanned/atomview assets with, and are not triangle primitives in the way we would normally understand a triangle to mean.
As I understood it, the LODs wouldn't exist for SDF rendering, because the whole point is that the mathematical representation doesn't lack tessellation in the way that even the finest mesh (say, one representing a sphere) still would. Polygons are inherently flat, whereas fragments (points) are inherently smooth, and the fidelity of the equation's output automatically scales to the framebuffer, no more, no less, giving perfectly tessellated geometry.
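For anyone who hasn't seen the technique, ray marching an SDF looks roughly like this. A minimal sphere-tracing sketch against an analytic unit sphere; the names are illustrative and it's not a claim about what Nanite actually does, just the textbook version of the rendering style described above:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float length(Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Analytic SDF: signed distance from p to the surface of a unit sphere at the
// origin. Unlike a triangle mesh, the surface is exact at any magnification.
static float sdfSphere(Vec3 p) { return length(p) - 1.0f; }

// Sphere tracing: step along the ray by the distance the SDF guarantees is
// empty. One march like this per pixel is the fragment-shader approach.
static bool raymarch(Vec3 origin, Vec3 dir, float& tHit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {                   // iteration budget
        float d = sdfSphere(add(origin, scale(dir, t)));
        if (d < 1e-4f) { tHit = t; return true; }     // close enough: hit
        t += d;                                       // safe step forward
        if (t > 100.0f) break;                        // ray left the scene
    }
    return false;
}

int main() {
    float t;
    if (raymarch({0.0f, 0.0f, -3.0f}, {0.0f, 0.0f, 1.0f}, t))
        std::printf("hit at t = %.4f (expected 2.0)\n", t);
    return 0;
}
```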
You are right that SDFs in the mathematical sense have unlimited resolution, but almost all the big implementations store them discretized in a grid-like volume texture.
The voxel size is what effectively constitutes the LOD level. With a coarser grid you take bigger steps when raymarching, but the volume has less resolution.
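In code, a discretized SDF is just a 3D array of distance samples plus a voxel size; a sketch (nearest-neighbour lookup for brevity, where a real implementation would sample trilinearly):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Discretized SDF: distances sampled on an n^3 grid covering a cube of side
// n * voxelSize. The voxel size is the effective LOD: a coarser grid (bigger
// voxels) loses surface detail but lets the raymarcher take bigger steps.
struct GridSDF {
    int n;                    // grid resolution per axis
    float voxelSize;          // world-space size of one voxel
    std::vector<float> dist;  // n*n*n distance samples

    float sample(float x, float y, float z) const {
        auto idx = [&](float v) {
            return std::clamp(static_cast<int>(v / voxelSize), 0, n - 1);
        };
        return dist[(static_cast<std::size_t>(idx(z)) * n + idx(y)) * n + idx(x)];
    }
};

int main() {
    GridSDF g{2, 1.0f, std::vector<float>(8, 0.5f)};  // trivial 2^3 field
    return g.sample(0.3f, 1.2f, 0.7f) > 0.0f ? 0 : 1;
}
```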
I used polygonal meshes in the example described in the post you quote because I think folks are more used to polygons and it would be easier to understand. And even if Nanite used SDFs, those are not really well suited for everything. Animated characters, for example, would be hard, so I'm sure it will be a mixed approach where some meshes are rendered via SDFs and others aren't. These last kinds of meshes will be using LODs, IMHO.
Having said that, I can obviously be wrong. I think it aligns quite well with the restrictions Epic claims about Nanite and with Brian Karis' prior research, but it might be personal bias from my interest in that kind of rendering.
Even if that weren't the case, though, I think we can all agree that "having no LOD system" is bullshit and that traditional LOD methods won't be removed (except for a few simple cases) in the near future. Improving the progressive-meshes-like techniques to further refine a LOD and make transitions between levels almost invisible, and not having to author LODs manually, does not mean they won't be used, which is the initial claim I was debunking.
Let me take advantage of this post to ask you something in return: I'd love it if the few forum posters here with more graphics-programming knowledge, like you, wouldn't let these misconceptions spread. It's great to speculate about what they are doing, but spawning technical discussions from things that are obviously wrong only brings confusion to the table and gets used as fuel for trolling (see all the "R&C is still using traditional LOD systems", "PS5 won't use LODs" or "Xbox can only render 5x fewer polygons than the PS5" posts across the forum in other threads).
Removing manual LOD authoring (a very big win on the game-production side) and removing LODs completely, even if only for some parts of the scene (the UE5 video was clear that Nanite was not being used for dynamic geometry like player characters), are already big steps forward.

Completely agree, I don't think I have ever downplayed what UE5 is doing. Anything that gives developers time to focus on what really matters is a big win. It is not just about the rendering part: content authoring speed matters a great deal, and thus how quickly, and how often, you as a developer can iterate over the game during production. That is what will help max these consoles out and advance the medium.
Technically LODs will always be used, but you're thinking of how it works the old way. The new way, from UE5 onward (and probably Sony studios also have this tech already), is that there are no old-fashioned LODs anymore. The scene will be rendered with a budget and the engine will choose what to render on what pixel, technically creating LODs on its own. The difference from now, though, is that you won't see any LOD changes, because they're sub-pixel anyway, and you don't have to manually create them anymore, saving lots of time and space.
I'm just trying to debunk the claim that no LODs will be used and that that will be the way going forward. I think I already made my point clear: LODs will keep being used because they bring performance improvements and their downsides can be worked around, like what mip-mapping does on the texturing side. There are multiple things that can be improved about them (transitions between different LOD levels being one) and I hope there's progress there, but I think we can all agree that "no LODs" is never happening, at least in the next 10 years. Even Pixar uses LODs in their movies.
See? This is what I'm against. What is the opposite of "old-fashioned LODs"?
Solid debate. Would like to see if auto LOD scaling is possible. Absolutely have no idea if things like mesh shaders etc. make scaling down from a single source feasible and cheap.
Engines have always chosen which LOD to use. Whether LODs are created on the fly or not does not mean LODs aren't being used. Them being progressively refined is also nothing new; the progressive meshes paper was published in 1996 and the technique has been used extensively since then.
Let me write down an example, again. Imagine we have a 1M-polygon mesh which is drawn twice: once near the camera and once far away, spanning a couple of pixels. Do you really think it makes sense to derive the second one from the first one at runtime? It doesn't. You would be wasting cycles simplifying something you could precompute. No LOD authoring means developers won't have to create them, but I'm sure UE5 will create simplified versions of meshes prior to execution and use those when needed. Even if you could do it in real time, there are more important things to spend cycles on. And again, this doesn't mean you'll see LOD transitions; you can work around that without removing LODs entirely.
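And the runtime logic that picks between those precomputed versions is trivial; that's the cheap part. A sketch with a made-up four-level chain and made-up thresholds, purely for illustration:

```cpp
#include <cmath>
#include <cstdio>

// Pick a precomputed discrete LOD from the projected screen size of the
// object's bounding sphere. The thresholds and the four-level chain are
// invented for the example; the point is how little work selection takes.
int selectLod(float radius, float distance, float screenHeightPx, float fovY) {
    float projectedPx = (radius / (distance * std::tan(fovY * 0.5f))) * screenHeightPx;
    if (projectedPx > 400.0f) return 0;  // the full 1M-poly mesh, near the camera
    if (projectedPx > 100.0f) return 1;  // a precomputed ~250k version
    if (projectedPx > 10.0f)  return 2;  // a precomputed ~60k version
    return 3;                            // a couple of pixels: tiny proxy mesh
}

int main() {
    // The scenario from the post: the same mesh drawn near and very far away.
    std::printf("near (2 m):  LOD %d\n", selectLod(1.0f, 2.0f,   2160.0f, 1.0f));
    std::printf("far (500 m): LOD %d\n", selectLod(1.0f, 500.0f, 2160.0f, 1.0f));
    return 0;
}
```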
Can we have a link? 1080p for 60fps without ray tracing seems extremely exaggerated.
It is. The 1996 paper I was referring to defined a set of operators to do exactly that. It has been used extensively; it's nothing new.
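(That's Hoppe's "Progressive Meshes", SIGGRAPH 1996.) The core idea in data-structure form, with illustrative field names rather than the paper's exact encoding: a coarse base mesh plus a stream of vertex-split records, where applying any prefix of the stream yields an intermediate LOD:

```cpp
#include <cstdint>
#include <vector>

// One refinement operator: vertex vs splits into (vs, vt), re-wiring the
// adjacent faces. Undoing it is the edge collapse used for simplification.
struct VertexSplit {
    std::uint32_t vertexToSplit;        // vs
    float         newPosition[3];       // where the new vertex vt goes
    std::uint32_t leftFace, rightFace;  // neighbouring faces to re-wire
};

struct ProgressiveMesh {
    // base mesh (the coarsest LOD) omitted for brevity
    std::vector<VertexSplit> splits;    // apply a prefix -> any intermediate LOD
};

int main() {
    ProgressiveMesh pm;
    pm.splits.push_back({0u, {0.0f, 0.0f, 0.0f}, 1u, 2u});
    return static_cast<int>(pm.splits.size()) - 1;  // 0
}
```

Refining by applying splits, or coarsening by collapsing in reverse, is exactly the "progressively refined" behaviour mentioned above.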
Keeping in mind, today there is also logic involved in deciding which LOD to use at which time, and possibly even overhead in fetching it if it's not readily in RAM. Cache-hit implications, etc.
All makes sense, but don't GPUs have dedicated hardware to do some of the heavy lifting now?
The logic involved in deciding which LOD level to use will still need to be there even if LODs are created as needed. That's because you'll want to know what your simplification target is.
And yes, there's overhead in fetching them if they're not in RAM, but if it hasn't been a problem until now, it won't be a problem with the I/O bandwidth of the new consoles.
But see? As the I/O bandwidth grows, the case for precomputing LODs and fetching them into RAM when needed gains weight. Using a single mesh available in memory and spending cycles simplifying it made more sense when requesting discrete LODs from disk was more expensive due to limited bandwidth or disk space. The cycles you spend doing something that can be precomputed are better spent on something else (refining the discrete LODs progressively as needed to minimize pop-in, for example).
Again, the better example is mip mapping. Games could use textures at full resolution and downsample them when needed, but they don't, because you can precompute the mip chain and spend your power on something else far more interesting. Also, take into account that downsampling a texture is a much cheaper operation than simplifying geometry; even so, games are not doing it at runtime, and there's no talk about removing texture LODs AFAIK.
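The mip analogy in code: the chain is precomputed offline, and the runtime cost is just deriving a level from the pixel's footprint in texel space. A simplified version of the standard formula, shown on the CPU for clarity:

```cpp
#include <cmath>
#include <cstdio>

// ddx/ddy are the pixel's texel-space derivatives (how many texels one pixel
// step covers). Level 0 is the full-resolution texture.
float mipLevel(float ddx, float ddy) {
    float footprint = std::fmax(std::fabs(ddx), std::fabs(ddy));
    return std::fmax(0.0f, std::log2(footprint));
}

int main() {
    // A pixel covering ~8 texels lands on mip 3 (8 = 2^3).
    std::printf("mip = %.1f\n", mipLevel(8.0f, 5.0f));
    return 0;
}
```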
To do mesh simplification? No, they don't. You can do it with shaders, but there's no fixed function for it.
The speed of the I/O also makes sense in the opposite direction. If you're no longer reserving as much for things outside of the viewport and can bring in the highest quality with speed, do it.

Yup, but that's only taking the I/O into account; you then have to compute the simplification.
The decompressor is 9. The DMA is up to 2. And I think the coherency engines are the rest. I could be wrong; I haven't watched Road to PS5 in a while.
Basically, he said the 60fps mode was 1080p without any proof. I'm not saying it isn't possible, but as a journalist he needs to provide proof for his claims.
Aren't Miles Morales and Spider-Man almost native 4K at 60fps? Even the 60fps RT mode had the game running at a resolution higher than 1080p. Why would he make such a preposterous claim?
wtf is THAT??
No, they'll do a combination of both. They'll do real-time simplification between discrete LOD levels.
I think we're telling the same story here; maybe I didn't use the right words for you, though. I also say there'll be LODs, but decided by the engine, and on a sub-pixel level, so you won't ever see it as a gamer.
Re: LOD, pop-up, etc., even watching at shitty 1080p YouTube image quality:
Ratchet PS5
9:37 - grass pop-up (middle centre of the screen, around the rock)
10:25 - abrupt LOD change (middle centre-right; branches, bushes around the tree)
12:45 - shadow pop-up (middle centre, on the road)
I think that's proof that the footage wasn't faked. A render farm wouldn't leave those imperfections. If anything, it's good news for PS5 owners.

Of course the gameplay footage isn't faked, oh my goodness, what some people believe lol. Ratchet makes use of traditional LODs, so yes, there'll be some noticeable pop-in and LOD transitions. For the rest: Pixar movie.