Next-Gen PS5 & XSX |OT| Console tEch threaD

Then why the focus? This is so weird.
Compared with last generation.
Cerny talked specifically about the GE performing primitive shading, which is indeed new with RDNA and Turing.
The API side is the same story.
For example, Turing has had hardware support for Mesh Shaders since its launch in 2018, while AMD added Primitive Shader units to the GE with RDNA in 2019.
MS only supported it with DX12U, released a few weeks ago... two years after the hardware.
Sony will support it now in their API... something new and not present in the PS4/PS4 Pro.

There is nothing weird at all... it is genuinely new to consoles.

On PC, even without the feature being supported directly by the API, there are extensions to use it in DX11, DX12, Vulkan and OpenGL.

Consoles lacked hardware and software support for Mesh/Primitive Shaders.
 
If this is true then Nvidia would basically double their CUDA cores and Tensor cores, even triple their RT cores, and additionally boost the clock speed by 200MHz. And then there is the die shrink from 12nm to 7nm. That would be a massive performance increase.
Well, 12nm is just 16nm+++ rebranded, so 7nm is a big move for them and probably allows them to fit that much more into the GPU at the same time.
Clocks should reach new levels too.

That is the biggest advantage nVidia has over AMD... they have better GPU tech even on a dated lithography process.
The scenario is different from Intel, which always had the lithography process advantage, and when they lost it (Intel 10nm lol) they suffered a big loss to AMD.
 
Mesh Shaders are done by Primitive Shader unit inside the Geometry Engine.

RDNA can do Mesh Shaders.
You could be right. However, from memory, culling triangles and vertices was discussed as the task of the Geometry Engine. Obviously mesh/primitive shaders do the opposite, and both could be features of a single unit.

Whether the full feature set Sony was talking about is part of stock RDNA2 is inconclusive though, unless I'm missing something.
 
Didn't Cerny in Road to PS5 say a big chunk of the APU is devoted to I/O, which has the equivalent of 11 Zen 2 CPU cores working to ensure the main CPU isn't needed and can run its game tasks at peak all the time?

He doesn't seem to ever say 'APU'. Just 'main custom chip' or 'silicon die'. He does say the following, which makes me pretty sure it is an APU:

Mark Cerny said:
Inside the main custom chip is a pretty hefty unit dedicated to I/O
 
I'm almost sure the trailer for the new AC will be CGI, but if you had to bet on it being gameplay, where do you think it was captured?
a) PC (I think it's this one)
b) XSX
c) Xbox One X
d) PS5
e) PS4 Pro

BTW I love some of the CGI trailers for AC.
So many generations and people still think the first footage is console gameplay? Exceptions apart (GOW first trailer), everything comes from PC first, ye olde downgradable material.
 
But the GPU can only produce 10TF of information, so if you have LOD 10 objects in the background, surely they would load quickly off the SSD, but now you have 5 million+ triangles for every asteroid on screen, which doesn't make sense and would make your game engine grossly inefficient. The point of the LOD system isn't that it's difficult pulling up assets, but to ease the strain on the GPU for distant objects. The only place the SSD has in all this is changing an object from LOD 1 to LOD 10 depending on how close it is to the player. Having an SSD doesn't magically give the PS5 more GPU power to display more triangles; it simply allows the loading of many different types of assets quickly, but you're still held back by the GPU and what it can render.
There are different bottlenecks for rendering a frame. It's not all about the teraflop count. You can have all the teraflops in the world, but if you don't have the data for them to work on, you can't do anything. That's why loading screens and other tricks exist. It's not that the GPU can't draw the frame; it's that the full data to draw it isn't ready. The fact that data needs to be streamed is one of the main reasons why open-world games look less graphically complex than linear games, even though they both run on the same hardware.

The PS5's SSD greatly reduces the streaming bottleneck. It allows the 10TF to be utilized more efficiently. With it you don't have to limit the graphical complexity of the game because of streaming issues. You don't have to use as much duplicate geometry and textures. You can load more animation data. You can load more sound. You can have more light probes to improve illumination. And so on. That kind of stuff doesn't significantly affect computational requirements, but it does make the final results look and feel a lot better.
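As a concrete illustration of the LOD swap both posts describe, here is a minimal C++ sketch of a distance-based streaming decision. The Asteroid struct, thresholds and requestStream call are hypothetical placeholders; a real streaming system is far more involved, but a faster SSD mainly changes how quickly a request like requestStream can complete, not how many triangles the GPU can draw.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical asset with several detail levels on disk (LOD 0 = highest detail).
struct Asteroid {
    float x, y, z;      // world position
    int   residentLod;  // LOD level currently loaded in memory (-1 = none yet)
};

// Placeholder for an async read from storage; a fast drive lets this finish
// within a frame or two, so objects can be promoted to a better LOD much
// closer to the camera than a HDD would allow.
void requestStream(Asteroid& a, int lod) {
    std::printf("streaming LOD %d for asteroid at (%.0f, %.0f, %.0f)\n",
                lod, a.x, a.y, a.z);
    a.residentLod = lod;
}

// Pick a LOD purely from camera distance. The GPU still only rasterizes the
// triangles of the chosen LOD; the SSD just decides how quickly a better LOD
// can become resident.
int chooseLod(float distance) {
    if (distance < 50.0f)  return 0;   // full detail
    if (distance < 200.0f) return 1;
    if (distance < 800.0f) return 2;
    return 3;                          // impostor / billboard
}

void updateAsteroid(Asteroid& a, float camX, float camY, float camZ) {
    const float dx = a.x - camX, dy = a.y - camY, dz = a.z - camZ;
    const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    const int wanted = chooseLod(dist);
    if (wanted != a.residentLod)
        requestStream(a, wanted);      // swap detail level as the player moves
}

int main() {
    Asteroid a{600.0f, 0.0f, 0.0f, -1};
    updateAsteroid(a, 0.0f, 0.0f, 0.0f);    // far away -> coarse LOD
    updateAsteroid(a, 580.0f, 0.0f, 0.0f);  // player flew closer -> finer LOD
}
```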
 
There are different bottlenecks for rendering a frame. It's not all about the teraflop count. You can have all the teraflops in the world, but if you don't have the data for them to work on, you can't do anything. That's why loading screens and other tricks exist. It's not that the GPU can't draw the frame; it's that the full data to draw it isn't ready. The fact that data needs to be streamed is one of the main reasons why open-world games look less graphically complex than linear games, even though they both run on the same hardware.

The PS5's SSD greatly reduces the streaming bottleneck. It allows the 10TF to be utilized more efficiently. With it you don't have to limit the graphical complexity of the game because of streaming issues. You don't have to use as much duplicate geometry and textures. You can load more animation data. You can load more sound. You can have more light probes to improve illumination. And so on. That kind of stuff doesn't significantly affect computational requirements, but it does make the final results look and feel a lot better.
It's like putting a Ferrari engine in a Mini Cooper.
 
Unfortunately, whatever this guy reports, it's always the opposite of what he says. I don't think anything he's said in the past has ever come true.

Only recently found him, so I have no idea about his claims; but others don't seem to find him credible.

It's interesting how two YouTubers came to the same conclusion about 6 months apart (Foxy and Moore's). Either they talked to each other / the same source, OR they found out the info independently from different sources, OR both are bullshitting but with the same scoop.

Either way, time will tell who is credible among the sea of insiders.



Thanks to Snowdonhoffen for posting this video here first.
 
You could be right. However, from memory, culling triangles and vertices was discussed as the task of the Geometry Engine. Obviously mesh/primitive shaders do the opposite, and both could be features of a single unit.

Whether the full feature set Sony was talking about is part of stock RDNA2 is inconclusive though, unless I'm missing something.
Microsoft's amplification shader can, paradoxically, be used to cull geometry. Mesh/primitive shaders can do it too. It's all part of RDNA2, because all they are is a new capability that lets some of the geometry-processing functionality that used to be hardwired into the GPU now be under the control of a shader program written by the devs. With them, the devs can do whatever they want to the geometry. They can programmatically add geometry to add detail, or they can take it away to improve performance, based on whatever criteria they decide.
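As a rough sketch of what "geometry processing under shader control" can look like, here is a small C++ analogue of a per-cluster (meshlet) cull test of the kind a mesh or amplification shader might run on the GPU. The Meshlet layout, cone test and numbers are illustrative assumptions, not the actual RDNA 2 or DirectX 12 Ultimate interfaces.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// A small cluster of triangles plus precomputed bounds, as used by
// meshlet-style pipelines (field names here are illustrative).
struct Meshlet {
    Vec3  center;        // bounding-sphere center
    float radius;        // bounding-sphere radius
    Vec3  coneAxis;      // average facing direction of the cluster's triangles
    float coneCutoff;    // cosine of the cone half-angle
};

// Returns true when the whole cluster can be rejected before any of its
// triangles are rasterized because every triangle faces away from the camera.
// A real shader would also test the bounding sphere against the view frustum.
bool cullMeshlet(const Meshlet& m, Vec3 cameraPos) {
    Vec3 toMeshlet = normalize(sub(m.center, cameraPos));
    return dot(toMeshlet, m.coneAxis) > m.coneCutoff;
}

int main() {
    Meshlet m{{0, 0, 0}, 1.0f, {0, 0, 1}, 0.5f};    // triangles face +z
    bool culledFront = cullMeshlet(m, {0, 0,  5});  // camera on the front side
    bool culledBack  = cullMeshlet(m, {0, 0, -5});  // camera behind the cluster
    std::printf("front culled: %d, back culled: %d\n", culledFront, culledBack);
}
```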
 
As per the NV Mesh Shading demo:

Mesh-Shading.jpg

This is how both consoles will handle the geometry: a LOT of resources saved on things no one would even notice. The close-ups/cut-scenes will be glorious on next-gen systems.


When Cerny was talking about the Geometry Engine he made reference to eliminating back-facing triangles - presumably he meant the draw calls for back-facing triangles getting eliminated before the mesh shading happens.

So in the example from the Nvidia mesh shading video, using LOD 1, the model has 20 polygons and is roughly cuboid with 6 sides, only 3 of which face the camera at any one time, and their picture seems to show 10 visible polygons as expected. But in their video, the model will still be 20 polygons, and the polygons facing away from the camera are a 50% waste in draw calls. So I'm now thinking that what Mark was trying to highlight is that the GE will save those draw calls, giving mesh shading 30-50% more capability, meaning that the polygon counts with the GE could be double those of a GPU without a GE at the same LOD, or it could render with a lower LOD but the same visible detail.
It might seem like a pretty small benefit, but with cube maps and shadow maps all wanting the best LOD level for models in their own frustum/view, this feature could be a major win (IMHO). It will be interesting to see if it is complementary to RT BVH acceleration structures, too.
It reminds me of ATI providing an offline model optimisation tool (in the PS3/360 generation) to re-order the polygons in a model to reduce overdraw – ordering them so that polygons closer to the model origin were typically sent later for drawing, so that outer (front-facing) polygons were already rendered – causing redundant fragments to statistically fail the depth test more frequently – the irony being that Nvidia products like the PS3's RSX benefited the most because of their lower fill rate and higher polygon throughput.
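For reference, the back-face rejection being discussed comes down to a per-triangle sign test; on a closed model roughly half the triangles fail it, which is where the "50% waste" figure above comes from. A minimal C++ sketch (illustrative only, not any console's API):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A counter-clockwise wound triangle faces the camera when its normal points
// back toward the eye. On a closed mesh roughly half the triangles fail this
// test, which is the work the Geometry Engine can discard before shading.
bool facesCamera(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0));
    return dot(normal, sub(eye, v0)) > 0.0f;
}

int main() {
    Vec3 eye{0, 0, 5};
    // A triangle wound counter-clockwise as seen from +z (front facing).
    bool front = facesCamera({-1, -1, 0}, {1, -1, 0}, {1, 1, 0}, eye);
    // The same triangle wound the other way, i.e. facing away from the camera.
    bool back  = facesCamera({-1, -1, 0}, {1, 1, 0}, {1, -1, 0}, eye);
    std::printf("front visible: %d, back visible: %d\n", front, back);
}
```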
 
It's like putting a Ferrari engine in a Mini Cooper.

I think a better way to think about it is that without the SSD (on an HDD) these consoles would be like sports cars, but with the SSD/IO they get turned into F1 cars.

If Sony/MS had spent their budget on further GPU/CPU power they would be selling you a Bugatti, but by going this route they built a Formula 1 car. Not as fast in a straight line, but they gave the power the freedom it needs to express itself in other ways.
 
It's like putting a Ferrari engine in a Mini Cooper.
A CPU without data is like a car without gas; neither is doing anything. The equivalent of the pre-SSD era is a Ferrari with a very tiny gas tank, which limited where it could go. If there were many gas stations around, it could go all out, but if there weren't, it either couldn't go there at all or had to reduce its speed to improve its gas mileage.

So a better car analogy is that the PS5 gave the Ferrari a big enough gas tank that it can go wherever it wants at full speed, even if there aren't gas stations around.
 
Can the culls be done before draw calls?
Absolutely. A game only tries to draw what is immediately around the player. So if you are outside, the game isn't going to try to draw the interior of a room two blocks away. Most of the level data is culled this way, because it's easy to determine that you are nowhere near it. Things go to the GPU for culling when they are closer and you don't really know whether they should be culled until you do some more calculations in greater detail. For example, it's possible that you could see the back side of a box in front of you. You only know you can't when you do the calculation and find out that side is facing away.
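A hedged sketch of that coarse CPU-side pass, in C++: skip draw-call submission for objects that are too far away or behind the camera, and let the GPU handle the finer per-triangle tests later. The submitDrawCall function and the single "behind the camera" test are placeholders, not any real engine's API.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v) { return std::sqrt(dot(v, v)); }

struct Object {
    const char* name;
    Vec3  center;   // bounding-sphere center
    float radius;   // bounding-sphere radius
};

// Placeholder for whatever the platform's command submission looks like.
void submitDrawCall(const Object& o) { std::printf("draw %s\n", o.name); }

// Coarse CPU culling: drop anything too far away or fully behind the camera.
// Finer tests (per-triangle back faces, occlusion) happen later on the GPU.
void cullAndSubmit(const std::vector<Object>& scene, Vec3 eye, Vec3 viewDir,
                   float maxDistance) {
    for (const Object& o : scene) {
        Vec3 toObject = sub(o.center, eye);
        if (length(toObject) - o.radius > maxDistance) continue;  // too far
        if (dot(toObject, viewDir) < -o.radius) continue;         // behind camera
        submitDrawCall(o);
    }
}

int main() {
    std::vector<Object> scene = {
        {"crate_in_front",        {0, 0, 10},  1.0f},
        {"room_two_blocks_away",  {0, 0, 500}, 5.0f},
        {"crate_behind_player",   {0, 0, -10}, 1.0f},
    };
    cullAndSubmit(scene, {0, 0, 0}, {0, 0, 1}, 200.0f);  // only the first draws
}
```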
 
Compared with last generation.
Cerny talked specifically about the GE performing primitive shading, which is indeed new with RDNA and Turing.
The API side is the same story.
For example, Turing has had hardware support for Mesh Shaders since its launch in 2018, while AMD added Primitive Shader units to the GE with RDNA in 2019.
MS only supported it with DX12U, released a few weeks ago... two years after the hardware.
Sony will support it now in their API... something new and not present in the PS4/PS4 Pro.

There is nothing weird at all... it is genuinely new to consoles.

On PC, even without the feature being supported directly by the API, there are extensions to use it in DX11, DX12, Vulkan and OpenGL.

Consoles lacked hardware and software support for Mesh/Primitive Shaders.
But am I wrong, or does software (i.e. games exploiting the feature) not exist at all for now, even on PC?
 
Then why the focus? This is so weird.
I think the PS5 GE is customized from RDNA 2.0; that's why they talked about it like it's a big deal. MS talked about VRS, which we know the GE is superior to, and that's why Sony didn't mention VRS even if the PS5 supports it. That would also suggest this customized GE will only be in the PS5; otherwise MS would also have it if it were a standard RDNA 2.0 feature, but MS didn't even talk about it, so I assume it's not there. At this point all the major features are out of the bag for both consoles.
 
But am I wrong, or does software (i.e. games exploiting the feature) not exist at all for now, even on PC?
Games can use Mesh Shaders via an extension on Turing GPUs.
Whether any game already uses them, I'd have to do more research.

This is a demo from 2018:

This demo uses Mesh Shaders with a Vulkan extension:
 
I think the PS5 GE is customized from RDNA 2.0; that's why they talked about it like it's a big deal. MS talked about VRS, which we know the GE is superior to, and that's why Sony didn't mention VRS even if the PS5 supports it. That would also suggest this customized GE will only be in the PS5; otherwise MS would also have it if it were a standard RDNA 2.0 feature, but MS didn't even talk about it, so I assume it's not there. At this point all the major features are out of the bag for both consoles.

VRS and primitive shaders are different forms of cutting down work; they should be used together. MS spent a great deal of time talking about primitive shaders, so I'm not sure why anyone would say they didn't.
 
But with ray tracing, what happens with the culling? Take reflections, for example: you can't have a reflection of an object outside of the viewpoint if it's culled.
I don't understand how it can work with ray tracing. For good ray tracing, you need to simulate the light bounces even off the back faces of objects, don't you?
 
Games can use Mesh Shaders via an extension on Turing GPUs.
Whether any game already uses them, I'd have to do more research.

This is a demo from 2018:

This demo uses Mesh Shaders with a Vulkan extension:

Probably I wasn't clear. I know this demo; I was asking whether I had missed a game using it.
 
I think the PS5 GE is customized from RDNA 2.0; that's why they talked about it like it's a big deal. MS talked about VRS, which we know the GE is superior to, and that's why Sony didn't mention VRS even if the PS5 supports it. That would also suggest this customized GE will only be in the PS5; otherwise MS would also have it if it were a standard RDNA 2.0 feature, but MS didn't even talk about it, so I assume it's not there. At this point all the major features are out of the bag for both consoles.
The Geometry Engine is just the PS5's way of exposing mesh/primitive shaders on the RDNA2 GPU. Microsoft calls them mesh and amplification shaders in their DirectX 12 Ultimate API. It's just different names for the same thing.

The Geometry Engine is not superior to VRS. They each do different things and will have different performance effects based on what you are trying to draw in the frame. It's almost a certainty that the PS5 and XSX will both have the capability to do both.
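To illustrate the "different things" point: geometry-side culling decides which triangles get processed at all, while VRS keeps all the triangles but shades some screen tiles at a coarser rate. A toy C++ sketch of a per-tile rate choice (the enum and heuristic are made up for illustration, not the DX12U or PS5 API):

```cpp
#include <cstdio>

// Illustrative shading rates: how many pixels share one pixel-shader result.
enum class ShadingRate { Rate1x1, Rate2x2, Rate4x4 };

// Toy per-tile heuristic: tiles with little detail or lots of motion can be
// shaded coarsely with little visible loss; detailed, slow-moving tiles keep
// full rate. The triangles in the tile are all still rasterized either way,
// which is why VRS and geometry culling complement each other rather than
// competing.
ShadingRate chooseRate(float tileContrast, float tileMotion) {
    if (tileMotion > 0.5f || tileContrast < 0.05f) return ShadingRate::Rate4x4;
    if (tileMotion > 0.2f || tileContrast < 0.20f) return ShadingRate::Rate2x2;
    return ShadingRate::Rate1x1;
}

int main() {
    std::printf("sky tile rate: %d\n",
                static_cast<int>(chooseRate(0.02f, 0.0f)));   // coarse
    std::printf("detail tile rate: %d\n",
                static_cast<int>(chooseRate(0.60f, 0.1f)));   // full rate
}
```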
 
Maybe we're tired of the same subjects being brought back over and over.
All I did was claim Sony was being vague; I did not bring a subject up. Maybe people should take issue with the over-sensitive types that literally immediately jumped on me for saying that, instead of insulting me over and over.
 
Looks like that new Assassin's Creed trailer did not have any gameplay footage. Schreier is right; most likely we'll have to wait till next week to see something. It also looks like the Series X event will be next week, and Sony will likely be doing their event on June 4th if that ResetEra leak was correct.
 
But with ray tracing, what happens with the culling? Take reflections, for example: you can't have a reflection of an object outside of the viewpoint if it's culled.
I don't understand how it can work with ray tracing. For good ray tracing, you need to simulate the light bounces even off the back faces of objects, don't you?
That's a real problem. I saw a video, I don't remember where, talking about a bug in ray-traced Minecraft caused by something like this. Culling was removing ceiling blocks at the far end of a tunnel because they couldn't be seen. However, that made the ray-traced lighting think the tunnel was open to the sky, so it lit up the end of the tunnel. My guess is that the solution to this is to code in hints that certain things shouldn't be culled.
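One way such a hint could look in engine code (purely speculative, not how Minecraft RTX actually fixed it): keep culled geometry out of the raster draw list, but still include it in the list used to build the ray-tracing acceleration structure.

```cpp
#include <cstdio>
#include <vector>

struct Mesh {
    bool visibleToCamera;    // result of the usual raster-side culling
    bool keepForRayTracing;  // hint: lights/reflections may still need this mesh
};

// Raster pass: only what the camera can actually see.
std::vector<const Mesh*> buildRasterList(const std::vector<Mesh>& scene) {
    std::vector<const Mesh*> out;
    for (const Mesh& m : scene)
        if (m.visibleToCamera) out.push_back(&m);
    return out;
}

// Ray-tracing pass: also include culled geometry flagged as relevant, so an
// off-screen tunnel ceiling still blocks sky light in the BVH.
std::vector<const Mesh*> buildRayTracingList(const std::vector<Mesh>& scene) {
    std::vector<const Mesh*> out;
    for (const Mesh& m : scene)
        if (m.visibleToCamera || m.keepForRayTracing) out.push_back(&m);
    return out;
}

int main() {
    std::vector<Mesh> scene = {
        {true,  false},  // visible crate
        {false, true},   // off-screen ceiling, still needed for lighting
        {false, false},  // distant interior nobody can see or light with
    };
    std::printf("raster: %zu meshes, ray tracing: %zu meshes\n",
                buildRasterList(scene).size(), buildRayTracingList(scene).size());
}
```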
 
Thanks for the info. Also, trust me, I'm well aware that the ICE Team was created to be the go-to specialists among all Sony game studios. I just know that some of the people on ICE come from those top studios, such as Guerrilla and Naughty Dog. As I stated, if anyone can pull it off it's one of those dev studios; they are the cream of the crop, so much so that individuals from those studios have been tasked with being on ICE and helping others in times of need.

Interestingly enough, dude (and I'm sure you already know this, but it's still awesome to read again anyway):

Yoshida again brought Cerny to help plan out a means for the new console to share some of the same functionality as the previous consoles, so as to reduce the burden and cost for developers. Cerny worked with Sony and Naughty Dog to form the Initiative for a Common Engine (ICE) Team, with part of the team working directly with Sony's hardware developers in Japan to bring about Yoshida's vision.

sources:

1 - https://www.videogameschronicle.com/features/who-is-mark-cerny/

2 - https://en.m.wikipedia.org/wiki/Mark_Cerny
 
?

He did not claim that this "leak" is his; he is just speculating about it.

In fact, it is public how he knows about that "info":


MLID does say he is getting leaks about the new Horizon. Correcting his wording from "I'm seeing" to "I'm getting". I didn't think he'd talk about leaks from other YouTubers without mentioning them.

 
I'm open to bets that nothing Microsoft shows will touch Horizon 2. Despite the TF "gap" Sony has the hardware configuration they think will fuel next gen. It's a different approach, but I think after seeing it in action it will be impossible to continue justifying TF as the sole measurement of power. It's something the pundits are going to have to see in action to believe, but I have a feeling all the TF press in the world will be put to rest...
 
MLID does say he is getting leaks about the new Horizon. Correcting his wording from "I'm seeing" to "I'm getting". I didn't think he'd talk about leaks from other YouTubers without mentioning them.


Man, it is public how he knows about that. I just posted his tweet.

So if you want to discredit the guy for something he did not do, go ahead, this thread is full of FUD and false claims.

I am not here to talk about grammar or provide moral lessons about how other people need to credit the info/leaks they get.
 
So, they won't show first-party titles? Because the tweet says "Check out First Look next-gen gameplay from our global developers". So they will show third-party titles?
MS usually refers to their first parties that way. In addition, it may be that they have partnerships, not necessarily Xbox exclusive, that will use the MS stage for marketing, so they end up having to use this term to encompass everyone.
 