PS5 Die Shot has been revealed

I had to buy a TV anyway as my 1080p Panny plasma died, so it made sense to get a HDMI 2.1 set with the new consoles coming out.
In your case it's good reason, I agree. But I have a perfectly good 4k display, that works well. If my console didn't have a stable framerate I would have a bad experience.
 
I had to buy a TV anyway as my 1080p Panny plasma died, so it made sense to get a HDMI 2.1 set with the new consoles coming out.
Don't bother with him - he's one of these "everybody who has something better than I do wastes their money" guys.

You could have got a C9 over a year ago for $1999 USD and have had all of these things. Deals like that always show up if you have the patience to wait and the intelligence to get the most for your dollar.
4K is great.
So is real HDR.
So is 120Hz.
So is VRR.

That's the real experience honestly - and in 2 years this won't even be an argument.
 
But it seems that the only way to get stable framerates on the XSX is by using VRR. It's sad, really.
I will give you the benefit of the doubt and put this down to being badly informed.

I needed a 4K set for the new consoles. I bought an LG Nano. But after 6 months it needed a new panel and all the backlighting replaced, in the middle of a pandemic with no idea when the parts would be available. So I got my money back and got the Samsung.
 
Don't bother with him - he's one of these "everybody who has something better than I do wastes their money" guys.

You could have got a C9 over a year ago for $1999 USD and have had all of these things. Deals like that always show up if you have the patience to wait and the intelligence to get the most for your dollar.
4K is great.
So is real HDR.
So is 120Hz.
So is VRR.

That's the real experience honestly - and in 2 years this won't even be an argument.

Absolutely. You are free to use your money how you want. But I don't see why I should spend MY money on a VRR/120Hz gimmick. And I am not the only one who needs to make sure things are done within budget.

I will give you the benefit of the doubt and put this down to being badly informed.

Oh well, I am not the one with a console with unstable framerates
 
The PS5 has plenty of titles where the framerate drops: AC: Valhalla, Control, DMC5, RE: Village, and Destruction AllStars. VRR would help in each of those games.
Sure, it is nice to have VRR... it is an option for players willing to spend big.

But I feel like devs are relying too much on VRR and not optimizing games enough.
 
Sure, it is nice to have VRR... it is an option for players willing to spend big.

But I feel like devs are relying too much on VRR and not optimizing games enough.
I feel like no one can tell there are drops when a game goes to 57fps for a second. This is all academic. A few years ago we were playing games at 24fps with no VRR.
 
I do want to say that as far as ML support goes, it's probably a well-educated guess to assume Sony's support for it on PS5 extends to FP16 and INT16;

I wonder if anything lower than INT16 is even usable for an upscaling technique based on machine learning. Pixel color calculations are done at either FP32 or FP16 up to the moment the framebuffer is translated into color-space coding, where it needs to be 14 to 24 bits per pixel (or more if HDR is enabled).

Quad-rate INT8 may be useful for a lot of neural network inferencing, but I'd bet DLSS2 mostly uses the tensor cores' maximum precision of FP16, with some operations falling back to the full FP32 shader processors.
So while "ML hardware" in the form of 4x INT8 and 8x INT4 throughput is nice to have for, e.g., character behavior or procedural generation of assets, I don't know if it is especially useful for resolution upscaling.
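To make the precision trade-off concrete, here is a rough Python sketch (illustrative only, not tied to any console's actual hardware): symmetric per-tensor INT8 quantization of FP32 data, showing the 4x memory saving and the round-trip error you pay for it.

import numpy as np

def quantize_int8(x):
    # Map FP32 values to INT8 using a single per-tensor scale (symmetric).
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=100_000).astype(np.float32)  # NN-weight-like values
pixels = rng.uniform(0.0, 1.0, size=100_000).astype(np.float32)   # color-like values

for name, data in (("weights", weights), ("pixels", pixels)):
    q, s = quantize_int8(data)
    err = float(np.abs(dequantize(q, s) - data).max())
    print(f"{name}: max round-trip error {err:.6f}, {q.nbytes} bytes vs {data.nbytes} bytes in FP32")

The memory footprint drops to a quarter, but whether the quantization error is acceptable depends entirely on the data, which is the crux of the INT8-for-upscaling question.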
 
ML is more about software than hardware.

Higher precision or lower-precision integer support helps speed up the process, but the key is smartly written software... the logic is what makes ML work.

That said, there are a lot of conflicting reports about whether or not the PS5 supports INT4/INT8.
 
Absolutely. You are free to use your money how you want. But I don't see why I should spend MY money on a VRR/120Hz gimmick. And I am not the only one who needs to make sure things are done within budget.



Oh well, I am not the one with a console with unstable framerates

It's not a gimmick, it's the future norm; all TVs will eventually be HDMI 2.1, it'll be standard.

Every console ever made has had framerate dips; completely locked games are very rare. Dropping a small number of frames isn't actually a big deal in itself; it's how your TV handles it that is the real problem, because it displays at 60Hz regardless, and that's what leads to screen tearing and stuttering.
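As a rough illustration of that point, here is a tiny Python sketch (a simplified model with made-up frame times, assuming double-buffered vsync): the same small dips produce held frames on a fixed 60Hz panel, but nothing at all under VRR.

import math
import random

random.seed(1)
REFRESH_MS = 1000.0 / 60.0  # fixed panel refresh period (~16.7 ms)

# 600 frames: most take ~16 ms, roughly 10% slip to ~18 ms (a "57fps for a second" dip).
frame_times = [16.0 if random.random() > 0.1 else 18.0 for _ in range(600)]

# Double-buffered vsync on a fixed 60Hz panel: rendering of each frame starts at the
# previous flip, and a frame that takes longer than one refresh interval leaves the
# old image on screen for the extra tick(s), which is perceived as stutter.
repeated_ticks = sum(math.ceil(ft / REFRESH_MS) - 1 for ft in frame_times)
print(f"fixed 60Hz + vsync: {repeated_ticks} refresh ticks showed a stale frame")

# VRR: within the panel's refresh range, each frame is scanned out when it is ready,
# so the same frame times produce no tearing and no repeated frames.
print("VRR: 0 repeated frames for the same frame times")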
 
The irony is that when an upscaling technology like DLSS is deployed, it's more beneficial to the GPU with a lower potential fillrate than to one with a surplus of it!
Sure, it is nice to have VRR... it is an option for players willing to spend big.

But I feel like devs are relying too much on VRR and not optimizing games enough.

The reality is that all VRR does is stop tearing, it doesn't do shit for latency and is completely cosmetic.
 
The irony is that when an upscaling technology like DLSS is deployed, it's more beneficial to the GPU with a lower potential fillrate than to one with a surplus of it!


The reality is that all VRR does is stop tearing, it doesn't do shit for latency and is completely cosmetic.
You are underselling it... if you aren't missing frames it helps latency
 
I know that Tilo Pruckner died last year. Maybe you're saying this thread is dead?

The NeverEnding Story was pretty cool when it came out.
My guess is his meaning was much less esoteric than you thought.

Along the lines of ... this argument is a "Never Ending Story."

Aaaaohhaah-aahhahha-ahhhooahh.
 
Mostly agree. Xbox Series X just plain looks better designed overall than PlayStation 5, based on all the information presented from official sources. This isn't to say that PS5 is a bad system or anything, but it doesn't appear anywhere near as thoughtfully constructed as Xbox Series X from a SoC architecture viewpoint. Xbox Series X just plain seems a lot more advanced. The most impressive-looking piece of the PS5 is its SSD and overall I/O architecture. All else seems very plain jane.

And by magic it runs games better; how do you figure that one out?

You realise Sony's CUs process vertices and data differently, in a different order and with different logic, which allows more efficient post effects and makes the CUs more efficient? Must be magic.

Having 14 CUs share a parameter cache and LDS is well thought out? LOL, I guess it works well for GPU compute, but it is the definition of a bottlenecked GPU design.
 
I wonder if anything lower than INT16 is even usable for an upscaling technique based on machine learning. Pixel color calculations are done at either FP32 or FP16 up to the moment the framebuffer is translated into color-space coding, where it needs to be 14 to 24 bits per pixel (or more if HDR is enabled).

Quad-rate INT8 may be useful for a lot of neural network inferencing, but I'd bet DLSS2 mostly uses the tensor cores' maximum precision of FP16, with some operations falling back to the full FP32 shader processors.
So while "ML hardware" in the form of 4x INT8 and 8x INT4 throughput is nice to have for, e.g., character behavior or procedural generation of assets, I don't know if it is especially useful for resolution upscaling.

They may not be particularly useful for resolution upscaling but they can certainly help; AMD don't have any Tensor core equivalents in hardware, so their Super Resolution (when it's ready) will have to leverage generic compute on the GPU. This applies to both Microsoft and Sony's platforms. If there are specific use-cases where the INT8 and INT4 precision can be of benefit, I'm certain they'll be used.

One immediate benefit I can think of is lower memory usage compared to single-precision and double-precision values, and with it lower memory bandwidth compared to FP16 and FP32. But that is for data particularly suited to INT8 and INT4, of course. I think Microsoft's thinking in including extended support for INT8 and INT4, as it relates to games, is that for data which can be expressed in those types it will lower memory and bandwidth usage, freeing up capacity and bandwidth for data in other type sets. So effectively it will improve performance in the long run, but it comes down a lot to whether the data can fit those types or not. Also, since INT8 and INT4 are simpler, computation time is reduced compared to FP16, let alone FP32.
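As a back-of-the-envelope illustration of that memory point (the buffer size here is arbitrary, purely for the sake of example):

N = 10_000_000  # elements in a hypothetical data buffer
bits_per_element = {"FP32": 32, "FP16": 16, "INT8": 8, "INT4": 4}

for fmt, bits in bits_per_element.items():
    megabytes = N * bits / 8 / 1e6
    print(f"{fmt}: {megabytes:.1f} MB")  # FP32 40.0 MB, FP16 20.0, INT8 10.0, INT4 5.0

Halving or quartering the footprint of suitable data frees the same fraction of bandwidth every time that data is streamed, which is exactly the benefit described above.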

For things outside of gaming, such as AI, it can have its uses in DNNs; Intel trained a neural network solely on INT8 at some point, for example. INT8 tends to be found more in CPUs, FPGAs, etc. rather than GPUs, but if a GPU has support for those datatypes in hardware, I'm guessing this could be combined with leveraging TMUs for purposes outside of textures to accelerate computation of certain data representations, data arrays, etc., and this can extend to benefits for gaming code too.

The irony is that when an upscaling technology like DLSS is deployed, it's more beneficial to the GPU with a lower potential fillrate than to one with a surplus of it!


The reality is that all VRR does is stop tearing, it doesn't do shit for latency and is completely cosmetic.

What do you mean by "potential fillrate"? Pixel fillrate? Texture fillrate? TF (which aren't an indication of either)?

Also, VRR has a positive impact on latency for the actual user, because the framebuffer is delivered to the display with more accurate synchronization and timing, allowing them to be more precise with their inputs.
 
Don't bother with him - he's one of these "everybody who has something better than I do wastes their money" guys.

You could have got a C9 over a year ago for $1999 USD and have had all of these things. Deals like that always show up if you have the patience to wait and the intelligence to get the most for your dollar.
4K is great.
So is real HDR.
So is 120Hz.
So is VRR.

That's the real experience honestly - and in 2 years this won't even be an argument.
What's REAL HDR?
 
Where can I read about this?
Knock yourself out; the long words will make Riky laugh.

That's why CUs per shader array are limited to 10 for gaming performance, and the PS5 goes a step further. From Cerny and Naughty Dog:

[patent figure: jUCSi3B.png]

Some GPU compute and certain tasks are not bottlenecked... and it gets worse the more data shares the caches in a shader array.

That's why wide means more shader arrays, not more CUs, and why the 6800 will always smoke the consoles: 6 shader arrays vs 4. It is what it is.
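For what it's worth, the arithmetic behind that claim looks like this (a quick Python tally using commonly reported figures; treat the exact CU and array counts as assumptions rather than official confirmations):

gpus = {
    # name: (active CUs, shader arrays) -- commonly reported figures, not official specs
    "RX 6800": (60, 6),
    "Xbox Series X": (52, 4),   # 56 physical CUs, 52 enabled
    "PlayStation 5": (36, 4),   # 40 physical CUs, 36 enabled
}

for name, (cus, arrays) in gpus.items():
    print(f"{name}: {cus} CUs / {arrays} arrays = {cus / arrays:.1f} active CUs per array")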
 
Knock yourself out; the long words will make Riky laugh.

That's why CUs per shader array are limited to 10 for gaming performance, and the PS5 goes a step further. From Cerny and Naughty Dog:

[patent figure: jUCSi3B.png]

Some GPU compute and certain tasks are not bottlenecked... and it gets worse the more data shares the caches in a shader array.

That's why wide means more shader arrays, not more CUs, and why the 6800 will always smoke the consoles: 6 shader arrays vs 4. It is what it is.

JackMcGunns, don't doubt geordiemp.
 
Every console ever made has had framerate dips; completely locked games are very rare.
Are you kidding? Almost all of Sony's first-party games have locked framerates, and performance dips are very rare. Don't use VRR as an excuse for devs' laziness.
 
HDR with a television that can do contrast properly (proper backlight or OLED) and display a wide color gamut

A good FALD/LED does HDR better than OLED, with significantly higher peak brightness and sometimes a wider color gamut.

The C9 is well below 1000 nits and, like other OLEDs, suffers from ABL, which means it can't hold its already (comparatively) low peak brightness for long.

However
New OLED tech from Panasonic is supposed to reach up to 1000 nits with special cooling.
🤙
 
A good FALD/LED does HDR better than OLED, with significantly higher peak brightness and sometimes a wider color gamut.

The C9 is well below 1000 nits and, like other OLEDs, suffers from ABL, which means it can't hold its already (comparatively) low peak brightness for long.

However
New OLED tech from Panasonic is supposed to reach up to 1000 nits with special cooling.
🤙

All correct... except the subjective use of the word "better". I moved from a bright Samsung panel with HDR to the OLED and found it to be a big improvement. My TV is in a dark room, and in that case the peak brightness of the C9 is more than enough... especially with the added impact of perfect blacks.
 
All correct... except the subjective use of the word "better". I moved from a bright Samsung panel with HDR to the OLED and found it to be a big improvement. My TV is in a dark room, and in that case the peak brightness of the C9 is more than enough... especially with the added impact of perfect blacks.
It's preference.

Dolby Vision content on my Z9D is breathtaking.
 
Absolutely. You are free to use your money how you want. But I don't see why I should spend MY money on a VRR/120Hz gimmick. And I am not the only one who needs to make sure things are done within budget.
While I agree people will probably not feel rushed to buy mid to high-end TVs to get access to VRR (or even proper HDR), and this is why very little attention should be paid to the feature when evaluating a title's performance on consoles, VRR is brand new; people have only been getting 4K TVs (not even all of them HDR) over the last few years... Now that you have taken away the benefit of a higher resolution, you expect people to fork out money for 120fps and VRR (and apparently not even all the sets that support it do it properly).

I would expect VRR/120Hz to be on less than 10% of gamers' TVs by the end of the generation.
 
Knock yourself out; the long words will make Riky laugh.

That's why CUs per shader array are limited to 10 for gaming performance, and the PS5 goes a step further. From Cerny and Naughty Dog:

[patent figure: jUCSi3B.png]

Some GPU compute and certain tasks are not bottlenecked... and it gets worse the more data shares the caches in a shader array.

That's why wide means more shader arrays, not more CUs, and why the 6800 will always smoke the consoles: 6 shader arrays vs 4. It is what it is.
I might be missing something but that text only explains that some bottlenecks can happen while using pixel and vertex shaders. It doesn't say anything regarding the PS5 having custom CUs.

Can you provide the full text?
Thanks!
 
Knock yourself out; the long words will make Riky laugh.

That's why CUs per shader array are limited to 10 for gaming performance, and the PS5 goes a step further. From Cerny and Naughty Dog:

[patent figure: jUCSi3B.png]

Some GPU compute and certain tasks are not bottlenecked... and it gets worse the more data shares the caches in a shader array.

That's why wide means more shader arrays, not more CUs, and why the 6800 will always smoke the consoles: 6 shader arrays vs 4. It is what it is.

We've heard several of your theories and they've all turned out to be rubbish; this one leads to Hitman 3 having 44% more resolution and higher shadow settings on Xbox.
 
I might be missing something but that text only explains that some bottlenecks can happen while using pixel and vertex shaders. It doesn't say anything regarding the PS5 having custom CUs.

Can you provide the full text?
Thanks!
I do not think that argument is about custom CUs, but about how many CUs are being fed by the same-sized shared cache that all CUs in the shader array share. The more clients, the more chances of cache thrashing and/or misses, which add latency.
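A toy Python sketch of that effect (an LRU cache with made-up sizes, nothing like the real hardware): as more clients stream their own working sets through one fixed-size cache, the hit rate collapses.

from collections import OrderedDict
import random

def hit_rate(num_clients, cache_lines=256, working_set=200, accesses=20_000, seed=0):
    rng = random.Random(seed)
    cache = OrderedDict()  # LRU order: most recently used at the end
    hits = 0
    for _ in range(accesses):
        client = rng.randrange(num_clients)
        line = (client, rng.randrange(working_set))  # an address in that client's working set
        if line in cache:
            hits += 1
            cache.move_to_end(line)
        else:
            cache[line] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return hits / accesses

for clients in (1, 2, 4, 8, 14):
    print(f"{clients:2d} clients sharing the cache -> hit rate {hit_rate(clients):.2f}")

That is the general "more clients sharing the same cache" pressure being described; the real shader-array caches are obviously far more sophisticated.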
 
I might be missing something but that text only explains that some bottlenecks can happen while using pixel and vertex shaders. It doesn't say anything regarding the PS5 having custom CUs.

Can you provide the full text?

It's the Naughty Dog and Cerny patent explaining the limitations of caches in CUs and the solution (you can google it), which is a custom CU solution, as the hardware and the order of processing are very much changed. Parameters are also stored differently.

The important part, if you understand the paper, is that it clearly explains the bottlenecks in the shader array, and it goes a long way towards highlighting why 14 CUs do not always perform better than 10 CUs in SOME, but not all, workloads (try explaining that to Riky, good luck).

The maintaining of the parameter vertices is clearly stated to be beneficial for post-processing efficiency (less work).

Now, the next argument is whether it exists in the PS5. Well, most likely: we have seen evidence of PS5 performance in post-processing and some workloads that matches well with this.

Or it's magic. It does not matter either way; the performance is there, and this, together with the cache scrubbers and whatever GE optimisation exists, matches well with the performance we see on the PS5.

Most of the PS5's unexpected performance features, such as the vertices patent, the GE, and the cache scrubbers, you won't see in a die shot, as most people are just looking for repeated patterns of metallisation to identify die functions.

[patent figure: NjgdjcF.png]
 
It's the Naughty Dog and Cerny patent explaining the limitations of caches in CUs and the solution; you can google it.

The important part, if you understand the paper, is that it clearly explains the bottlenecks in the shader array, and it goes a long way towards highlighting why 14 CUs do not always perform better than 10 CUs in SOME, but not all, workloads (try explaining that to Riky, good luck).

The maintaining of the parameter vertices is clearly stated to be beneficial for post-processing efficiency (less work).

Now, the next argument is whether it exists in the PS5. Well, most likely: we have seen evidence of PS5 performance in post-processing and some workloads that matches well with this.

Or it's magic. It does not matter either way; the performance is there, and this, together with the cache scrubbers and whatever GE optimisation exists, matches well with the performance we see on the PS5.

Most of the PS5's unexpected performance features, such as the vertices patent, the GE, and the cache scrubbers, you won't see in a die shot, as most people are just looking for repeated patterns of metallisation to identify die functions.

[patent figure: NjgdjcF.png]

Why do you keep tagging me in your drivel?
Your obsession with me is really odd🤣
 
We've heard several of your theories and they've all turned out to be rubbish; this one leads to Hitman 3 having 44% more resolution and higher shadow settings on Xbox.

Have you thought about what is different about Hitman 3 vs the other games? A unique engine, over 300 NPCs, a not-so-large team with their own engine written specifically around large NPC counts. Is the engine optimised for DX11/12 or GNM? How much does the compute favour the XSX? Is the engine optimised for PS5, or is it just a quick port?

We don't know, and neither do you. But keep hanging on to your Hitman 3 power narrative and ignoring every other game; it gives us a laugh.

Try thinking for yourself.

OK, I've got real work to do, see ya.
 
Have you thought about what is different about Hitman 3 vs the other games? A unique engine, over 300 NPCs, a not-so-large team with their own engine written specifically around large NPC counts. Is the engine optimised for DX11/12 or GNM? How much does the compute favour the XSX? Is the engine optimised for PS5, or is it just a quick port?

We don't know, and neither do you. But keep hanging on to your Hitman 3 power narrative and ignoring every other game; it gives us a laugh.

Try thinking for yourself.

OK, I've got real work to do, see ya.

You missed the Control benchmark after your meltdown absence, where again the GPU gap was proven beyond doubt.
When a PS5 game runs at higher settings with a 44% resolution advantage, come back to me.
You keep holding onto "PS4 games will load in one second", "PS5 has Infinity Cache", "PS5 has a special big L3 cache", "the PS5 secret sauce will be revealed at the AMD RDNA2 reveal", along with other unproven tripe.

Talking of laughs, you gave everyone all they could handle when you demanded proof that people owned a console before they were allowed to talk about it, even suggesting that people owning a PS5 should be mod-verified before they could comment 🤣🤣🤣
 