
PS5 Pro Specs Leak Is Real, Releasing Holiday 2024 (Insider Gaming)

Gaiff

SBI’s Resident Gaslighter
Honestly, the only reason I'm on here going back and forth with people is out of love, and I'm cognizant of the fact that in another week or so this thread will effectively die, and I'm sad about it
A lot of my hope for the console was brought about by your enthusiasm. If by the end of the month we haven't heard anything, you'll have a little problem.

casino wtf GIF by O&O, Inc
 
Last edited:

Bojji

Member
Honestly, the only reason I'm on here going back and forth with people is out of love, and I'm cognizant of the fact that in another week or so this thread will effectively die, and I'm sad about it

Sad Married At First Sight GIF by Lifetime



Anyways, back to arguing...




Lol, 1080p? Are you crazy, my friend!? Try 4K output, always. Even when upscaling with PSSR, it will always be a 4K output, aka a 4K-sized framebuffer and its memory requirements.

The whole point of PSSR/FSR/DLSS/XeSS, etc., is to reduce all requirements, including memory bandwidth. The final 4K framebuffer adds some overhead, but it's obviously nowhere near as heavy as actual native resolution.

So L3 cache (Infinity Cache) is VERY, VERY important at native 4K, VERY important at 1440p, and semi-important at 1080p; that's why cards with 128-bit buses target 1080p or less.

I thought you were saying that L3 is not important because the PS5 Pro will use a relatively low native res for games.
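To put rough numbers on why bandwidth (and the cache that shields it) matters more as resolution climbs, here's a minimal sketch; the buffer format is an assumption, and real frames use many targets and passes:

```python
BYTES_PER_PIXEL = 8  # assuming an RGBA16F render target

for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    mib = w * h * BYTES_PER_PIXEL / 2**20
    print(f"{name}: ~{mib:.1f} MiB per target per pass")
```

At 4K, every full-screen pass moves roughly 4x the data of 1080p, which is why a large L3 that keeps those targets on-chip pays off most at high native resolutions.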
 

sachos

Member
Man, I used to think Astro Bot could be a launch title for the PS5 Pro, just like Astro's Playroom was for the PS5. It would have been cool. Show off RTGI and everything, making it look pre-rendered.
 

ChiefDada

Gold Member
The whole point of PSSR/FSR/DLSS/XeSS, etc., is to reduce all requirements, including memory bandwidth. The final 4K framebuffer adds some overhead, but it's obviously nowhere near as heavy as actual native resolution.

So L3 cache (Infinity Cache) is VERY, VERY important at native 4K, VERY important at 1440p, and semi-important at 1080p; that's why cards with 128-bit buses target 1080p or less.

I thought you were saying that L3 is not important because the PS5 Pro will use a relatively low native res for games.

I don't think this is true. A 4K framebuffer should be the same size whether it's rendered natively or inferenced via a software algorithm/ML. The same number of pixels have to move around as data. I am saying the opposite: the PS5 Pro will use PSSR 99% of the time to achieve 4K resolution, and that, combined with the other high graphical settings, will go beyond the capabilities of the 7700 XT's memory subsystem.

I recently brought up how the 3070 (and 2070S) choke in HFW at ~PS5 settings (even less, since they're running high textures vs. the PS5's very high/max textures). DF ran 1800p DLSS with DRS at high settings and concluded it's not optimal, as it drops to the mid-30s. Even at 1440p DLSS with DRS, those cards had GPU-limited drops. For comparison, the PS5's performance mode runs 1800p CB with DRS.
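A quick sketch of the distinction the thread keeps circling back to on the framebuffer point above: the final 4K target really is the same size either way, but the number of pixels the expensive shading passes touch is not (the performance-mode internal resolution here is an assumption):

```python
def pixels(w, h):
    return w * h

out_4k    = pixels(3840, 2160)  # final framebuffer, identical for both paths
native_4k = pixels(3840, 2160)  # pixels shaded when rendering natively
upscaled  = pixels(1920, 1080)  # assumed internal res of a 4K performance-mode upscale

print(f"output buffer: {out_4k:,} px either way")
print(f"pixels shaded: {native_4k:,} native vs {upscaled:,} upscaled ({native_4k / upscaled:.0f}x)")
```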

 

winjer

Gold Member
During the rendering pipeline, several buffers are used. But when using an upscaler, most of these buffers can be rendered at a lower resolution; the upscaling is only done near the end of the rendering pipeline.
This means using an upscaler will save processing power, memory usage, memory bandwidth, etc.
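A minimal sketch of that ordering, with hypothetical pass names and an assumed RGBA16F buffer format; only the passes after the upscale touch full-resolution buffers:

```python
INTERNAL = (1920, 1080)  # what the GPU actually shades
OUTPUT   = (3840, 2160)  # what the display receives

def buffer_mib(res, bytes_per_pixel=8):  # assuming RGBA16F targets
    w, h = res
    return w * h * bytes_per_pixel / 2**20

pipeline = [
    ("g-buffer",            INTERNAL),
    ("lighting",            INTERNAL),
    ("post-processing",     INTERNAL),
    ("upscale (PSSR/DLSS)", OUTPUT),   # first pass that writes full res
    ("UI composite",        OUTPUT),
]
for name, res in pipeline:
    print(f"{name:20s} {res[0]}x{res[1]:>5} ~{buffer_mib(res):.1f} MiB/target")
```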
 

kevboard

Member
If Sony really will release the Pro this month, their ninjas are doing a great job. Zero leaks, just rumors... nothing palpable.
it probably won't release this month. it will probably be revealed during the last third or so of this month, and then launch almost exactly a month after the presentation
 

Gaiff

SBI’s Resident Gaslighter
During the rendering pipeline, several buffers are used. But when using an upscaler, most of these buffers can be rendered at a lower resolution; the upscaling is only done near the end of the rendering pipeline.
This means using an upscaler will save processing power, memory usage, memory bandwidth, etc.
This goes without saying. I'm not quite sure I really understand what Chief is saying.
 

saintjules

Gold Member
Man, I used to think Astro Bot could be a launch title for the PS5 Pro, just like Astro's Playroom was for the PS5. It would have been cool. Show off RTGI and everything, making it look pre-rendered.

Yeah, I was saying earlier that it would have been awesome to see Astro Bot launch together with the Pro. Maybe they'll have it preinstalled on the Pro. Then again, maybe not.
 

ChiefDada

Gold Member
During the rendering pipeline, several buffers are used. But when using an upscaler, most of these buffers can be rendered at a lower resolution; the upscaling is only done near the end of the rendering pipeline.
This means using an upscaler will save processing power, memory usage, memory bandwidth, etc.

This goes without saying. I'm not quite sure I really understand what Chief is saying.

All I'm saying is 4K DLSS Performance has a higher memory footprint than native 1080p. I mean, Alex practically proves as much in the video. Unless you guys are saying the 3070 has a compute disadvantage against the PS5, hmmmmmm????


Eyebrows Mafs GIF by Married At First Sight Australia
 

kevboard

Member
All of the customization they do just does not exist to you?

Tonight Show Yes GIF by The Tonight Show Starring Jimmy Fallon

well, it barely does anything 🤷
and the changes these days are mostly removing things they think they don't need, to save costs.

all these custom modifications in the end resulted in maybe a 5% increase in performance compared to an equivalent PC part. so you can basically look at a PC with similar specs, add 4-6 fps to the average, and you've got your PS5 performance. and it will probably be the same with the Pro
 
Last edited:

Zathalus

Member
All of the customization they do just does not exist to you?

Tonight Show Yes GIF by The Tonight Show Starring Jimmy Fallon
Nothing really significant when it comes to additional GPU or CPU performance. Not non-existent mind you, but the last console GPU that was dramatically different from what you could get on PC was Xenos.

Plenty of customization for I/O and audio, of course, but nobody really talks about that.
 

Skifi28

Member
All I'm saying is 4K DLSS Performance has a higher memory footprint than native 1080p. I mean, Alex practically proves as much in the video. Unless you guys are saying the 3070 has a compute disadvantage against the PS5, hmmmmmm????
It does, but it's still much lower than native 4K.
 

winjer

Gold Member
All I'm saying is 4K DLSS Performance has a higher memory footprint than native 1080p. I mean, Alex practically proves as much in the video. Unless you guys are saying the 3070 has a compute disadvantage against the PS5, hmmmmmm????

That is correct; in that case, DLSS will use around 300-400 MB more.
But it will use significantly less memory and compute than 4K native, while looking as good.
 

kevboard

Member
Nothing really significant when it comes to additional GPU or CPU performance. Not non-existent mind you, but the last console GPU that was dramatically different from what you could get on PC was Xenos.

Plenty of customization for I/O and audio, of course, but nobody really talks about that.

and Xenos was only different because ATi speedran its development so that Microsoft had it ready for their console launch. it was like a pre-launch of their PC GPUs that came a year later
 

IDWhite

Member
But that is for data transfer from the SSD.
We are talking about CPU-to-GDDR6.

To transmit from the SSD to the SoC, the external controller communicates with the I/O coprocessors, coherency engines, DMAC, and decompressors.

The coherency engines are responsible for assisting the coprocessors with the data arriving from the SSD controller. But they are also responsible for consulting the GDDR6 memory for memory mapping, cache invalidations, and other data-synchronization tasks between different memory levels.
 
Last edited:

saintjules

Gold Member
To transmit from the SSD to the SoC, the external controller communicates with the I/O coprocessors, coherency engines, DMAC, and decompressors.

The coherency engines are responsible for assisting the coprocessors with the data arriving from the SSD controller. But they are also responsible for consulting the GDDR6 memory for memory mapping, cache invalidations, and other data-synchronization tasks between different memory levels.

Eddie Murphy What GIF by Amazon Prime Video
 

Fafalada

Fafracer forever
I don't think this is true. A 4K framebuffer should be the same size whether it's rendered natively or inferenced via a software algorithm/ML. The same number of pixels have to move around as data.
That would be true if game rendering were scanline-based (or a TBDR, but we don't use that word around here anymore), and the last time a 3D console did that was two decades ago, and no one even talks about it because it was a handheld.
But I digress: a 4K upscale will touch the 4K write surface a minimum number of times (ideally only once, but who knows what happens when you run inference) and read from 2-4x fewer pixels; plus, being a full-screen op, you can do it in an optimally cache-coherent way.
Meanwhile, native 4K will have multiple writes and reads at 4K, and in a random fashion, so the bandwidth demands of the two are massively different, possibly an order of magnitude or more.
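A back-of-the-envelope version of this argument; the pass count and the 3x pixel reduction are assumptions rather than measured figures, and it ignores the random-access (cache-unfriendly) penalty on the native path, which is what pushes the real gap toward an order of magnitude:

```python
FOUR_K   = 3840 * 2160
INTERNAL = FOUR_K // 3  # "read from 2-4x fewer pixels"
PASSES   = 10           # assumed number of full-screen read+write passes per frame

native_touches   = PASSES * 2 * FOUR_K                        # every pass at 4K
upscaled_touches = PASSES * 2 * INTERNAL + FOUR_K + INTERNAL  # one 4K write at the end

print(f"native / upscaled surface traffic: {native_touches / upscaled_touches:.1f}x")
```

Even this crude count gives ~2.5x before the cache-coherency difference is factored in.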
 
Last edited:

winjer

Gold Member
To transmit from the SSD to the SoC, the external controller communicates with the I/O coprocessors, coherency engines, DMAC, and decompressors.

The coherency engines are responsible for assisting the coprocessors with the data arriving from the SSD controller. But they are also responsible for consulting the GDDR6 memory for memory mapping, cache invalidations, and other data-synchronization tasks between different memory levels.

We all know that.
But we were not talking about that part, just the GDDR6-to-CPU latency.
 

ChiefDada

Gold Member
OK, thanks for all the knowledge, everyone. So then how do we explain Horizon Forbidden West's performance on a 3070 at ~1200p internal with only high textures? Again, the PS5 is running 1800p CB, so roughly 1600x1800, aka ~10% more pixels, assuming the 3070 is using DLSS Quality. And the highest textures on PS5.
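Running that pixel math as a quick sketch (checkerboard shades half of a 3200x1800 target; DLSS Quality's ~2/3-per-axis factor is NVIDIA's published figure) lands in the same ballpark:

```python
cb_1800p = (3200 * 1800) // 2  # 1800p checkerboard shades half of a 3200x1800 target
dlss_q   = round(3200 * 2 / 3) * round(1800 * 2 / 3)  # 1800p-target DLSS Quality

print(f"PS5 CB: {cb_1800p:,} px  vs  3070 DLSS-Q: {dlss_q:,} px  "
      f"(PS5 shades ~{(cb_1800p / dlss_q - 1) * 100:.0f}% more)")
```

So the PS5 shades roughly 13% more pixels in this comparison, close to the ~10% estimate above.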
 

winjer

Gold Member
OK, thanks for all the knowledge, everyone. So then how do we explain Horizon Forbidden West's performance on a 3070 at ~1200p internal with only high textures? Again, the PS5 is running 1800p CB, so roughly 1600x1800, aka ~10% more pixels, assuming the 3070 is using DLSS Quality. And the highest textures on PS5.

Is the 3070 running into its VRAM limits?
 

Mr.Phoenix

Member
OK, thanks for all the knowledge, everyone. So then how do we explain Horizon Forbidden West's performance on a 3070 at ~1200p internal with only high textures? Again, the PS5 is running 1800p CB, so roughly 1600x1800, aka ~10% more pixels, assuming the 3070 is using DLSS Quality. And the highest textures on PS5.
Isn't DLSS Quality 1440p? Balanced is 1253p.

There's like a 20%+ internal res difference in favor of DLSS Quality over 1800p CB.
 

Fafalada

Fafracer forever
Isn't DLSS Quality 1440p? Balanced is 1253p.
With a 4K target.
At 1800p it's only 1200p, so yeah, that's ~10% fewer pixels.

But I don't know how the different GPUs scale in HFW. Maybe compare how 1200p native runs versus DLSS on that 3070/2070 to see if the upscaler is meaningfully impacting the result.
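For reference, a minimal sketch of the internal resolutions implied by NVIDIA's published per-axis DLSS scale factors (Quality ~0.667, Balanced 0.58, Performance 0.5):

```python
MODES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}  # per-axis scale

for target in (2160, 1800):  # 4K and 1800p output targets
    for mode, scale in MODES.items():
        print(f"{target}p target, {mode:11s} -> ~{round(target * scale)}p internal")
```

That reproduces the 1440p/1253p figures at a 4K target, the 1200p at an 1800p target, and the 900p floor that comes up later for Performance mode at 1800p.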
 

Loxus

Member
To transmit from the SSD to the SoC, the external controller communicates with the I/O coprocessors, coherency engines, DMAC, and decompressors.

The coherency engines are responsible for assisting the coprocessors with the data arriving from the SSD controller. But they are also responsible for consulting the GDDR6 memory for memory mapping, cache invalidations, and other data-synchronization tasks between different memory levels.
This is how you must look at it.


The CPU itself isn't transferring any data; it's looking for data, and the time it takes to find that data can hurt gaming performance.

This is why CPUs with 3D V-Cache perform better in games: they have more L3 cache to store the data they need and don't have to look for it in DDR memory as often.

The PS5's CPU only has 8 MB of L3 cache, so it has to access the GDDR6 memory more often than desktop CPUs, and to make it worse, looking for data in GDDR6 memory carries higher latency penalties than DDR memory. Around 250 ns or more, if I'm not mistaken.

Sony at least doubling the CPU's L3 cache could increase fps in games.
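A toy average-access-time model of that point (the L3 latency and hit rates are illustrative assumptions; the ~250 ns GDDR6 figure is the one cited above):

```python
L3_NS, GDDR6_NS = 10, 250  # assumed L3 latency; GDDR6 figure cited above

def avg_access_ns(l3_hit_rate):
    # average access time: hits served from L3, misses pay the GDDR6 round trip
    return l3_hit_rate * L3_NS + (1 - l3_hit_rate) * GDDR6_NS

for hit in (0.80, 0.90):  # hypothetical hit rates for, say, 8 MB vs 16 MB of L3
    print(f"L3 hit rate {hit:.0%}: average access ~{avg_access_ns(hit):.0f} ns")
```

Even a modest bump in hit rate cuts the average latency dramatically, which is the 3D V-Cache effect in miniature.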

I for one hope Sony takes a look at HBM for the PS6 like they did with SSDs. HBM4 seems set to dramatically reduce costs by removing the interposer in favor of 3D stacking.
SK Hynix on track to explore interposer-free HBM4 production

And the latency is after the L4 cache.
 
Last edited:

ChiefDada

Gold Member
Is the 3070 running into its VRAM limits?

It is. Which is insane to me at well below 4K and sub-max textures. But Alex saying the 1800p DLSS DRS mode was "basically" a great 60fps experience suggests to me there are GPU-limited scenarios where the 3070 dropped below 60 and, presumably, the internal resolution dropped to 900p.

Isn't DLSS Quality 1440p? Balanced is 1253p.

There's like a 20%+ internal res difference in favor of DLSS Quality over 1800p CB.

In the DF analysis, Alex is running 1800p DLSS with DRS, which can go as low as 900p.
 

ChiefDada

Gold Member
PlayStation Spectral Super Resolution.

Lol! If you realized who this guy most likely is, I don't think you'd bother. I'm just waiting for him to make his return appearance in the graphics thread.

An easy way to understand it: the AMD solution you are talking about is the hardware part; PSSR is the software part.

"Fully custom design "... just saying
 

Aaravos

Neo Member
Lol! If you realized who this guy most likely is, I don't think you'd bother. I'm just waiting for him to make his return appearance in the graphics thread.

"Fully custom design"... just saying
I'm genuinely new to NeoGAF
 

nial

Member
Honestly, the only reason I'm on here going back and forth with people is out of love, and I'm cognizant of the fact that in another week or so this thread will effectively die, and I'm sad about it
I actually liked reading all of you nerds.
Looking forward to Sony completely destroying my September 5th prediction tomorrow.
 

Loxus

Member
Lol! If you realized who this guy most likely is, I don't think you'd bother. I'm just waiting for him to make his return appearance in the graphics thread.



"Fully custom design "... just saying
A fully custom design would mean it isn't even using an AMD GPU anymore.
 

ChiefDada

Gold Member
Did you see the comment from Kepler_L2 yesterday? I don't know how to put it on here. He was implying that it's most likely AMD hardware. I don't know if that's good or bad.

Kepler_L2 is a great source, and I think it's pretty cool we have him as a member. I personally don't agree with his 7700 XT prediction or the dependence on AMD hardware for PSSR, as RDNA 3 already implements WMMA and Sony describes the ML architecture in the PS5 Pro as fully custom. As far as PSSR vs. DLSS, I believe PSSR will be superior, at least in Sony first-party titles.
 

Gaiff

SBI’s Resident Gaslighter
Kepler_L2 is a great source, and I think it's pretty cool we have him as a member. I personally don't agree with his 7700 XT prediction or the dependence on AMD hardware for PSSR, as RDNA 3 already implements WMMA and Sony describes the ML architecture in the PS5 Pro as fully custom. As far as PSSR vs. DLSS, I believe PSSR will be superior, at least in Sony first-party titles.
His predictions are based purely on the hardware, I believe. Can't make many predictions on the software, because nobody knows what Sony will really do with it when it comes to the Pro.
 