
Digital Foundry The PS5 GPU in PC Form? Radeon RX 6700 In-Depth - Console Equivalent PC Performance?

the problem is actually more complex than that

here's why

at 4k 30 fps modes, the ps5 probably runs lower CPU clocks / lower CPU power so it can fully boost the GPU clock to 2.2 GHz
at 60 fps modes, it is highly likely that the CPU needs to be boosted to its maximum clock, and my guess is that the GPU clock takes a hit as a result (probably to 1.8 GHz or below). so technically, IT IS quite possible that the PS5's performance and quality mode framerate targets do not have access to the SAME GPU clock budget. think of it like the GPU downgrading itself when you try to target 60 FPS and have to "allocate" more power to the CPU.

this is just one theory of mine, I can't say for certain, but it is likely. why? because the PS5 does benefit from reducing the resolution, even below 900p. if so, then it means it is... GPU bound. and if it is gpu bound even at such low resolutions, then a better CPU wouldn't actually help, at least in those cases. but the reason it gets more and more GPU bound at lower resolutions is most likely the dynamic boost thing: the PS5 has a limited TDP budget and it has to share it between the CPU and GPU.

Imagine this: if your game is able to hit 50 or 60 FPS with the 3.6 GHz Zen 2 CPU, it means you can drop the CPU clock down to around 1.8 GHz and still retain 30 FPS. This means that at 30 FPS you can cut CPU power as far as possible and push GPU clocks to their peak in the 4K mode.

in performance mode, you need all the power you can spare delivered to the CPU to make sure it stays at its 3.6 GHz peak boost clock. that probably lowers the ceiling of the GPU clocks as a result.
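
to make the idea concrete, here is a tiny toy model of a shared power budget in the smartshift style. every number (total budget, power-to-clock curve, wattages) is a made-up assumption for illustration, not an actual PS5 figure:

```python
# Toy model of a shared CPU+GPU power budget (SmartShift-style).
# Every number here is an illustrative assumption, not a real PS5 figure.
TOTAL_BUDGET_W = 200
GPU_MAX_CLOCK_GHZ = 2.23

def gpu_clock_for_power(gpu_watts, gpu_full_power_w=170):
    """Crude model: clock scales sub-linearly with the power left for the GPU."""
    share = min(gpu_watts / gpu_full_power_w, 1.0)
    return round(GPU_MAX_CLOCK_GHZ * share ** (1 / 3), 2)  # power roughly ~ clock^3

def split_budget(cpu_watts):
    gpu_watts = TOTAL_BUDGET_W - cpu_watts
    return cpu_watts, gpu_watts, gpu_clock_for_power(gpu_watts)

# 30 fps quality mode: CPU lightly loaded, GPU gets nearly the whole budget
print(split_budget(cpu_watts=30))   # -> (30, 170, 2.23)
# 60 fps performance mode: CPU pushed hard, the GPU budget and clock shrink
print(split_budget(cpu_watts=55))   # -> (55, 145, 2.11)
```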

this is a theory / perspective that no one has actually brought up in this thread yet, but it is potentially a thing. we know that smartshift exists, but we can't see what kind of gpu and cpu clocks games are running at in performance and quality modes. if CPU clocks get increased and GPU clocks get decreased in performance mode, you will see weird oddities where a big GPU bound performance delta opens up between these modes. this is coupled with the fact that upscaling is HEAVY. and when you combine both facts, it is probable that the PS5, while targeting 60 FPS, is having a tough time in general; whether it is cpu limited or not, that's another topic.

this is why you cannot think of the ps5 as a simple cpu+gpu pairing.

if there were a way to see what kind of GPU clocks the PS5 is using in performance and quality modes in these games, it would answer a lot of questions about what is being observed here.

and this is why seeing comments like "he didn't even match the GPU clocks to 2.2 ghz" is funny. this assumes the ps5 is running at 2.2 ghz gpu clocks in performance mode. why be certain? why is smartshift a thing to begin with? can it sustain 3.6 ghz cpu clocks and 2.2 ghz gpu clocks at the same time while the CPU is fully used? I really don't think so. look at how much CPU avatar uses. it must be using a lot of CPU on PS5 in its 60 FPS mode, which means a lot of the TDP is going to the CPU. then it is unlikely that the PS5 runs its GPU at 2.2 ghz in performance mode in avatar and skull and bones (another Ubisoft title that i'm pretty sure uses 70% of a 3600 to hit 60 fps on PC, just go check). maybe we could extrapolate how much of a GPU clock reduction the PS5 gets, but even then, it would be speculation.

it didn't matter up until this point, because even a 2 GHz zen 2 CPU was enough to push 60 fps in games like spiderman and god of war ragnarok (considering you can hit 30 fps with a 1.6 ghz jaguar in those titles). so even at 30 and 60 fps, games like gow ragnarok and spiderman probably had the 2.2 ghz gpu clock budget to work with. and now that new gen games BARELY hit 60 fps on the very same CPU while using all of it (unlike crossgen games like gow ragnarok or hfw), we probably see the GPU budget being challenged by the CPU budget more aggressively in some of the new titles.
I actually forgot about the dynamic clocks, thanks for reminding me. still, I'm unconvinced the 13900k isn't having some level of effect. I wish there was a ps5 that existed with the exact same specs but had, say, a 7800x3d for the cpu; then we could really settle it.
 
If the GPU is pushing every single frame it can, then putting in a more powerful CPU will net you exactly 0 extra fps. If the game is being limited by the CPU then absolutely a faster CPU would help, but all the games tested in the original video can do over 60 fps with an ancient Ryzen 5 2600, so I highly doubt the CPU of the PS5 matters here.

If the test included games that are actually heavy on the CPU then I would absolutely agree the test is useless; Flight Simulator drops on the XSX because of the CPU. But, as mentioned before, none of the games tested hit the CPU particularly hard.
Eh, I don't believe that for all games; if they tested cod, that absolutely hammers the cpu.
 

yamaci17

Member
I actually forgot about the dynamic clocks, thanks for reminding me. still, I'm unconvinced the 13900k isn't having some level of effect. I wish there was a ps5 that existed with the exact same specs but had, say, a 7800x3d for the cpu; then we could really settle it.
all that i wrote about that turned out to be unnecessary; it seems the worst case scenario is 2.1 GHz as opposed to 2.25 GHz, which shouldn't have that much of an impact

regardless, the 13900k is a bit extreme, but a much cheaper 7600 or 13400 would get you there. I'm sure something like a 5600, which nowadays sells below 150 bucks, would also do a great job. in the end though, I will still say that I can't be sure whether it is CPU bound in avatar or not. not that I care anymore anyway. after seeing how some people blatantly ignore the time I put into a specific benchmark and disregard solid proof, there's no way I can convince anyone of anything if they're determined to believe what they already believe. when they spit in my face like that, I lose the will to discuss any of it. it is like saying water is not wet: seeing a 230w-consuming GPU pegged at 99% and somehow thinking that a better CPU would get a better result there.

i've literally shown here, for example, how going from 28 fps to 58 fps and from 19 fps to 44 fps required going from native 4k to ultra performance upscaling (720p internal). one is a 2.07x perf increase and the other is 2.3x. and frankly, you won't get any better than that in Starfield. because, as I said, upscaling itself is heavy and games do not scale as much as you would expect when you go from 4k to 1440p to 1080p and finally 720p, and even less so with upscaling... yet people here blatantly refused to acknowledge that my GPU was running at 230w at 4k dlss ultra performance (my gpu has a 220w TDP, which means all of its compute units were fully utilized, so it was not a hidden CPU bottleneck). the 3070 literally gives everything it can in that scene. for anyone who refuses to see past that, well, this discussion becomes pointless. if i had a faster CPU on hand and proved the same thing happens with that CPU as well, the discussion would just be moved to a different goalpost.

it is often the case in these discussions that if someone argues in bad faith, they corner you into specific points you cannot exactly prove or show. i can't find a 4k dlss ultra performance starfield benchmark done with a high end CPU; we would have to find someone with such a CPU and a 3070 to test that. the best one I could find is a 13600k at 5.5 ghz with a 3060ti, and that should prove my point: native 4k gets you 30 FPS there and 4k dlss performance lifts it to 52 FPS. SlimySnake keeps insisting that 720p or 1080p is "1 million and 2 million pixels". by that logic, the 3060ti should've gotten much, MUCH more than 52 FPS with 4k dlss performance. but it doesn't: even with a 13600k, it is HEAVILY gpu bound at around 52 FPS at 4k dlss performance, and to hit 60 FPS you still need more aggressive upscaling, despite being able to get 30 FPS at native 4k. similar proof with a much higher end CPU is there, but it is up to them to acknowledge it or not.

to hit 60 FPS in starfield with a 3060ti in this scene, you need internal resolutions lower than 1080p (fewer than 2 million pixels) when upscaling to 4K, EVEN with a 13600k at 5.5 GHz, despite THE VERY SAME GPU getting 27-30 FPS at native 4k (8.3 million pixels).
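
for reference, a quick back-of-the-envelope check of the pixel counts and scaling factors quoted above (the fps numbers are the ones from my benchmark; the resolutions are the standard ones):

```python
# Pixel counts for the render resolutions discussed above, plus the observed
# fps scaling when dropping from native 4K to 4K DLSS Ultra Performance
# (720p internal): 28 -> 58 fps and 19 -> 44 fps.
resolutions = {
    "4K (3840x2160)":    3840 * 2160,
    "1440p (2560x1440)": 2560 * 1440,
    "1080p (1920x1080)": 1920 * 1080,
    "720p (1280x720)":   1280 * 720,
}
for name, px in resolutions.items():
    print(f"{name}: {px / 1e6:.2f} M pixels")

pixel_cut = resolutions["4K (3840x2160)"] / resolutions["720p (1280x720)"]
print(f"pixel reduction: {pixel_cut:.0f}x")               # 9x fewer pixels
print(f"fps scaling: {58 / 28:.2f}x and {44 / 19:.2f}x")  # only ~2.1x and ~2.3x
```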




 
Last edited:

You actually have been pretty awesome, I really appreciate the detail you put in your posts. I guess we can never know for sure; maybe the ps5 pro will really answer whether it was cpu bound.
 

Kataploom

Gold Member
the argument that the cpu doesn't matter is so stupid and unfair, even if it's true, and nothing is always 100% true. if it doesn't matter, why didn't richard use a cpu close to the ps5's? it should make no difference.
also, the ps5 has infinity cache in another form: it's called cache scrubbers.
Stop exaggerating things, we know the current gen consoles are CPU limited, less than the previous gen but limited nonetheless. The Intel chip used is 2-3x faster than the PS5's and costs about the same as the console or more for the CPU alone.

Richard has a Ryzen 4800, so he could've simulated the PS5 to see whether the CPU or the GPU was the limit, within reason.
You've answered your own question - it gives the 6700 an unfair advantage. It's why when PC GPUs are reviewed they are always tested on the same system with the same motherboard, CPU and RAM, so there are no other variables to skew the results. You can say it doesn't matter because it's GPU limited; well, if Richard had confidence in that then he would have used a CPU as close to the PS5 as practicable, like a 3600. Better yet, not bothered at all, because the whole thing is stupid.
  • System: i9-13900K+32GB DDR5 6000 MT/s

This is nonsense

The Ps5 uses a crappy zen2 cpu; zen3 with its improved l3 cache and turbo boost crushed zen2 in gaming
yes, I believe i linked the tweet of that Avatar tech director who said the same thing. Now tell me if comparing a 6.0 GHz CPU with a 3.5 GHz console CPU would not make a difference, even at 50 fps instead of 100.
That's exactly why he can't be harsh or overly critical and has to handle this with kid gloves. We can call him out all we want, but it's ultimately not our money that's on the line, and he has to walk a fine line and strike the right balance between criticism and alienating his sponsors and industry contacts. You can't just ask him to shit on their partners like GN does.

What matters isn't how harsh he is, it's how truthful and accurate his results are.
Legit. Budget builds would see an insane resurgence if AMD just gave us these supercharged power efficient APUs. They'd make insane money from this.

Instead we're getting excited for 1060 equivalent performance while they're probably cooking a 3070 equivalent apu for the ps5 pro. Make it make sense.
He's using a top-of-the-line CPU in the i9-13900K, with ridiculously fast 32GB of DDR5 RAM at 6000 MT/s.

Absolutely pointless comparison. Reminds me of the time Richard got an exclusive interview with Cerny before the PS5 launch, where he asked how variable frequency works and had it explained by Cerny himself. He then decided to overclock RDNA1 cards and compare them to PS5 specs (which obviously used RDNA2 and was designed to run at significantly higher clock speeds), only to arrive at the conclusion that the PS5 would be hamstrung due to "workloads whereby either the GPU has to downclock significantly or the CPU"

They are so transparent with their coverage, it's hilarious. Fully expect them to do comparison videos pitting the PS5 Pro against $1000 5xxx series GPUs from nVIDIA later this Fall.
Eh, I don't believe that for all games; if they tested cod, that absolutely hammers the cpu.
You're all missing the point of the video. You can get a similarly powerful GPU and games can run better because a PC can overcome the console's bottlenecks easily, with basically no effort these days.

You don't even need a $600 CPU to do that, almost any low end CPU you can currently get on the market is enough, and memory sticks are cheap.

Optimization these days is not a magic wand either; as long as your GPU is powerful enough, you mostly can't run games worse than on PS5, since you'd really need to try hard to find a CPU and RAM that aren't way more powerful than the console's in today's market.
 

yamaci17

Member

You actually have been pretty awesome, I really appreciate the detail you put in your posts. I guess we can never know for sure; maybe the ps5 pro will really answer whether it was cpu bound.
I make my mistakes here and there and I too keep learning about stuff. the reason I got defensive and worked up is that I provided credible proof and it somehow got denied. regardless, I shouldn't have crossed the line, so I apologise to everyone involved.

I wholeheartedly, and repeatedly, agree that the 13900k at 6 GHz is really overkill, at least for testing an rx 6700.

starfield on the other hand is a super heavy game on the GPU. here's a benchmark at 1440p with a 3070 and... a 7800x3d



at 1440p ultra, the gpu is struggling and barely gets 41 fps. with fsr quality, it barely pushes a 46 fps average instead. you can see how heavy this game and its upscaling are. 960p rendering means 1.6 million pixels (SlimySnake logic) against native 1440p, which has 3.6 million pixels. you sacrifice 2 million pixels and only get a 5 fps avg. difference, with a 7800x3d. so a better cpu does not magically get you more framerates. do you really think going from 3.6 million pixels to 1.6 million pixels should provide more than a mere 12% performance gain?
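
put differently, here is the same claim as a two-line check (fps numbers quoted from the benchmark above; the 1706x960 internal resolution for FSR Quality at 1440p is my assumption based on the standard 67% scale factor):

```python
# ~2 million fewer pixels rendered for only ~12% more fps.
native_px = 2560 * 1440      # ~3.69 M pixels, native 1440p
fsr_q_px  = 1706 * 960       # ~1.64 M pixels, assumed FSR Quality internal res
native_fps, fsr_q_fps = 41, 46

print(f"pixels cut: {(1 - fsr_q_px / native_px) * 100:.0f}%")    # ~56% fewer pixels
print(f"fps gained: {(fsr_q_fps / native_fps - 1) * 100:.0f}%")  # ~12% more fps
```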

if i did the above test on my end and reported that i was only getting a 5 fps difference between 1440p native and 1440p fsr quality, people here would lay all the blame on my low-end cpu.

also, i'm not saying this is the norm anyway. it is just that avatar and starfield have genuinely heavy upscaling. in alan wake 2 and cyberpunk you will get insane performance bumps from DLSS/FSR, because in those games the ray tracing resolution also gets lowered, which lifts a huge burden off low/midrange GPUs.
 
Last edited:

yamaci17

Member
so one last example to make it very clear: a ryzen 7600 + rtx 3070 this time, at 1440p native and 1440p dlss performance (the "ultra quality" dlss option in this game is probably dlaa)



native 1440p, 3.6 million pixels = 45 fps
1440p dlss performance, 0.9 million pixels = 73 fps

1.62x performance increase for a 4x pixel count reduction (huge!)

i don't know how impactful graphics settings would be, but you will only get a stable 60 fps in this game at those settings if you render at 0.9 million pixels with a 3070 and a ryzen 7600.

dlss quality, for example, often drops comfortably below 60 (1.6 million pixels).

so i don't know why it is an alien concept for some people to even consider that 720p on the ps5 could be a GPU limitation in avatar. as in the example above, even with a "competent" CPU you still need EXTREME pixel count reductions to get a noticeable performance increase. even with a 4x pixel reduction, it only went from a 45 fps avg to 73 FPS. based on that data, you would likely barely get 90 fps with dlss ultra performance (480p, 0.4 million pixels). now extrapolate this to a 30 FPS scenario and you can see how much "upscaling" you have to do to DOUBLE the framerate in Avatar.
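
the ~90 fps guess above can be sanity-checked with a crude frame-time model fitted to the two data points (45 fps at native 1440p, 73 fps at the DLSS Performance internal resolution); the model and the assumed DLSS Ultra Performance internal resolution are mine, purely for illustration:

```python
# Crude linear frame-time model: frame time = fixed cost + per-pixel cost.
p1, t1 = 2560 * 1440, 1000 / 45    # native 1440p, ms per frame
p2, t2 = 1280 * 720,  1000 / 73    # DLSS Performance internal res (50% scale)

per_pixel = (t1 - t2) / (p1 - p2)  # ms per rendered pixel
fixed     = t2 - per_pixel * p2    # cost that doesn't shrink with resolution

p_ultra_perf = 854 * 480           # assumed DLSS Ultra Performance internal res (~0.4 M px)
t_pred = fixed + per_pixel * p_ultra_perf
print(f"predicted: {1000 / t_pred:.0f} fps")   # ~83 fps, still short of 90
```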

[benchmark screenshot]
and even at 0.9 million pixels (!), the game is HEAVILY gpu bound. the 3070 literally draws close to its full TDP (218w) while rendering 0.9 million (!) pixels. look at this scene: it barely hovers over 60 FPS. an RTX 3070. a freaking RTX 3070.

if i did this test and reported it here exactly the way it is above, this whole thread would pile on me. but as you can see, it is a ryzen 7600, a cpu that people reliably combo with a 4070 and the like. so you cannot even say it is a cpu bottleneck, because it is not: the gpu is fully pegged. it is not a low end CPU either; it is a perfectly competent CPU that can push 100+ fps in avatar with STRONGER GPUs. so in this case, the RTX 3070 BARELY GETS 65 FPS at these settings at 0.9 million (!!!) pixels. how unthinkable, right?

heck, the CPU is chilling at 38% usage. i've seen cpu benchmarks with this CPU where it pushes 120+ fps in avatar. 65 fps is definitely not the upper limit of a 7600, surely.

SOO


 
Last edited:

I should add, in my initial comment I didn't believe the 13900k was causing a drastic difference; I just initially thought it was helping the gpu to actually surpass the ps5 in the performance mode benchmarks. I feel like the ps5 should consistently outperform it slightly at 1440p or lower and then have a decent gulf at 4k or above. really, even though spec-wise it's below it, the gpu should be matched with a 6700xt in terms of real world performance.
 

peish

Member
You're all missing the point of the video. You can get a similarly powerful GPU and games can run better because a PC can overcome the console's bottlenecks easily, with basically no effort these days.

You don't even need a $600 CPU to do that, almost any low end CPU you can currently get on the market is enough, and memory sticks are cheap.

Optimization these days is not a magic wand either; as long as your GPU is powerful enough, you mostly can't run games worse than on PS5, since you'd really need to try hard to find a CPU and RAM that aren't way more powerful than the console's in today's market.

No. Put a ryzen 3600X in place of the 13900K and you'd see a big change in Rich's results.
 

Gaiff

SBI’s Resident Gaslighter
No. Put a ryzen 3600X in place of the 13900K and you'd see a big change in Rich's results.
No. The only game where you might see a significant difference is Hitman 3. The others are hardly CPU-limited if at all.
 

Bojji

Gold Member
"PS5 Pro vs RTX 5090"

"Why does the Sony console suck so bad?"

He explained this pretty well in the 7900GRE review: he treats the PS5 as the baseline experience and the platform that developers target with their games. Testing GPUs against it shows you how much better an experience you can get for your money.

It's for potential GPU buyers and not for butthurt PS fanboys ¯\_(ツ)_/¯

 
I wonder why he didn't use Death Stranding or Spider-man 2 in these comparisons (games that perform very well on PS5 vs PC), but he didn't forget to use games we know perform poorly on PS5 (compared to PC), like Avatar, Hitman 2 or Alan Wake 2.

But isn't Hitman 2 notoriously CPU bound, at least on consoles?

As for Death Stranding 2, I also predict they are going to avoid comparing that game to PC GPUs.
 
Last edited:

Senua

Member
I wonder why he didn't use Death Stranding or Spider-man 2 in these comparisons (games that perform very well on PS5 vs PC), but he didn't forget to use games we know perform poorly on PS5 (compared to PC), like Avatar, Hitman 2 or Alan Wake 2.

But isn't Hitman 2 notoriously CPU bound, at least on consoles?

As for Death Stranding 2, I also predict they are going to avoid comparing that game to PC GPUs.
Spiderman 2 isn't on PC yet.

Death Stranding is old now.
 
He explained this pretty well in the 7900GRE review: he treats the PS5 as the baseline experience and the platform that developers target with their games. Testing GPUs against it shows you how much better an experience you can get for your money.

It's for potential GPU buyers and not for butthurt PS fanboys ¯\_(ツ)_/¯

LOL

The vast majority of PC gamers play on a xx50 or xx60 card....

Digital Comedy compares a PS5 to a fucking 4070 Super
 
Last edited:

Bojji

Gold Member
LOL

The vast majority of PC gamers play on a xx50 or xx60 card....

Digital Comedy compares a PS5 to a fucking 4070 Super

I explained this to you. He did this in GPU REVIEWS, people interested in buying new GPUs get some interesting information from it.

Folks using 1660ti aren't the target audience.
 

shamoomoo

Member
He explained this pretty well in the 7900GRE review: he treats the PS5 as the baseline experience and the platform that developers target with their games. Testing GPUs against it shows you how much better an experience you can get for your money.

It's for potential GPU buyers and not for butthurt PS fanboys ¯\_(ツ)_/¯

Yeah, that sounds like BS. Unless the developers are making an exclusive, they only target hardware capable of running the game.

By saying the PS5 is the baseline, which is technically true, Richard is implying devs are making concessions or taking advantage of PS5-specific hardware differences, which is not the case.

Callisto Protocol was far more stable on the PS5 than on the Series X and PC, but PC ran the game faster with better RT. The PS5 being the market leader for the current gen has no bearing on the PC.
 

Gaiff

SBI’s Resident Gaslighter
I wonder why he didn't use Death Stranding or Spider-man 2 in these comparisons (games that perform very well on PS5 vs PC), but he didn't forget to use games we know perform poorly on PS5 (compared to PC), like Avatar, Hitman 2 or Alan Wake 2.
He used The Last of Us 2...

Or did you want him to test more games that were and still are terrible ports?

They also compared Death Stranding to the PS5 way back when and it performed slightly better than the vanilla 2080. It wouldn't be knocking heads in this comparison either.

And Hitman and MHR were brought up for very specific reasons, as Rich said himself: to highlight some kind of bottleneck at higher fps on the PS5. He doesn't even think it's a GPU problem either. He said it could be because they were early ports, perhaps driver optimizations, or some other issue.

Otherwise, he used very recent games.
As for Death Stranding 2, I also predict they are going to avoid comparing that game to PC GPUs.
Yet they didn't avoid comparing Death Stranding in that long video Alex of all people did.

I swear, some people will cry no matter what. DF uses a 13900K to avoid CPU bottlenecks? They complain. DF drops down to a 12400F. They still complain. Okay, what about a 3600? People still cry because it can apparently be much faster than the consoles anyway. DF benches TLOU2, which utterly embarrasses PC-equivalent GPUs: waah, wah, we want more shit ports where the PS5 shines.

Fact is, you'd be only happy if DF stacked the deck entirely against the PC.
 

SlimySnake

Flashless at the Golden Globes
I explained this to you. He did this in GPU REVIEWS, people interested in buying new GPUs get some interesting information from it.

Folks using 1660ti aren't the target audience.
I am ok with him comparing the PS5 with PC GPUs. Read my first post in this thread, i want these videos.

And yes, when the PS5 Pro comes out, they better do a video comparing it to the 7800xt, 4070 and the 3080. Especially for RT performance which we are all hoping gets a big boost in RDNA4.

however, you and I both know how taxing RT is on CPUs, and at that point he has to show some common sense and realize that the people who spend $600 on a graphics card don't have another $600 lying around for the 13900k.

I have a 3080; instead of putting an extra $300 towards my CPU, i chose a 3080 over a 3070. I would be the target demographic for that video because I'd like to know if my 11700k+3080 combo would be better than the console Sony is putting out. If he uses the $600 13900k then those comparisons would be useless, at least for me.
 

yamaci17

Member
0.9 MILLIONS OF PIXELSSSSSSS
600 BUCKSSSSSSSSSSSSSSS CPU

here's a 250 bucks cpu that is only 20% slower than the 600 BUCKSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS 13900k

[relative gaming performance chart, 1280x720]


but nooooo 250 bucks is so expensive. no can do.

but hey, the 150 bucks i5 12400f is there. it provides 75% of the 13900k's gaming performance for 25% of the price. but of course, we don't do that here. i'm sure that final 20-25% difference is a hard requirement to feed an rx 6700.



oh, the goalposts that would be moved if this test were done with a 12400f or 13400f or 7600. i can already see it: "but how about a 3600... maybe a 2600... in this test and that test we've seen the ps5 barely matches a ryzen 1700 :/// it is so unfair". after seeing people flat out deny that 99% gpu usage is a gpu bottleneck, don't tell me you wouldn't find things to nitpick even then. you would still find things to complain about. :)
 
Last edited:

Bojji

Gold Member
Yeah, that sounds like BS. Unless the developers are making an exclusive, they only target hardware capable of running the game.

By saying the PS5 is the baseline, which is technically true, Richard is implying devs are making concessions or taking advantage of PS5-specific hardware differences, which is not the case.

Callisto Protocol was far more stable on the PS5 than on the Series X and PC, but PC ran the game faster with better RT. The PS5 being the market leader for the current gen has no bearing on the PC.

The only PS5-specific hardware is the high speed SSD; other than that it's just a DX12 (but not Ultimate) class GPU and an x86 CPU, just like a PC. There is nothing stopping developers from using DirectStorage, so the SSD advantage is not that big in the end, and there are barely any games actually using the PS5 I/O correctly, even from sony first party.

The PS5 has the largest pool of players, so no wonder many developers target it when designing games.

I am ok with him comparing the PS5 with PC GPUs. Read my first post in this thread, i want these videos.

And yes, when the PS5 Pro comes out, they better do a video comparing it to the 7800xt, 4070 and the 3080. Especially for RT performance which we are all hoping gets a big boost in RDNA4.

however, you and I both know how taxing RT is on CPUs, and at that point he has to show some common sense and realize that the people who spend $600 on a graphics card don't have another $600 lying around for the 13900k.

I have a 3080; instead of putting an extra $300 towards my CPU, i chose a 3080 over a 3070. I would be the target demographic for that video because I'd like to know if my 11700k+3080 combo would be better than the console Sony is putting out. If he uses the $600 13900k then those comparisons would be useless, at least for me.

You don't exactly need that 13900K for good CPU performance; the average 4070S/7900GRE buyer will get plenty of CPU power from something like a 13500, not to mention it will still be much faster than the console:

HL: [benchmark chart]

CP: [benchmark chart]
 

yamaci17

Member
The only PS5-specific hardware is the high speed SSD; other than that it's just a DX12 (but not Ultimate) class GPU and an x86 CPU, just like a PC. There is nothing stopping developers from using DirectStorage, so the SSD advantage is not that big in the end, and there are barely any games actually using the PS5 I/O correctly, even from sony first party.

The PS5 has the largest pool of players, so no wonder many developers target it when designing games.



You don't exactly need that 13900K for good CPU performance; the average 4070S/7900GRE buyer will get plenty of CPU power from something like a 13500, not to mention it will still be much faster than the console:

HL: [benchmark chart]

CP: [benchmark chart]
yeah this 13900k talk is real funny



even the 12400 is leagues ahead of the 3600, which is itself ahead of the ps5 by a noticeable margin in many games

so even if we paired a 12400f with a 6700, the test would come out with similar results lol

or compare a 3950x (750 bucks msrp) with a 3600 (200 bucks msrp): you would just be wasting your money if all you do is gaming...
 
Last edited:

Bojji

Gold Member
yeah this 13900k talk is real funny



even the 12400 is leagues ahead of the 3600, which is itself ahead of the ps5 by a noticeable margin in many games

so even if we paired a 12400f with a 6700, the test would come out with similar results lol


Exactly. I have never aimed for the best CPU because of diminishing returns; most of the time something like a Ryzen x600 or Core xx400 is enough, and if someone plays at 1440p and beyond, the GPU won't be waiting on the CPU much.

You also pay a lot for extra cores when, thanks to lazy developers (I'm only 1/2 joking), games are stuck at 6 cores max:

[CPU benchmark chart]


Cache is way more important than cores ^
 
Well, the ps5 is better at bandwidth than the 6700, but the cpu is shit and struggles at higher framerates. It's almost like both platforms were built for different reasons.
 

yamaci17

Member
time for some extra receipts proving that the 3070 at 4k dlss ultra performance (0.9m pixels) is GPU limited at 58 FPS. this time I used PresentMon metrics to capture the performance limitation (you can see RTSS is saying the performance is limited by the GPU)

[PresentMon overlay screenshot]




to learn about GPU Busy (please educate yourself):



"The GPU Busy time is Intel's newest feature in PresentMon: it's a measure of how long the graphics processor spends rendering the frame; the timer starts the moment the GPU receives the frame from a queue, to the moment when it swaps the completed frame buffer in the VRAM for a new one.

GPU busy is practically the ultimate tool to identify if you're limited by your CPU in a scene or not. If it says you're GPU limited: YOU'RE GPU limited. no amount of better CPU in this scene, in this context, in this test will change the result.
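
in other words, the decision rule is trivial. a minimal sketch of how a GPU Busy style metric separates the two cases (the threshold and sample frame times are made up for illustration):

```python
# If the GPU was busy for almost the whole frame time, the GPU was the limiter;
# a large gap means the GPU sat idle waiting on the CPU.
def bottleneck(frame_ms: float, gpu_busy_ms: float, threshold: float = 0.9) -> str:
    return "GPU-bound" if gpu_busy_ms / frame_ms >= threshold else "CPU-bound (GPU idle)"

print(bottleneck(frame_ms=17.2, gpu_busy_ms=16.9))  # GPU-bound (~98% busy)
print(bottleneck(frame_ms=16.7, gpu_busy_ms=11.0))  # CPU-bound (GPU idle)
```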

does anyone still have any doubts?
 
Last edited:
He explained this pretty well in the 7900GRE review: he treats the PS5 as the baseline experience and the platform that developers target with their games. Testing GPUs against it shows you how much better an experience you can get for your money.

It's for potential GPU buyers and not for butthurt PS fanboys ¯\_(ツ)_/¯

I think the comparison would be good if he used a cpu slightly better than the ps5's instead of one 3x better (which, I hope I don't need to explain, may affect the results)
 
The only PS5-specific hardware is the high speed SSD; other than that it's just a DX12 (but not Ultimate) class GPU and an x86 CPU, just like a PC. There is nothing stopping developers from using DirectStorage, so the SSD advantage is not that big in the end, and there are barely any games actually using the PS5 I/O correctly, even from sony first party.

The PS5 has the largest pool of players, so no wonder many developers target it when designing games.



You don't exactly need that 13900K for good CPU performance; the average 4070S/7900GRE buyer will get plenty of CPU power from something like a 13500, not to mention it will still be much faster than the console:

HL: [benchmark chart]

CP: [benchmark chart]
Yeah, you don't need much to outperform the console cpus, they are garbage
 

yamaci17

Member
And he will do it with a cpu 3x the one in the pro

the 13900k is only 2.3x faster than a ryzen 2600 (which is, give or take, console levels of cpu performance) at 720p cpu-bound ultra settings
[relative gaming performance chart, 1280x720]

the 7600x, a more modest CPU, is also 1.9x faster than the console CPU
the 12400f, an even more modest CPU, is also 1.8x faster than the console CPU
the 13900k is 2.3x faster than the console CPU IN GAMING scenarios; its total core count does not correlate with actual gaming experience.
but here's the problem:

I hope you now stop the "3x better CPU" argument; it defeats itself. if the 13900k is 3x faster than the PS5 like you claim, then the 13400 would also be close to 3x faster than the PS5, so the result wouldn't change much regardless, because in gaming scenarios the 13400 and the 13900k are dangerously close to each other. can we at least have an understanding here in this regard?

you're overhyping the 13900k and i don't even know where this misconception is coming from

of course, this all assumes the console CPU performs like a 2600, which I personally don't think is the case (in spiderman, ratchet and the last of us, the console easily outperforms the 3600, and for some reason this keeps being ignored)
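
to put the ratios side by side (these are the approximate relative-performance figures quoted above, normalized to the 2600; not my own measurements):

```python
# Approximate relative gaming performance at 720p, Ryzen 2600 = 1.0 baseline.
relative = {"Ryzen 2600": 1.0, "i5-12400F": 1.8, "Ryzen 7600X": 1.9, "i9-13900K": 2.3}

# If the PS5 CPU sits at or above 2600 level, even the 12400F already covers
# most of the 13900K's lead over it:
gap = relative["i9-13900K"] / relative["i5-12400F"]
print(f"13900K vs 12400F: only {gap:.2f}x apart")   # ~1.28x
```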
 
Last edited:
I should have said close to 3x instead of a full 3x, I apologize for that
 

Zathalus

Member
I wonder why he didn't use Death Stranding or Spider-man 2 in these comparisons (games that perform very well on PS5 vs PC), but he didn't forget to use games we know perform poorly on PS5 (compared to PC), like Avatar, Hitman 2 or Alan Wake 2.

But isn't Hitman 2 notoriously CPU bound, at least on consoles?

As for Death Stranding 2, I also predict they are going to avoid comparing that game to PC GPUs.
Death Stranding would be about the same; the PS5 version performs slightly better than a 2080, and so does the 6700.

Not sure why you think Alan Wake 2 performs poorly on the PS5 compared to PC; it's around the 2080 level, which is better than most.
 

yamaci17

Member
I should have said close to 3x instead of a full 3x, I apologize for that
how is it close to 3x when it is barely 2.3x faster than a 3.6 GHz zen+ cpu in gaming scenarios

if anything it is closer to 2x than 3x

the PS5's cpu is zen 2 architecture at 3.5 GHz with 8 MB of cache, but it probably has access to more bandwidth than a ddr4-based zen+ or zen 2 cpu. typical ddr4 bandwidth for a zen+ or zen 2 cpu on desktop will be between 40-60 GB/s (2666 MHz to 3600 MHz), meanwhile the console has access to a total of 448 GB/s. even if we assume the GPU uses 350 GB/s, that would still give the cpu a massive ~100 GB/s of bandwidth to work with. and considering i've been monitoring GPU bandwidth usage in a lot of 2023/2024 games on my 3070, trust me, games are not that hungry for memory bandwidth (i can provide some numbers later with some of the heaviest games if you want).

the ps5's cpu bound performance is super inconsistent. some people will downplay it to gain an argument advantage here and go as far as saying it is like a ryzen 1700. this is what a ryzen 2600 gets you in spiderman with ray tracing:



as you can see, it is super cpu bottlenecked and drops to the 50s, and the ps5 is known to hit 70+ frame rates in its ray tracing mode. (the gpu is clearly underutilized in this video; it is a super heavy bottleneck occurring at a 50 fps average. a bottleneck that does not happen on PS5, as it is able to push 70+ fps in ray tracing mode across the city in both games)

care to explain this? in spiderman, the ps5 CPU clearly outperforms the ryzen 2600, a cpu that has 45% of the performance of a 13900k.

simple questions:

1) do you think the ps5 cpu is faster than a ryzen 2600?
2) do you acknowledge that in gaming scenarios, the 13900k is only 2.3x faster than a ryzen 2600 on average at a 720p cpu-bound resolution?
3) if your answers to the questions above are both yes, do you understand this means the 13900k is realistically only ~2.2x faster than the PS5 CPU and nowhere near 3x faster than it?
4) and if you do indeed acknowledge the 13900k being barely 2.3x faster than a ps5-equivalent CPU, would you also admit that the 1/3-of-the-price i5 12400f is also close to 2x faster than the ryzen 2600, the CPU we should've settled on as a PS5-or-worse equivalent?

what will it be?
 
Last edited:

Bojji

Gold Member


I think spider man is an example of devs using the hardware decompression the PS5 offers, which offloads the cpu. They have to do that stuff on the cpu on pc, so it hammers it much more.

Alex was talking about it in one of the videos I think.
 
Again, as an option. The base $400/500 console would still exist.

Making consoles isn't free...

They are not making a super expensive product for a niche of a niche...

How many people have a 4090 right now???

It doesn't make any sense from a business standpoint
 

yamaci17

Member
I think spider man is an example of devs using the hardware decompression the PS5 offers, which offloads the cpu. They have to do that stuff on the cpu on pc, so it hammers it much more.

Alex was talking about it in one of the videos I think.
that is true, but I still refuse to believe that the console CPU, in general, performs like a desktop ryzen 1700, 2600 or whatever. I honestly think it will be as close to a 3600 as possible.

Yes, df did tests with that 4800S and some people formed their opinions based on that video. but i guess they omitted this part:

"Before we go on, some words of caution in how the data should be interpreted. On a basic level, we should get some idea of the horsepower available to developers for their console titles. However, equally, we need to accept that consoles are very different beasts. The Xbox CPU is out of its natural habitat. So, just on a superficial level, the Xbox CPU and GPU are integrated into the same chip - there's no need to send out graphics commands and data over a PCI Express slot as we do on PC. On top of that, the nature of development on console and PC is very different: for Xbox Series machines, we should expect developers to tailor their CPU code to the fixed platform Microsoft has developed for them. On PC, games need to work on a plethora of different hardware."

Not only that, but historically it is a common occurrence that the PS5 gets better framerate averages in CPU bound situations. that is why i cannot take that benchmark as a way to make surefire statements like "the console is like a ryzen 1800 based on this and that test, so this must be a cpu bottleneck!"

let's see how dragon's dogma 2 plays out. it will be an insanely cpu bound game. let's see if the 3600 will have a mythical performance advantage over the PS5 in cpu bottlenecked scenarios.
 
Last edited:

rodrigolfp

Haptic Gamepads 4 Life
Making consoles isn't free...

They are not making a super expensive product for a niche of a niche...

How many people have a 4090 right now???

It doesn't make any sense from a business standpoint
A 4090 is free to make and doesn't make sense from a business standpoint either?

The same people buying a 4080/4090 could buy an ultra premium console to play exclusives.
 

winjer

Gold Member

So this is something that might be of interest to some in this discussion.
That article is the Anandtech review for the 4750G, 4650G and 4350G. These are Zen2 based APUs.
They are a bit more interesting because they have only 8+8MB of L3 cache, so they sit in between the Zen2 of the PS5 and the full desktop Zen2 CPUs.
With less L3 cache and a memory controller for GDDR6, the Zen2 of the PS5 will lose a bit more performance.
Another interesting part is that they were using a 2080 Ti, which is only 15-17% faster than an RTX 2080, the card some people compare to the PS5 GPU, and not very distant from the RX 6700 that DF used.
The fastest of the other CPUs in this test is a 5600X. But still, there is a performance difference of around 20% to the 4650G.
And a 10% difference to the 3600X. Although it varies from game to game.
This test is a bit old, from 2020, so there are no UE5 games. But there are several UE4 games, and other engines.
 
Last edited:

JRW

Member
The vast majority of the tests seem to be GPU limited, so the comparison is valid and interesting; however, that Monster Hunter test might well be CPU limited, so Rich not mentioning the CPU is dumb.
"All PC parts tested on a Core i9 13900K-based system with 32GB of 6000MHz DDR5." Not sure if they snuck that into the video description later as I just noticed the video today.
 