ResilientBanana
Member
You get what you pay for.
Fast?? The tech is evolving waaay slower than decades ago, same with the graphics evolution. And people were fine with it. The "need" for pro consoles is the whining of nonconformist people with a need for consumption, who would equally complain about a PS5 Pro, or a PS5 Pro Ultra, a few months after launch.
Dude, STFU with this low-key bait. The world is moving fast and it's clear that 6-7 year generations no longer work as intended. Mid-gen refreshes are more than welcome and don't mean anyone is unhappy with the original PS5.
4090 - PS5 gap smallest ever relative to previous generations?
Yeah, ignoring the API differences, hardware configurations, etc., optimization on a large portion of newly released PC ports leaves a lot to be desired. With that said.. UE5 can perform well if we look at games like Fortnite.
Trouble is, UE5, the now de facto game engine, is such a CPU hog that even the very latest desktop CPUs are struggling to cope.
I really, really hate UE.
The 4800S is the Xbox Series X APU made to work as a normal PC platform by AMD (done much better than the 4700S made from the PS5 chip). It looks like performance is lower than ANY Zen 2 CPU available in the PC space and comparable to Zen 1 chips.
4090 - PS5 gap smallest ever relative to previous generations?
The 4090 is not mainstream. It's impossible to find a $250 or even $400 GPU today that can best the base consoles by the same margin the 1060 did for $250 last gen.
I think that is what he is getting at.
If it's done much better than the PS5 version, why are multiplatform games still shit on either Xbox?
Lmao
So Sony had to customize the whole shit? Crazy
Exactly.
2014:
980ti $649 - 100% power
960 $199 - 48% power
2016:
1080ti $699 - 100% power
1060 $299 - 49% power
2018:
2080ti $999 - 100% power
2060 $349 - 60% power
2020:
3090 $1,499 - 100% power
3060 $329 - 45% power
2023:
4090 $1,599 - 100% power
4060 $299 - 32% power
And the 4060 is better than the 3060 in some games and worse in others! Only 1/3 of the (almost) full chip's performance. Prices of top GPUs have gone absurd and the mainstream ones get weaker and weaker relative to them every gen. For the 3xxx series the 3080 was the best deal, but almost no one could buy it at launch.
I meant how AMD has done the whole board: the 4700S had a shit PCIe slot, so you couldn't even use a decent GPU on it, and only 2 SATA ports. They made it better with this Xbox APU.
The 4090 is mainstream? Those types of high-end products aren't considered average for most PC gamers.
4090 - PS5 gap smallest ever relative to previous generations?
So Sony had to customize the whole shit? Crazy
And that's only if you compare the top available chip to the 60-tier card. In actuality, the 4090 is only 90% of the full AD102 die, whereas the 1080 Ti was 93.3% and the 2080 Ti around 94%. The 4090, despite its name, has more in common with traditional 80 Ti cards, but NVIDIA knew they couldn't sell a 4080 Ti for $1600. 4090 sounds much nicer because the 90 suffix only made a return with Ampere. Before that, it denoted a dual-GPU configuration that stopped being made with Kepler (GTX 690). Historically speaking, the 4090 is an 80 Ti-class card, not a Titan-class one like its name would have you believe. Lovelace actually has a great performance improvement, but everything below the 4080 has been gutted so badly, so as not to make the top cards look bad, that they ended up screwing up the mid-range. I don't think a 60 card getting beaten by its predecessor in so many instances has ever happened before. Usually, the new 60 outpaces the previous 70 and sometimes even comes within striking distance of the previous 80.
Exactly.
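For anyone who wants to sanity-check those die-enablement percentages, here's a quick sketch; the shader counts are the commonly published ones, not something taken from the post itself:

```c
/* Rough sketch: ratio of enabled shaders to the full die, using commonly
 * published CUDA core counts (assumed here, not quoted from the thread). */
#include <stdio.h>

int main(void)
{
    struct { const char *card; int enabled, full; } gpus[] = {
        { "GTX 1080 Ti (GP102)", 3584,  3840  },
        { "RTX 2080 Ti (TU102)", 4352,  4608  },
        { "RTX 4090    (AD102)", 16384, 18432 },
    };
    for (int i = 0; i < 3; i++)
        printf("%s: %.1f%% of the full die enabled\n",
               gpus[i].card, 100.0 * gpus[i].enabled / gpus[i].full);
    return 0;   /* prints roughly 93.3%, 94.4%, 88.9% */
}
```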
Playing Demon's Souls
Which bespoke features? What are consoles doing that PC can't?
I mean if you have a PS3 copy, RPCS3 is a thing...
Playing Demon's Souls
I've got a 3700X myself. It's a pretty good CPU.
These consoles take only 200 watts. That's fantastic performance, especially considering that
that config with the amount of RAM needed would cost too much. Devs like the single large pool of RAM. Just need more cache on the CPU
HBM is way too expensive for a console. Even for PC GPUs, and that's why we only see it in enterprise-class products.
Consoles have two solutions. One is to go for a dual pool of memory: DDR4 for the CPU and GDDR6 for the GPU.
The other is to add more cache. This helps a lot with scheduling. But consoles cut back on this, as both the PS5 and Series X only have 4MB of L3 per CCX.
Having the full 16MB per CCX that Zen 2 has on PC would alleviate a lot of the issues with memory latency. Or even better, adding some 3D V-Cache.
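If anyone wants to see roughly what that cache difference means in practice, here's a rough, hypothetical pointer-chase sketch (a standard microbenchmark, nothing from the video); the working-set sizes and hop count are just illustrative assumptions:

```c
/* Hypothetical pointer-chase sketch: with a working set that fits in L3,
 * each dependent load is a cache hit; once the set spills to DRAM, every
 * hop pays something close to raw memory latency. Sizes are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_hop(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));

    /* Sattolo's algorithm: one big random cycle, so the hardware
     * prefetcher can't guess the next address. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t h = 0; h < hops; h++) p = next[p];  /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)hops + (double)(p & 1) * 1e-9;  /* keep 'p' live */
}

int main(void)
{
    printf("2 MB set : %.1f ns/hop (roughly L3 latency)\n",  ns_per_hop(2u  << 20, 1u << 24));
    printf("64 MB set: %.1f ns/hop (roughly DRAM latency)\n", ns_per_hop(64u << 20, 1u << 24));
    return 0;
}
```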
Maybe you have. I am fine.
Yeah, it's so good that you're in every PC thread complaining about all the issues you have with your PC
Damn, this puts into perspective some of the technical performance we've seen, then. It would even, IMO, explain why the Series X might struggle with framerates more often than the PS5 in 3P games. The PS5 has a lot of custom ASIC hardware for offloading pretty much all of the I/O and decompression routines off the CPU.
While the Series X (and S) have some I/O hardware to handle these things, it's nowhere near as robust as on the PS5, so the CPU has to do more of the heavy lifting there. A 100 MHz advantage seemingly isn't enough to make up for this, not to mention I don't think the Series systems have dedicated I/O ASICs for handling cache coherency. Then there's the lack of cache scrubbers, so cache line flushes are more frequent on the Xbox systems, which eats up CPU cycles.
All of that would impact the rate at which draw calls for the GPU can be issued. Hence the 5-10 FPS (sometimes more) deficit we see in a lot of Series X 3P games compared to PS5 at similar settings. This is just me giving a possible explanation for some of the performance we've seen in games on both platforms; the fact that the CPUs in neither the Xbox Series nor PS5 systems are as robust as first thought just shows the increased likelihood of this all being true.
Guess this might make it two console gens in a row where the CPUs were rather weak-sauce? We need another Cell moment; the industry is ready this time.
You think it's a latency issue? That's the only thing I could think of.
IMO then, unified memory isn't the problem. They just need lower-latency memory. HBM is the future for console memory.
They don't want the cost of designing a chip. Much cheaper to partner with AMD. Anything outside of that does not equal a $400 console.
It's long overdue for Sony to quit x86 and move PlayStation to an ARM architecture-based SoC for the PS5.
It's long overdue for Sony to quit x86 and move PlayStation to an ARM architecture-based SoC for the PS5.
Or give me upgrades every 3 years or so
No. Devs need to stop using the console as a budget PC and start using its bespoke features and optimizations
That sounds untoward, I'm phoning F.A.C.T.
I mean if you have a PS3 copy, RPCS3 is a thing...
But the Zen 4 in the pro consoles would be cut down too. It might be better for a year, but then we're back to where we started.
I am really hoping that the PS5 Pro goes for Zen 4. This video from DF demonstrates how cut down the CPU is in the current gen. Still significantly better vs PS4/Xbone, but basically not enough to run modern Unreal.
Also, why are people complaining about this DF video? It's interesting from both a technical and a speculative perspective. If you don't like it, don't watch it ffs.
2080ti was $1200
And that's only if you compare the top available chip to the 60-tier card. In actuality, the 4090 is only 90% of the full AD102 die, whereas the 1080 Ti was 93.3% and the 2080 Ti around 94%. The 4090, despite its name, has more in common with traditional 80 Ti cards, but NVIDIA knew they couldn't sell a 4080 Ti for $1600. 4090 sounds much nicer because the 90 suffix only made a return with Ampere. Before that, it denoted a dual-GPU configuration that stopped being made with Kepler (GTX 690). Historically speaking, the 4090 is an 80 Ti-class card, not a Titan-class one like its name would have you believe. Lovelace actually has a great performance improvement, but everything below the 4080 has been gutted so badly, so as not to make the top cards look bad, that they ended up screwing up the mid-range. I don't think a 60 card getting beaten by its predecessor in so many instances has ever happened before. Usually, the new 60 outpaces the previous 70 and sometimes even comes within striking distance of the previous 80.
NVIDIA has been charging us more while giving us less over the years. People were outraged at the $1000 2080 Ti. Well, the 4090 is a $1600 4080 Ti.
Why though? ARM's only real benefit is power savings, and that's because they build really wide. Do you want a chip with the cost of a 3090 and performance worse than what you already have?
It's long overdue for Sony to quit x86 and move PlayStation to an ARM architecture-based SoC for the PS5.
Or give me upgrades every 3 years or so
It's long overdue for Sony to quit x86 and move PlayStation to an ARM architecture-based SoC for the PS5.
If Apple acquires Sony, I can see future PS consoles switching to Apple Silicon. That would certainly be something. Apple Silicon humiliates anything on the x86 in perf/watt comparisons.
Only if they can get Apple silicon...
Also, great for back compatibility, but I guess PS gamers are used to that kind of thing by now.
If Apple acquires Sony, I can see future PS consoles switching to Apple Silicon. That would certainly be something. Apple Silicon humiliates anything on the x86 in perf/watt comparisons.
Typing this post while watching Leagues Cup on the Apple TV+ app on my PS5
Just to comment on this since I watched it recently: I believe Cerny had said that the decompression unit was equal to about 9 Zen 2 cores of processing, which is A LOT to take off of the CPU.
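Just to put a rough number on why that figure is plausible (purely back-of-the-envelope, and the per-core throughput below is my assumption, not Cerny's): the SSD reads 5.5 GB/s of compressed data and the unit typically outputs around 8-9 GB/s, so if a single Zen 2 core could software-decompress Kraken at roughly 1 GB/s, you'd indeed need on the order of 9 cores to keep up.

```c
/* Back-of-the-envelope only: 5.5 GB/s raw and ~9 GB/s typical decompressed
 * are the Road to PS5 figures; ~1 GB/s of Kraken decompression per Zen 2
 * core is an assumed ballpark, purely for illustration. */
#include <stdio.h>

int main(void)
{
    const double decompressed_gbs = 9.0;  /* typical output the unit sustains */
    const double per_core_gbs     = 1.0;  /* ASSUMED software throughput per core */
    printf("rough core-equivalents: ~%.0f Zen 2 cores\n",
           decompressed_gbs / per_core_gbs);
    return 0;
}
```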
It's not about being optimized to get every last cycle. I remember the days when games like Doom 3 or Half-Life 2 were made for the PC first and foremost, with no consideration for any console ports.
Ah, imagine a game 100% optimized for a 4090-based machine... one can dream.
You're doing a shit job of employing it *rolls eyes*
I always like deploying the 'ignore' function.
For consoles, they're a big jump over what has gone before, which seems to be lost on many. Never mind that 3 years in, there are still only a couple of games where a PS5 or Series X owner could say 'this' game couldn't be done on a One or PS4.
I think these kinds of tests are quite interesting. Can't say I'm surprised at all by the results; these CPUs never looked particularly great on paper with the castrated 2x4MB cache. Their claim to fame is just that they are still a huge improvement over the Jaguar CPUs.
I would expect refreshed hardware to at minimum double the cache, but preferably quadruple it to 32MB, and punch up the max clocks by at least 1 GHz. You could probably solidify 60fps on the CPU side with that in games that are CPU-bound to 30 on the base hardware.
Based on which benchmarks? I would expect the Zen 4 mobile CPUs to be roughly competitive with the M2 Pro.
Apple Silicon humiliates anything on the x86 in perf/watt comparisons.
I swear, every thread is turning into a shitty console war these days… I guarantee that if there is a topic about the Game Boy, it will eventually contain a post about « superior PS5 I/O »…
That would certainly be something. Apple Silicon humiliates anything on the x86 in perf/watt comparisons.
So buy a PC then. The PS5 is selling, so clearly NeoGAF is in the minority. Just like the people who have a 4090.
But the PS5 is a beast, how can you out-beast the beast? How dare you even hint at such a possibility!
It is interesting to see that the CPU is a measly 60 nanoseconds more in latency (on average) than its desktop counterparts, giving very good evidence for why hand-rolled, deterministic console code on a lesser, mobile-grade processor could easily hide and amortise that typical random-workload latency at the beginning of processing.
True, but it doesn't impact the first-party console games, when their OS is a real-time OS with minimal resource usage optimised for games and the workloads are predictably batched, so the latency hit happens only when kicking off a new unpredicted process and is then hidden within the workload being batched.
It's not. For some very strange reason, their 3600 is only doing 90ns of memory latency.
A normal latency value for it should be around 75ns. With tuned RAM, it should be around 65ns.
For a long time I found their benchmarks with this CPU to be somewhat off compared to what I was seeing in other sites' gaming benchmarks, and even in my own.
This huge latency that their CPU has can affect performance quite a bit, since Zen 2 really likes low latency.
True, but it doesn't impact the first-party console games, when their OS is a real-time OS with minimal resource usage optimised for games and the workloads are predictably batched, so the latency hit happens only when kicking off a new unpredicted process and is then hidden within the workload being batched.
The scale of the latency difference being tiny was my main point, rather than the specific value. Even an extra 40ns wouldn't break the bank for quality hand-rolled code by first-party devs, and it's only an issue when devs treat the consoles like a budget PC, which has been rife throughout the cross-gen faceoffs.
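As a concrete (and entirely hypothetical) illustration of what "hiding latency in a predictable batch" can look like in code, here's a sketch using a software prefetch; the struct, function name, and prefetch distance are all made up for the example:

```c
/* Hypothetical sketch of latency hiding in a batched, predictable workload:
 * the index list for the whole batch is known up front, so we can prefetch
 * a few entries ahead and overlap the memory traffic with the arithmetic,
 * instead of eating one cold cache miss per element.
 * __builtin_prefetch is a GCC/Clang builtin; AHEAD is a guessed distance. */
#include <stddef.h>

typedef struct { float x, y, z, pad; } Particle;

void integrate_batch(Particle *all, const size_t *indices, size_t count, float dt)
{
    const size_t AHEAD = 8;  /* assumed prefetch distance, tune per platform */
    for (size_t i = 0; i < count; i++) {
        if (i + AHEAD < count)
            __builtin_prefetch(&all[indices[i + AHEAD]], 1, 1);

        Particle *p = &all[indices[i]];  /* would otherwise stall on a miss */
        p->x += p->y * dt;
        p->z += p->x * dt;
    }
}
```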
I noticed that when they tested the latency on these CPUs, the 3600 had 90ns. This is really bad for this CPU.
It takes really bad memory and low speeds to get this bad.
But this also explains why, in so many benchmarks DF made with the 3600, its performance seemed lower than it should be.
Why did they screw up latency on this CPU so badly?
It's the GDDR.
It's made to deliver wide bandwidth without caring too much about latency, which GPUs are perfectly fine with. But CPUs are more latency-sensitive.
Combine that with the smaller caches and, yeah.
So buy a PC then. The PS5 is selling, so clearly NeoGAF is in the minority. Just like the people who have a 4090.
no $1600 GPU is ever going to be a majority card lmao
Are you saying the 4090 is a minority card? It's sold boatloads. That's a positive, and also painful, that NVIDIA managed to make us all upgrade to a £1500 card!
Ok dude
Dude, read my post again.
I'm talking about the latency of their 3600. Not the 4800.
90ns is not a normal value for this CPU.
Are you saying the 4090 is a minority card? It's sold boatloads. That's a positive, and also painful, that NVIDIA managed to make us all upgrade to a £1500 card!