
PS5 Pro Technical Seminar with Mark Cerny

Bojji

Member
I used the one that has been refined and evolved over decades, where accusations of a bad port or of not pushing visuals can't be used, and which, as the biggest game in town, would run better on Nvidia at pure raster if it possibly could, given that Nvidia own the dedicated card market and running best on Nvidia doesn't hurt a game's marketing.

Some games will run better on Nvidia or AMD for no logical reason other than optimizing for one platform and not the other. COD is one of those games:

[benchmark charts]


TPU shows averages based on dozens of tests/games ^
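(For anyone wondering what that "average" is: TPU-style relative performance is essentially a mean of per-game fps ratios across the whole test suite. A minimal sketch of the idea; the games and fps numbers are made up.)

```python
# Sketch of how a review site aggregates dozens of per-game results into one
# "relative performance" number: average the fps ratios across the suite.
# Game names and fps figures below are invented for illustration.
from statistics import geometric_mean

fps_card_a = {"Game 1": 112, "Game 2": 87, "Game 3": 143}
fps_card_b = {"Game 1": 104, "Game 2": 95, "Game 3": 139}

ratios = [fps_card_a[g] / fps_card_b[g] for g in fps_card_a]
print(f"Card A relative to Card B: {geometric_mean(ratios):.1%}")
```

A geometric mean is used here so no single outlier game dominates the summary number.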
 

PaintTinJr

Member
Some games will run better on Nvidia or AMD for no logical reason other than optimizing for one platform and not the other. COD is one of those games:

[benchmark charts]


TPU shows averages based on dozens of tests/games ^
But in 'relative' spend on the die, AMD is still winning even in those situations, because it would be much cheaper for Nvidia, with their market share, to produce those AMD cards with superior memory, which would then perform better. So by economics alone Nvidia maintain their advantage, not by design. That's the point I'm making.
 

TrebleShot

Member
The thing that should be taken away from this discussion, and applied generally to all related subjects, is the paramount importance of system I/O.

It doesn't matter how fast the internals are on any component if there's a bottleneck on data coming in or out of it.
I'd add to this that the major bottleneck on every single PSSR implementation is the inability to apply it passively. That isn't currently possible, so it requires an update from developers.

So say Sony suddenly announce PSSR 2 is available and devs can go ahead and use it. Every dev has to jump back in, update the game, and submit it for release.

That to me sounds very slow, and which devs are going to take the time to actually do it? I don't think many will. Especially if, as we've seen, it can bork some graphical settings and need reversing or further work and another release.

They need a passive system where updates apply automatically, something like the sketch below.
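(What a "passive" path could look like, mechanically: the game asks the platform for an upscaler at launch and the platform hands back the newest installed runtime, so a PSSR update wouldn't need a per-game patch. A hypothetical sketch; every name and path here is invented, and this is not how the PS5 SDK actually works.)

```python
# Hypothetical "passive upscaler update" model: games resolve the runtime at
# launch instead of bundling it, so a system update is enough to roll out a
# new version. All paths and names are invented for illustration.
from pathlib import Path

RUNTIME_DIR = Path("/system/upscalers")  # hypothetical system location

def resolve_upscaler(name: str = "pssr") -> Path:
    """Return the newest installed runtime (naive lexicographic version pick)."""
    candidates = sorted(RUNTIME_DIR.glob(f"{name}_v*.bin"))
    if not candidates:
        raise FileNotFoundError(f"no {name} runtime installed")
    return candidates[-1]  # e.g. pssr_v2.bin wins over pssr_v1.bin

# A game built against v1 transparently picks up v2 after a system update,
# with no developer resubmission.
```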
 

FireFly

Member
But in 'relative' spend on the die, AMD is still winning even in those situations, because it would be much cheaper for Nvidia, with their market share, to produce those AMD cards with superior memory, which would then perform better. So by economics alone Nvidia maintain their advantage, not by design. That's the point I'm making.
AMD uses the same wafers for their CPUs, so it's not clear if they purchase significantly fewer than Nvidia overall. Customer-specific discounts are also unknown.

What is known is that with RDNA 3, AMD spends more transistors for comparable performance, on average.
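To put rough numbers on that claim (published transistor counts for Navi 31 and AD103; the assumption that the 7900 XTX and 4080 land at roughly raster parity is a loose reading of launch reviews):

```python
# Back-of-envelope perf-per-transistor comparison, RDNA 3 vs Ada.
# Transistor counts are the published die figures; raster parity between the
# 7900 XTX and 4080 is an assumption for the sake of the estimate.
navi31_transistors = 57.7e9   # RX 7900 XTX (GCD + MCDs)
ad103_transistors = 45.9e9    # RTX 4080
relative_perf = 1.00          # assume ~raster parity

extra = navi31_transistors / ad103_transistors - 1
print(f"AMD spends ~{extra:.0%} more transistors for ~{relative_perf:.0%} "
      "of the performance")  # ~26% more
```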
 

PaintTinJr

Member
AMD uses the same wafers for their CPUs, so it's not clear if they purchase significantly fewer than Nvidia overall. Customer-specific discounts are also unknown.

What is known is that with RDNA 3, AMD spends more transistors for comparable performance, on average.
Which you should expect for a chip that is a RISC-CISC hybrid, as opposed to RISC + CISC subsystems.
 

Zathalus

Member
But you can't; you would have to compare spend on the die, so it is 'relative' in a true sense, and account for Nvidia being able to offer interface connections, cache, VRAM and lithography levels that the competition can't. Ultimately, Nvidia produce nothing that would be more power efficient and performant than the Pro at the same size, and that's despite half the die space in a Pro being used by a CPU.
Well yeah, because Nvidia doesn't make APUs for a home console (Switch 2 aside). We can compare against GPUs from AMD and Intel, where Nvidia is still the market leader in die space and efficiency. Which was my point: Nvidia only seem like they brute force things if you look at the top-end GPU die; for most of their product stack they actually have small and efficient GPUs.
 

Three

Gold Member
They don't appear very important because a small fraction of games use them. Why? Maybe because the most popular console (the target platform) doesn't support them?
So with that in mind, would that change with a much smaller-audience high-end device like the PS5 Pro? They would still target the PS5, no? Why are you picking and choosing like this regarding feature importance? If you think things like DP4a may become important, then so does ML performance in general on the Pro.
VRS - dogshit
Mesh Shaders - Very important (in the long run) but PS5 supports Primitive Shaders so it doesn't change much for this console
SFS - who knows
No DP4a support - this killed the potential for ML upscaling being used on the regular PS5 (while you can run XeSS on RDNA2 AMD GPUs on PC)
You say DP4a was important because it killed the potential of XeSS on the regular PS5. So why didn't the Xbox exclusives like Starfield, Halo, Redfall etc. use it on Xbox consoles? Those had DP4a. Nobody is going to ship official XeSS on a console with an AMD GPU and risk AMD support and improvements on that GPU when they can use the AMD equivalent. It wouldn't make sense. Is this not a case of "the non-ML GPU/machine can do upscaling without AI" anymore?
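(For reference, DP4a is just a packed 4-wide INT8 dot product with a 32-bit accumulator, the primitive XeSS's fallback path leans on. A minimal sketch of its semantics:)

```python
def dp4a(a: int, b: int, c: int) -> int:
    """Semantics of DP4a: treat two 32-bit words as four signed 8-bit lanes,
    dot them together, and accumulate into the 32-bit integer c."""
    def lanes(word: int) -> list[int]:
        # unpack four signed int8 lanes from a 32-bit word
        return [((word >> (8 * i)) & 0xFF) - 256 * ((word >> (8 * i + 7)) & 1)
                for i in range(4)]
    return c + sum(x * y for x, y in zip(lanes(a), lanes(b)))

# One instruction does four multiplies plus four adds:
print(dp4a(0x01020304, 0x01010101, 10))  # 10 + (4 + 3 + 2 + 1) = 20
```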



Tensor Cores/Custom RDNA hardware in Pro - ML hardware is something that will be important in the future for game rendering, but IMO not for the next few years. Right now they pretty much only use it for SR (DLSS, PSSR, XeSS).

It's important even now for things beyond SR. Games are benefiting from it already: frame generation, denoisers, AI opponents, muscle deformation, etc. They just aren't in many console games yet. Your previous post that "the non-ML consoles can do those things already without a GPU better suited for it" is the same argument you're now making about DP4a. Yet you're emphasising DP4a's importance now because the base PS5 didn't have it.

Having hardware that's more efficient at these tasks, instead of older hardware not designed for them, frees up resources for the features both the PS5 and PS5 Pro support, rather than having the base PS5 do these things in software and eat more resources. The PS5 Pro benefits.
 

PaintTinJr

Member
Well yeah, because Nvidia doesn't make APUs for a home console (Switch 2 aside). We can compare against GPUs from AMD and Intel, where Nvidia is still the market leader in die space and efficiency. Which was my point: Nvidia only seem like they brute force things if you look at the top-end GPU die; for most of their product stack they actually have small and efficient GPUs.
But if their 4070 is only on par with a Pro APU, even with half the Pro APU being held back to RDNA 2/3, the clock being held back to OG PS5 levels, and the other half of the APU being a CPU, how would they add a CPU to match the Pro's Zen 2 and then fit more GPU performance?

The reality is that Sony, using RDNA, would design GPUs that would significantly beat Nvidia repeatedly if die size, market share, component costs and power usage were all equalized.

So the point, again, is that Nvidia's market position allows them to design bloated GPUs with better lithography and memory technologies and more expensive layouts, sold at bigger prices and in higher volume, to outperform the competition in 2 out of 3 areas, currently.
 

Gaiff

SBI’s Resident Gaslighter
But if their 4070 is only on par with a Pro APU, even with half the Pro APU being held back to RDNA 2/3, the clock being held back to OG PS5 levels, and the other half of the APU being a CPU, how would they add a CPU to match the Pro's Zen 2 and then fit more GPU performance?
The Pro isn’t on par with the 4070.
The reality is that Sony, using RDNA, would design GPUs that would significantly beat Nvidia repeatedly if die size, market share, component costs and power usage were all equalized.
Yeah, yeah, just like PSSR was supposed to be far superior to DLSS and you were acting smug about it.
So the point, again, is that Nvidia's market position allows them to design bloated GPUs with better lithography and memory technologies and more expensive layouts, sold at bigger prices and in higher volume, to outperform the competition in 2 out of 3 areas, currently.
Nonsense.
 

FireFly

Member
I wasn't being literal about RISC-CISC, but the same is true of both APU and GPU: the more generalized the circuitry for more generalized GPU compute, the more transistors are needed. That's my assertion.
I suspect that even in compute Nvidia is ahead on a per-transistor basis, due to the doubling of CUDA cores per SM with Ampere. However the discussion was specifically about the gaming performance of the architecture.
 

PaintTinJr

Member
The Pro isn’t on par with the 4070.

Yeah, yeah, just like PSSR was supposed to be far superior to DLSS and you were acting smug about it.

Nonsense.
Let's check back on the Pro vs 4070 situation at the end of the gen, the PSSR situation versus DLSS 6(?) at the end of the gen, and compare power efficiency versus performance vs die size on RDNA vs Nvidia GPUs at the end of the gen.

The amortised design effort to get the Pro APU to the performance and versatility it has is only going to make things easier for Amethyst going forward, IMO. But it's cool that you think it will work out differently as we hit the diminishing-returns walls versus costs in GPU design.
 

Xyphie

Member
But you can't; you would have to compare spend on the die, so it is 'relative' in a true sense, and account for Nvidia being able to offer interface connections, cache, VRAM and lithography levels that the competition can't. Ultimately, Nvidia produce nothing that would be more power efficient and performant than the Pro at the same size, and that's despite half the die space in a Pro being used by a CPU.

It's almost like consumer GPUs aren't optimized for power efficiency because power is cheap. Take that same silicon as a 4070, actually run it efficiently, and you end up with 72W.

The CPU is also nothing close to half the die space; it's like 40mm^2 out of ~300mm^2 back on 7nm, and significantly less than that on 4nm or whatever the Pro uses.
 

PaintTinJr

Member
I suspect that even in compute Nvidia is ahead on a per-transistor basis, due to the doubling of CUDA cores per SM with Ampere. However the discussion was specifically about the gaming performance of the architecture.
And I would still expect, in all edge cases, RDNA to wipe the floor with CUDA in real-world performance per FLOP, its more generalized processing finding superior solutions via GPU software innovation; but both our statements are sadly hypotheticals.
 

PaintTinJr

Member
It's almost like consumer GPUs aren't optimized for power efficiency because power is cheap. Take that same silicon as a 4070, actually run it efficiently, and you end up with 72W.

The CPU is also nothing close to half the die space; it's like 40mm^2 out of ~300mm^2 back on 7nm, and significantly less than that on 4nm or whatever the Pro uses.
Are you including the area of the wiring interfaces, like the northbridge/southbridge and unified memory controllers, in that assessment, or the layer count?
 

Zathalus

Member
But if their 4070 is only on par with a Pro APU, even with half the Pro APU being held back to RDNA 2/3, the clock being held back to OG PS5 levels, and the other half of the APU being a CPU, how would they add a CPU to match the Pro's Zen 2 and then fit more GPU performance?

The reality is that Sony, using RDNA, would design GPUs that would significantly beat Nvidia repeatedly if die size, market share, component costs and power usage were all equalized.

So the point, again, is that Nvidia's market position allows them to design bloated GPUs with better lithography and memory technologies and more expensive layouts, sold at bigger prices and in higher volume, to outperform the competition in 2 out of 3 areas, currently.
But the Pro isn't on par with a 4070? The 4070 outperforms the Pro in raster and RT. The 4070 is also a severely cut-down AD104, with only 46 of 60 SM units enabled, tied to a meagre 192-bit bus.

Besides, dedicated GPUs don't overly focus on power efficiency. Clock speeds and voltages are often bumped up to hit a specific performance target without much thought given to efficiency. For that, an Nvidia mobile GPU would be a better comparison. The 4080 mobile offers performance equal to a regular 4070 at about 110W. The entire laptop uses roughly the same power as a Pro while offering superior performance. Or somebody can undervolt a 4070 Ti, which will dramatically lower power usage while still offering superior performance.
 

Bojji

Member
But the Pro isn't on par with a 4070? The 4070 outperforms the Pro in raster and RT. The 4070 is also a severely cut-down AD104, with only 46 of 60 SM units enabled, tied to a meagre 192-bit bus.

Besides, dedicated GPUs don't overly focus on power efficiency. Clock speeds and voltages are often bumped up to hit a specific performance target without much thought given to efficiency. For that, an Nvidia mobile GPU would be a better comparison. The 4080 mobile offers performance equal to a regular 4070 at about 110W. The entire laptop uses roughly the same power as a Pro while offering superior performance. Or somebody can undervolt a 4070 Ti, which will dramatically lower power usage while still offering superior performance.

The default voltage for the 4070 Ti Super is 1.1V. A quick and dirty undervolt gets default performance at 0.975V.

The card uses ~220W instead of 300W. The power efficiency of Ada is amazing and far beyond AMD.
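(Those numbers line up with the usual dynamic-power approximation, P roughly proportional to f x V^2. A quick check, assuming clocks stay at stock:)

```python
# Dynamic power scales roughly with frequency * voltage^2. Plugging the
# undervolt above (same clocks, 1.100V -> 0.975V) into a 300W stock draw:
stock_power_w = 300
v_stock, v_undervolt = 1.100, 0.975

predicted_w = stock_power_w * (v_undervolt / v_stock) ** 2
print(f"predicted draw: ~{predicted_w:.0f}W")  # ~236W, near the ~220W observed
```

Leakage and boost behaviour make the real saving a bit bigger than the square law alone predicts, which fits the ~220W figure.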
 

Mr.Phoenix

Member
Cool, maybe he'll explain why some games look like shit.
I am late to this party... so it won't surprise me if someone else has said this.

I see PSSR no differently from how I saw DLSS when it first launched, and it's like everyone forgot the uproar over the 2080 vs. the 1080 in price, specs, and real-world performance back then.

Then DLSS got better, and now it is the gold standard of image reconstruction.

I believe that will happen with PSSR too. It will get better.
 
I am late to this party... so it won't surprise me if someone else has said this.

I see PSSR no differently from how I saw DLSS when it first launched, and it's like everyone forgot the uproar over the 2080 vs. the 1080 in price, specs, and real-world performance back then.

Then DLSS got better, and now it is the gold standard of image reconstruction.

I believe that will happen with PSSR too. It will get better.
The latest versions of PSSR are already pretty good, for instance in Wukong, a UE5 game with tons of foliage and alpha effects.
 

Bojji

Member
I am late to this party... so it won't surprise me if someone else has said this.

I see PSSR no differently from how I saw DLSS when it first launched, and it's like everyone forgot the uproar over the 2080 vs. the 1080 in price, specs, and real-world performance back then.

Then DLSS got better, and now it is the gold standard of image reconstruction.

I believe that will happen with PSSR too. It will get better.

DLSS 1.0 was shit and got deserved criticism. But it was also a completely different thing from 2.0 (and beyond).

2.0 launched in 2020 and was good from the start (and has got better since then).

They launched PSSR 4 years after DLSS 2.0 (and a few years after XeSS) and it shows (sometimes big) issues in a portion of patched games. It wasn't ready to be given to third-party devs in Q4 2024.
 

diffusionx

Gold Member
DLSS 1.0 was shit and got deserved criticism. But it was also a completely different thing from 2.0 (and beyond).

2.0 launched in 2020 and was good from the start (and has got better since then).

They launched PSSR 4 years after DLSS 2.0 (and a few years after XeSS) and it shows (sometimes big) issues in a portion of patched games. It wasn't ready to be given to third-party devs in Q4 2024.
PSSR is better than DLSS 2.0 was, which had issues that were mostly glossed over because it had showcase titles like Death Stranding. I played through Control with DLSS turned on around that time and actually wondered what the big deal was, because it had IQ issues and the like.

On some level this PSSR uproar is crazy exaggeration and blowing things out of proportion, but if it spurs things to improve quickly, all the better. People are acting like every PS5 Pro PSSR game had major issues and is unplayable, and that's just not the case at all.
 

twilo99

Member
real world performance

Since there aren’t any standard performance benchmarks in the console world, this is directly related to developers and how well optimized their code is for whatever specific hardware they are targeting.

We have a whole thread dedicated to “PS5 Pro enhanced games” because without extra care the extra performance is negligible.
 

Bojji

Member
PSSR is better than DLSS 2.0 was, which had issues that were mostly glossed over because it had showcase titles like Death Stranding. I played through Control with DLSS turned on around that time and actually wondered what the big deal was, because it had IQ issues and the like.

On some level this PSSR uproar is crazy exaggeration and blowing things out of proportion, but if it spurs things to improve quickly, all the better. People are acting like every PS5 Pro PSSR game had major issues and is unplayable, and that's just not the case at all.

You can try updating the DLSS file in Control and see which issues were related to 2.0 and which were related to lower-res RT.

I don't remember anyone complaining much about DLSS in Control (this game was also a beta test for it, with the "1.9" DLSS).
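(The swap itself is just replacing the game's bundled nvngx_dlss.dll with a newer release. A minimal sketch; both paths are placeholders, and back up the original first.)

```python
# Minimal DLSS version swap: back up the game's bundled nvngx_dlss.dll and
# drop in a newer copy. Both paths below are placeholders.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\Control")             # wherever the game lives
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")   # newer DLSS release

target = game_dir / "nvngx_dlss.dll"
shutil.copy2(target, target.with_name(target.name + ".bak"))  # keep original
shutil.copy2(new_dll, target)
print("swapped; restore the .bak to revert")
```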
 

PaintTinJr

Member
But the Pro isn't on par with a 4070? The 4070 outperforms the Pro in raster and RT. The 4070 is also a severely cut-down AD104, with only 46 of 60 SM units enabled, tied to a meagre 192-bit bus.

Besides, dedicated GPUs don't overly focus on power efficiency. Clock speeds and voltages are often bumped up to hit a specific performance target without much thought given to efficiency. For that, an Nvidia mobile GPU would be a better comparison. The 4080 mobile offers performance equal to a regular 4070 at about 110W. The entire laptop uses roughly the same power as a Pro while offering superior performance. Or somebody can undervolt a 4070 Ti, which will dramatically lower power usage while still offering superior performance.
So, exactly as I predicted and said was Bojji's angle from the start. Nvidia-loving takes over the conversation about design philosophy, where we can move up to a 4080 mobile or a 4070 Ti, alter clocks, probably not even benchmark for 2hrs at full load to let thermal throttling kick in (which doesn't happen on consoles, whether it's 50hrs or 500hrs), then ignore relative spend on caches, expensive wiring layouts, etc., and just state that it is better on Nvidia.

Pretty sure the 1300 theoretical TOPS of an RTX 4070 are stupidly memory-bandwidth bound, even at full desktop power levels, for those stacked CNNs if running PSSR, compared to the Pro as Cerny described in the video. But no worries, say no more; this dance repeats in every GPU-related tech thread.
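(The bandwidth intuition is easy to sanity check. A rough sketch; the channel count, depth, and precision are all assumptions, and it ignores weights and any cache reuse:)

```python
# Back-of-envelope for why a CNN upscaler can be bandwidth bound: each
# layer's 4K activations get written and read back through VRAM unless they
# stay resident in cache. The network shape below is an assumption.
width, height = 3840, 2160
channels = 32          # assumed feature maps per layer
layers = 6             # assumed network depth
bytes_per_elem = 1     # INT8

traffic_gb = width * height * channels * bytes_per_elem * 2 * layers / 1e9
bandwidth_gbs = 504    # RTX 4070 spec sheet figure
print(f"~{traffic_gb:.1f} GB moved -> ~{traffic_gb / bandwidth_gbs * 1e3:.1f} ms"
      " per frame on activation traffic alone")  # ~3.2 GB, ~6.3 ms
```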
 

Zathalus

Member
So, exactly as I predicted and said was Bojji's angle from the start. Nvidia-loving takes over the conversation about design philosophy, where we can move up to a 4080 mobile or a 4070 Ti, alter clocks, probably not even benchmark for 2hrs at full load to let thermal throttling kick in (which doesn't happen on consoles, whether it's 50hrs or 500hrs), then ignore relative spend on caches, expensive wiring layouts, etc., and just state that it is better on Nvidia.

Pretty sure the 1300 theoretical TOPS of an RTX 4070 are stupidly memory-bandwidth bound, even at full desktop power levels, for those stacked CNNs if running PSSR, compared to the Pro as Cerny described in the video. But no worries, say no more; this dance repeats in every GPU-related tech thread.
It was mentioned that Nvidia "brute forces" compared to AMD. I have just pointed out that this is factually untrue. Nvidia has smaller die sizes and lower power usage compared to equivalent GPUs from AMD and Intel. I've not moved up GPUs at all; the 4080 mobile, 4070 Ti, 4070 Super, and regular 4070 all use the exact same die, AD104. The 4080 mobile is perfectly capable of sustaining clocks that give it performance in line with a regular 4070, as it has more SM units of the AD104 chip enabled, so running at reduced clock speeds lets it use a fraction of the power while still offering good performance. A laptop with an 8-core AMD CPU and a 4080 mobile can perform at the level of a desktop 4070 while consuming around 200-220W combined, without being thermally throttled. Just comparing desktop GPUs, as I originally did, still shows the 7800 XT trailing the 4070 Ti in performance (behind in raster and much further behind in RT) while having significantly more memory bandwidth, more cache, and a larger die.

I'm not even sure why you bring up 1300 TOPS for the 4070, as it is certainly not that when using INT8 and/or sparsity. The equivalent TOPS number to the Pro would be 260 TOPS of INT8 (no sparsity) at regular clock speeds.
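(The dense-INT8 arithmetic for a 4070 lands in that ballpark. 2048 INT8 ops per SM per clock is the Ada whitepaper rate; doubling it is where the sparsity marketing number comes from:)

```python
# Theoretical INT8 throughput for an RTX 4070: SMs * ops/SM/clock * clock.
sms = 46                      # enabled SMs on the 4070's cut-down AD104
ops_per_sm_per_clock = 2048   # dense INT8, Ada tensor cores
boost_clock_hz = 2.48e9

tops_dense = sms * ops_per_sm_per_clock * boost_clock_hz / 1e12
print(f"~{tops_dense:.0f} TOPS dense INT8, ~{2 * tops_dense:.0f} with sparsity")
# ~234 dense / ~467 sparse, in the same ballpark as the ~260 figure above
```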
 

Bojji

Member
So, exactly as I predicted and said was Bojji's angle from the start. Nvidia-loving takes over the conversation about design philosophy, where we can move up to a 4080 mobile or a 4070 Ti, alter clocks, probably not even benchmark for 2hrs at full load to let thermal throttling kick in (which doesn't happen on consoles, whether it's 50hrs or 500hrs), then ignore relative spend on caches, expensive wiring layouts, etc., and just state that it is better on Nvidia.

Pretty sure the 1300 theoretical TOPS of an RTX 4070 are stupidly memory-bandwidth bound, even at full desktop power levels, for those stacked CNNs if running PSSR, compared to the Pro as Cerny described in the video. But no worries, say no more; this dance repeats in every GPU-related tech thread.

There is no Nvidia loving from me. I don't like this company.

But I don't have to like them to acknowledge that they are the top dog in the GPU space and that AMD is behind them in almost every category.
 

PaintTinJr

Member
It was mentioned that Nvidia "brute forces" compared to AMD. I have just pointed out that this is factually untrue. Nvidia has smaller die sizes and lower power usage compared to equivalent GPUs from AMD and Intel. I've not moved up GPUs at all; the 4080 mobile, 4070 Ti, 4070 Super, and regular 4070 all use the exact same die, AD104. The 4080 mobile is perfectly capable of sustaining clocks that give it performance in line with a regular 4070, as it has more SM units of the AD104 chip enabled, so running at reduced clock speeds lets it use a fraction of the power while still offering good performance. A laptop with an 8-core AMD CPU and a 4080 mobile can perform at the level of a desktop 4070 while consuming around 200-220W combined, without being thermally throttled. Just comparing desktop GPUs, as I originally did, still shows the 7800 XT trailing the 4070 Ti in performance (behind in raster and much further behind in RT) while having significantly more memory bandwidth, more cache, and a larger die.

I'm not even sure why you bring up 1300 TOPS for the 4070, as it is certainly not that when using INT8 and/or sparsity. The equivalent TOPS number to the Pro would be 260 TOPS of INT8 (no sparsity) at regular clock speeds.
And you believe it can run for 500hrs at full tilt with a game, like any console (PlayStation or otherwise) could in a shop window, without thermal throttling? No chance. You lot all called Michael a fanboy when he benchmarked UE5 on PC after a 1hr warmup, with benchmarks showing a full desktop couldn't maintain its first-hour performance. And you said the laptop was more performant than the Pro, and now you are claiming 5/6 of the Pro's AI performance even in the first hour before throttling.
 

Zathalus

Member
And you believe it can run for 500hrs at full tilt with a game, like any console (PlayStation or otherwise) could in a shop window, without thermal throttling? No chance. You lot all called Michael a fanboy when he benchmarked UE5 on PC after a 1hr warmup, with benchmarks showing a full desktop couldn't maintain its first-hour performance. And you said the laptop was more performant than the Pro, and now you are claiming 5/6 of the Pro's AI performance even in the first hour before throttling.
Do I believe a 4080 mobile can sustain performance in a good-quality laptop? Err, yes? I'm not aware of any widespread reports of thermal throttling or overheating when it comes to those GPUs. They are underclocked, cut-down AD104 chips. In a decent laptop they usually sit in the 70-80°C range.

As for Michael, what GPU did he use? What branding? A blower-style card? What cooling did his PC use? The ambient temperature of his room? It's pretty meaningless to claim a specific user had throttling issues without knowing all the variables. I have multiple GPUs and none of them thermal throttle at all while gaming, even in summer without AC.
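(If anyone wants data instead of anecdotes, clocks and temps are easy to log over a long session. A minimal sketch using nvidia-smi's query interface; assumes an Nvidia GPU with nvidia-smi on the PATH:)

```python
# Log GPU clock, temperature, and power once a second for an hour so a long
# session can be checked for thermal throttling after the fact.
import subprocess, time

QUERY = ["nvidia-smi",
         "--query-gpu=clocks.sm,temperature.gpu,power.draw",
         "--format=csv,noheader"]

with open("gpu_log.csv", "w") as log:
    for _ in range(3600):
        sample = subprocess.run(QUERY, capture_output=True, text=True)
        log.write(f"{time.time():.0f},{sample.stdout.strip()}\n")
        log.flush()
        time.sleep(1)
```

A sustained drop in clocks.sm with temperatures pinned at the limit is the throttling signature.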

And yes, a regular 4070 has fewer TOPS than a PS5 Pro; I didn't exactly claim otherwise. RT and raster performance are another matter.

But this is really beside my original point: that Nvidia is not really brute forcing anything, unless you have a 90-series card.
 

PaintTinJr

Member
Do I believe a 4080 mobile can sustain performance in a good-quality laptop? Err, yes? I'm not aware of any widespread reports of thermal throttling or overheating when it comes to those GPUs. They are underclocked, cut-down AD104 chips. In a decent laptop they usually sit in the 70-80°C range.

As for Michael, what GPU did he use? What branding? A blower-style card? What cooling did his PC use? The ambient temperature of his room? It's pretty meaningless to claim a specific user had throttling issues without knowing all the variables. I have multiple GPUs and none of them thermal throttle at all while gaming, even in summer without AC.

And yes, a regular 4070 has fewer TOPS than a PS5 Pro; I didn't exactly claim otherwise. RT and raster performance are another matter.

But this is really beside my original point: that Nvidia is not really brute forcing anything, unless you have a 90-series card.
Workstation laptops have trouble running SAS for more than 1.5hrs without thermal throttling and crashing the analysis, on all but the ones designed for F1 and NASA. 500hrs of straight gaming on a laptop isn't going to be trivial the way it is on a console, where throttling isn't allowed.
 

Zathalus

Member
Workstation laptops have trouble running SAS for more than 1.5hrs without thermal throttling and crashing the analysis, on all but the ones designed for F1 and NASA. 500hrs of straight gaming on a laptop isn't going to be trivial the way it is on a console, where throttling isn't allowed.
Well, that just sounds like cheap Dell or HP workstations that probably rely on a stock shitty Intel cooler. God knows corporations like buying the shittiest versions of PCs. I have a 4090 desktop and can game for 10+ hours without my temperatures spiking even once, nor do I see any major fluctuations in clock speeds. My wife has her 4080 laptop and is perfectly capable of playing D4 or WoW for multiple hours at a time without the GPU temperature even going over 80°C.

For Intel CPUs, at least the 13900/14900 series, thermal throttling makes sense, as those CPUs suck at that. But a modern Nvidia GPU (80 series and under) shouldn't really have any problems, especially an undervolted mobile part.

But if you don't want to agree on temperatures, that is fine; my original point was that on die size and power consumption Nvidia is the technology leader for now, which is factually true when comparing desktop or laptop parts.
 
Who actually cares at this point?
So far the Pro has been disappointing, to say the least...
People care because PS is moving in the wrong direction, and at this point there's no reason to be excited about the PS6 or the future of PS consoles.
Despite knowing that the majority of users turn RT off, they've added dedicated RT HW that will raise the price of all future PS consoles.
Cerny also says PSSR is the toughest thing he ever worked on and very resource-consuming. I am telling you... all this AI crap is a scam.
It's a solution in search of a problem.
AI's got a big future in gaming outside of 'upscaling'. Dedicated AI cores will be useful for new things like real-time psychedelic AI hallucinations, and for handling ordinary in-game AI (NPCs, traffic, weather, events, etc.) more efficiently.
I'm gonna post this here since I don't see a Pro OT, but I just got a Pro and set it up; is the wobble normal? I have a disc drive installed too, but the wobble is strong. I'm using the cheap plastic things that came with it.
The eject button's on the wrong side of the disc slot.
Make another PS3.
They would make another PS2.
Who is the guy that insisted that Sony is simply sticking with whatever AMD puts out with no customizations?
PS using AMD HW is a massive problem that undermines PS.
Every AMD PS console is an advertisement for AMD's consumer HW specced 'at' or 'better than' the PS equivalent.
It doesn't matter how deeply PS customizes existing AMD HW; the damage is done and it's permanent. Over time PS HW will always look worse than new AMD consumer HW.
PS needs to move back to fully esoteric HW, existing in a proprietary realm where any comparison to consumer PC parts is impossible.
There's still no equivalent to the PS2's proprietary HW (CPU or GPU), while the PS3's GPU (based on the Nvidia G70) looks underpowered and obsolete compared to modern-day Nvidia HW.
 