If they do go for 3D cache with the PS6 I think it will be a beefy upgrade.
Ars Technica: "Chips aren't improving like they used to, and it's killing game console price cuts" - op-ed on how slowed manufacturing advancements are upending the way tech progresses (arstechnica.com).
These days only architectural improvements can deliver the kind of gains a die shrink used to. Phasing out x86 can't come soon enough.
> What generations move how fast?!

That's what I said?
50xx is basically close to 40xx which was close to 30xx… small jumps
> If chips aren't advancing that much, they shouldn't cost more, so neither should consoles.

Due to inflation, if nothing changes, they'll just cost more through organic market forces over time. Add the current economic turmoil caused by tariff idiocy, and it's pretty easy to explain why the hardware hasn't gone down in price. Now, going up in price is something else - Sony in particular has been pretty open that the PS5 underwent multiple revisions since launch to achieve hardware profitability. They could make less profit - still a profit - but have elected not to, passing the cost on to consumers. The same is true of Nintendo. The one outlier is Microsoft; their hardware is still sold at a loss - so if anyone can justify raising their prices, it's the green team. But Microsoft are a trillion-dollar company and aren't selling much hardware anyway, so, frankly, they could eat the costs if they wanted to.
> At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…
> Apple are missing a golden opportunity here, because console manufacturers are able to convince their customers that buying a console based on some archaic x86 architecture from 5 years ago at a premium, by sticking "pro" on it or simply raising their prices, is actually a good deal…
> It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes.

At this point backwards compatibility is expected.
You are technically right, but those 3nm chips Apple is using in its phones are low-power-draw mobile parts. A console is going to need roughly a 300 mm² die at around 250 W of power draw, and that needs a different node variant than the small mobile chips Apple uses.
TechPowerUp: AMD PlayStation 5 Pro GPU specs - AMD Viola, 2350 MHz, 3840 cores, 240 TMUs, 64 ROPs, 16384 MB GDDR6 @ 2250 MHz, 256-bit (www.techpowerup.com).
https://en.wikichip.org/wiki/3_nm_lithography_process - here's some in-depth info about it.
> If chips aren't advancing that much, they shouldn't cost more, so neither should consoles.

The rationale is that the advances are costlier than before.
> Due to inflation, if nothing changes, they'll just cost more through organic market forces over time. Add the current economic turmoil caused by tariff idiocy, and it's pretty easy to explain why the hardware hasn't gone down in price. Now, going up in price is something else - Sony in particular has been pretty open that the PS5 underwent multiple revisions since launch to achieve hardware profitability. They could make less profit - still a profit - but have elected not to, passing the cost on to consumers. The same is true of Nintendo. The one outlier is Microsoft; their hardware is still sold at a loss - so if anyone can justify raising their prices, it's the green team. But Microsoft are a trillion-dollar company and aren't selling much hardware anyway, so, frankly, they could eat the costs if they wanted to.

PS5 has already gone up in price in every region but the US. It happened a few years ago. It seems Sony intentionally tried to keep the US away from this and instead decided to just put us through the pain with the Pro.
> What prevents anyone from making a large die or high power chip on the current 3nm nodes? No doubt the HP-optimized libs (N3P/N3X) will be better but Intel ships Arrow Lake with 250W TDP on N3B and Apple ships M4 Max with >100B transistors which is ~500 mm² on N3E right now.

Price and manufacturing capacity. Whenever a new node comes in, Apple swoops in with their gazillions of dollars to make sure their chips have it first.
> The last great chip was probably salt and vinegar.

I do like the improvement called "all dressed" chips.
"If something can't go on forever, it will end."
> What prevents anyone from making a large die or high power chip on the current 3nm nodes? No doubt the HP-optimized libs (N3P/N3X) will be better but Intel ships Arrow Lake with 250W TDP on N3B and Apple ships M4 Max with >100B transistors which is ~500 mm² on N3E right now.

The 3nm node isn't mature enough yet; even Nvidia's premium RTX 5090, which has a street price of $3.5k+, is made on TSMC's 4N FinFET process, which is 5nm-class (and a large 750 mm² die). That's over 4x base PS5 raw performance, btw.
Yeah, well… that is why I was expecting and wanted longer console cycles and no mid-generation upgrades, while a lot of people still seem to want the "choice" of new hardware coming out more often. It does not make sense. Part of the fight is getting devs to invest in the new generation's HW and SW and optimise for it; short generations go against that, and it also takes longer and longer for great performance boosts to happen.
Now I understand the feeling that if someone sold me something that made my games look a bit better I might get it - and I did - but fundamentally it is turkeys voting for Thanksgiving.
> If chips aren't advancing that much, they shouldn't cost more, so neither should consoles.

Except that the costs to extract the ore, to manufacture the chips, to comply with regulations etc... are rising on a constant basis.
> Except that the costs to extract the ore, to manufacture the chips, to comply with regulations etc... are rising on a constant basis.

Exactly, and the jump isn't worth it, so gens will last longer. They already are - Pro consoles etc.
> Sour Cream and Chives would like a word...

That's just a gussied-up cheese & onion.
| Year | GPU | Architecture | FP32 Teraflops | Delta from Prior Gen (TFLOPS) |
| --- | --- | --- | --- | --- |
| 2004 | GeForce 6800 Ultra | Curie | ~0.040 | — |
| 2006 | GeForce 8800 GTX | Tesla | ~0.345 | +0.305 |
| 2008 | GTX 280 | Tesla (2nd gen) | ~0.933 | +0.588 |
| 2010 | GTX 480 | Fermi | ~1.35 | +0.417 |
| 2012 | GTX 680 | Kepler | ~3.09 | +1.74 |
| 2013 | GTX 780 Ti | Kepler Refresh | ~4.50 | +1.41 |
| 2015 | GTX 980 Ti | Maxwell | ~5.63 | +1.13 |
| 2017 | GTX 1080 Ti | Pascal | ~11.34 | +5.71 |
| 2018 | RTX 2080 Ti | Turing | ~13.45 | +2.11 |
| 2020 | RTX 3090 | Ampere | ~35.58 | +22.13 |
| 2022 | RTX 4090 | Ada Lovelace | ~82.58 | +46.99 |
| Year | GPU | Architecture | FP32 TFLOPS | Launch Price (USD) | TFLOPS per $ |
| --- | --- | --- | --- | --- | --- |
| 2004 | GeForce 6800 GT | Curie | ~0.040 | $399 | 0.0001 |
| 2006 | GeForce 7900 GT | Curie | ~0.145 | $299 | 0.0005 |
| 2008 | GeForce 9800 GTX | Tesla | ~0.504 | $349 | 0.0014 |
| 2010 | GeForce GTX 470 | Fermi | ~1.21 | $349 | 0.0035 |
| 2012 | GeForce GTX 670 | Kepler | ~2.46 | $399 | 0.0062 |
| 2014 | GeForce GTX 970 | Maxwell | ~3.49 | $329 | 0.0106 |
| 2016 | GeForce GTX 1070 | Pascal | ~6.46 | $379 | 0.0170 |
| 2018 | GeForce RTX 2070 | Turing | ~7.46 | $499 | 0.0149 |
| 2020 | GeForce RTX 3070 | Ampere | ~20.31 | $499 | 0.0407 |
| 2022 | GeForce RTX 4070 | Ada Lovelace | ~29.15 | $599 | 0.0487 |
| 2025 | GeForce RTX 5070 | Blackwell | ~42.00* | $549 | 0.0765 |
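If anyone wants to sanity-check the ratio column, here's a minimal Python sketch that just recomputes TFLOPS per dollar (and the step-over-step multiplier) from the same rough figures in the table above; these are the approximate numbers quoted in this post, not official specs.

```python
# Rough FP32 TFLOPS and launch prices (USD) copied from the table above.
cards = [
    ("2004 GeForce 6800 GT",   0.040, 399),
    ("2006 GeForce 7900 GT",   0.145, 299),
    ("2008 GeForce 9800 GTX",  0.504, 349),
    ("2010 GeForce GTX 470",   1.21,  349),
    ("2012 GeForce GTX 670",   2.46,  399),
    ("2014 GeForce GTX 970",   3.49,  329),
    ("2016 GeForce GTX 1070",  6.46,  379),
    ("2018 GeForce RTX 2070",  7.46,  499),
    ("2020 GeForce RTX 3070", 20.31,  499),
    ("2022 GeForce RTX 4070", 29.15,  599),
    ("2025 GeForce RTX 5070", 42.00,  549),
]

prev = None
for name, tflops, price in cards:
    ratio = tflops / price  # TFLOPS per dollar (nominal, not inflation-adjusted)
    step = f"x{ratio / prev:.2f} vs prior" if prev else "baseline"
    print(f"{name:24s} {ratio:.4f} TFLOPS/$  ({step})")
    prev = ratio
```

These are nominal dollars, so inflation flatters the older cards a little, but it's enough to frame the question: on these figures the ratio still climbs overall, the per-step multipliers are just smaller than they were in the 2000s, with Turing an outright dip.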
The thing is the average consumer doesn't care about the cost of process nodes going up or there being major engineering challenges; they expect something for their money. If the PS6 is highly iterative - just something like a 3nm monolithic chip with a bump in CPU/GPU architectures and no more than 60% more raw power over the Pro - then paired with another excessive cross-gen period it'll end up as little more than a PS5 Pro+. I think a significant number of people will just hang around on PS5; it already has the drastically improved load times and the general responsiveness / quality-of-life improvements.
I think they need to find some way to pull a miracle out of their arse as there needs to be some self-evident leaps on screen to make it worth it on this go around and not someone saying "if you look closely, you can see the ray tracing on this surface here, it's really impressive what we're doing here because the calculation is really complicated and it's being accelerated by special units, it's running at sub-1080p and the particles look like tv static, but it's really impressive cause of the calculations, honest...".
I feel like they got away with it this gen, but I wonder if they get away with it again. I just don't see a significant chunk of people being inclined to switch again to what'll probably be a $600 console (before a disc drive) so they can play what are fundamentally the same games with some extra bells and whistles. I think they really have to find a way to wow people or to differentiate the experience somehow. I know totally exotic setups are out of the window, but I feel like they need to find a way to lean parts of the system towards that again to punch above their weight, and at the same time make it incredibly easy to utilise. They also need to drop a few experiences that simply cannot be had on PS5 or even PS4.
Many people I know are frustrated with things as they are, and I've explained to them as clearly as possible why things have slowed and how process nodes affect that; to paraphrase their response, it's basically "OK, I get it, but that's not really my problem. I expect a certain step up for my money and they need to find a way to do it; if not, I'll wait and buy a PC down the line, just not bother anymore, or stick with my current console for most of the generation".
They have to find a way.
> At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…
> Apple are missing a golden opportunity here, because console manufacturers are able to convince their customers that buying a console based on some archaic x86 architecture from 5 years ago at a premium, by sticking "pro" on it or simply raising their prices, is actually a good deal…
> It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes.

ARM architecture is a 40-year-old derivative of RISC, so somehow implying that it's the new hotness and x86 is old and busted is kind of weird. Especially since x86 is only 7 years older than ARM. ARM and 32-bit x86 were introduced in the same year.
> The last great chip was probably salt and vinegar.

Wasabi Ginger Lay's, limited time only, around 2015. I lament their discontinuation.
> Rough estimates above. Are we really seeing a decline in price vs performance?

Chips are improving, but teracorporations have cornered the chip market.
> There is a hard limit on how small a node can go, and we're already not too far from it. So it's understandable that it will be harder and more expensive to try to reach the cap of the current technology.
> Only switching technology may help, but it's also still too early.
> The problem is in shrinking. Splitting will not help much.
> Basically the cost of transistors doesn't go down, so any more power equals extra cost.

Someone who gets it. We are in a strange period that will last until someone develops a new tech, probably sometime in the next 6-8 years.
> It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes.

It's because the VAST majority of us gamers aren't fucking nerds who look at that shit too deeply and just want to play good games. If we looked too hard at things like that and actually cared about that bullshit, the Switch would've been a massive failure. Thank FUCK gamers don't care about bullshit like this, to be honest.
> The last great chip was probably salt and vinegar.

There were only 40 years between the invention of salt and vinegar crisps and the iPhone. If you were born before 1987, you were born closer to the crisps than to the smartphone revolution.
We need those Bismuth chips right now. Or the light ones.
> Yeah, we know. Microsoft explained that was the reason for the Series S existing at the start of the generation - they couldn't deliver revised hardware like prior generations to cut costs, so there was no point in waiting for one.

Turned out well for everyone: the S is their best-selling console with the worst games, Microsoft has lost all credibility as a serious hardware manufacturer going forward, and they are now slowly ruining major developers with Game Pass filler.. yeah, team!!
> I don't think we will be getting the performance boosts per generation like the last 25 years either.

AI is going to change the game, I think.
> ARM architecture is a 40-year-old derivative of RISC, so somehow implying that it's the new hotness and x86 is old and busted is kind of weird. Especially since x86 is only 7 years older than ARM. ARM and 32-bit x86 were introduced in the same year.

Yes, it's Apple's control of the entire stack - hardware, OS, API, drivers - that gives them their primary edge. They can also put specific things in their chips to help their hardware and their software that Intel or AMD or Samsung or Qualcomm can't do. They're also going to be getting rid of Qualcomm's modems, so that advantage will only grow.
The thing that makes Apple silicon machines as efficient as they are isn't just the CPU architecture. It's just as much the integrated nature of the entire system. The M-series system design with unified memory and integrated storage allows for Apple to fine tune things like fast-swapping processes between RAM and SSD, which lets it make much better use of memory than a similar spec Windows machine. They know exactly how fast memory and storage are as well as the bandwidth between all of the parts. Windows still has to support nearly infinite spec combinations where Apple gets to target and tune to a finite number of configurations. It's one of the reasons why a MacBook Air can run really well with 8 GB RAM and a Windows laptop with 8 GB RAM runs like crap.
An M4 Mac mini is architected more like a console than an x86 PC, so I agree that it could potentially be a great machine for gaming. Getting developers to support it is the challenge.
> At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…
> Apple are missing a golden opportunity here, because console manufacturers are able to convince their customers that buying a console based on some archaic x86 architecture from 5 years ago at a premium, by sticking "pro" on it or simply raising their prices, is actually a good deal…
> It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes.

Apple already runs the biggest game store in the world, and devs can put their games on iOS and Mac with little trouble. I am not sure what you want them to do exactly?
> Bullshit. Since Covid they all increased their margins because they saw they could do it, so it inevitably led to serious price increases across the board.

In the past, process shrinks would have allowed console manufacturers to increase their margins while keeping the price the same.
> There is a hard limit on how small a node can go, and we're already not too far from it. So it's understandable that it will be harder and more expensive to try to reach the cap of the current technology.
> Only switching technology may help, but it's also still too early.
> The problem is in shrinking. Splitting will not help much.
> Basically the cost of transistors doesn't go down, so any more power equals extra cost.

I would disagree about the "splitting will not help much" part.
> I would disagree about the "splitting will not help much" part.

The engineering nightmares some people fantasize about.....
With where we are in terms of trying to render natively as low as possible (540p) and use ML to deliver final pixels, two revised PS5 Pro APUs for a PS6 would be enough for a gen leap over PS5 and likely hit the target power and price given the age of the PS5 Pro silicon and its use of older RDNA parts.
I could see an asymmetric approach: a high CPU clock in APU1 along with a reduced GPU clock for the low-resolution ML AI raster input (and for doing low-priority ML/RT), and a low CPU clock and high GPU clock on APU2 for the ML AI and the lighting simulation, with a one-frame latency, so as to keep both APUs' power draws below 100 W and stay below a 250 W system limit.
Sony showed DF, back in the PS3 generation, prototype GT5 rendering features split across multiple PS3s to get a 240fps mode, with another mode doing 4K at 60fps. Crossfire/SLI-style software solutions aren't ideal, but they're far easier to achieve in a closed console environment IMO.
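Purely to illustrate the split being proposed above (not anything Sony has described), here's a toy Python sketch of a two-stage, one-frame-latency pipeline: one APU produces the low-resolution raster for frame N while the other runs ML reconstruction and lighting on frame N-1. All function names and the division of work are hypothetical.

```python
# Toy model of the asymmetric two-APU idea: APU1 (high CPU clock) produces a
# low-res raster for frame N while APU2 (high GPU clock) finishes ML
# reconstruction + lighting for frame N-1, i.e. a one-frame-latency hand-off.
# Hypothetical illustration of the scheduling only - no real console APIs here.

def apu1_sim_and_raster(frame):
    """Game simulation + ~540p raster input for the ML pass (plus low-priority ML/RT)."""
    return {"frame": frame, "raster": f"540p buffer #{frame}"}

def apu2_ml_and_lighting(work):
    """ML reconstruction to final pixels + lighting simulation for the handed-off frame."""
    return f"frame {work['frame']}: 4K ML output from {work['raster']}"

in_flight = None  # frame APU1 finished last tick, now owned by APU2
for frame in range(4):
    new_work = apu1_sim_and_raster(frame)       # APU1 works on frame N
    if in_flight is not None:                   # APU2 works on frame N-1 in parallel
        print(apu2_ml_and_lighting(in_flight))
    in_flight = new_work
print(apu2_ml_and_lighting(in_flight))          # drain the last frame
```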
> The engineering nightmares some people fantasize about.....

What? First time? That is child's play of a design compared to the PS2 and PS3 architectures, and it wouldn't work out much different to a PC with a CPU and discrete GPU.
> ARM architecture is a 40-year-old derivative of RISC, so somehow implying that it's the new hotness and x86 is old and busted is kind of weird. Especially since x86 is only 7 years older than ARM. ARM and 32-bit x86 were introduced in the same year.

> The thing that makes Apple silicon machines as efficient as they are isn't just the CPU architecture. It's just as much the integrated nature of the entire system. The M-series system design with unified memory and integrated storage allows for Apple to fine tune things like fast-swapping processes between RAM and SSD, which lets it make much better use of memory than a similar spec Windows machine. They know exactly how fast memory and storage are as well as the bandwidth between all of the parts. Windows still has to support nearly infinite spec combinations where Apple gets to target and tune to a finite number of configurations. It's one of the reasons why a MacBook Air can run really well with 8 GB RAM and a Windows laptop with 8 GB RAM runs like crap.
> An M4 Mac mini is architected more like a console than an x86 PC, so I agree that it could potentially be a great machine for gaming. Getting developers to support it is the challenge.

This is exactly why I believe the whole ARM hype that started over the M1 Macs 5 years ago was kinda unfounded and didn't do much. Apple machines are Apple machines; they are designed bespoke from the ground up. We've not really seen equivalent performance on ARM from most other machines/manufacturers, because it isn't ARM doing the heavy lifting there.
> The near future is very, very bright and as soon as 2029, you'll be blown away by how far we have come from May 2025. Heck, we have some big things coming by late 2027!

Denial is the most predictable of human responses.
> This is exactly why I believe the whole ARM hype that started over the M1 Macs 5 years ago was kinda unfounded and didn't do much. Apple machines are Apple machines; they are designed bespoke from the ground up. We've not really seen equivalent performance on ARM from most other machines/manufacturers, because it isn't ARM doing the heavy lifting there.

The M1 laptops got so much hype because they delivered better performance AND battery life than the Intel equivalents, which nobody expected.
Swapping architectures will not suddenly yield massive gains in performance. No CPU architecture holds the insane amount of speed and power X86 does at the moment.
The only way out is to basically change what our CPUs are made out of and nobody has any leads on that. I don't think our current computing landscape will drastically change for the next 15 years at least
> This is exactly why I believe the whole ARM hype that started over the M1 Macs 5 years ago was kinda unfounded and didn't do much. Apple machines are Apple machines; they are designed bespoke from the ground up. We've not really seen equivalent performance on ARM from most other machines/manufacturers, because it isn't ARM doing the heavy lifting there.

> Swapping architectures will not suddenly yield massive gains in performance. No CPU architecture holds the insane amount of speed and power X86 does at the moment.
> The only way out is to basically change what our CPUs are made out of and nobody has any leads on that. I don't think our current computing landscape will drastically change for the next 15 years at least

I agree. Simply swapping to ARM isn't likely to yield performance gains over x86. While ARM processors can be very powerful, there isn't some magic that just makes them better than other architectures. Windows ARM laptops with the Snapdragon processors have great battery life and are good for lighter workloads, but they can struggle in use cases that require higher performance.
> The M1 laptops got so much hype because they delivered better performance AND battery life than the Intel equivalents, which nobody expected.

That doesn't say much about ARM, just Apple. Which was what I meant by my post.
> The issue here is that speed and power isn't what it used to be. Most consumers don't give a shit. And a lot of intense applications that theoretically could benefit from the speed and power are even better on GPU. This is leaving Intel on the outside looking in because they don't have GPUs for those applications and they can't compete on the stuff people want out of their laptops, which is good battery life and efficiency.

Haven't they tried to make their own break-in with Arc? They're taking steps to try and enter the dGPU market, and their tech is pretty decent.