Ars Technica: "Chips aren't improving like they used to, and it's killing game console price cuts"

 
These days only architectural improvements can deliver any kind of meaningful gain now that die shrinks have stalled. Phasing out x86 can't come soon enough.

At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…

Apple are missing a golden opportunity here, because console manufacturers manage to convince their customers that buying a console based on some archaic five-year-old x86 architecture at a premium, by sticking "Pro" on it or simply raising prices, is actually a good deal…

It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes
 
What generations move how fast?!
50xx is basically close to 40xx, which was close to 30xx… small jumps.
That's what I said?
Console: same performance target for 5+ years.
Reduced chip advancements = no price reduction.
PC: performance target increases roughly every two years.
Reduced chip advancements = smaller and smaller jumps in performance.
 
If chips aren't advancing that much, they shouldn't cost more, so neither should consoles.
Due to inflation, if nothing changes, they'll just cost more through organic market forces over time. Add the current economic turmoil caused by tariff idiocy, and it's pretty easy to explain why the hardware hasn't gone down in price. Now, going up in price is something else. Sony in particular has been pretty open that the PS5 underwent multiple revisions since launch to achieve hardware profitability. They could make less profit (still a profit) but have elected not to, passing the cost on to consumers. The same is true of Nintendo. The one outlier is Microsoft; their hardware is still sold at a loss, so if anyone can justify raising their prices, it's the green team. But Microsoft are a trillion-dollar company and aren't selling much hardware anyway, so, frankly, they could eat the costs if they wanted to.
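To put a rough number on the inflation point: a console that keeps the same sticker price quietly gets cheaper in real terms every year. A minimal sketch of the compounding, assuming an illustrative ~3% annual rate rather than actual CPI data:

```python
# Fixed nominal price vs. inflation: what the launch price is "worth"
# in launch-year dollars after a few years of compounding.
def real_value(nominal: float, annual_rate: float, years: int) -> float:
    # The rate passed in below is an assumed illustrative figure, not official CPI data.
    return nominal / (1 + annual_rate) ** years

# A $499 launch price, five years on, at an assumed 3%/yr:
print(round(real_value(499, 0.03, 5), 2))  # ~430.44
```

So "no price cut" after five years is already a quiet real-terms cut; the complaints start when the nominal price goes up on top of that.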
 
At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…

Apple are missing a golden opportunity here, because console manufacturers manage to convince their customers that buying a console based on some archaic five-year-old x86 architecture at a premium, by sticking "Pro" on it or simply raising prices, is actually a good deal…

It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes
At this point backwards compatibility is expected.
If PS6 has a 3D cache CPU and UDNA graphics it will be a nice step forward. Not a leap, but a nice step.
 
You are technically right, but those 3nm chips Apple is using in their phones are low-power mobile parts; a console is going to need around 300 mm² with a 250W power draw.
Those need a different node compared to the small mobile chips Apple is using.
https://en.wikichip.org/wiki/3_nm_lithography_process has some in-depth info about it.

What prevents anyone from making a large die or high-power chip on the current 3nm nodes? No doubt the HP-optimized libraries (N3P/N3X) will be better, but Intel ships Arrow Lake with a 250W TDP on N3B, and Apple ships the M4 Max with >100B transistors, which is something like ~500 mm², on N3E right now.
 
Due to inflation, if nothing changes, they'll just cost more through organic market forces over time. Add the current economic turmoil caused by tariff ideocracy, and it's pretty easy to explain why the hardware hasn't gone down in price. Now, going up in price is something else - Sony in particular has been pretty open that the PS5 underwent multiple revisions since launch to achieve hardware profitability. They could make less profit - still a profit - but have elected not too, passing the cost on to the consumers. The same is true of Nintendo. The one outlier is Microsoft; their hardware is still sold at a loss - so if anyone can justify raising their prices, it's the green team. But Microsoft are a trillion dollar company and aren't selling much hardware anyway, so, frankly, they could eat the costs if they wanted to.
PS5 has already gone up in price in every region but the US. It happened a few years ago. It seems Sony intentionally tried to keep the US away from this and instead decided to just put us through the pain with the Pro.


What prevents anyone from making a large die or high-power chip on the current 3nm nodes? No doubt the HP-optimized libraries (N3P/N3X) will be better, but Intel ships Arrow Lake with a 250W TDP on N3B, and Apple ships the M4 Max with >100B transistors, which is something like ~500 mm², on N3E right now.
Price and manufacturing capacity. Whenever a new node comes in, Apple swoops in with their gazillions of dollars to make sure their chips have it first.

I am not sure the juice is worth the squeeze on PC parts. It makes a lot of sense on a cell phone where every little bit matters but Intel was able to, as you point out, just use a less efficient node and run more power through the chip. Now, we might be reaching the end of the road on that strategy too.
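To put rough numbers on the price side: die cost grows worse than linearly with area, because a bigger die means fewer candidates per wafer and a lower yield. A back-of-the-envelope sketch; the wafer price and defect density are made-up illustrative figures (not TSMC's), and the yield model is the simple Poisson one:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diam_mm: float = 300) -> float:
    # Classic approximation: usable wafer area minus edge losses.
    r = wafer_diam_mm / 2
    return (math.pi * r**2) / die_area_mm2 - (math.pi * wafer_diam_mm) / math.sqrt(2 * die_area_mm2)

def cost_per_good_die(die_area_mm2: float, wafer_cost_usd: float, defects_per_mm2: float = 0.001) -> float:
    # Simple Poisson yield model; defect density above is an assumed figure.
    yield_rate = math.exp(-die_area_mm2 * defects_per_mm2)
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Assumed $20k wafer: a ~100 mm² phone-class SoC vs a ~300 mm² console-class APU.
print(round(cost_per_good_die(100, 20_000), 2))  # ≈ 34.52
print(round(cost_per_good_die(300, 20_000), 2))  # ≈ 136.94
```

Triple the area ends up roughly quadrupling the cost per good die, before you even get to fighting Apple for wafer allocation.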
 
What prevents anyone from making a large die or high-power chip on the current 3nm nodes? No doubt the HP-optimized libraries (N3P/N3X) will be better, but Intel ships Arrow Lake with a 250W TDP on N3B, and Apple ships the M4 Max with >100B transistors, which is something like ~500 mm², on N3E right now.
The 3nm node isn't mature enough yet; even Nvidia's premium RTX 5090, with a street price of $3.5k+, is made on TSMC's 4N FinFET process, which is 5nm-class (large die: ~750 mm²). That's over 4x base PS5 raw performance, btw.
 
We've been in a "what's next? Gosh, I hope there is a next" limbo for almost half a decade now, with the fabrication nodes getting dangerously close to what is physically possible (to our current understanding, at least).
Now we've arrived at the squeezing point where any and all progress gets prohibitively expensive, and honestly, I have no clue what the way forward will turn out to be at this point.
We obviously have the roadmaps from the big foundries exchanging nm for "angstroms", but in the end most of that is just PR material, and alternative technologies are nowhere to be seen.
Short term we'll just get more and more aggressive shortcutting via ML/AI… no idea where the limits for that will turn out to be.

Right now is pretty much the most *shrugs* moment ever in the semiconductor industry since it was established.
 
Yeah, well… that is why I was expecting and wanting longer console cycles and no mid-generation upgrades, while a lot of people still seem to want the "choice" of new hardware coming out more often. It does not make sense. Part of the fight is getting devs to invest in the new generation's HW and SW and optimise for it. Short generations go against that, and it also takes longer and longer for great performance boosts to happen.

Now, I understand the feeling that if someone sold me something that made my games look a bit better I might get it, and I did, but fundamentally it is turkeys voting for Thanksgiving.

This so hard!

We have hardware features in the current-gen APUs that were present in the previous gen and are still underutilized.

And computing hardware aside, there is also a plethora of peripheral hardware features, largely ignored by devs, that could be used to create innovative, whole new gaming experiences more impactful than just bumping framerate, resolution, or texture res.

If we had longer gens, with no mid-gen refreshes, devs could start focusing on optimization as well as utilizing some of these other features.
 
If chips aren't advancing that much, they shouldn't cost more, so neither should consoles.
Except that the costs to extract the ore, to manufacture the chips, to comply with regulations, etc. are rising on a constant basis.
 
Not all electronics need a latest-gen chip; most of them use chips that were designed 20 years ago. We find them in everything from a washing machine to a blender.

Those basic chips have huge demand, and factories are busy producing them instead of developing state-of-the-art chips for niche/smaller markets.

This is also a reason why new console prices behave differently from previous generations.
 
The last twenty years of Nvidia halo GPUs by architecture, with an attempt at an apples-to-apples comparison of compute increases.


Year | GPU | Architecture | FP32 TFLOPS | Delta from Prior Gen (TFLOPS)
2004 | GeForce 6800 Ultra | Curie | ~0.040 | n/a
2006 | GeForce 8800 GTX | Tesla | ~0.345 | +0.305
2008 | GTX 280 | Tesla (2nd gen) | ~0.933 | +0.588
2010 | GTX 480 | Fermi | ~1.35 | +0.417
2012 | GTX 680 | Kepler | ~3.09 | +1.74
2013 | GTX 780 Ti | Kepler Refresh | ~4.50 | +1.41
2015 | GTX 980 Ti | Maxwell | ~5.63 | +1.13
2017 | GTX 1080 Ti | Pascal | ~11.34 | +5.71
2018 | RTX 2080 Ti | Turing | ~13.45 | +2.11
2020 | RTX 3090 | Ampere | ~35.58 | +22.13
2022 | RTX 4090 | Ada Lovelace | ~82.58 | +46.99
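For anyone who wants to sanity-check the numbers above: theoretical FP32 throughput is just shader count × 2 ops per clock (one FMA) × boost clock. A quick sketch using Nvidia's published shader counts and boost clocks for three of the rows:

```python
def fp32_tflops(shader_cores: int, boost_ghz: float) -> float:
    # Each shader core does one FMA (2 floating-point ops) per clock.
    return shader_cores * 2 * boost_ghz / 1000  # GFLOPS -> TFLOPS

print(round(fp32_tflops(3584, 1.582), 2))   # GTX 1080 Ti -> ~11.34
print(round(fp32_tflops(10496, 1.695), 2))  # RTX 3090    -> ~35.58
print(round(fp32_tflops(16384, 2.52), 2))   # RTX 4090    -> ~82.58
```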

Now, I know people are going to throw current prices into the mix on the halo cards, so bumping down to the xx70 line, here are some rough estimates for baseline-MSRP cards.

Year | GPU | Architecture | FP32 TFLOPS | Launch Price (USD) | TFLOPS per $
2004 | GeForce 6800 GT | Curie | ~0.040 | $399 | 0.0001
2006 | GeForce 7900 GT | Curie | ~0.145 | $299 | 0.0005
2008 | GeForce 9800 GTX | Tesla | ~0.504 | $349 | 0.0014
2010 | GeForce GTX 470 | Fermi | ~1.21 | $349 | 0.0035
2012 | GeForce GTX 670 | Kepler | ~2.46 | $399 | 0.0062
2014 | GeForce GTX 970 | Maxwell | ~3.49 | $329 | 0.0106
2016 | GeForce GTX 1070 | Pascal | ~6.46 | $379 | 0.0170
2018 | GeForce RTX 2070 | Turing | ~7.46 | $499 | 0.0149
2020 | GeForce RTX 3070 | Ampere | ~20.31 | $499 | 0.0407
2022 | GeForce RTX 4070 | Ada Lovelace | ~29.15 | $599 | 0.0487
2025 | GeForce RTX 5070 | Blackwell | ~42.00* | $549 | 0.0765

Rough estimates above. Are we really seeing a decline in price vs. performance?
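The TFLOPS-per-$ column is just the two columns before it divided out; here's the same calculation for the last few rows, in case anyone wants to extend it with current street prices:

```python
# (name, FP32 TFLOPS, launch MSRP in USD) taken from the table above.
cards = [
    ("GTX 970",  3.49, 329),
    ("GTX 1070", 6.46, 379),
    ("RTX 2070", 7.46, 499),
    ("RTX 3070", 20.31, 499),
    ("RTX 4070", 29.15, 599),
    ("RTX 5070", 42.00, 549),
]
for name, tflops, price in cards:
    print(f"{name}: {tflops / price:.4f} TFLOPS per launch dollar")
```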
 
The thing is, the average consumer doesn't care about the cost of process nodes going up or there being major engineering challenges; they expect something for their money. If the PS6 is highly iterative, just something like a 3nm monolithic chip with a bump in CPU/GPU architectures and no more than 60% more raw power than the Pro, then paired with another excessive cross-gen period it'll end up as little more than a PS5 Pro+. I think a significant number of people will just hang around on PS5; it already has the drastically improved load times and the general responsiveness / quality-of-life improvements.

I think they need to find some way to pull a miracle out of their arse, as there needs to be some self-evident leap on screen to make it worth it this go around, and not someone saying "if you look closely, you can see the ray tracing on this surface here; it's really impressive what we're doing because the calculation is really complicated and it's being accelerated by special units; it's running at sub-1080p and the particles look like TV static, but it's really impressive because of the calculations, honest...".

I feel like they got away with it this gen, but I wonder if they get away with it again. I just don't see a significant chunk of people being inclined to switch again to what'll probably be a $600 console (before a disc drive) so they can play what are fundamentally the same games with some extra bells and whistles. I think they really have to find a way to wow people or to differentiate the experience somehow. I know totally exotic setups are out of the window, but I feel like they need to find a way to lean parts of the system in that direction again, to punch above their weight while keeping it incredibly easy to utilise; they also need to drop a few experiences that simply cannot be had on PS5 or even PS4.

Many people I know are frustrated with things as they are, and I've explained to them as clearly as possible why things have slowed and how process nodes affect that; to paraphrase their response, it's basically "ok, I get it, but that's not really my problem; I expect a certain step up for my money and they need to find a way to do it; if not, I'll wait and buy a PC down the line, just not bother anymore, or stick with my current console for most of the generation".

They [console makers] have to find a way.
 
We've just hit diminishing returns in general. So even when we see things that are "big upgrades" by the numbers, they don't translate into "big upgrades" for your eyes, and they certainly don't translate into "big upgrades" for games overall.

For me, this would be completely fine. Hardware should be so easy for developers to take advantage of that getting games to look and perform competently is EASY, letting them just focus on making compelling games. Instead, a lot of these devs sit around spending their time chasing marginal graphical upgrades that add very little to the actual experience.

Even the Switch 2, which will be the weakest console on the market when it launches next month, has more than enough power to make a great game that looks plenty fine. If you can't accomplish that, you're just not a good dev.
 
The thing is, the average consumer doesn't care about the cost of process nodes going up or there being major engineering challenges; they expect something for their money. If the PS6 is highly iterative, just something like a 3nm monolithic chip with a bump in CPU/GPU architectures and no more than 60% more raw power than the Pro, then paired with another excessive cross-gen period it'll end up as little more than a PS5 Pro+. I think a significant number of people will just hang around on PS5; it already has the drastically improved load times and the general responsiveness / quality-of-life improvements.

I think they need to find some way to pull a miracle out of their arse, as there needs to be some self-evident leap on screen to make it worth it this go around, and not someone saying "if you look closely, you can see the ray tracing on this surface here; it's really impressive what we're doing because the calculation is really complicated and it's being accelerated by special units; it's running at sub-1080p and the particles look like TV static, but it's really impressive because of the calculations, honest...".

I feel like they got away with it this gen, but I wonder if they get away with it again. I just don't see a significant chunk of people being inclined to switch again to what'll probably be a $600 console (before a disc drive) so they can play what are fundamentally the same games with some extra bells and whistles. I think they really have to find a way to wow people or to differentiate the experience somehow. I know totally exotic setups are out of the window, but I feel like they need to find a way to lean parts of the system in that direction again, to punch above their weight while keeping it incredibly easy to utilise; they also need to drop a few experiences that simply cannot be had on PS5 or even PS4.

Many people I know are frustrated with things as they are, and I've explained to them as clearly as possible why things have slowed and how process nodes affect that; to paraphrase their response, it's basically "ok, I get it, but that's not really my problem; I expect a certain step up for my money and they need to find a way to do it; if not, I'll wait and buy a PC down the line, just not bother anymore, or stick with my current console for most of the generation".

They have to find a way.

They can't just find a way. They are pushing the edge of known science to produce these.

While those people will get some benefit from switching to PC, it will force console makers to make more games exclusive.

I'm fine with that, but I doubt those people will be.
 
At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…

Apple are missing a golden opportunity here, because console manufacturers manage to convince their customers that buying a console based on some archaic five-year-old x86 architecture at a premium, by sticking "Pro" on it or simply raising prices, is actually a good deal…

It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes
The ARM architecture is a 40-year-old derivative of RISC, so somehow implying that it's the new hotness and x86 is old and busted is kind of weird. Especially since x86 is only 7 years older than ARM; ARM and 32-bit x86 were introduced in the same year.

The thing that makes Apple Silicon machines as efficient as they are isn't just the CPU architecture. It's just as much the integrated nature of the entire system. The M-series system design, with unified memory and integrated storage, allows Apple to fine-tune things like fast-swapping processes between RAM and SSD, which lets it make much better use of memory than a similarly specced Windows machine. They know exactly how fast the memory and storage are, as well as the bandwidth between all of the parts. Windows still has to support nearly infinite spec combinations, where Apple gets to target and tune a finite number of configurations. It's one of the reasons why a MacBook Air can run really well with 8 GB of RAM while a Windows laptop with 8 GB runs like crap.

An M4 Mac mini is architected more like a console than an x86 PC, so I agree that it could potentially be a great machine for gaming. Getting developers to support it is the challenge.
 
Rough estimates above. Are we really seeing a decline in price vs. performance?
Chips are improving, but teracorporations have cornered the chip market.
I still believe in Moore's law. The production side isn't focused on the consumer side right now. Unless you're willing to drop 2k on a GPU, you get the scraps. The demand is in the ultra-high-end, for multiple reasons.
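(Moore's law in its original form is just transistor counts doubling roughly every two years, so the compounding is easy to eyeball. The 2020-ish starting figure below is an illustrative assumption:)

```python
def moores_law(base_transistors: float, years: float, doubling_years: float = 2.0) -> float:
    # Transistor count doubles once per doubling period.
    return base_transistors * 2 ** (years / doubling_years)

# Starting from an assumed ~28B-transistor 2020-class GPU die:
print(f"{moores_law(28e9, 10):.3g}")  # ~8.96e+11 after a decade, if the law held
```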
 
What's also killing console price cuts is that I don't get the sense MS/Sony are willing to subsidize their consoles, losing a couple hundred dollars per unit for years. And back when they'd be dumping systems late in a gen for $100-150, were they even making money on them?

They're in money-making mode. Keep console prices high, gamepads cost as much as a game (elite pads more), sub plans, and whatever other revenue streams they've got.

This is the only gen ever where console makers raised hardware prices.
 
There is a hard limit on how small a node can go, and we're already not too far from it. So it's understandable that it will be harder and more expensive to try to reach the cap of the current technology. Only switching technologies may help, but it's also still too early.

The problem is in shrinking; splitting will not help much. Basically, the cost per transistor doesn't go down anymore, so any more power equals extra cost.
Someone who gets it. We are in a strange period that will last until someone develops a new technology, probably sometime in the next 6-8 years.

However, that does not mean we are not seeing some incredible things come out in the next year or two:

TSMC's SoIC-mH (System-on-Integrated-Chips Molding Horizontal), a 2.5D packaging technology. This enables 3D chip stacking, improving thermal management and probably reducing electrical leakage significantly.

Wafer-Level Multi-Chip Module (WMCM) packaging: by 2026 we may see adoption of WMCM packaging, integrating multiple components (CPU, GPU, Neural Engine, DRAM, as mentioned in other posts) into a single package. This enhances performance by improving interconnect speed and efficiency, potentially enabling more complex chip designs for high-end tasks.

Combined with TSMC's 2nm tech and the new Gate-All-Around (GAA) transistors that will come with it, we're going to see things over the next 2-5 years that blow everyone's minds.

The near future is very, very bright, and as soon as 2029 you'll be blown away by how far we have come from May 2025. Heck, we have some big things coming by late 2027!
 
It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes
It's because the VAST majority of us gamers aren't fucking nerds who look at that shit too deeply and just want to play good games. If we looked too hard at things like that and actually cared about that bullshit, the Switch would've been a massive failure. Thank FUCK gamers don't care about bullshit like this to be honest.
 
Yeah, we know. Microsoft explained that was the reason for the Series S existing at the start of the generation: they couldn't deliver revised hardware like in prior generations to cut costs, so there was no point in waiting for one.
Turned out well for everyone: the S is their best-selling console with the worst games, Microsoft has lost all credibility as a serious hardware manufacturer going forward, and they are now slowly ruining major developers with Game Pass filler… yeah, team!!
 
The ARM architecture is a 40-year-old derivative of RISC, so somehow implying that it's the new hotness and x86 is old and busted is kind of weird. Especially since x86 is only 7 years older than ARM; ARM and 32-bit x86 were introduced in the same year.

The thing that makes Apple Silicon machines as efficient as they are isn't just the CPU architecture. It's just as much the integrated nature of the entire system. The M-series system design, with unified memory and integrated storage, allows Apple to fine-tune things like fast-swapping processes between RAM and SSD, which lets it make much better use of memory than a similarly specced Windows machine. They know exactly how fast the memory and storage are, as well as the bandwidth between all of the parts. Windows still has to support nearly infinite spec combinations, where Apple gets to target and tune a finite number of configurations. It's one of the reasons why a MacBook Air can run really well with 8 GB of RAM while a Windows laptop with 8 GB runs like crap.

An M4 Mac mini is architected more like a console than an x86 PC, so I agree that it could potentially be a great machine for gaming. Getting developers to support it is the challenge.
Yes, it's Apple's control of the entire stack (hardware, OS, API, drivers) that gives them their primary edge. They can also put specific things in their chips to help their hardware and their software that Intel or AMD or Samsung or Qualcomm can't do. They're also going to be getting rid of Qualcomm's modems, so that advantage will only grow.


At current prices the Mac mini M4 makes even more sense as a gaming machine, but studios just don't care to make games for that gorgeous ARM-based hardware…

Apple are missing a golden opportunity here, because console manufacturers manage to convince their customers that buying a console based on some archaic five-year-old x86 architecture at a premium, by sticking "Pro" on it or simply raising prices, is actually a good deal…

It blows my mind that people are actually buying Zen 2 architecture CPUs in the middle of 2025 for gaming purposes
Apple already runs the biggest game store in the world, and devs can put their games on iOS and Mac with little trouble. I'm not sure what you want them to do, exactly?
 
Bullshit. Since Covid they have all increased their margins because they saw they could get away with it, which inevitably led to serious price increases across the board.
In the past, process shrinks would have allowed console manufacturers to increase their margins while keeping the price the same.
 
There is a hard limit on how small a node can go, and we're already not too far from it. So it's understandable that it will be harder and more expensive to try to reach the cap of the current technology. Only switching technologies may help, but it's also still too early.

The problem is in shrinking; splitting will not help much. Basically, the cost per transistor doesn't go down anymore, so any more power equals extra cost.
I would disagree about the "splitting will not help much" part.

With where we are in terms of rendering natively as low as possible (540p) and using ML to deliver the final pixels, two revised PS5 Pro APUs in a PS6 would be enough for a gen leap over the PS5, and would likely hit the target power and price given the age of the PS5 Pro silicon and its use of older RDNA parts.

I could see an asymmetric approach: a high CPU clock in APU1 along with a reduced GPU clock for the low-resolution ML raster input (and for low-priority ML/RT), and a low CPU clock with a high GPU clock on APU2 for the ML AI and the lighting simulation, with one frame of latency, so as to keep both APUs' power draws below 100W and stay under a 250W system limit.

Back in the PS3 gen, Sony showed DF a prototype of GT5 splitting rendering across multiple PS3s to get a 240fps mode, plus another mode doing 4K60. CrossFire/SLI-style software solutions aren't ideal, but they're far easier to achieve in a closed console environment, IMO.
 
I would disagree about the "splitting will not help much" part.

With where we are in terms of rendering natively as low as possible (540p) and using ML to deliver the final pixels, two revised PS5 Pro APUs in a PS6 would be enough for a gen leap over the PS5, and would likely hit the target power and price given the age of the PS5 Pro silicon and its use of older RDNA parts.

I could see an asymmetric approach: a high CPU clock in APU1 along with a reduced GPU clock for the low-resolution ML raster input (and for low-priority ML/RT), and a low CPU clock with a high GPU clock on APU2 for the ML AI and the lighting simulation, with one frame of latency, so as to keep both APUs' power draws below 100W and stay under a 250W system limit.

Back in the PS3 gen, Sony showed DF a prototype of GT5 splitting rendering across multiple PS3s to get a 240fps mode, plus another mode doing 4K60. CrossFire/SLI-style software solutions aren't ideal, but they're far easier to achieve in a closed console environment, IMO.
The engineering nightmares some people fantasize about…
All that's missing here is duct tape.
 
The ARM architecture is a 40-year-old derivative of RISC, so somehow implying that it's the new hotness and x86 is old and busted is kind of weird. Especially since x86 is only 7 years older than ARM; ARM and 32-bit x86 were introduced in the same year.

The thing that makes Apple Silicon machines as efficient as they are isn't just the CPU architecture. It's just as much the integrated nature of the entire system. The M-series system design, with unified memory and integrated storage, allows Apple to fine-tune things like fast-swapping processes between RAM and SSD, which lets it make much better use of memory than a similarly specced Windows machine. They know exactly how fast the memory and storage are, as well as the bandwidth between all of the parts. Windows still has to support nearly infinite spec combinations, where Apple gets to target and tune a finite number of configurations. It's one of the reasons why a MacBook Air can run really well with 8 GB of RAM while a Windows laptop with 8 GB runs like crap.

An M4 Mac mini is architected more like a console than an x86 PC, so I agree that it could potentially be a great machine for gaming. Getting developers to support it is the challenge.
This is exactly why I believe the whole ARM hype that started over the M1 Macs 5 years ago was kinda unfounded and didn't do much. Apple machines are Apple machines; they are designed bespoke from the ground up. We've not really seen equivalent performance on ARM from most other machines/manufacturers, because it isn't ARM doing the heavy lifting there.

Swapping architectures will not suddenly yield massive gains in performance. No CPU architecture holds the insane amount of speed and power x86 does at the moment.
The only way out is to basically change what our CPUs are made of, and nobody has any leads on that. I don't think our current computing landscape will drastically change for the next 15 years at least.
 
Okay Schmendrick, what's your expected solution?

They'll have 250 watts at their disposal; they'll need double the TOPS and RT of the PS5 Pro, and at least the same or better raster capability for B/C. The Pro also shows their hand that an NPU is out of the question. They need to parallelize ML AI in the system, or else incur latency or settle for a modest improvement over the PS5 Pro.

They still have the option to delid and use liquid metal and expensive cooling to hit high frequencies/thermals at launch again, but the power consumption will be a surprising design choice if it isn't locked below 250 watts. Even taking BoM out of the equation, other than my asymmetric twin-APU suggestion, how would they even double PS5 Pro performance at 250W?
 
This is exactly why I believe the whole ARM hype that started over the M1 Macs 5 years ago was kinda unfounded and didn't do much. Apple machines are Apple machines; they are designed bespoke from the ground up. We've not really seen equivalent performance on ARM from most other machines/manufacturers, because it isn't ARM doing the heavy lifting there.

Swapping architectures will not suddenly yield massive gains in performance. No CPU architecture holds the insane amount of speed and power x86 does at the moment.
The only way out is to basically change what our CPUs are made of, and nobody has any leads on that. I don't think our current computing landscape will drastically change for the next 15 years at least.
The M1 laptops got so much hype because they delivered better performance AND battery life than the Intel equivalents, which nobody expected.

The issue here is that speed and power aren't what they used to be. Most consumers don't give a shit. And a lot of intense applications that could theoretically benefit from the speed and power are even better on GPU. This is leaving Intel on the outside looking in, because they don't have GPUs for those applications and they can't compete on the stuff people want out of their laptops, which is good battery life and efficiency.
 
This is exactly why I believe the whole ARM hype that started over the M1 Macs 5 years ago was kinda unfounded and didn't do much. Apple machines are Apple machines; they are designed bespoke from the ground up. We've not really seen equivalent performance on ARM from most other machines/manufacturers, because it isn't ARM doing the heavy lifting there.

Swapping architectures will not suddenly yield massive gains in performance. No CPU architecture holds the insane amount of speed and power x86 does at the moment.
The only way out is to basically change what our CPUs are made of, and nobody has any leads on that. I don't think our current computing landscape will drastically change for the next 15 years at least.
I agree. Simply swapping to ARM isn't likely to yield performance gains over x86. While ARM processors can be very powerful, there isn't some magic that just makes them better than other architectures. Windows-on-ARM laptops with Snapdragon processors have great battery life and are good for lighter workloads, but they can struggle in use cases that require higher performance.

For video game consoles, much like gaming PCs, x86 is probably the better choice where the expectation is higher overall performance with power consumption being a lesser concern. That's not to say that one day ARM won't be the right choice. I just don't think today is that day.
 
The M1 laptops got so much hype because they delivered better performance AND battery life than the Intel equivalents, which nobody expected.
That doesn't say much about ARM, just Apple. Which was what I meant by my post.

We've yet to really see any non-Apple ARM chips compete with x86 performance. The closest is the Switch 2, and that's more Nvidia's GPU solution doing the carrying than anything substantial. This just reflects the benefits of Apple's engineering and approach to computing rather than the architecture itself.


The issue here is that speed and power aren't what they used to be. Most consumers don't give a shit. And a lot of intense applications that could theoretically benefit from the speed and power are even better on GPU. This is leaving Intel on the outside looking in, because they don't have GPUs for those applications and they can't compete on the stuff people want out of their laptops, which is good battery life and efficiency.
Haven't they been trying to break in themselves with Arc? They're taking steps to enter the dGPU market, and their tech is pretty decent.
 