
Nvidia to start to directly make ARM based Windows PC chips in a challenge to the x86 duo

LordOfChaos

Member

Nvidia (NVDA.O) dominates the market for artificial intelligence computing chips. Now it is coming after Intel’s longtime stronghold of personal computers.

Nvidia has quietly begun designing central processing units (CPUs) that would run Microsoft's (MSFT.O) Windows operating system and use technology from Arm Holdings (O9Ty.F), two people familiar with the matter told Reuters.

The AI chip giant's new pursuit is part of Microsoft's effort to help chip companies build Arm-based processors for Windows PCs. Microsoft's plans take aim at Apple, which has nearly doubled its market share in the three years since releasing its own Arm-based chips in-house for its Mac computers, according to preliminary third-quarter data from research firm IDC.


Advanced Micro Devices (AMD.O) also plans to make chips for PCs with Arm technology, according to two people familiar with the matter.

Nvidia and AMD could sell PC chips as soon as 2025, one of the people familiar with the matter said. Nvidia and AMD would join Qualcomm (QCOM.O), which has been making Arm-based chips for laptops since 2016. At an event on Tuesday that will be attended by Microsoft executives, including vice president of Windows and Devices Pavan Davuluri, Qualcomm plans to reveal more details about a flagship chip that a team of ex-Apple engineers designed, according to a person familiar with the matter.

I've long wondered why they didn't already do this, tbh. They had some oddball hardware attempts with the Shield, but not much aimed directly at Windows in a while.

I'd like to see them design the rest of the PC hardware too; I wonder what that would look like. Sell the chips to others, but also go whole banana on an internal attempt.
 

Pagusas

Elden Member
I wonder if we'll look back and see this as the end of traditional x86 computing in the consumer world. Apple's M1 was a giant step, and if Nvidia and AMD both make waves with ARM, Microsoft will have to either take the ARM space seriously when it comes to Windows, or they'll see their monopoly in the OS space vanish from the consumer space (obviously they'd still have Office, so they aren't going anywhere, but it's crazy to think we could be at the end of an era).
 

Tams

Gold Member
I wonder if we'll look back and see this as the end of traditional x86 computing in the consumer world. Apple's M1 was a giant step, and if Nvidia and AMD both make waves with ARM, Microsoft will have to either take the ARM space seriously when it comes to Windows, or they'll see their monopoly in the OS space vanish from the consumer space (obviously they'd still have Office, so they aren't going anywhere, but it's crazy to think we could be at the end of an era).

x86 has caught up with Apple's implementation of ARM.

This is for four reasons:
  • More advanced node capacity becoming available after Apple bought up all the initial amount. Also, Intel finally getting past 14 'nm'.
  • Apple having picked all the low-hanging fruit in their processor design.
  • Apple integrating the memory. Intel and AMD have since integrated RAM more tightly themselves, and separate RAM has benefitted from faster connections to the processor, nullifying most of the advantage of integrated RAM.
  • x86 (CISC) having more RISC-like structures developed into it.
I've always said that the M1 was impressive, but nothing really special. If you're willing to make your system less modular then you get better performance. People have been let down by M3 as they wrongly let their expectations become too high.

The thing that will advance computing other than the process nodes is cooling. Solid-state cooling is looking very promising.

The issue with ARM has been that SoC manufacturers love to add their own proprietary blobs, thus making it an utter ballache to develop for. x86's history means that it is more open (despite Microsoft trying to lock it down).
 

LordOfChaos

Member
x86 has caught up with Apple's implementation of ARM.

This is for four reasons:
  • More advanced node capacity becoming available after Apple bought up all the initial amount. Also, Intel finally getting past 14 'nm'.
  • Apple having picked all the low-hanging fruit in their processor design.
  • Apple integrating the memory. Intel and AMD have since integrated RAM more tightly themselves, and separate RAM has benefitted from faster connections to the processor, nullifying most of the advantage of integrated RAM.
  • x86 (CISC) having more RISC-like structures developed into it.
I've always said that the M1 was impressive, but nothing really special. If you're willing to make your system less modular then you get better performance. People have been let down by M3 as they wrongly let their expectations become too high.

The thing that will advance computing other than the process nodes is cooling. Solid-state cooling is looking very promising.

The issue with ARM has been that SoC manufacturers love to add their own proprietary blobs, thus making it an utter ballache to develop for. x86's history means that it is more open (despite Microsoft trying to lock it down).


Intel is saying performance-per-watt leadership by Lunar Lake, and Meteor Lake is launching within a month and is a huge change to how they do things, with a disaggregated tile layout where each component can be made on the best node for it, plus another new Intel node used for the CPU die.

[Image: Intel roadmap to 2024 and beyond]
 

Tams

Gold Member
Intel is saying performance-per-watt leadership by Lunar Lake, and Meteor Lake is launching within a month and is a huge change to how they do things, with a disaggregated tile layout where each component can be made on the best node for it, plus another new Intel node used for the CPU die.

[Image: Intel roadmap to 2024 and beyond]

AMD are pretty much taking the same approach, though it seems more to maximise the utility of the limited wafers they can get and to save money by not using the latest and greatest node for things like I/O that don't benefit much from it.
 

LordOfChaos

Member
AMD are pretty much taking the same approach, though it seems more to maximise the utility of the limited wafers they can get and to save money by not using the latest and greatest node for things like I/O that don't benefit much from it.

Yeah, AMD is doing it with the IO die, and some of the many-core products have multiple dies of CPU cores (on the same node, though). Intel already did that at least as far back as Haswell for the IO chip; Meteor Lake is the first real complete disaggregation, where the GPU, SoC, IO, compute, etc. can all be on different dies and mixed and matched as needed. Tile GPUs are going to be interesting, especially with Adamantine cache; they could land somewhere between IGPs and dedicated cards.

[Image: Architecting Our Next Gen Power Efficient Processor slide]


 

Drew1440

Member
The main problem is x86 backwards compatibility: people won't buy those chips if most of their software won't run on them. Hopefully Nvidia have a solution for this.
 

winjer

Gold Member
x86 has caught up with Apple's implementation of ARM.

This is for four reasons:
  • More advanced node capacity becoming available after Apple bought up all the initial amount. Also, Intel finally getting past 14 'nm'.
  • Apple having picked all the low-hanging fruit in their processor design.
  • Apple integrating the memory. Intel and AMD have since integrated RAM more tightly themselves, and separate RAM has benefitted from faster connections to the processor, nullifying most of the advantage of integrated RAM.
  • x86 (CISC) having more RISC-like structures developed into it.
I've always said that the M1 was impressive, but nothing really special. If you're willing to make your system less modular then you get better performance. People have been let down by M3 as they wrongly let their expectations become too high.

The thing that will advance computing other than the process nodes is cooling. Solid-state cooling is looking very promising.

The issue with ARM has been that SoC manufacturers love to add their own proprietary blobs, thus making it an utter ballache to develop for. x86's history means that it is more open (despite Microsoft trying to lock it down).

Intel's x86 has been RISC since the P5.
Support for CISC instructions has been kept through microcode.
In the days of the P5, microcode would take nearly a third of the CPU; in today's CPUs it's barely anything.
Like Jim Keller said a couple of years ago, the ISA is not a determinant of performance or power usage.
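To picture what that cracking into micro-ops means, here is a toy sketch in C (purely illustrative; the three-µop split of a read-modify-write instruction is the textbook idea, not any specific CPU's real behaviour):

#include <stdio.h>

/* Toy model: an x86 front end cracks "add dword [mem], 1"
   into RISC-like load / add / store micro-ops. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop;

int main(void) {
    uop cracked[] = { UOP_LOAD, UOP_ADD, UOP_STORE };
    const char *text[] = { "load  tmp, [mem]",
                           "add   tmp, tmp, 1",
                           "store [mem], tmp" };
    for (unsigned i = 0; i < sizeof cracked / sizeof cracked[0]; i++)
        printf("uop %u: %s\n", i, text[cracked[i]]);
    return 0;
}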
 

Pagusas

Elden Member
The main problem is x86 backwards compatibility: people won't buy those chips if most of their software won't run on them. Hopefully Nvidia have a solution for this.
I mean Apple is doing just fine with emulating/translating x86 when needed with Rosetta 2.
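Funnily enough, macOS even lets a program ask whether it is being translated. A minimal sketch (macOS-specific; the sysctl.proc_translated key is Apple's documented way to detect Rosetta 2):

#include <stdio.h>
#include <sys/sysctl.h>

/* Returns 1 if running under Rosetta 2 translation, 0 if native,
   -1 if the sysctl doesn't exist (e.g. pre-Rosetta-2 systems). */
static int running_under_rosetta(void) {
    int translated = 0;
    size_t size = sizeof(translated);
    if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1)
        return -1;
    return translated;
}

int main(void) {
    printf("proc_translated = %d\n", running_under_rosetta());
    return 0;
}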
 

Dane

Member
I wonder if we'll look back and see this as the end of traditional x86 computing in the consumer world. Apple's M1 was a giant step, and if Nvidia and AMD both make waves with ARM, Microsoft will have to either take the ARM space seriously when it comes to Windows, or they'll see their monopoly in the OS space vanish from the consumer space (obviously they'd still have Office, so they aren't going anywhere, but it's crazy to think we could be at the end of an era).
There have been major talks up and down for a few years that ARM and even MIPS could replace x86. The Apple M1 was a major kickstart in performance, and it made their hardware a good cost-benefit proposition (shockingly).
 

Pagusas

Elden Member
There have been major talks up and down for a few years that ARM and even MIPS could replace x86. The Apple M1 was a major kickstart in performance, and it made their hardware a good cost-benefit proposition (shockingly).
I've never seen a good write-up on the benefits or negatives of x86 vs ARM. Is legacy support really x86's only major advantage, or does it do certain CPU tasks better than ARM, and that's why we've stuck with it?
 

Kadve

Member
I wonder if we'll look back and see this as the end of traditional x86 computing in the consumer world. Apple's M1 was a giant step, and if Nvidia and AMD both make waves with ARM, Microsoft will have to either take the ARM space seriously when it comes to Windows, or they'll see their monopoly in the OS space vanish from the consumer space (obviously they'd still have Office, so they aren't going anywhere, but it's crazy to think we could be at the end of an era).
ARM CPUs need to be able to run x86 code natively before that. The entire Windows ecosystem is built around x86 and is too integral to PCs. People are not gonna uproot themselves due to another architecture being better in a vacuum.
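To be fair, Windows on ARM already ships an x86/x64 emulation layer, and a program can ask whether it is running under it. A minimal sketch (Windows-specific; IsWow64Process2 is a real Win32 API, though exactly how each emulated architecture is reported varies by Windows version):

#include <windows.h>
#include <stdio.h>

int main(void) {
    USHORT process = 0, native = 0;
    /* Reports both this process's architecture and the real machine underneath. */
    if (IsWow64Process2(GetCurrentProcess(), &process, &native)) {
        if (native == IMAGE_FILE_MACHINE_ARM64)
            printf("Native machine is ARM64%s\n",
                   process != IMAGE_FILE_MACHINE_UNKNOWN
                       ? " (this process is running emulated)" : "");
        else
            printf("Native machine: 0x%04x\n", (unsigned)native);
    }
    return 0;
}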
 

Tams

Gold Member
I've never seen a good write-up on the benefits or negatives of x86 vs ARM. Is legacy support really x86's only major advantage, or does it do certain CPU tasks better than ARM, and that's why we've stuck with it?

From what I've read and how I've understood it, ARM (RISC) is somewhat more efficient, but not drastically so. And while x86 is CISC, RISC-like parts have been developed for it.

Really, it's more about how the instruction sets have developed. x86 hasn't been locked down by the processor manufacturers (bizarre, when you consider only Intel, AMD, and VIA can use it), whereas ARM designs have. Qualcomm are notorious for it, and then they go and drop support after a few years. Apple are incredibly insular, so for community/OS development, their SoCs are yet another closed box people have to work around. MIPS never took off beyond low-powered applications and is now Chinese-owned. RISC-V might end up as something, but the hype around it makes me suspicious.
 

Pagusas

Elden Member
Would work pretty well with laptops and tablets. Would be meh on desktops.
Would it? The Mac Pro with M2 Ultra seems to be doing great; didn't it post higher benchmarks than the i9-13900K? At dramatically lower power usage too? That might have just been early leaked benchmarks; I'm trying to find some official stats on it.
 
Would it? The Mac Pro with M2 Ultra seems to be doing great; didn't it post higher benchmarks than the i9-13900K? At dramatically lower power usage too?

It might be on certain tasks. It wouldn't surprise me. It would be hard to do that on all tasks where the x86-64 arch has more complex instructions.
 

Dane

Member
I've never seen a good write-up on the benefits or negatives of x86 vs ARM. Is legacy support really x86's only major advantage, or does it do certain CPU tasks better than ARM, and that's why we've stuck with it?
IIRC ARM has a higher efficiency, while x86 design is somewhat tired with no bright future ahead.
 

LordOfChaos

Member
IIRC ARM has a higher efficiency, while x86 design is somewhat tired with no bright future ahead.

The instruction set is less than 1% of the die on these modern chips, and attributing efficiency purely to it is very overstated; the rest of the architecture around the ISA has had decades of work on low power use, as that's where the focus was.

Apart from a small handicap, there's no technical reason why you couldn't have x86 cores drawing much less power. AMD has made big strides on x86 efficiency, and Intel wants perf-per-watt leadership by 2025.

[Image: Intel roadmap to 2024 and beyond]
 

Tams

Gold Member
With the numbers Qualcomm just announced for Snapdragon X (Nuvia), this is going to be interesting.

They are claiming much higher power efficiency than the Apple M-series, better CPU performance than the top Intel, and better GPU performance than the top AMD APU.

IIRC ARM has a higher efficiency, while x86 design is somewhat tired with no bright future ahead.

Not that I'm an expert on the matter, but that's a very ignorant and unknowledgeable take.

The ISA does matter, but it is only part of it.

RISC ISAs have mostly been used in very low-powered devices until recently, so the perception that RISC is drastically more power efficient has been perpetuated.

Is it? For most computing tasks it is more efficient, but Apple are currently the only ones to have pushed RISC processor performance on the consumer side. They also tightly integrated almost all the parts of the SoC, which is where they get a lot of their improvements from. However, as a result, upgrades are non-existent.

There's also the business-to-business/corporate side. But considering Intel do still sell a lot of server chips and AMD have made massive inroads, both with x86 CPUs...
 

kruis

Exposing the sinister cartel of retailers who allow companies to pay for advertising space.
Would it? The Mac Pro with M2 Ultra seems to be doing great; didn't it post higher benchmarks than the i9-13900K? At dramatically lower power usage too? That might have just been early leaked benchmarks; I'm trying to find some official stats on it.

Apple is great at fudging benchmark numbers. A 13900K + RTX 4090 outperforms a maxed-out M2 Ultra and is cheaper too. An M2 Ultra IS more efficient and the system is a lot smaller, but if you're a professional, you're more interested in the best performance for the price than the best performance at a lower power level for a premium price.



 

winjer

Gold Member
IIRC ARM has a higher efficiency, while x86 design is somewhat tired with no bright future ahead.

That is complete nonsense.
ARM and x86 are both RISC-based architectures.
x86 has one point that is both a disadvantage and an advantage, and that is legacy support.
The advantage is that a lot of companies and programs depend on these instructions. And these companies will not change their IT infrastructure because of that.
The disadvantage is that it uses a bit of die space. But not much, as this is mostly microcode and modern CPUs are huge.
And x86 has a bigger decode stage, but as Jim Keller said, it doesn't matter.

For a while we thought variable-length instructions were really hard to decode. But we keep figuring out how to do that. … So fixed-length instructions seem really nice when you’re building little baby computers, but if you’re building a really big computer, to predict or to figure out where all the instructions are, it isn’t dominating the die. So it doesn’t matter that much.
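The variable-length point is easy to see concretely: every AArch64 instruction is exactly 4 bytes, while x86 instructions run 1 to 15 bytes, so the decoder must discover instruction boundaries before it can decode many in parallel. A toy sketch (covers only a few one-byte and imm32 opcodes; real x86 length decoding also involves prefixes, ModRM, SIB, etc.):

#include <stddef.h>
#include <stdio.h>

/* Toy x86 length decoder for a handful of opcodes. */
static size_t toy_x86_length(const unsigned char *p) {
    switch (p[0]) {
        case 0x90: return 1;   /* NOP */
        case 0xC3: return 1;   /* RET */
        case 0xE9: return 5;   /* JMP rel32: opcode + 4-byte offset */
        case 0xB8: return 5;   /* MOV EAX, imm32 */
        default:   return 0;   /* unknown to this toy decoder */
    }
}

/* AArch64: fixed 4-byte instructions, so boundaries are free. */
static size_t arm64_length(const unsigned char *p) {
    (void)p;
    return 4;
}

int main(void) {
    const unsigned char code[] = { 0x90, 0xB8, 1, 0, 0, 0, 0xC3 };
    for (size_t off = 0; off < sizeof code; ) {
        size_t len = toy_x86_length(code + off);
        if (len == 0) break;
        printf("x86 instruction at +%zu: %zu byte(s)\n", off, len);
        off += len;
    }
    printf("AArch64 instruction size: always %zu bytes\n", arm64_length(code));
    return 0;
}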
 

Dane

Member
With the numbers Qualcomm just announced for Snapdragon X (Nuvia), this is going to be interesting.

They are claiming much higher power efficiency than the Apple M-series, better CPU performance than the top Intel, and better GPU performance than the top AMD APU.



Not that I'm an expert on the matter, but that's a very ignorant and unknowledgeable take.

The ISA does matter, but it is only part of it.

RISC ISAs have mostly been used in very low-powered devices until recently, so the perception that RISC is drastically more power efficient has been perpetuated.

Is it? For most computing tasks it is more efficient, but Apple are currently the only ones to have pushed RISC processor performance on the consumer side. They also tightly integrated almost all the parts of the SoC, which is where they get a lot of their improvements from. However, as a result, upgrades are non-existent.

There's also the business-to-business/corporate side. But considering Intel do still sell a lot of server chips and AMD have made massive inroads, both with x86 CPUs...
That is complete nonsense.
ARM and x86 are both RISC-based architectures.
x86 has one point that is both a disadvantage and an advantage, and that is legacy support.
The advantage is that a lot of companies and programs depend on these instructions. And these companies will not change their IT infrastructure because of that.
The disadvantage is that it uses a bit of die space. But not much, as this is mostly microcode and modern CPUs are huge.
And x86 has a bigger decode stage, but as Jim Keller said, it doesn't matter.
Like I've said, if I recall correctly, that's what I remember reading years ago at a glance: supposedly most of the industry was going to switch to ARM in the next 10 years or so. Compatibility with x86 software was a sticking point at the time.
 

winjer

Gold Member
Intel's response:

I think what you're seeing is the industry is excited around the AIPC. And as I declared this generation of AIPC at our Innovation Conference a couple of months ago, we're seeing that materialize and customers, competitors seeing excitement around that. ARM and Windows client alternatives, generally, they've been relegated to pretty insignificant roles in the PC business.

And we take all competition seriously. But I think history as our guide here, we don't see these potentially being all that significant overall. Our momentum is strong. We have a strong road map, Meteor Lake launching this AIPC generation December 14. Arrow Lake, Lunar Lake, we've already demonstrated the next-generation product at Lunar Lake, which has significant improvements in performance and capabilities.

When thinking about other alternative architectures like ARM, we also say, wow, what a great opportunity for our foundry business. And given the results I referenced before, we see that as a unique opportunity that we have to participate in the full success of the ARM ecosystem or whatever market segments that may be as an accelerant to our foundry offerings, which are now becoming, we think, very significant around the ARM ecosystem with our foundry packaging and 18A wafer capabilities as well.

Pat Gelsinger - Intel CEO (Q3 2023 Earnings Call)
 

Wildebeest

Member
I predict code translation overhead will not be a problem for Nvidia, because they will throw as many cores as needed into their chips, even if the power draw is over 1 MW and the chips cost ten times as much as other CPUs.
 

LordOfChaos

Member
Enter Snapdragon X Elite debuting next year...

[Image: Snapdragon X Elite specifications]



I just realized it's all performance cores. Interesting: the x86 camp followed the ARM camp, which was first to big.LITTLE, and now ARM's effort going after the x86 camp is dropping the efficiency cores.

But even ARM performance cores are still highly efficient at minimum power, probably more than comparable to any x86 E-core, so this should be fine for larger PCs, since it's not for a phone. And these are finally Nuvia cores, which promise both greater single-threaded performance and lower power than ARM's reference X4.
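On the scheduling side, Linux already exposes relative core strength on ARM designs, which is how the OS tells performance cores from efficiency cores. A small sketch (Linux-specific; cpu_capacity is a real sysfs node on ARM big.LITTLE systems, with relative values up to 1024, and is absent on most x86 machines):

#include <stdio.h>

int main(void) {
    char path[64];
    for (int cpu = 0; cpu < 64; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpu_capacity", cpu);
        FILE *f = fopen(path, "r");
        if (!f) break;   /* no such CPU, or no capacity info (e.g. x86) */
        int capacity = 0;
        /* Performance cores report higher capacity than efficiency cores. */
        if (fscanf(f, "%d", &capacity) == 1)
            printf("cpu%d capacity %d\n", cpu, capacity);
        fclose(f);
    }
    return 0;
}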
 

DeafTourette

Perpetually Offended
I just realized it's all performance cores. Interesting: the x86 camp followed the ARM camp, which was first to big.LITTLE, and now ARM's effort going after the x86 camp is dropping the efficiency cores.

But even ARM performance cores are still highly efficient at minimum power, probably more than comparable to any x86 E-core, so this should be fine for larger PCs, since it's not for a phone. And these are finally Nuvia cores, which promise both greater single-threaded performance and lower power than ARM's reference X4.

Qualcomm is really bringing it... The Snapdragon 8 Gen 3 is even more powerful than the A17 Pro... I can't wait to see what the X Elite can do on a Windows laptop or desktop!
 

Trogdor1123

Gold Member
Qualcomm is really bringing it... The Snapdragon 8 Gen 3 is even more powerful than the A17 Pro... I can't wait to see what the X Elite can do on a Windows laptop or desktop!
Let’s wait and see how it actually performs out in the wild before making too many claims. Also, it will be nearly a year later so I’d hope so.

These arm chips keep getting better and better. How long till consoles use them?
 