
This new 'skyscraper' chip could make computers run 1,000 times faster

mocoworm

Member
Not being a PC guy, I have to ask: will this have applications in the gaming world? 1,000 times faster sounds like a LOT, right?

This new 'skyscraper' chip could make computers run 1,000 times faster - Here's what happens when you ditch silicon.

http://www.sciencealert.com/this-new-skyscraper-chip-could-make-our-computers-1-000-times-faster

Engineers in the US have invented a new computer chip that replaces the silicon in conventional chips with nano-materials that can be stacked, changing the landscape from a resource-heavy single-storey layout to a more efficient ‘skyscraper’ approach.

Their new multi-layer chip, called N3XT, is designed to break data bottlenecks by laying processors and memory chips on top of each other like floors in a skyscraper, rather than side by side. "When you combine higher speed with lower energy use, N3XT systems outperform conventional approaches by a factor of a thousand," says one of the researchers, H. S. Philip Wong from Stanford University.

If you think about current computer systems, they’re arranged just like a sleepy suburb dominated by single-storey houses. The processors and silicon memory chips are arranged next to each other, which means a whole lot of space is being taken up by a single device.

This also means that the digital signals often have to travel across relatively long distances, which not only wastes processing time and energy, it risks bottlenecking, which occurs when a lot of data is trying to travel across the same circuit.

But what if computer systems could be built more like a city, filled with super-efficient skyscrapers that could fit more processors and chips into tinier spaces, with less distance between them? That’s the concept behind the N3XT (Nano-Engineered Computing Systems Technology), and while this certainly isn’t a new idea, Wong and his team have figured out how to actually make it work.
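(Not from the article, but to put some rough, made-up numbers on why shorter wires matter. Every figure below is an assumed, order-of-magnitude placeholder, just to show the shape of the argument, in Python:)

# Back-of-envelope: energy spent moving data vs. doing arithmetic.
# All figures are assumed placeholders, not measurements from the paper.
COMPUTE_PJ_PER_OP = 1.0         # assumed energy of one on-chip arithmetic op (picojoules)
ONCHIP_MOVE_PJ_PER_BIT = 0.1    # assumed energy to move one bit a short, on-chip distance
OFFCHIP_MOVE_PJ_PER_BIT = 10.0  # assumed energy to move one bit to a separate memory chip

def energy_per_64bit_access(move_pj_per_bit):
    """Energy to fetch a 64-bit word and do one operation on it (picojoules)."""
    return 64 * move_pj_per_bit + COMPUTE_PJ_PER_OP

near = energy_per_64bit_access(ONCHIP_MOVE_PJ_PER_BIT)
far = energy_per_64bit_access(OFFCHIP_MOVE_PJ_PER_BIT)
print(f"stacked / nearby memory: {near:.1f} pJ per word")
print(f"side-by-side memory:     {far:.1f} pJ per word")
print(f"data-movement share of the far case: {64 * OFFCHIP_MOVE_PJ_PER_BIT / far:.0%}")

With numbers in that ballpark, almost all the energy goes into moving the word rather than computing on it, which is exactly the overhead the skyscraper layout is meant to shrink.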

The problem with stacked computer chips in the past has been silicon. It works great in our current computer systems, but because it has to be heated to about 982°C (1,800°F) during manufacture, it’s virtually impossible to stack silicon chips without the one on top damaging the one on the bottom.

The best we have right now are three-dimensional, or 'stacked', chips that are composed of two silicon chips, made separately, and then connected by a few thousand wires. That might seem like a lot of wires, but relatively speaking, it’s not many at all, which means a lot of energy is needed to push the data from one chip to another.

Instead, the researchers at Stanford decided to build their stacked chips by replacing conventional transistors with carbon nanotube transistors (CNTs). These are not only faster than silicon transistors, but because they don't require such high temperatures during manufacture, they can also be layered between memory chips without causing any damage.

The team also replaced existing silicon chips with what's known as resistive random-access memory, or RRAM, which uses tiny jolts of electricity to switch memory cells between the binary states of zero and one. Millions of devices called vias are used to connect each RRAM and CNT layer like tiny electronic 'elevators' that allow data to travel over much smaller distances than traditional circuits, and thermal cooling layers are installed between the layers to ensure that the heat they generate doesn’t build up and cause long-term degradation.
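(Again, not from the article: a toy sketch of the 'switch a cell between zero and one' idea. This is a cartoon, not real RRAM device physics, and every number in it is invented:)

# Cartoon model of a resistive memory cell: a programming pulse of one
# polarity drops the cell into a low-resistance state (read back as 1),
# the opposite polarity pushes it back to high resistance (read back as 0).
class ToyRRAMCell:
    LOW_OHMS = 1_000        # assumed "set" (low-resistance) state
    HIGH_OHMS = 1_000_000   # assumed "reset" (high-resistance) state

    def __init__(self):
        self.resistance = self.HIGH_OHMS  # start erased, i.e. storing 0

    def pulse(self, volts):
        """Apply a small programming pulse; the sign decides set vs. reset."""
        if volts > 0:
            self.resistance = self.LOW_OHMS    # set   -> stores 1
        elif volts < 0:
            self.resistance = self.HIGH_OHMS   # reset -> stores 0

    def read(self):
        """Read a bit back by checking which resistance band the cell is in."""
        return 1 if self.resistance < 100_000 else 0

cell = ToyRRAMCell()
cell.pulse(+1.5)
print(cell.read())   # 1
cell.pulse(-1.5)
print(cell.read())   # 0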

The team demonstrated a four-layer prototype made from two layers of RRAM memory and two layers of CNTs at the 2014 International Electron Devices Meeting, and in an upcoming paper in the IEEE Computer journal, they describe how they've run simulations showing that their more advanced skyscraper approach is 1,000 times more efficient than conventional computer systems.

They now just have to convince the industry to invest in the huge infrastructure change that would be needed to step away from silicon and towards a potentially much more efficient system, but they say the pay-off would be massive. And of course, a working prototype - not just simulations - would be required to really prove the capabilities of N3XT.

"There are huge volumes of data that sit within our reach and are relevant to some of society's most pressing problems from health care to climate change, but we lack the computational horsepower to bring this data to light and use it," said one of the team, computer scientist Chris Re. "As we all hope in the N3XT project, we may have to boost horsepower to solve some of these pressing challenges."
 

Jezbollah

Member
I doubt the major beneficiaries of this kind of technology will be gamers (because let's face it, we don't all run Xeon processors, do we?) - I would say industrial design and any venture that uses mathematical computing as a core will make the most use of this.
 

komplanen

Member
Even if the PlayStation 10 has these chips, developers will find a way to sell us 25fps games, and gamers in general don't care.

But it's still damn interesting!
 

The_Poet

Banned
I doubt the major beneficiaries of this kind of technology will be gamers (because let's face it, we don't all run Xeon processors, do we?) - I would say industrial design and any venture that uses mathematical computing as a core will make the most use of this.

The point is that as the technology matures and becomes cheaper the benefit will be passed on to consumer products.
 

tejpis

Neo Member
3D ICs are being worked on from multiple angles. Their arrival is inevitable and has been for decades. The reason it hasn't happened sooner is that we've gotten enormous performance and efficiency gains yearly simply by making the process node smaller, fitting more transistors per chip at lower voltages. The benefits of that miniaturization are drying up as we're hitting physical limits, and building upwards is starting to look enticing. You can be sure that all relevant chipmakers are researching 3D ICs.

https://en.wikipedia.org/wiki/3D_IC

A 1000x performance increase is just clickbait; I'm sure we'll get there in time, but first you have to make chips that are even equal to the competition. You need billions of dollars' worth of research in materials, fabs, EDA software, probably new ISAs, schedulers etc. It's a massive task that has been put off because making the process node smaller has been easier (still possibly the most daunting endeavor in human history) until now.
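(To put a rough, idealised number on why node shrinks have been such a reliable win, assuming perfect geometric scaling, which real nodes stopped delivering years ago:)

# Idealised scaling: halving the feature size roughly quadruples transistor
# density, because the area of each transistor shrinks with the square.
def relative_density(old_nm, new_nm):
    """Density gain if everything scaled perfectly with the feature size."""
    return (old_nm / new_nm) ** 2

for old, new in [(28, 20), (28, 14), (28, 7)]:
    print(f"{old}nm -> {new}nm: ~{relative_density(old, new):.1f}x transistors per area (ideal)")

Once those almost-free gains stall, building upwards is one of the few levers left, which is the whole pitch for 3D ICs.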
 

Zaptruder

Banned
I doubt the major beneficiaries of this kind of technology will be gamers (because let's face it, we don't all run Xeon processors, do we?) - I would say industrial design and any venture that uses mathematical computing as a core will make the most use of this.

Nah, this is a computing substrate change. While the architecture might benefit specific types of calculations over others, it's such a huge order-of-magnitude change that the bottom line is everything will see a benefit.

Also, video cards are starting to see 3D RAM chips, and it provides significant advantages due to the memory density and, more importantly, the bandwidth between memory and processor... it'd follow that they'd see a significant advantage from having interleaved processor/RAM designs as described in this article.

Not sure about how soon this tech will come into play... I'd give it a 5-10 year time range before it trickles down to consumer electronics.
 
 
Millions of devices called vias are used to connect each RRAM and CNT layer like tiny electronic 'elevators' that allow data to travel over much smaller distances than traditional circuits, and thermal cooling layers are installed between the layers to ensure that the heat they generate doesn’t build up and cause long-term degradation.

This is the most interesting part of the article IMO, as stacking itself has been talked about for quite a while.

Edit: After the CNTs, of course. That's the biggest news.

They now just have to convince the industry to invest in the huge infrastructure change that would be needed to step away from silicon and towards a potentially much more efficient system, but they say the pay-off would be massive. And of course, a working prototype - not just simulations - would be required to really prove the capabilities of N3XT.

Considering we're already having a hard time shrinking silicon-based transistors, convincing the industry it's necessary isn't the hardest step, really.
 

Akai__

Member
Will X4 and PS5 be able to utilize this?

That depends entirely on whether this gets the industry's attention and companies invest in it. Even then, it would probably take multiple years before it hits the mainstream.

Quote from the article:

They now just have to convince the industry to invest in the huge infrastructure change that would be needed to step away from silicon and towards a potentially much more efficient system, but they say the pay-off would be massive. And of course, a working prototype - not just simulations - would be required to really prove the capabilities of N3XT.
 
1000x more efficient could mean more than 1000x more power, so that would be pretty huge. Let's see if someone can actually mass-produce this stuff.
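(One way to read the claim, with a purely hypothetical split of the 1,000x just to show the arithmetic; the paper's actual breakdown may be nothing like this:)

# Hypothetical decomposition of the claimed 1,000x (numbers invented for illustration):
speedup = 10.0            # assume the stacked design finishes a task 10x sooner
energy_reduction = 100.0  # assume it also uses 1/100th of the energy per task

# An "efficiency" figure in the combined speed-times-energy sense multiplies the two.
combined_benefit = speedup * energy_reduction
print(f"combined speed x energy benefit: {combined_benefit:.0f}x")

# At a fixed power budget, the energy saving can also be traded back for more
# hardware working in parallel, which is why "1,000x more efficient" can end
# up meaning far more throughput than the raw 10x speedup alone suggests.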
 

chrislowe

Member
I always thought today's big hurdle is that the transistors on the silicon are only a few atoms big, and it's hard to steer the electrons to go in the right direction.
So this wouldn't change that anyway, right?
 
Remember back when everyone and their mom was speculating that the Xbox One was using stacked chips because there was no possible way MS's console would have such weak (leaked) specs?

Fun times.
 
Remember back when everyone and their mom was speculating that the Xbox One was using stacked chips because there was no possible way MS's console would have such weak (leaked) specs?

Fun times.

Those "Xbox is actually stronk gais!" rumors never made sense. Why would Microsoft not advertise whatever advantage they had?
 

Hermii

Member
Even if the PlayStation 10 has these chips, developers will find a way to sell us 25fps games, and gamers in general don't care.

But it's still damn interesting!
With a thousand PS4s duct-taped together, it's hard to imagine developers not hitting 60fps:p
 
HBM memory is the first thing that comes to mind when considering this for consoles. The memory itself is 3D-stacked, and AMD uses a 2.5D configuration with an interposer to connect the memory to the GPU. I think it's currently not feasible to stack the memory on top of a GPU due to cooling issues, but it might be possible for low-power mobile chips.
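(Ballpark arithmetic on why the wide, stacked interface helps; the bus widths and per-pin rates below are the commonly quoted figures for first-generation HBM and GDDR5, so treat them as approximate:)

# Peak bandwidth = bus width (bits) * per-pin data rate (Gbit/s) / 8 -> GB/s
def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

gddr5_chip = peak_bandwidth_gbs(32, 7.0)    # one narrow but fast GDDR5 chip
hbm_stack = peak_bandwidth_gbs(1024, 1.0)   # one very wide but slower HBM stack
print(f"GDDR5 chip: ~{gddr5_chip:.0f} GB/s")
print(f"HBM1 stack: ~{hbm_stack:.0f} GB/s")

The stack wins by going very wide over very short distances through the interposer, which is the same 'shorter wires, more of them' argument the article makes for full 3D stacking.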
 

Calabi

Member
Why do they need to convince anyone?

If it's true, then its advantages should be clear: those who take it up will have major advantages over those who do not. No convincing required.
 
called "n3xt" ugh :D

and the next article: "These urine-fuelled socks generate enough power to transmit wireless signals" seems more futuristic!
 

blastprocessor

The Amiga Brotherhood
Would be one of the biggest advances in computing if this is possible. It would also mean re-imagining a ton of software such as encryption.
 
Are they saying it's 1,000 times faster or 1,000 times as efficient? To me it reads like the latter, where you calculate performance by energy use.
 
I honestly wouldn't be surprised. You guys don't follow tech much I see.

? Do you have any idea how difficult and expensive it is to merely make a jump in transistor size for factories? Let alone something that uses a completely different approach at even the most basic level (carbon nanotubes instead of silicon)? TSMC, and by extension the GPU market, has been stuck on 28nm for years due to issues with 20nm yields. Do you think that in less than five years we will see this technology go from "yes, it's possible" to "so cheap and common we can put it in consoles"?
 

cheezcake

Member
I honestly wouldn't be surprised. You guys don't follow tech much I see.

Lol what, there is literally zero chance. They use carbon nanotube transistors; we've known about them since the '90s, and while interesting, we are not any closer to getting them to a mass-production level of reliability and cost.
 
So, more tech we won't see for 10+ years? Don't get me wrong, I'm as hyped as anyone about the cool stuff we can do with carbon, but practically none of it is hitting consumer markets any time soon.
 

Zaru

Member
3D ICs are being worked on from multiple angles. Their arrival is inevitable and has been for decades. The reason it hasn't happened sooner is that we've gotten enormous performance and efficiency gains yearly simply by making the process node smaller, fitting more transistors per chip at lower voltages. The benefits of that miniaturization are drying up as we're hitting physical limits, and building upwards is starting to look enticing. You can be sure that all relevant chipmakers are researching 3D ICs.

https://en.wikipedia.org/wiki/3D_IC

A 1000x performance increase is just clickbait; I'm sure we'll get there in time, but first you have to make chips that are even equal to the competition. You need billions of dollars' worth of research in materials, fabs, EDA software, probably new ISAs, schedulers etc. It's a massive task that has been put off because making the process node smaller has been easier (still possibly the most daunting endeavor in human history) until now.

People really need to consider how few companies actually manage to MAKE chips like these and what kind of effort went into building the sites producing them at a cost-efficient rate.
 

Xenus

Member
People really need to consider how few companies actually manage to MAKE chips like these and what kind of effort went into building the sites producing them at a cost-efficient rate.

Not only that, but until we find a way to cool 3D SoCs efficiently, it's all just a pipe dream anyway. Carbon nanotubes or not, nobody is presently going to stack RAM on top of a processor as hot as today's GPUs, simply because there's no way to cool the thing.
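(A crude way to see the cooling wall; the dissipation figure, die area and cooling limit are all assumed round numbers, not measurements:)

# Crude power-density check: stacking hot dies multiplies the heat that has
# to escape through roughly the same footprint.
DIE_AREA_CM2 = 3.0               # assumed die footprint
WATTS_PER_LAYER = 150.0          # assumed GPU-class dissipation per logic layer
COOLING_LIMIT_W_PER_CM2 = 100.0  # assumed practical limit for conventional cooling

for layers in (1, 2, 4):
    density = layers * WATTS_PER_LAYER / DIE_AREA_CM2
    verdict = "ok" if density <= COOLING_LIMIT_W_PER_CM2 else "exceeds the assumed limit"
    print(f"{layers} layer(s): {density:.0f} W/cm^2 -> {verdict}")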
 