
Computing @ speed of light. New chip, millions of times faster than current machines.

Sigh. This is being misreported, as always. Photonic computing is not going to give us "million times" faster CPUs.

We compute with electrons because they interact with each other strongly. Photons barely interact with each other at all, so you need much more power and far larger components to make one light signal switch another, and that is a limit imposed by physics, not engineering. A photonic computer might be possible, but it would have to be either very dumb or very big and hot.

Where photonic computing *will* make a difference is in interconnects.

It would be wonderful if all RAM sat on the same die as the CPU, so that lookup times were close to zero, but that just isn't feasible from a manufacturing-yield or heat-management point of view. So instead we have a few MB of precious cache on-chip, with the rest of RAM on separate chips. But electrical signals only move so fast; waiting for data to arrive from RAM puts a bottleneck on data-intensive computing.

Photonic computing has the potential to do those connections with light, making all RAM effectively as fast as cache.
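To put that bottleneck in numbers, here's a back-of-envelope sketch in Python. The clock speed and latencies below are assumed, typical orders of magnitude, not measurements of any specific chip:

```python
# Back-of-envelope: how many CPU cycles a main-memory access costs.
# All figures are illustrative assumptions, not real measurements.

cpu_clock_hz = 4e9        # assume a 4 GHz core
cycle_ns = 1e9 / cpu_clock_hz

l1_latency_ns = 1         # rough order of magnitude for an L1 cache hit
dram_latency_ns = 70      # rough order of magnitude for a DRAM access

print(f"One cycle:  {cycle_ns:.2f} ns")
print(f"L1 hit:     ~{l1_latency_ns / cycle_ns:.0f} cycles")
print(f"DRAM miss:  ~{dram_latency_ns / cycle_ns:.0f} cycles")
```

At these assumed figures, a core that misses all its caches sits idle for a few hundred cycles per access, which is why the gap matters so much for data-intensive work.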

If you want to see a radical increase in CPU power, look to graphene. That has the potential to allow the same sort of computing we already do with electrons, but 100x faster. Still plenty of work to be done there.

Ah, that's what I thought. The headline seemed a little crazy.
 
If you want to see a radical increase in CPU power, look to graphene. That has the potential to allow the same sort of computing we already do with electrons, but 100x faster. Still plenty of work to be done there.

Have any advancements in graphene been made in the past 5 years? Last I read about it, it was touted as a miracle material, but too expensive to produce.
 

infovore

Where photonic computing *will* make a difference is in interconnects.

It would be wonderful if all RAM sat on the same die as the CPU, so that lookup times were close to zero, but that just isn't feasible from a manufacturing-yield or heat-management point of view. So instead we have a few MB of precious cache on-chip, with the rest of RAM on separate chips. But electrical signals only move so fast; waiting for data to arrive from RAM puts a bottleneck on data-intensive computing.

Photonic computing has the potential to do those connections with light, making all RAM effectively as fast as cache.

The speed of light in glass and the speed of electrical signals in copper are actually pretty similar (both around two-thirds of the vacuum speed of light). So photonics is not going to make all RAM as fast as cache. The speed of cache depends on the topology of the CPU, which is why not even all cache is equally fast. There's not just the distinction between L1, L2, and L3 caches; on high-end CPUs, access times to the L3 from a given core can vary depending on where in the L3 the data sits: the L3 may be divided into blocks, with each core having a local block it can access faster than the rest of the L3 cache.

Where optics makes a difference is that it is much easier to get high-frequency signals to work reliably with light than with electrons, especially as distance increases. Distance (wire delay) is a factor in memory latency, and becomes a bigger one at higher speeds. But today most of the latency comes from how fast the memory chips themselves operate, and, in bigger systems, from the extra hops through the "network" that ties all the components together.
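A quick sketch of that split, assuming a ~10 cm CPU-to-DIMM trace and a ~70 ns total access time (both plausible orders of magnitude, not figures for any real board):

```python
# Rough split of a main-memory access: time-of-flight on the wires
# vs. time spent inside the DRAM chips. All numbers are assumed.

c = 3e8                    # speed of light in vacuum, m/s
signal_speed = 0.6 * c     # signal speed in copper traces (~0.6c);
                           # light in glass fibre is similar (~0.67c)

round_trip_m = 0.2         # assume ~10 cm CPU-to-DIMM, out and back
flight_ns = round_trip_m / signal_speed * 1e9

total_latency_ns = 70      # assumed total DRAM access latency
chip_ns = total_latency_ns - flight_ns

print(f"time of flight:  ~{flight_ns:.1f} ns")
print(f"inside the DRAM: ~{chip_ns:.1f} ns")
```

On these assumptions the wires account for only a nanosecond or so of the total, so swapping copper for light barely moves the latency needle; the win is in signal integrity and bandwidth over distance.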

In this context, what is interesting about HBM is that it puts RAM and CPU in the same package. This isn't quite as good as on the same die, but it gets you many of the same benefits. It allows for many more (electrical!) connections to be used between them, which is where the increased bandwidth compared to external memory like GDDR5 or DDR4 comes from. As fast as cache it isn't, and won't ever be, but it is an improvement.
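The bandwidth arithmetic behind that is just bus width times per-pin speed. A sketch using approximate nominal headline figures for a first-generation HBM stack and a GDDR5 chip:

```python
# Why a wide in-package bus wins on bandwidth. The per-pin rates and
# bus widths are approximate nominal spec figures, for illustration.

def bandwidth_gb_s(bus_width_bits, gbit_per_s_per_pin):
    """Peak bandwidth in GB/s for a given bus width and pin speed."""
    return bus_width_bits * gbit_per_s_per_pin / 8

hbm_stack = bandwidth_gb_s(1024, 1.0)   # 1024-bit bus, ~1 Gbit/s/pin
gddr5_chip = bandwidth_gb_s(32, 7.0)    # 32-bit bus, ~7 Gbit/s/pin

print(f"HBM stack:  ~{hbm_stack:.0f} GB/s")
print(f"GDDR5 chip: ~{gddr5_chip:.0f} GB/s")
```

The point is that HBM's pins run slower than GDDR5's, but packaging the RAM next to the processor makes the thousand-wire bus feasible, and width wins.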
 