Funky Papa said:
This thread doesn't make any sense at all.
Sense? On GAF? Surely you jest.
Ignorant fanboys love to take any tiny piece of information they don't understand and run away with it into hype-heaven. But it's actually the more technically-minded people hyping things up who are frustrating me. It's understandable that they're excited about new technology and all of that, but to flippantly proclaim this new approach a total revolution is short-sighted and dangerous. Looking at the computer industry over the past 50 years, how many big breakthroughs have there been?
The invention of the transistor is probably the biggest breakthrough, and with that VLSI chips. Then you have ARPANET/the internet, and other than that, what else is there? RISC architectures were hyped as the best new way for processors to go, and for a while they did have an advantage, but now CISC designs have surpassed them. I read something interesting a while ago about how people hype up brand-new technologies because they feel that, since they're not tied to the past, they can eclipse present designs. But that thinking never works out, because the people using current designs continually refine and improve them, and the new designs never really catch up.
What Sony appears to be trying to do is very, very ambitious, and if all goes as planned then it very well might be a new turning point in computing technology. But what you have to remember is that Sony/IBM/Toshiba isn't just competing with MS here; it's competing with Intel, AMD, Apple and Sun. And do you really think they're gonna blow these giants out of the water? Colour me skeptical.
Just continuing with the tech-talk: there are major benefits to multiprocessor architectures, but there are inherent, fundamental issues that impede multiprocessor programming. This is why multiprocessor performance doesn't scale linearly with the number of processors; there's actually a pretty steep diminishing rate of return as the number of processors increases. Let me give you an example. I don't know the exact details of the PS3 architecture and whether each Cell has its own memory or there's shared memory between all the processing units, but we'll go with individual memory, as my example can be applied to shared memory as well.
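The diminishing-returns point is basically Amdahl's law. Here's a quick Python sketch (the 10% serial fraction is a made-up number purely for illustration, not anything measured on Cell):

```python
# Amdahl's law: speedup = 1 / (s + (1 - s)/N), where s is the
# fraction of the work that can't be parallelized.
def amdahl_speedup(n_procs, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even with only 10% serial work, 16 processors give nowhere near 16x.
for n in (1, 2, 4, 8, 16):
    print(n, round(amdahl_speedup(n, 0.1), 2))
# -> 1 1.0 / 2 1.82 / 4 3.08 / 8 4.71 / 16 6.4
```

Note the ceiling: with a 10% serial fraction the speedup can never exceed 10x no matter how many processors you throw at it.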
From what we know there is only going to be one disc drive, meaning that the game data will ultimately come from a single (slow) source. All the cells will be getting their data either directly from that source or indirectly, from data derived from the disc. Either way, there will be wait times, because there won't be simultaneous access to the data. And I don't believe there will be a separate bus for each cell to communicate with every other cell, so you're going to have shared buses, on which there can be only one master at a time. That introduces another bottleneck. No matter how many cells you have, they're still limited by having ONE source of data in the Blu-ray drive. So obviously the amount of RAM inside the machine will have a HUGE impact on its parallel performance: if there's enough RAM that disc access is uncommon, there won't be too much of a performance hit.
But that's just one example. The fundamental issue with parallelism is the overhead associated with synchronization, and that's something that can never be done away with. From the limited info I've read on Cell, it does have some pretty novel ways of dealing with data transfer (super-high-speed buses, etc.), but how they'll work in practice remains to be seen.
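To make the synchronization point concrete, here's a toy Python sketch (obviously nothing like actual Cell code): four threads share one resource behind a lock, so even though there are four workers, every access is serialized through that lock, exactly like multiple cells arbitrating for one shared bus:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Each worker does n units of 'work' on a shared resource."""
    global counter
    for _ in range(n):
        with lock:          # only one thread may hold the lock at a time,
            counter += 1    # so these updates happen strictly one-by-one

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- correct, but the lock made the updates effectively serial
```

The result is correct precisely *because* the lock forces the threads to take turns, and taking turns is exactly the overhead that keeps parallel speedup sub-linear.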