marsomega said:
Running multiple OS's at the same time is not some new "mega wow" feature. It has always been there and has been done before on many computing clusters.
I am not aware that OS Virtualization (the buzzword for the ability to run multiple OS's at the same time) being supported in hardware on a single chip was ever underwhelming.
It is one of the best ways to learn, tweak, and tinker when studying the science of operating systems at the grad level. It is nothing new; anything with similar computing characteristics, such as CELL, can do it.
Look, many of us are not stating that CELL is a revolution; many of the concepts you will see deployed in new CPUs over the next 5-10 years are already out there, tinkered with in research labs, in universities, or at Xerox. What is nice about CELL is that many of those ideas, thanks to the improvements in manufacturing technology and the greater experience of CPU architects and circuit designers, are coming together in an architecture like CELL.
If company XYZ made the REYES Engine, a single-chip solution capable of rendering PRman shaders and scenes of the complexity of, say, Monsters, Inc., I would still applaud them even though, in a million years or so of rendering time, a C64 could have done the same thing: it is all about computing efficiency, power consumption, size, and manufacturing costs.
Much of what Pana mentions is pretty much out there if you take the time to read.
Did you seriously expect that I would make forum posts using sources that are not already public in some form, thus divulging those things (whether inside rumors or factual truths)? Not going to happen.
And I also have my curiosities in this area. For example, I know NVIDIA is going to take it up the rear and fully implement unified shaders. However, I'm curious as to why they would rather have two separate optimized parts to do all the work in their hardware.
The way I see it, it is mainly due to the latencies the two units are designed to sustain. Pixel Shaders have to tolerate high latencies more often than Vertex Shaders do: they write, sample, and filter textures (you can call them buffers instead of textures if you want) considerably more often than Vertex Shaders do, and thus they need to be able to multi-thread between pixels (pause one pixel, store its context in a temporary buffer, and move on to execute the shader on the next pixel while waiting for the stalled pixel's texture fetch to complete). Having 32-64 hardware contexts is not unheard of.
nVIDIA does not see the Vertex Shaders as having to bear such high latencies, and most likely wants to be able to keep its Vertex Shaders more compact and faster. The need for fast Vertex Shaders is still out there: the Pixel Shaders need something that sets up their work.
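Just to make the latency-hiding part concrete, here is a toy sketch of what I mean by multi-threading between pixels; the latency figure, shader length, and everything else in it are numbers I made up for illustration, not any vendor's actual design:

[code]
// Toy model of a pixel shader unit hiding texture-fetch latency by
// round-robin switching between many in-flight pixel contexts.
#include <cstdio>
#include <queue>
#include <vector>

struct PixelContext {
    int pixel_id;     // which pixel this context belongs to
    int pc;           // next shader instruction to execute
    int fetch_ready;  // cycle at which an outstanding texture fetch completes
};

int main() {
    const int kTextureLatency = 100; // made-up fetch latency in cycles
    const int kShaderLength   = 8;   // made-up shader length in instructions
    const int kNumContexts    = 32;  // "32-64 hardware contexts is not unheard of"

    std::queue<PixelContext>  runnable;
    std::vector<PixelContext> stalled;
    for (int i = 0; i < kNumContexts; ++i)
        runnable.push(PixelContext{i, 0, -1});

    for (int cycle = 0; !runnable.empty() || !stalled.empty(); ++cycle) {
        // Wake every pixel whose texture data has arrived by now.
        for (auto it = stalled.begin(); it != stalled.end();) {
            if (it->fetch_ready <= cycle) { runnable.push(*it); it = stalled.erase(it); }
            else ++it;
        }
        if (runnable.empty()) continue; // every pixel is waiting on memory

        // Execute one instruction of the front context's shader.
        PixelContext ctx = runnable.front();
        runnable.pop();
        bool samples_texture = (ctx.pc == 2); // pretend instruction 2 is a texture fetch
        ++ctx.pc;

        if (samples_texture) {
            // Pause this pixel, note when its data arrives, move to the next one.
            ctx.fetch_ready = cycle + kTextureLatency;
            stalled.push_back(ctx);
        } else if (ctx.pc < kShaderLength) {
            runnable.push(ctx); // rotate back into the pool
        } else {
            std::printf("pixel %d retired at cycle %d\n", ctx.pixel_id, cycle);
        }
    }
    return 0;
}
[/code]

The point is simply that while one pixel sits for ~100 cycles waiting on memory, the ALU stays busy shading the other contexts, which is why pixel units carry so much context storage and vertex units do not need to.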
What made their engineers go wide-mouthed, flashing the lights on and off and slapping their shoes on the wall, saying "we have a winner"? :lol
Likely they experimented, constructed models of what the technology would allow in the time-frame their new architecture would have to come out, estimated the resources needed for such a Unified Shader ALU, and saw that they could build a GPU that was amazingly fast (not slower than ATI's GPU, from their projected figures and analyses) with less money spent on R&D (both hardware and software).
ATI has the advantage now because they had a relative false step with the R3XX to R400/R420 transition: the R400 pulled away quite a bit of resources IMHO, and getting the R420 out in time was a significant effort; that also eased nVIDIA's recovery from the NV30 situation and let them turn the tables with parts such as the 6800 Ultra and the 6600 GT.
Likely nVIDIA, in the longer-term future, might move to such a Unified Shader ALU (if you think in REYES terms there is no such separation between Vertex and Fragment processing), but they feel that at this point in time they can produce a better GPU going their own way than by copying ATI. Remember when ATI was going around doing damage control against Shader Model 3.0 features because "their part that would be able to support them at full speed was not out yet"? Quite a scandal back then :lol.
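And since I brought up REYES: the whole unified-shader idea, boiled down, is just one pool of identical ALUs pulling whatever work is pending, vertex or pixel, instead of fixed split units. A toy sketch of that (again, my own simplification, not ATI's actual scheduler):

[code]
// Toy model of a unified shader pool: the same ALUs consume both vertex
// and pixel work, so neither kind of unit sits idle while the other is busy.
#include <cstdio>
#include <deque>

struct WorkItem {
    const char* kind; // "vertex" or "pixel"
    int id;
};

int main() {
    std::deque<WorkItem> work;
    // A frame typically starts vertex-heavy, then becomes pixel-heavy.
    for (int v = 0; v < 3; ++v) work.push_back({"vertex", v});
    for (int p = 0; p < 6; ++p) work.push_back({"pixel", p});

    const int kUnifiedALUs = 2; // made-up pool size
    for (int cycle = 0; !work.empty(); ++cycle) {
        for (int alu = 0; alu < kUnifiedALUs && !work.empty(); ++alu) {
            WorkItem w = work.front();
            work.pop_front();
            std::printf("cycle %d, ALU %d: shading %s %d\n", cycle, alu, w.kind, w.id);
        }
    }
    // With split units, the pixel hardware would idle during the vertex-heavy
    // phase (and vice versa); here the same ALUs soak up both kinds of work.
    return 0;
}
[/code]

That is the efficiency argument for ATI's approach; nVIDIA's bet, as I read it, is that two smaller specialized units still beat one bigger general one at this point in time.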