Dave Baumann from B3D suggests Cell is a failure

Jonnyram said:
Why did Sony go with nVidia and not ATI?
I think there are many factors that influence who gets the deal. I want to point out that just because ATI is leading the performance segment in the PC space doesn't mean Nvidia isn't capable of coming up with a competitive part for a console system. We don't know what Sony is contributing or what kind of money is being spent. Perhaps, the deciding factor wasn't even hardware. Nvidia owns a lot of patents that might be interesting to Sony and has better OpenGL and Linux drivers which are definitely interesting to Sony.
 
One reason Sony may have gone with nVidia rather than ATI is OpenGL performance.

ATI is still lagging in this area, and it's an API Sony seem committed to.

Another possibility is ATI's lack of Linux experience compared to nVidia; they're really only just starting to ramp up Linux development.

*prays for X.org 6.8.1 support in next week's Linux driver release*
 
CueTheMusic said:
Like the 6800 Ultra? :) I wouldn't really say one company is ahead of the other right now. It seems to depend more on how a specific game was coded.

I tend to agree. Looking at benchmarks (admittedly only those on AnandTech), ATi's new chips announced in the last couple of weeks are only now putting them on par with, or slightly ahead of, NVidia's much older 6800 Ultra. They each outperform the other in different games/benchmarks. There seems to be a perception that ATi are "ahead" in the PC graphics card space at the moment, but I wouldn't say that's accurate (especially from a features POV).
 
Jonnyram said:
Now that Pana is finally here, I have a question.
Why did Sony go with nVidia and not ATI?
For the past year or so, ATI have really started to own the PC GFX card market, and it doesn't look like the fleeting lead that Diamond Stealth and nVidia once had either; they could be at the top for some time. So why did Sony pick the #2 GFX card maker?

I don't mean to insinuate anything, by the way. It could be that nVidia has fallen behind in the PC market because they are putting a lot of effort into making something that blows ATI away. Just wondering if anyone had a proper idea.


To quote Dave:

Talks between Sony and other vendors had been ongoing for some time, but the finalisation of the deal was likely very recent. Speaking to an analyst yesterday, it seems Sony's approach was "yes, we're listening to others, but at present that's not our preferred route and there's no money on the table yet"; that wasn't particularly attractive to ATI when they already had two console projects on the boil - it appears they did drop the ball on following the progress of the preferred solution and getting in there at the right time. Given that NVIDIA were out of the other two consoles, they followed Sony's progress much more closely.

I think this is a fair assessment of the situation.

Remember, the first priority for Ken Kutaragi's SSNC is to produce most if not all of the chips used in Sony's products in Sony-owned fabs, to keep those fabs full :).

Then the second priority is to leverage in-house R&D, but that does not prevent them from licensing IP and collaborating with experienced partners like IBM, Toshiba and nVIDIA if it makes technical and business sense.
 
I know my post has lost a bit of focus in the whole Sony/NVIDIA discussion. However, I do believe what was said in the PR about NVIDIA sharing their IP and whatnot. I don't necessarily think the fact that Sony went with NVIDIA is direct proof that CELL is a failure; there is another reasonable explanation. (How reasonable, I'll leave to you.) I believe that CELL, while mighty and powerful, isn't necessarily the best solution for the kind of work a GPU does, based on what I know about multiprocessing...


From the little I know, CELL is all about multiprocessing. When I think of multiprocessing, I think of all the things that must be implemented to have a fully functional CELL architecture that's executing 80 to 85 percent of the time.

Key things I think of to get the most out of the CELL architecture:

An efficient and highly optimised algorithm for process scheduling. For example, Process A has a core but requires I/O, so it is interrupted: the state of the process is saved, a context switch is performed, and the core goes to work on the next process in the queue while the I/O completes, effectively keeping the cores busy.
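
To make that concrete, here's a toy round-robin sketch in C++ of the kind of thing I mean. Everything here (the process names, the slice counts, the structure itself) is made up for illustration; it's not how CELL actually schedules anything.

Code:
// Toy illustration of keeping a core busy across blocking I/O.
// Purely hypothetical -- not how CELL schedules anything.
#include <deque>
#include <iostream>
#include <string>

struct Process {
    std::string name;
    int remaining_slices;   // work left, in scheduler time slices
    bool needs_io;          // will block on I/O after its next slice
};

int main() {
    std::deque<Process> ready   = {{"A", 3, true}, {"B", 2, false}, {"C", 1, false}};
    std::deque<Process> blocked;                    // processes waiting on I/O

    while (!ready.empty()) {
        Process p = ready.front();                  // pick the next runnable process
        ready.pop_front();

        --p.remaining_slices;                       // run one time slice on the core
        std::cout << p.name << " runs one slice\n";

        if (p.needs_io) {                           // process issues an I/O request:
            std::cout << p.name << " blocks on I/O, context saved\n";
            p.needs_io = false;
            blocked.push_back(p);                   // save its state, switch it out,
        } else if (p.remaining_slices > 0) {        // and the core takes the next job
            ready.push_back(p);                     // preempted: back of the ready queue
        }                                           // (finished processes just drop out)

        if (ready.empty() && !blocked.empty()) {    // pretend the pending I/O completed
            std::cout << blocked.front().name << " I/O done, ready again\n";
            ready.push_back(blocked.front());
            blocked.pop_front();
        }
    }
}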

Moving along, the basic idea is that the scheduling of the multiple processes handled by the cores will have to be preemptive, to keep the cores working most of the time and reduce the time they sit idle.

Especially note that work has to be done to get the most out of your memory. (Why load a whole process when you only want parts of it? Loading partial processes, just the parts you need, lets you keep more processes in memory at the same time for faster access and execution; this technique effectively makes better use of your memory. It's basically similar to virtual memory. Of course page faults might hinder you, but the trade-off is the benefit of having more processes in memory than the physical limit would otherwise permit. I also imagine they would have algorithms to keep page faults as infrequent as possible.)
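
Something like this little C++ sketch is the idea: only the pages that actually get touched are brought into a small "physical" memory, with the oldest page evicted when it's full. Again, the sizes, access pattern and FIFO policy are all made up for illustration; nothing here is CELL-specific.

Code:
// Toy demand-paging illustration: pages are only brought into the small
// "physical" memory when they are actually touched. Made-up sizes, not CELL.
#include <deque>
#include <initializer_list>
#include <iostream>
#include <unordered_set>

constexpr int kPhysicalFrames = 4;      // pretend only 4 pages fit in memory at once

int main() {
    std::unordered_set<int> resident;   // pages currently in "physical" memory
    std::deque<int> fifo;               // eviction order (oldest first)
    int faults = 0;

    // The process touches far more distinct pages than fit at once.
    for (int page : {0, 1, 2, 0, 3, 4, 1, 5, 0}) {
        if (resident.count(page)) {
            std::cout << "page " << page << " hit\n";
            continue;
        }
        ++faults;                                    // page fault: load on demand
        if ((int)resident.size() == kPhysicalFrames) {
            int victim = fifo.front();               // evict the oldest page
            fifo.pop_front();
            resident.erase(victim);
            std::cout << "evict page " << victim << ", ";
        }
        resident.insert(page);
        fifo.push_back(page);
        std::cout << "fault on page " << page << "\n";
    }
    std::cout << faults << " faults for 9 accesses\n";
}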

I'll stop there as it gets even more involved but those two examples should be enough.

Keep in mind, I only have experience at the general computing level with multiple microprocessors and with multiprocessing on single-processor operating systems. This is just an educated guess at how CELL will handle processes, and I'm only going on the detail that CELL's cores are general processing units. (If not... tee hee hee, oh well.)

Anyway, the point I want to get across is that most of the stuff I described above, plus the other things done to optimise multi-CPU systems, is not practical for graphics processing, and thus shouldn't be practical for CELL either. So going with NVIDIA doesn't necessarily mean that CELL is a failure, just that CELL's "true power" isn't best spent on graphics processing, especially if someone else has dedicated graphics tech that can match or beat it with less effort. This all assumes a multi-core CELL cluster behaves much like a multi-CPU cluster.


Hope I contributed something....
 
Nice post marsomega :).

IMHO, nVIDIA's and ATI's GPUs are not too different from the basic idea in CELL; it's just that their ALUs are multi-threaded and hide latency automatically by switching between vertices or pixels, depending on whether we are talking about VS ALUs or PS ALUs.

So yes, they do end up saving their context (the pixel has to wait) in a way, as they will come back to that pixel (or vertex) when the texture fetch/sampling is finished (or whatever other operation was stalling the unit).
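
A crude way to picture that latency hiding in C++ terms: one ALU keeps a handful of pixels in flight and always works on whichever one isn't waiting on its texture fetch. All the numbers (latency, pixel count, ops per pixel) are invented, and no real GPU scheduler looks like this; it's just to show the switching idea.

Code:
// Crude model of a shader ALU hiding texture-fetch latency by switching
// between in-flight pixels instead of stalling. Hypothetical, not real HW.
#include <iostream>
#include <vector>

struct Pixel {
    int fetch_ready_at = 0;   // cycle when its pending texture fetch completes
    int work_left = 2;        // ALU ops still to run after the fetch
    bool fetch_issued = false;
};

int main() {
    constexpr int kFetchLatency = 8;          // pretend a texture fetch takes 8 cycles
    std::vector<Pixel> pixels(4);             // 4 pixels in flight on one ALU
    int busy_cycles = 0, done = 0, cycle = 0;

    for (; done < (int)pixels.size(); ++cycle) {
        for (Pixel& p : pixels) {             // find a pixel that can make progress now
            if (p.work_left == 0) continue;
            if (!p.fetch_issued) {            // issue its texture fetch, then move on
                p.fetch_issued = true;
                p.fetch_ready_at = cycle + kFetchLatency;
                ++busy_cycles;
                break;
            }
            if (cycle >= p.fetch_ready_at) {  // fetch done: run one ALU op on it
                if (--p.work_left == 0) ++done;
                ++busy_cycles;
                break;
            }                                 // otherwise its context stays saved and
        }                                     // we try the next pixel instead of stalling
    }
    std::cout << "ALU busy for " << busy_cycles << " of " << cycle << " cycles\n";
}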

APUs are also more akin to super-shader ALUs than to general processors like, say, a MIPS R10K or a Pentium 4: they are optimised mainly for SIMD processing, with scalar processing coming in at 1/4th the throughput of peak SIMD performance.
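
The 1/4 figure is simply what falls out of a 4-wide unit running one lane's worth of work per issue. An illustrative C++ way to count it (this is obviously not APU/SPU code, and the array sizes are arbitrary):

Code:
// Why scalar code gets ~1/4 of a 4-wide SIMD unit's peak: each "instruction"
// on the unit can do 4 float ops, but scalar code only fills one lane.
// Illustrative C++ only -- not APU/SPU code.
#include <array>
#include <cstddef>
#include <iostream>

using Vec4 = std::array<float, 4>;

Vec4 add4(const Vec4& a, const Vec4& b) {       // one SIMD issue: 4 useful flops
    return {a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]};
}

int main() {
    constexpr std::size_t n = 1024;             // 1024 floats to add
    std::array<float, n> a{}, b{}, out{};
    std::size_t simd_issues = 0;

    for (std::size_t i = 0; i < n; i += 4) {    // vectorised loop: 4 adds per issue
        Vec4 r = add4({a[i], a[i + 1], a[i + 2], a[i + 3]},
                      {b[i], b[i + 1], b[i + 2], b[i + 3]});
        for (int k = 0; k < 4; ++k) out[i + k] = r[k];
        ++simd_issues;
    }
    std::size_t scalar_issues = n;              // same work done one lane at a time

    std::cout << simd_issues << " SIMD issues vs " << scalar_issues
              << " scalar issues -> scalar runs at ~"
              << (double)simd_issues / scalar_issues << " of peak\n";   // ~0.25
}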

Recent IBM patents such as this one: http://appft1.uspto.gov/netacgi/nph...=50&co1=AND&d=PG01&s1=kahle&OS=kahle&RS=kahle and others suggest that CELL is indeed multi-threaded, as the ISSCC presentation papers suggested: the DMA engine in each APU/SPU complex can handle and keep track of multiple outstanding DMA transactions initiated by the APU/SPU, and each APU can be time-shared by multiple threads/processes (the context-switching penalty will of course be higher when switching to a thread from a different process).
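
Multiple outstanding DMA transfers are what make the classic double-buffering trick work: kick off the fetch of the next chunk while you crunch the current one. Here's a toy C++ sketch of that overlap, with std::async standing in for the DMA engine; the chunk sizes and the fetch_chunk function are made up, and the real APU/SPU interface is of course nothing like this.

Code:
// Toy double-buffering sketch: start the "DMA" of the next chunk while
// computing on the current one, so the transfer latency is hidden.
// std::async stands in for the DMA engine -- purely illustrative.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

std::vector<float> fetch_chunk(int index) {        // pretend this is a DMA from main RAM
    return std::vector<float>(256, float(index));  // 256 floats of dummy data
}

int main() {
    constexpr int kChunks = 8;
    double total = 0.0;

    // Kick off the first transfer before entering the loop.
    std::future<std::vector<float>> pending =
        std::async(std::launch::async, fetch_chunk, 0);

    for (int i = 0; i < kChunks; ++i) {
        std::vector<float> current = pending.get();             // wait for chunk i
        if (i + 1 < kChunks)                                     // start chunk i+1 now,
            pending = std::async(std::launch::async, fetch_chunk, i + 1);
        total += std::accumulate(current.begin(), current.end(), 0.0);  // overlap compute
    }
    std::cout << "sum = " << total << "\n";        // 256 * (0+1+...+7) = 7168
}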



Look at this patent from nVIDIA (all nVIDIA guys there):

http://appft1.uspto.gov/netacgi/nph...="bastos+rui"&OS="bastos+rui"&RS="bastos+rui"

Look at the Images section... the first time I saw it I said "CELL ?!?"... architectural principles in processor design for media-processing oriented parts are indeed converging. (I am not saying that nVIDIA's next-generation PC GPU will be CELL based, as we know it will not be, nor that this shows the customised version of that GPU they will work on with SCE will be CELL based; I do not think it will be, at the moment. It is that the ideas behind efficient computing for media processing are shaping up to a similar answer once you take targeted performance, power consumption, etc. into consideration.)

P.S.: the patents were found first by nAo :).
 
Ruud_Luiten said:
This is Sony you are talking about and sony is awesome!
Nintendo should be out of business!!!1

Uhm... I do think you have a point, but sony isn't debatable at GaF. Sony never makes mistakes!

I thought Cell was the CPU and nVidia was for the GPU. Different things.....


What does this have to do with Nintendo?
 
Panajev, I'm not particularly well-versed in the language of patents and don't really have time right now to wade through the patent-speak to get to the technical details. If there's a post or something that gives an in-depth description of what these two patents do, I'd be very grateful, thanks.
 
Elios83 said:
So, how about this? Do you think that Cell failed to deliver on Sony's part and that this new alliance proves a defeat for Sony's initial intentions?

I think the more likely scenario is that Sony realized the PS3 needed to be easy to program, and bringing nVidia into the project helped that angle immensely.
 
DaveBaumann, posted Thu Dec 09, 2004 (from the B3D forums):

Quote:
What do you think about the 18 months comment? Do you think perhaps that's just how much time they had been working on their next-generation GPU anyway, and they can use it as a convenient "been working on this with Sony for 18mo" excuse?


Partially, yes, that probably is how long their own architecture has been underway, but I'd also say that lines of communication have been open for as long if not longer; I'm sure that NVIDIA would have been very keen to have their hand in there even if it wasn't at a hardware level - the idea of being frozen out of all the next generation consoles would not be a good one, as this is where a significant quantity of development, some of which will end up on the PC, will be done.

Quote:
I'm just curious why you seem so convinced they missed their performance target. Just from the general information available, that seems like one possible explanation for going with nVidia, but certainly not the only one.


It may not necessarily be solely their performance target, but the targets for what they could achieve with their preferred solution in terms of general capabilities for graphics use - NVIDIA may have made a very convincing argument that it may not be achievable via that route and that going NVIDIA's route would be far safer (re: the post about development trends towards fragment shading).

(BTW - something that may have helped NVIDIA's relationship with Sony, and given them a decent line in, is that they hired one of Sony Computer Entertainment's Dev Rel managers a few years back. When Chris Donnelly left as NVIDIA's head of Developer Relations (only to crop up at MS later on), NVIDIA backfilled that position with the guy from Sony.)

Interesting.
 
I think people get confused by the term GPU. In the classic PC sense, the GPU is the rasteriser/shader etc. *and* the vertex transformer. This is due to the architecture, and the desire to decouple from the slower PC bus.

PS2 had the EE do the transformation and feed the transformed vertices to the GS for texturing/shading/rasterising.

PS3 with Cell as EE and NVidia as GS sounds like exactly the same model.
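
If you wanted to caricature that split in code, it's roughly this. The interfaces and names below are entirely hypothetical (there's no such thing as submit_to_rasteriser anywhere); it's just to show where the work would land on each side.

Code:
// Caricature of the EE/GS (and possibly Cell/nVidia) split: the CPU side does
// the vertex transforms, the "GS" side only sees pre-transformed triangles to
// texture/shade/rasterise. Hypothetical interfaces, made-up names.
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<std::array<float, 4>, 4>;

// "EE/Cell side": transform object-space vertices by a 4x4 matrix (w assumed 1).
std::vector<Vec3> transform_vertices(const std::vector<Vec3>& in, const Mat4& m) {
    std::vector<Vec3> out;
    out.reserve(in.size());
    for (const Vec3& v : in) {
        out.push_back({m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3],
                       m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3],
                       m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]});
    }
    return out;
}

// "GS/GPU side": stand-in for the rasteriser; it never sees untransformed data.
void submit_to_rasteriser(const std::vector<Vec3>& screen_space_tris) {
    (void)screen_space_tris;   // texturing / shading / rasterising happens here
}

int main() {
    std::vector<Vec3> mesh = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};  // one triangle
    Mat4 world_view_proj = {{{1, 0, 0, 0},                        // identity, for the demo
                             {0, 1, 0, 0},
                             {0, 0, 1, 0},
                             {0, 0, 0, 1}}};

    std::vector<Vec3> transformed = transform_vertices(mesh, world_view_proj);  // CPU work
    submit_to_rasteriser(transformed);                                          // GPU work
}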
 
Dave Baumann putting somewhat negative spin on Sony/Playstation and nVidia news??! What an Earth shattering surprise! :lol
 
More like a strong Ati supporter. I think he has some friends working at Ati, or he used to work for them. He's knowledgeable about the industry, but most of his posts of this kind are more educated speculation than real information, from what I've seen so far.
 
mrklaw said:
I think people get confused by the term GPU. In the classic PC sense, the GPU is the rasteriser/shader etc. *and* the vertex transformer. This is due to the architecture, and the desire to decouple from the slower PC bus.

PS2 had the EE do the transformation and feed the transformed vertices to the GS for texturing/shading/rasterising.

PS3 with Cell as EE and NVidia as GS sounds like exactly the same model.

I think that too.
Sony will take the classic CPU + rasterizer approach (with pixel shaders this time) and leave all the vertex transformations to the Cell processor. The nVidia GPU will be an advanced rasterizer with shaders. Leaving the vertex processing elements out of the GPU will at the same time allow more embedded RAM on the chip.
I can't imagine a CPU like Cell, designed for massive floating-point performance, being left to handle only physics and AI.
Now the question for the tech experts is: is this kind of approach feasible when you have to handle so much polygon data? Will the bandwidth between CPU and GPU be a limit in this case?
 
Elios83 said:
I think that too.
Sony will take the classic CPU + rasterizer approach (with pixel shaders this time) and leave all the vertex transformations to the Cell processor. The nVidia GPU will be an advanced rasterizer with shaders. Leaving the vertex processing elements out of the GPU will at the same time allow more embedded RAM on the chip.
I can't imagine a CPU like Cell, designed for massive floating-point performance, being left to handle only physics and AI.
Now the question for the tech experts is: is this kind of approach feasible when you have to handle so much polygon data? Will the bandwidth between CPU and GPU be a limit in this case?

I don't know what kind of bus the PS3 will use, but if it's going to use PCI Express (since that's what nVidia's next-gen GPU would have run on..) then I doubt it will work that way; otherwise PC GPUs would have offloaded their vertex operations onto the CPU as well..

Besides, unless there are enough CELL processors in the PS3 to make it the equivalent of an Athlon 64 9000+ or something, I doubt the thing can push vertex transformations *and* AI + physics and the like at the same time. Sounds like too much load to me.

But then, that's just my uneducated speculation..
 
Marconelly said:
More like a strong Ati supporter. I think he has some friends working at Ati, or he used to work for them. He's knowledgeable about the industry, but most of his posts of this kind are more educated speculation than real information, from what I've seen so far.

Doesn't that describe 99% of what everyone's saying, both positive and negative?
 
tahrikmili said:
I don't know what kind of bus the PS3 will use, but if it's going to use PCI Express (since that's what nVidia's next-gen GPU would have run on..) then I doubt it will work that way; otherwise PC GPUs would have offloaded their vertex operations onto the CPU as well..

Besides, unless there are enough CELL processors in the PS3 to make it the equivalent of an Athlon 64 9000+ or something, I doubt the thing can push vertex transformations *and* AI + physics and the like at the same time. Sounds like too much load to me.

But then, that's just my uneducated speculation..


EE did all the vertex transformations, and one of the keys to the PS2 architecture was the bus between EE and GS. It was a machine designed to deal with polys.

CELL is designed to be way more powerful - like 100x more than the EE. So it's completely feasible (likely, IMO) that it will deal with polys and physics etc. Don't forget that polys are the bread and butter of games. Physics is a relatively small CPU load in comparison - certainly not enough to need a CELL.

Good comment about leaving the transform engine off - NVidia know how to pack in the transistors, so either that brings the cost down or leaves room for embedded RAM as mentioned - sounds good to me.
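
On Elios83's bandwidth question, the back-of-the-envelope maths is easy to run yourself. Every number in this snippet is an assumption pulled out of the air for illustration - it is not a PS3 spec:

Code:
// Back-of-the-envelope check on CPU->GPU vertex traffic. Every number here is
// an assumption for illustration, not a PS3 spec.
#include <iostream>

int main() {
    const double frames_per_sec   = 60.0;
    const double polys_per_frame  = 10e6;   // assume 10 million polygons per frame
    const double verts_per_poly   = 1.0;    // assume good stripping/indexing
    const double bytes_per_vertex = 32.0;   // assume packed position + colour + UVs

    double bytes_per_sec = frames_per_sec * polys_per_frame
                         * verts_per_poly * bytes_per_vertex;

    std::cout << "Vertex traffic: " << bytes_per_sec / 1e9 << " GB/s\n";   // ~19.2 GB/s
}

Whether that sort of figure is a problem obviously depends entirely on what bus actually sits between Cell and the nVidia chip, which nobody outside Sony/nVidia knows yet.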
 
It now seems likely that the Cell CPU will handle all the geometry / lighting / vertex / polygon transformations and calculations, then pass that data off to the custom Nvidia GPU for pixel processing / rasterization / image processing. Though I am not saying that is exactly how PS3 will work; it's just a reasonable possibility.

Leaving the vertex shaders / geometry processing off the GPU leaves more room for pixel processing / pixel shading / pixel pipelines, hardwired graphics features, eDRAM, etc.
 
tahrikmili said:
ATi's contract may have prohibited them from collaborating with Sony, or they may have found it pretty much impossible to work with Sony without violating their NDAs with MS, or maybe they simply didn't have enough workforce to develop for Sony in addition to Nintendo, MS, and graphics and chipsets for PCs and mobiles.. ATi do have their hands full these days.

1. Why would that block Sony and not Nintendo? I don't doubt it, but that's an interesting thought; form your own opinions there...

2. More high caliber contracts = more jobs

3. You don't stay in business by refusing it

4. Nvidia probably lowered their bid, and is looking to re-establish the market share lost this generation. Whether or not they will use this as a chance to prove their worth is beyond me, but I would think so.
 
Then the second priority is to leverage in-house R&D, but that does not prevent them from licensing IP and collaborating with experienced partners like IBM, Toshiba and nVIDIA if it makes technical and business sense.

True, and Sony knows that it is very badly outclassed by ATI and Nvidia in the area of graphics processing. Sony's strengths include integration, process technology (ever smaller feature sizes), manufacturing, bandwidth and consumer electronics. Sony is weak compared to Nvidia and ATI in graphics chip design; even the graphics chip makers behind Nvidia and ATI (PowerVR, S3, XGI, 3DLabs) are way ahead of Sony, imho.

Sony was able to get away with in-house graphics chip designs with the PS1 'GPU' (not a full GPU, just *called* the GPU) and pretty much with the PS2 GS (an upgraded PS1 GPU with much more parallelism + eDRAM), but for PS3 it seems Sony is admitting they need help.


If you look at Sega and Nintendo and their ability to design their own graphics processors, we could say they are both at level 1 or 2. Sony's ability to design their own graphics processors is maybe a 10 or 11. ATI and Nvidia are at 100. I know this is a bad comparison, but it's enough to help people understand, imo.


Yes, Sony would have been able to design its own GPU in-house, or in-house with the assistance of its main partner, Toshiba. But Sony saw the light; they decided to go with a custom Nvidia GPU that will be a joint Sony-Nvidia effort (maybe Toshiba is still helping too).

If we can agree that Nvidia is more or less at parity with ATI (as opposed to Sony by themselves or with Toshiba), we can now say that PlayStation 3's graphics quality will be unbeatable by rival consoles (Revolution, Xenon) or by the PC - at least until newer generations of graphics chips arrive (NV70, R800, etc).
 
WasabiKing said:
1. Why would that block Sony and not Nintendo? I don't doubt it, but that's an interesting thought; form your own opinions there...

2. More high caliber contracts = more jobs

3. You don't stay in business by refusing it

4. Nvidia probably lowered their bid, and is looking to re-establish the market share lost this generation. Whether or not they will use this as a chance to prove their worth is beyond me, but I would think so.

1. Probably because their contract with Nintendo pre-dates the contract with MS..

2. Yes, but qualified personnel do not grow on trees; ATI has been recruiting rather rapidly lately through job fairs and has even posted its recruitment efforts on its homepage..

3. I can't object to this one :D

4. Probably true.
 