The 7 Myths of the Cell Processor

McFly

http://games.slashdot.org/comments.pl?sid=138810&cid=11616545

# The Cell is just a PowerPC with some extra vector processing.
Not quite. The Cell is 9 complete yet simple CPUs in one. Each handles its own tasks with its own memory. Imagine 9 computers, each with a really fast network connection to the other 8. You could probably treat them as extra vector processors, but you'd then miss out on a lot of potential applications. For instance, the small processors can talk to each other rather than work with the PowerPC at all.
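
If it helps, here's a toy sketch of that "9 computers on a fast network" picture, in plain Java threads and queues. Nothing below is a real Cell API; it just models cores with private memory that pass messages instead of sharing RAM:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model only: each "core" is a thread with its own private array
// (its local memory), and results travel over a queue (the "network").
public class NineComputers
{
    public static void main(String[] args) throws InterruptedException
    {
        BlockingQueue<float[]> link = new ArrayBlockingQueue<>(8);

        Thread smallCore = new Thread(() -> {
            float[] local = { 1f, 2f, 3f, 4f }; // private working set
            for (int i = 0; i < local.length; i++)
                local[i] *= 2f;                 // crunch locally
            link.offer(local);                  // ship the result out
        });

        smallCore.start();
        System.out.println(java.util.Arrays.toString(link.take()));
        smallCore.join();
    }
}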

# Sony will have to sell the PS3 at an incredible loss to make it competitive.
Hardly. Sony is following the same game plan as they did with the Emotion Engine in the PS2. Everyone thought they were losing $100-200 per machine at launch, but financial records have shown that, setting aside the initial R&D (the cost of which is hard to figure out), they were only selling the PS2 at a small loss initially, and were breaking even by the end of the first year. By fabbing their own units they took a huge risk, but they reaped huge benefits. Their risk and reward is roughly the same now as it was then.

# Apple is going to use this processor in their new machine.
Doubtful. The problem is that though the main CPU is PowerPC-based like current Apple chips, it is stripped down, and the Altivec support will be much lower than in current G5s. Unoptimized Apple code would run like a G4 on this hardware. They would have to commit to a lot of R&D for their OS to use the additional 8 processors on the chip, and redesign all their tweaked Altivec code. It would not be a simple port. A couple of years to complete, at least.

# The parallel nature will make it impossible to program.
This is half-true. While it will be hard, most game logic will be performed on the traditional PowerPC part of the Cell, and thus normal to program. The difficult part will be concentrated in specific algorithms, like a physics engine, or certain AI. The modular nature of this code will mean that you could buy a physics engine already designed to fit into the 128k limitation of the subprocessor, and add the hooks into your code. Easy as pie.
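
A hedged sketch of what "adding the hooks" might amount to. The interface below is entirely made up (no such middleware API exists yet), but it shows the shape of a drop-in module sized to the subprocessor's local memory:

// Hypothetical middleware boundary: the vendor guarantees the engine's
// code plus working data fit in the subprocessor's local store; the
// game only ever touches these three hooks.
public interface PhysicsEngine
{
    void uploadScene(byte[] packedBodies); // data DMA'd into local memory
    void step(float dtSeconds);            // runs entirely on the subprocessor
    byte[] downloadResults();              // positions/contacts streamed back
}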

# The Cell will do the graphics processing, leaving only rasterization to the video card.
Most likely false. The high-end video cards coming out now can process the rendering chain as fast as the Cell can, going by the raw specs: 256 GFLOPS from the Cell versus about 200 GFLOPS from video cards. In two years video cards will be capable of much more, and they are already optimized for this where the Cell is not, so video cards will perform closer to their theoretical limits.

# The OS will handle the 8 additional vector processors so the programmer doesn't need to.
Bwahahaha! No way. This is a delicate bit of coding that is going to need to be tweaked by highly-paid coders for every single game. Letting an OS predictively determine what code gets sent to which processor is insane in this case. The cost of switching out instructions is going to be very high, so any switch will need to be carefully considered by the designer, or the frame-rate will hit rock-bottom.
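
Rough arithmetic on why that switching cost is high: swapping what a subprocessor is doing means DMAing its couple hundred KB of local store out and the new code plus data in, roughly half a megabyte of traffic per swap. Even at tens of GB/s that's tens of microseconds each time, so a scheduler that bounced work around naively could burn a real slice of a 16.7ms frame on shuffling alone. (My back-of-envelope numbers, not anyone's published figures.)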

# The Cell chip is too large to fab efficiently.
This is one myth that could be correct. The Cell is huge (relatively), and given IBM's problems in the recent past with making large, fast PowerPC chips, it's a huge gamble on the part of all parties involved that they can fab enough of these things.

Looks like he got it right ... but what do I know. ;)

Fredi
 
McFly said:
# The Cell chip is too large to fab efficiently.
This is one myth that could be correct. The Cell is huge (relatively), and given IBM's problems in the recent past with making large, fast PowerPC chips, it's a huge gamble on the part of all parties involved that they can fab enough of these things.

Jon "Hannibal" Stokes said:
The entire Cell is produced on a 90nm SOI process with 8 layers of copper interconnect. The Cell sports 234 million transistors, and its die size is 221mm2. (This is roughly the size of the Emotion Engine at its introduction.) The PPC core's 32KB L1 cache is connected to the system L2 cache via a bus that can transfer 32 bytes/cycle between the two caches.
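
Worth putting that 32 bytes/cycle in absolute terms. The quote doesn't pin down the clock, but at the 4GHz-class speeds shown at ISSCC that bus would move 32 x 4 x 10^9 = 128GB/s between the two caches, or about 102GB/s at 3.2GHz. (My arithmetic, not Hannibal's.)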

BTW,

Two short Ars Technica articles on the CELL are up. Basically, like every other group trying to analyze the CELL, they seem to be hoping for more info soon.

http://arstechnica.com/articles/paedia/cpu/cell-1.ars

http://arstechnica.com/articles/paedia/cpu/cell-2.ars
 
For an article aimed at highlighting myths, they sure perpetuate some themselves. Well, one at least: the SPUs have 256k of local RAM, not 128k.
 
Here's another interesting article: http://www.poughkeepsiejournal.com/today/frontpage/stories/fr020805s1.shtml

''This is still the biggest chip technology advance in probably 20 years,'' said Richard Doherty, research director at Envisioneering Group in Seaford, Nassau County.

If anything, claims of a 10-fold leap in performance are understated, Doherty said. ''Our estimate is 10 to 20, so they're being conservative,'' he said.

He added that Cell's developers said they could have put 16 cores on the same-size chip if they had thought it necessary.

So don't rule out a two-PE, 16-APU Cell chip just yet, if Kutaragi is again willing to take some extra risks.

Fredi
 
# The OS will handle the 8 additional vector processors so the programmer doesn't need to.
Bwahahaha! No way. This is a delicate bit of coding that is going to need to be tweaked by highly-paid coders for every single game. Letting an OS predictively determine what code gets sent to which processor is insane in this case. The cost of switching out instructions is going to be very high, so any switch will need to be carefully considered by the designer, or the frame-rate will hit rock-bottom.

Not if EA has any say!
 
"# Sony will have to sell the PS3 at an incredible loss to make it competitive.
Hardly. Sony is following the same game plan as they did with the Emotion Engine in the PS2. Everyone thought they were losing $100-200 per machine at launch, but financial records have shown that, setting aside the initial R&D (the cost of which is hard to figure out), they were only selling the PS2 at a small loss initially, and were breaking even by the end of the first year. By fabbing their own units they took a huge risk, but they reaped huge benefits. Their risk and reward is roughly the same now as it was then."

.... what?!
 
DCharlie said:
.... what?!

Don't know where he heard that. I've seen "analysts" estimate that the PS2 cost over $500 to manufacture at the time of its launch, but I don't think anyone outside Sony really knows. A news article yesterday, though, put the PSP's manufacturing cost at 30,000 yen, apparently (it retails at 19,800 yen in Japan).
 
Just found this in my inbox.
Bwahahaaa! The more info that comes out about Cell, the more I like it. Reading some comments at Slashdot and Ars Technica has given me some real insight. Unlike your posters, who are the biggest bunch of technical idiots I have ever seen. What morons. Most everything posted about Cell at your forum is ignorant hyperbole. "Intel is doomed. Cell is 100 times more powerful. You cant even compare how powerful cell is. It's going to be on 45 nm process. Orders of magnitude more powerful than Xbox2". Like 12 year olds. It's amazing how technically stupid your forum is, seriously.

Anyways, like I said, Cell is anything from intriguing to a disaster. Some actual valid information I gleaned from forums that know what they're talking about RE Cell:

Current GPUs can already do 200 GFLOPS. Cell supposedly does 256. In other words, in 2 years a $150 video card will smoke Cell. AHAHAHAAA. Predictable. The PC architecture destroys another pretender. I knew it would. Put it this way, there was no way Cell wouldn't be outclassed by PCs within six months of its release. I knew that all along, which made the Cell hyperbole spewing from your forums more laughable.

Cell has theoretical performance ten times current PCs. This is the 8 SMD version. The 4 SMD version in PS3 would then be at most 5 times as powerful! At most! BWAHAHAHAAA PART TWO!!! 5 times for only the CPU (AT MOST), and all of a sudden that's pretty meaningless to XBOX2, especially considering another factor known all along, that CPUS DONT MATTER NEARLY AS MUCH AS GPUS IN GAMING!!!!! Another fact lost on your Sony trolls. All along in my mind this has made the Cell hype stupid! It's the CPU! I don't care if it's such a great CPU it cooks toast, it's still only the CPU!

Isn't it amazing how "orders of magnitude" becomes "5 times" (if you're lucky) once real specs of this turkey come out! How not surprising!

IBM wont release any meaningful Cell benchmarks so far. OOoooh, big surprise, not.

No one wants the Cell for mobile devices. It's stupid. It's overkill. There's no point. It's power hungry, too expensive, WHY? Just use a VIA low-power chip or something!!!

No one wants the Cell for desktop. From what I gather it will be a good multimedia CPU (IE PS3) but not good for general purpose desktop computing. OOps! No mobile, no desktop, No market!!!! BWAHAHAHAAAA.

I can just see the CPU establishment, Intel, AMD, Microsoft, laughing at this thing. They've got nothing to fear.

Send my condolences to Gofreak!! And tell him keep running that Sony damage control while I laugh at him BWAHAHAAAA!!!!! Even he knows it's not half as good as he thought.

Why am I sending this to your generic e-mail? I just felt the need to comment on the idiocy and, well, it's too much pain to get approved, and even if I was I'd just get banned at your forum for saying Cell ain't all it's cracked up to be. We know there's no free speech at your forum to tell the truth!

I will send another e-mail with good comments from Ars etc. You know, actual real knowledge instead of the technical IDIOCY at your forum!
 
You forgot to cite that article, you douche!

Sharp, Brad. "Cell a Disaster," Gaming-Age Staff Email. 2/9/05.
 
That about wraps it up for this thread.

Brad Sharp, you have managed to 0WN!!!1! the entirety of GAF for all eternity.
 
BuddyChrist83 said:
Just found this in my inbox.

Let's take a look.

Anyways, like I said, Cell is anything from intriguing to a disaster. Some actual valid information I gleaned from forums that know what they're talking about RE Cell:

Current GPUs can already do 200 GFLOPS. Cell supposedly does 256. In other words, in 2 years a $150 video card will smoke Cell.

The clueless moron that wrote this is from TXB. Which hardly lends him much credibility to start with. Mindless parroting of the drivel spewed by people who can't tell the difference between a CPU and a GPU puts paid to the rest. Game over.
 
SHUTUP GUYS!

Brad Sharp is going to read this and own us again! My ownzor burns still ache after the last time he totally rocked the forums.

:( :( :(
 
Bwahahaaa! The more info that comes out about Cell, the more I like it. Reading some comments at Slashdot and Ars Technica has given me some real insight. Unlike your posters, who are the biggest bunch of technical idiots I have ever seen. [...]

Send my condolences to Gofreak!! And tell him keep running that Sony damage control while I laugh at him BWAHAHAAAA!!!!! Even he knows it's not half as good as he thought. [...]


:lol :lol this guy has SOME beef with Cell and Gofreak, eh? what the hell? :lol
 
So a guy who can't post here because he's too goddamn stupid mails some of you with his "post"?

Comedy gold!
 
Myth #8: Cell is the second coming of the messiah and the best thing since sliced bread.

I'm getting tired of the hype. On paper the PS2 also looked awesome, and then it only had 4 MB of embedded VRAM + texture cache, horrible jaggies, and bad mip-mapping.

It's not all about the CPU, the whole package must fit together.
 
Myth #9: every news article is singing Cell's praises.


http://news.com.com/Cell+chip+Hit+o...46.html?part=rss&tag=5568046&subj=news.1006.5

Will the Cell processor be the new Itanium?

At the International Solid-State Circuits Conference on Monday, the joint developers of the long-awaited processor--Sony, Toshiba and IBM--unveiled a number of the details about it amid a surge of dramatic speculation. The New York Times said the chip could create "a new digital computing ecosystem that includes Hollywood, the living room and high-performance scientific and engineering markets."

Others speculated that the chip could drive everything from cell phones to servers, tying them into a grand computing grid.

"We believe a 10x performance over the PC, at the same power envelope, can be achieved," said IBM's Dac Pham, one of the designers of Cell. "It will usher in a new era of media-centered computing."

Intel's limping Itanium debuted with a similar level of fanfare. In 1994, the Microprocessor Report, examining the investment Intel planned to put behind the chip, predicted that it would become commonplace in desktops by 2004. It didn't happen.

Similarly, feelings ran high about the Emotion Engine, the microprocessor inside the original PlayStation 2 game console. Analysts said it could undercut chips from Intel and Advanced Micro Devices in PCs, and become the nerve center for DVD players and other home electronics. Toshiba even created a company, Artile, to license the Emotion.

But the Emotion Engine never migrated outside the PlayStation, and Toshiba snuffed out Artile in 2003. The PlayStation 2, meanwhile, didn't live up to the suggestion that it would serve as a conduit for movies, TV, e-mail and the Internet.

This sort of excitement and speculation about chips is driven by what I call the "Battlestar Galactica" principle. It goes as follows: If the domination of the universe isn't contested on a weekly basis, ratings will go down. Analysts, reporters, consumers and even executives need a gladiatorial contest to keep the job interesting.

The high-public profile of Sun Microsystems can partly be attributed to its role as the William Shatner of computing--donning a new uniform every three seasons to battle a new nemesis.

Put in that perspective, the Cell story starts to look different.

Going by papers presented at ISSCC, Cell looks like a tremendous achievement. However, this is the chip industry: Only a handful of companies--Samsung, Intel, Texas Instruments and Taiwan Semiconductor Manufacturing Co.--consistently produce profits. Most everyone else is seemingly always two steps away from the trailer park. Over the past few years, IBM Microelectronics has often reported quarterly losses. Cell will be a victory if it doesn't lead to layoffs.

In all likelihood, Cell will sell in far greater numbers than the just-as-trumpeted Itanium. Sony will put it into the PlayStation 3 video console. Unless gamers lose interest in stock cars, ninja stars and wiping out space aliens between now and 2006, that thing will sell. IBM and Toshiba will put it in products, too.

Still, whether the chip will be able to enter different markets is another question that hinges on factors such as:

Size: Cell contains 234 million transistors and takes up 221 square millimeters in the 90-nanometer production process. That's about double the size of the 90-nanometer 3.6GHz Pentium 4, with 112 square millimeters and 125 million transistors.

Big chips cost more to produce, can hide more bugs and can be tough to cram into portable devices. Cell will get cheaper when it goes to 65-nanometer production, but so will the alternatives.

Cost: Remember liquid crystal on silicon (LCOS)? The chip that would bring down the price of big-screen TVs? Intel and Brilliant Technologies failed at it. JVC and Sony succeeded. However, the latter two companies sell their LCOS chips to their own television units. The cost of the chip gets absorbed into the TV set.

Sony, Toshiba and IBM don't have to worry about the cost of Cell because they will sell it to themselves. It becomes part of a product that is tagged at a slightly higher price. An expensive Cell, however, will be a tough sell to any other manufacturers.

Alliances: Consumer electronics companies won't want to buy a processor from Sony and Toshiba. Similarly, not a lot of server manufacturers will line up to buy a Cell server chip from IBM. Why invite your rival to your top-secret design meetings?

Power: Cell will have to be air-cooled, IBM said. In other words, fans will probably be required. Ever talk on a cell phone with a fan?

While IBM didn't disclose the exact heat statistics, some at ISSCC said it could run as hot as 130 watts, more than most desktop and notebook chips. If Cell is in this range, kids will really be huddled around the PlayStation 3 at Christmas--for warmth.

On the cool engineering side, however, the chip will come with 10 digital heat sensors to warn of problems and another sensor to regulate temperature.

Memory: Cell comes with an integrated memory controller for high-performance XDR memory from Rambus--which means that the current design works exclusively with this pricey stuff. Sony used an earlier version of Rambus memory in the PlayStation 2, but it's been a tough sell outside of consumer electronics.

Cell is an outstanding achievement. But we have to wait and see whether it can get a job from someone other than its parents.
 
Haha, laugh it up. Brad Sharp
<@yahoo.com> to staff
11:51am (13 minutes ago)
OK, that last email was way too long and stupid. I just recommend you point your GAers to the Ars Technica discussions

Here: http://episteme.arstechnica.com/eve/ubb.x/a/tpc/f/174096756/m/275002379631

and here: http://episteme.arstechnica.com/eve/ubb.x/a/tpc/f/174096756/m/398003679631

They'll learn a lot more than they will from "Gofreak", I'll tell you that. And certainly, not all of it or even most bad about the Cell. Cell might be pretty good, I don't deny that. I'm just saying, the hype on your forum was just ridiculous. Cell is NOT going to revolutionize computing, destroy Intel and Microsoft, be "orders of magnitude" more powerful than anything else, or any of the other stupidity your forum was spewing.

Real Cell comments

Brad Sharp
<@yahoo.com> to staff
11:44am (20 minutes ago)
"Finally, IBM won't release performance benchmarks, but they do claim a 10X speedup over a PC in the same power envelope. Take this claim with a large grain of salt, however, because there's no context to it (i.e. on what type of application, vs. what kind of PC, etc. etc.)."

BWAHAHAAA!!!!!


"The way I figure it is that the PPC core, the PPE, is going to be a stripped down version of the POWER series of proccessors"

BWAHAHAAA. From what I've learned, again none of it from your forum of technical MORONS, the Cell is a "gatekeeper" PowerPC CPU coupled with 8 vector-processing-like units. Well, what this comment points out is that that gatekeeper CPU is going to be more like a shitty G4. Not even as good as a regular G4!! It won't even be G5 level, which itself gets smoked by Intel CPUs!!!! Granted, this is only the "main" CPU.

"So its just an in order chip with a bunch of parallel vector units?

Sounds like an amazing chip for DSP work, and scientific calculations, but kind of a let down for the desktop."

"Unless Hannibal's information was wrong, the controller core definitelly isn't anything special. P4 level clockspeed doesn't make up for the fact that it's a 2-way in order design. Keep in mind that this processor has to do the general purpose computing (where extra execution units, out of order execution, and a good branch predictor are important). Depending on how fast their logic is at 90nm and how simple the design is, the pipeline could be quite long also.

So far I think jason_watkins has done the best analysis. I don't really have anything else to add. In the end, I expect the Cell to be an excellent media processor. But it's not going to put Intel and AMD out of business anytime soon."

"Where exactly does all the supposed speed of the Cell come from? I assume the touted speed of 256Gflops is with all the SPEs doing 2 instructions continuously... how real world is this?"

"Also, it was cited somewhere (in the BF, I think...) that one of the G5's weaknesses is not having the best integer execution speed (while having great FP speed) and this was the reason it didn't run general use code too well, because most programs rely on integer math more than FP. But, it seems the Cell's SPEs are geared more for FP. Does this mean it will suffer when not doing FP friendly tasks?

To early for me to tell. By their very nature though SIMD unts solve a certian class of problems well. They are not general purpsoe divices. Now it does seem as though these new units have more capacity to sustain themselve than the run of the mill SIMD facility. The allusion to them being microcomputers should stand out in everyones mind. How those mircrocomputers support general purpose code is another issue all together."

"My main concern is that IBM might be pulling a PS/2. The PS/2 has a powerful distributed architecture capable of impressive performance, but alas it is also baroque and Sony did not supply an array of finished tools to harness all that power. The result is that a lot of games just poked along graphics-wise, coming nowhere near the potential of the hardware."

"Otherwise it seems to me (and like others pointing out) that moving the burden to programmers for this low level stuff like was done in the playstation 2, makes porting code and writing games problematic, and that's something you don't need. x86 tends to get used in embedded platforms because it's easy to program for, from what I understand."

"Am I the only one confused about how they're going to explain to "high end graphics workstation" buyers that a $4000 box has the same chip in it that their kid's $300 PS3 has?

huh?

Really cool stuff, but I'm afraid it looks so weird and specialized that it may be impossible to write decent code for. I remember when the "Emotion Engine" in the PS2 was so advanced that they needed permission from the EU to sell it, but GTA still looks like crap."

"Probably along the lines that Apple does business now. With Apple you get OS X on a 3000+ dollar box, were as you don't currently with the 400+ dollar Walmart PC that is nearly as powerfull.

:P"

"So IBM/Sony have taken the programmable vertex/pixel shaders out of your modern GPU and hung them, with a little bit of SRAM, off a PPC core... or am I missing something? I'm not sure this will live up to the hype, but I am looking forward to the next installment tomorrow..."

"To me, the big questions about how this would adapt to being in an Apple workstation depends on two things: does the PowerPC portion of the core provide decent performance for general use, and will IBM put the compiler tech that allows Cells to be used efficiently into GCC. If the answer to either of those is no, then this is going to be relegated to being a customized co-processor at best. My guess is that, no matter what's being said at this stage of the game, we won't know the answers on these for a year or so."

"Hi.

Why is no one talking about memory access? Doesn't it make the problem worse to have the PPC core trying to feed 8 SPEs and itself? The only scenario where the SPEs are good is computationally intensive work that isn't data intensive. Even the simplest GPUs today access at least 32 MB..."

"SPEs are more advanced than DSPs and other co-processor designs. They are full processors, albeit much simpler that the curretn GP processors. They are intended to be autonomous, execute their own code under their own supervision, no hand holding by the PowerPC master. Instead, there is occasional interaction where the PowerPG hands code chunks (APUlets) to SPEs and received results back from them.

In terms of application code, the graphcis processing is cited as the main user of this model; after all it is designed for PS3. However, keep in mind that SPEs are still general purpose, only optimized for number crunching.

The reason why they will benefit all kinds of computers is because all sorts of machines are doing more and more number crunching. For instance, desktop OSes are using more advanced graphics for their UI (Windows Avalon, Macintosh OSX,...) even for the standard 2D interfaces. Just maintaing the UI can consume great amount of CPU's ti! me. Web servers do a lot of text processing, which in effect is integer arithmetic. And so on.

Moreover, typical apps on desktops are changing. There is more and more audio/visual stuff, like music players, instant messaging, tomorrow even video phones,... running at the same time. If we offload all the number crunching onto SPEs then you free up the main processor (PowerPC in this case) and all of a sudden you don't need a 10 Ghz CPU!!! In fact, a simple 2 Ghz PowerPC chip with the help of SPEs could blow away the latest and fastest Intel/AMD offerings. That is the idea.

The problem is like I said, that current application will not automatically take advantage of SPEs. Remember that you C?/Java program starts with main() - 1 thread. They spawn additional threads, of course. The problem is that all threads use the single - main memory!!!! Instead of threads we'll have to program "APUlets". They are similar in many ways except:
1) APUlets has its own memory that is ! separate from main
2) APUlets have to the concept of a "master" with whom they interact (mainly send results back to them)

As an example, of course pure guessing on my part:

public class CRCApulet extends com.ibm.Apulet
{

    public void run()
    {
        // Pull the input out of local memory via the "context" facade
        byte[] inputData = (byte[]) getContext().getInput("data");

        // Perform the CRC computation entirely on the SPE
        java.util.zip.CRC32 crc = new java.util.zip.CRC32();
        crc.update(inputData);
        long crcResult = crc.getValue();

        // Send the result back to the PowerPC "master"
        getMaster().sendResult( toBytes(crcResult) );
    }

}

So you have an app running on the PowerPC master and it creates this little APUlet and gives it data (say file contents) in the form of byte array. Apulet executes and sends back the result. The "context" is a facade for the local memory and "master" for the PowerPC controller.

So the app could offload CRC calculation on SPEs and thus complete its processing faster!

This doesn't mean that all existing software has to be redone! Only the computationally intensive processing will need to be moved to apulets. The remaining code is fine."
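
To carry that poster's invented com.ibm.Apulet API one step further (equally pure guesswork; setInput, dispatch and awaitResult below don't exist anywhere), the master side of the exchange might look like:

// Hypothetical master-side code on the PowerPC, using the same invented
// API as the CRCApulet sketch above.
public class CRCMaster
{
    public static void main(String[] args) throws Exception
    {
        // Read a file on the PowerPC side; this is the "data" the apulet wants.
        byte[] fileContents =
            java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("some.file"));

        CRCApulet apulet = new CRCApulet();
        apulet.getContext().setInput("data", fileContents); // hand over the input
        apulet.dispatch();                                  // run on a free SPE
        byte[] crc = apulet.awaitResult();                  // wait for it to report back

        System.out.println(java.util.Arrays.toString(crc));
    }
}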

"'m also curious what the Nvidia GPU will be responsible for, like some others~
Why the confusion? The Nvidia GPU will render the graphics.

Of course you're really asking "Why use a GPU when you have all this new processing power available?"

Probably multiple reasons:

Most important reason is probably risk management: By using the Nvidia GPU, Sony does not have to port a rendering pipeline to an entirely new CPU architecture. It can't be trivial to rewrite OpenGL, or whatever Sony uses, to run on Cell. Sony gets an existing rendering implementation that they know works.

Second most important reason: Developer familiarity with using Nvidia GPUs. Why give developers one more hurdle in figuring out the platform.

This may also be a factor: Sony didn't spend all that time/effort/money on Cell just to get rid of the GPU. They want all that power to be available to developers to do other cool stuff with"

"That sure makes all of the shouting of "4GHz!" a lot more sensible. It tells me that the SPEs had better be put to good use though. Doesn't seem like a 2 way inorder design will be breaking into the supercomputing list anytime soon. Call me weird, but I'm almost more interested in this than I am in the SPEs right now. Until we see the SDKs IBM/Sony/whoever can provide to use them, it's certainly going to be difficult to judge performance."

"That's what i was trying to point. Great raw processing power, but how do you feed it with data? I think this approach will make bigger the gap between processor and memory."

"
Let me try. This is largely supposition and inference; I have not read Hannibal's sources. But I think I'm right.

Do not think about the SPE as a coprocessor. Those execute individual instructions. Do not think of the SPE as a processor in parallel with the PPE. The PPE is in front of the SPE, not in parallel with it. Do not think of the SPE as a processor slave of the PPE. While this is not necessarily inaccurate, it is not the best way to understand the SPE.

Think of the SPE as a whole computer in its own right. The SPE is the center of its own universe. It executes programs, and is particularly good at executing certain classes of number-crunching programs. It has its own private memory (the 256KB LS memory). It has an I/O controller (the DMAC).

I think it is important to realize that the DMAC is not to be thought of as a memory controller. It is an I/O controller.

In the early 80's we had I/O controllers to access hard drives and serial ports. These were high-latency devices that we needed to access in a non-blocking fashion, so that we could do other things with the processor while the I/O request completed. Memory, in contrast, was treated as fast. We waited for memory, because it was fast, and the waits were not long in terms of lost cycles.

Today, memory latency is huge, just like I/O latency was huge in the 80's. Memory access is no longer cheap. It is very expensive. We have maintained the illusion of fast memory using caches and other tricks, but the fact is, memory is slow.

The genius of the SPE, and the Cell in general, is that memory is fast again. Each SPE has a small, fast memory. It also has an I/O controller that treats main system memory like we used to treat hard disks.

The ISA will probably muddy much of this picture by treating memory in a conventional manner, wrt addressing. But it is my suspicion that apulets written with the above attitude in mind will make the best use of the system.

There is a reason they chose to present the SPE before the PPE. It is the core of the system. The PPE is a glorified bureaucrat. It will handle the high-level logic for the PS3, but the SPEs will handle the computation. I suspect this will include unit and enemy AI, graphics, sound, media encode/decode, vector math, and everything else that the next generation of Playstation game needs to think about. The PPE, in contrast, will spend its time dealing with network and mass storage I/O, process and apulet scheduling, user input, and other administrative tasks."

"nteresting post, thanks. Can you expand on these bits any more? Specifics about how the SPEs are general purpose? I was under the impression they were like beefed up vector units from the PS2 CPU."

"They're general purpose in the sense that they can execute their own thread, handle branch instructions, etc. But they're obviously not tuned to be fast for anything other than keeping the vector ALU's busy in FPU heavy code (ie games). It's probibly more fair to compare them to the shader cores in GPU's rather than call them general purpose cpu's in the sense of an SMP system"

"There's a lot of bullcrap being posted in this thread. This chip isn't meant for your word processor. It isn't meant for your laptop running OSX and some magic emulation software so that suddenly your iLife applications run at warp speed. It's targeted at exactly one application domain: media processing. And for that domain, it looks to be quite well designed IMHO.

I'd also mention that there have been attached processor cards for x86 for quite a while. They rarely get used outside of very nitch embeded applications. The most general application I'm aware of was for a "MIPS on a Card" rig that came out in the pentium classic era. You could right Lightwave 3D and a few specially written Photoshop filters on it. It, of course, flopped. I don't think intel has anything to fear from cell on the short term. And on the long term, nothing is stopping them from comming out with an x86 system chipset or cpu that has a bundle of attached vector processors with hardware me! mory sync management as well."

"I think it's fairly obvious that the SPE will not be handling graphics. The SPE units combined can only push about 256 GFLOPS max. Even is this theoretical amount could be attained, that only makes it about 50 GFLOPS more than a current top-of-the-line NVIDIA card. Since it's not set to debut for another 2 years inside the PS3, it would have a less-than-stellar debut, being equivalent to what would then be a $150 video card."

"I don't think any of us really expected a VMX/Altivec unit in the controlling processor. It still remains to be seen just how verstile the controlling core is. Before people go ga ga over it getting into a Mac, think of 2 things: first, applications would have to be rewritten to use the SPE's. Second, the space occupied by the SPE's could instead be used for more cache and a 2nd general purpose core, which should yield better performance on desktop applications anyhow.

Edit:

Hannibal just posted in the 2nd article that the controlling core is an in order design. Unless the G4 is a much crappier processor than I imagine, that means the cell controlling core is nothing really worth getting excited about from a mac point of view."

"http://pc.watch.impress.co.jp/docs/2005/0208/kaigaip046.jpg

tells us of a new *dual* controller with a rate of 25.6GBps @ 3.2 Gbps (?!).. they probably wrote this wrong.. (!).. it's probably ~ 3.2Gbytes/sec -> ~= 25Gbits/sec.. hmmm..


and since it's dual it is 6.4GBytes/sec..


hmm.. isn't this EXACTLY the i/o controller used on the G5?.. funny thing...


this message started as a quest to find out how the great thirst of the cell would be satisfied.

I now understand that it won't be satisfied (!). So where is the gain with this new architecture? I assume due to the fact that the Altivec on the G5 is almost useless but when it comes to using 8 simpler simd cores it might have a chance to come forward again with this new implementation."
"Strengths of the Cell includes:
1. Multiple SIMD/CPU/Core units - resulting in massive parallelism on a chip.
2. Extremely high bandwidth - 100 Gigabytes a second - compared to current desktops like the PowerMac G5 at 16 gigabytes a second.

Weaknesses:
1. Integer processing speed will only be similar to Intel CPUs unless you can do parallel SIMD work - e.g. with multimedia.
2. Only one full core - the PowerPC core.
3. The need to explicitly optimize code for the Cell, rather than just having the compiler do it - this is the problem with the Itanic processor.

It seems today that the most powerful CPUs are the GPUs by ATI and nVidia. These work similarly by having multiple parallel units. Apple with Mac OS X seems to be on the right track for doing multimedia by taking advantage of GPUs for offloading work from the CPU. I wonder if the PowerMac PowerPC CPU plus ATI/nVidia GPU may actually accomplish - for a personal computer - what the Cell is trying to do - with the advantage of having multiple PowerPC cores rather than one as the Cell does, in the future dual-core models.

The fact that the PowerPC core of the Cell can run up to 4 Ghz - in the lab - with a much shorter pipeline than the G5 - makes me question why IBM can't rev up the G5 past 3 Ghz. To me, IBM's inability to do so bodes poorly for being able to produce actual 4 GHz Cells - just as Intel reached its limits in processor speed."

:lol
 
No idea, but the email he provided matches one that was used for an account called X@X.com at MrCranky.com.

As well, he was branded a pedo over there.

SOOOOO
 
Looks like it. Deadmeat has been banned for semi-tech-literate trolling over at the Beyond3D forums at least twice since the Cell announcement. Poor thing.
 
iapetus said:
Looks like it. Deadmeat has been banned for semi-tech-literate trolling over at the Beyond3D forums at least twice since the Cell announcement. Poor thing.

Twice since this Monday: I think he has been banned a bit more than that overall at Beyond3D's forums ;).
 
While GFLOPS and MIPS aren't the end-all of general computing performance, outside of the lack of information about int performance and possible programming headaches *that can be fixed*, I don't see too many kinks in Cell's armor. If anything, it reinforces the notion that general CPUs and GPUs are converging toward the same thing. The email states that GPUs will smoke Cell soon, but at the same time those GPUs could be limited by a CPU that can't keep up. Coupled with this powerful CPU, it looks like NVIDIA will have a GPU that is kept well supplied and left idle minimally. Not to mention the ultra-fast memory controller that will keep latency down. I don't see where all this hate is coming from. I can understand reservations about it as a general CPU, but as a media device? Looks like it's gonna make a big impact to me.
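
For what it's worth, the 256GFLOPS figure is easy to reconstruct from the reported shape of the chip, assuming peak everything: 8 SPEs x 4-wide single-precision SIMD x 2 ops per lane (counting a fused multiply-add as two) x 4GHz = 256 x 10^9 FLOPS. Every term in that product is a best case, which is exactly why raw GFLOPS comparisons with GPUs only go so far.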
 
Nah, this guy seems a lot more crazy and misinformed than Deadmeat (yeah, I just said that!). When have you seen Deadmeat quoting such lowly sources as Ars Technica, Slashdot, or other people in general!? He makes (up) his own calculations dammit!
 
Marconelly said:
Nah, this guy seems a lot more crazy and misinformed than Deadmeat (yeah, I just said that!). When have you seen Deadmeat quoting such lowly sources as Ars Technica, Slashdot, or other people in general!? He makes (up) his own calculations dammit!

My favorite deadmeat moment of the past week:

http://www.beyond3d.com/forum/viewtopic.php?t=19815&postdays=0&postorder=asc&start=80
Automated Mech said:
Something is not right. Each CELL APU burns only 1 watt @ 0.9 V at 2 Ghz???? 11 watts at 5 Ghz??? If IBM had such technology, it can forget about making chips for a living, license that tech to Intel and make billions/year.

I will wait for the full set of slides posted to analyze CELL. Because this smells very fishy indeed.

http://www.electronicsweekly.co.uk/articles/article.asp?liArticleID=38754&liArticleTypeID=1&liCategoryID=1&liChannelID=114&liFlavourID=1&sSearch=&nPage=1
"The busses connect to the SPEs through local memory, 256kbyte for each SPE. The developers have tested the memories to 5.4GHz at 1.3V and 52°C."

4-5+ Ghz was the SRAM speed and not the ALU speed.

Now it makes perfect sense, CELL ALUs run at 1/4th the clock of the XDR input signal. In other words, that 4 Ghz input = 1 Ghz internal operating clock.

This is funny as hell. The whole processor industry has used upclocking (internal clock is X times higher than input clock) since the 486; SCEI is the first to use downclocking in recent history.

In other words, 4 Ghz XDR clock = 1 Ghz CELL ALU operating clock.

I have seen no evidence that suggests that CELL really runs at 5 Ghz. In fact, SCEI's refusal to claim 256 GFLOPS in the press release would suggest it does not. Kutaragi Ken is the kind of person who would do such a shameless thing if it was possible on paper, but even he does not do it.

All the transistor and thermal information on CPU core of CELL suggests it is indeed a sub 1.4 Ghz design. You will have to wait until the slides are posted at Japanese sites sometime tomorrow to make it official. IBM's own CPUs fail to clock past 2.5 Ghz, so why should I believe that a 5 Ghz processor exists???

The pretense of figuring such a thing out just made me laugh out loud :lol

"Now it makes perfect sense" :lol
 
Yeah, that's more like him, all right.

Funny thing is, he can't even pretend to write like someone else, despite changing the username.
 
They'll learn a lot more than they will from "Gofreak", I'll tell you that.

:o I don't think I've even brought most of the Cell news here!

And I see he's falling into the trap of reading speculation about Cell's suitability as a desktop processor, and inferring that all the negatives implied in that analysis apply to a videogame console as well :lol This is why certain people shouldn't be allowed to read certain websites. And I don't think anyone here spouted off about Cell as an x86 replacement! I for one would be the first to talk down such a possibility.
 
There seems to be more overzealous effort going towards pre-emptively downplaying the Cell chip that has been shown than there is going towards overhyping what's been announced. News outlets still seem to be getting the facts mixed up with earlier speculation, but forum chatter has been surprisingly low-key in overhyping this. AFAICS, at least.
 
kaching said:
There seems to be more overzealous effort going towards pre-emptively downplaying the Cell chip that has been shown than there is going towards overhyping what's been announced. News outlets still seem to be getting the facts mixed up with earlier speculation, but forum chatter has been surprisingly low-key in overhyping this. AFAICS, at least.

People who like Sony will notice the downplaying of Cell more.

People who don't like Sony will notice the hyping of Cell more.
 