Official *** CELL processor announcements *** Thread

iapetus said:
Depends how high their screen resolution is. Eclipse is a bit ugly to work with on anything less than 1600 x 1200.

Eclipse is good with Java, and has gotten A LOT better in C++. It still has a long way to go though in terms of the fancy features. And I'm uncertain how Eclipse handles other repository systems outside of CVS.
 
rastex said:
Eclipse is good with Java, and has gotten A LOT better in C++. It still has a long way to go though in terms of the fancy features. And I'm uncertain how Eclipse handles other repository systems outside of CVS.

Something tells me SCEA is hard at work on Subversion :D.

;).
 
Iapetus said:
Depends how high their screen resolution is. Eclipse is a bit ugly to work with on anything less than 1600 x 1200.
Well, as soon as I get a monitor that can refresh at 100+ Hz in that res, I'll be OK with it. Can you believe that in this day and age, CRTs capable of that virtually don't exist?
 
I'm afraid we're gonna have to wait till March for that :/

I'd really like to see all this obscure tech talk explained with some demos :)
 
Panajev2001a said:
Something tells me SCEA is hard at work on Subversion :D.

;).

Don't know what that is...

One of the really cool things that SCEA is pushing is COLLADA, which is an industry-wide standardized asset format. Depending on how popular it gets, it could really simplify the whole pipeline and asset-management scheme, which would really help smaller companies.
 
Surely Eclipse should be nicely compatible with ClearCase too? :P (though I wouldn't put it past IBM if it weren't).

And yeah, COLLADA is cool - it's got nice backing so far from some of the big names in content-creation tools too, which actually makes it a pretty viable format.
 
Ryudo said:
Rambus? Nice high bandwidth, but sucky high latency as well - at least that's what it was like when it was first released.
AFAIK, RDRAM's latency problems were exacerbated by the memory interface on the Intel chipsets. The EE had the memory controller on-chip, so it fared much better. Anyway, the memory system being used here will clown anything on the PC side... period. PEACE.
 
rastex said:
Eclipse is good with Java, and has gotten A LOT better in C++. It still has a long way to go though in terms of the fancy features. And I'm uncertain how Eclipse handles other repository systems outside of CVS.

Well, it's been a while since I've had any involvement with Eclipse as a development environment per se, but I'm sure it supports multiple source management systems - ClearCase at least, and almost certainly others. There are plugins available for Subversion (Subclipse), AccuRev, Stellation, and others.
 
DonasaurusRex said:
The interesting thing is that you could feasibly put a PCIe card with a Cell chip on it and use it as a general coprocessor, tee hee. Some people are saying that the CELL won't perform very well in serial applications due to the lack of main memory access. We'll see. No matter who loses, we win; I think processing power for the masses is about to explode.

I'm sure that's possible. The only problem it will bump up against is oversaturation of the PCI bus. Remember, you've got lots of other things running on that bus as well. Times, they are a-changin', though. If STI is as successful as they're hoping, the x86 crowd will have to pick up the slack.

Lots of games left to be played, though. There's just no way this can NOT be interesting.
 
Panajev2001a said:
I am a bit less worried about the compiler, the quality of the other development tools, and the documentation: might it be that having IBM helps SCE/Sony and Toshiba in the job of producing nice PlayStation 3 SDKs?


I think it will. I also think that having Nvidia, Cg, and OpenGL can't hurt.
 
HokieJoe said:
I'm sure that's possible. The only problem it will bump up against is oversaturation of the PCI bus. Remember, you've got lots of other things running on that bus as well.
PCI Express doesn't share its bandwidth the way PCI does. Each lane has 2.5 Gbps of dedicated bandwidth in each direction.
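
To put numbers on that, here's a quick sketch of the per-lane math, assuming first-generation PCIe signaling with 8b/10b encoding (which is where the quoted 2.5 Gbps raw rate comes from):

Code:
#include <stdio.h>

int main(void) {
    /* PCIe 1.x: 2.5 gigatransfers/s per lane, per direction.
       8b/10b line coding puts 10 bits on the wire per data byte. */
    double raw_gbps  = 2.5;                        /* raw signaling rate       */
    double data_gbps = raw_gbps * 8.0 / 10.0;      /* usable bits after 8b/10b */
    double mb_s      = data_gbps * 1000.0 / 8.0;   /* MB/s per lane, one way   */

    for (int lanes = 1; lanes <= 16; lanes *= 2)
        printf("x%-2d link: %6.0f MB/s each direction\n", lanes, mb_s * lanes);
    return 0;
}

So even a x1 slot gives a dedicated 250 MB/s each way, and a x16 slot 4 GB/s - none of it shared with other devices.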
 
All this talk about Eclipse. I tried it for Java programming on a G4 workstation. I hated it. Then again, that was just Java, but I can't deny how miserable I was with Eclipse.

soundwave05 said:
So is that it for CELL announcements? No fancy Ridge Racer girl or rubber ducky tech demos?

Nope, this is just CELL. We have to wait for NVIDIA's GPU for that.
 
GhaleonEB said:
Okay the bar has been set. Get hopping, MS. :)
Not really. We still have no idea what the actual specs for the PS3 chipset are. This whole discussion is just a product roadmap for the Cell architecture. You're not going to see the same component in a DVD player as in a workstation.

So there isn't much for MS to react to.
 
Panajev2001a said:
Uhm... I disagree, it is much worse to have a good CPU paired with a super-super GPU than the opposite: rendering is not all you do ;).

No, it's not all you do - but there is a ratio past which you have a lot of CPU sitting around idle most of the time. Since the CPU apparently isn't involved in any rendering at all, the most it's doing is scene graph management, physics, sound, etc. I wouldn't want Cray-supercomputer power for those tasks - it would be wasted, especially since it would drive my hardware costs up significantly.
 
BUT WHAT DOES IT ALL MEAN!?

 
VS.NET has gotten a bit too 'feature rich' for my liking. I preferred the VS6 IDE, but the compiler wasn't standards-compliant. The VS7 compiler is better, but the IDE is a monstrosity. Ah well, at least there's always the command line and makefiles, or... open-source IDEs.
 
No, it's not all you do - but there is a ratio past which you have a lot of CPU sitting around idle most of the time. Since the CPU apparently isn't involved in any rendering at all, the most it's doing is scene graph management, physics, sound, etc. I wouldn't want Cray-supercomputer power for those tasks - it would be wasted.
PS3 CPU should be doing most if not all of the vertex processing - so that kinda involves it in the rendering :P
Besides, there's stuff like fancier compression algorithms that eat up massive amounts of computation resources (actually, that's one of my primary target areas of research for PS3), and physics simulation can be quite a bottomless pit if you really have power to waste :D
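
For anyone wondering what "vertex processing" means in practice: it's essentially the multiply-add routine below, run for every vertex, every frame. A plain scalar C sketch (not any actual PS3 API, just the shape of the workload):

Code:
/* Transform one vertex by a 4x4 matrix (column-major).
   A game does this for every vertex, every frame - pure
   multiply-add work, exactly what SIMD units eat up. */
typedef struct { float x, y, z, w; } vec4;

vec4 transform(const float m[16], vec4 v) {
    vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}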

VS.NET has gotten a bit too 'feature rich' for my liking. I preferred the VS6 IDE, but the compiler wasn't standards-compliant. The VS7 compiler is better, but the IDE is a monstrosity.
Well, I kinda agree - it both improves and breaks things from 6, so it's really a mixed bag :P
I wasn't a big fan of 1.52, I hated 4.0, I hated 5.0 a little less, I rather liked 6, and now I like .NET a bit less than 6.
 
Has OpenGL ES actually been confirmed as being a part of the PS3 toolchain or are we still making assumptions about the toolchain?
 
What Japanese word does Babelfish translate as tip/chip? I once auto-translated a sprite-drawing tutorial, and tip/chip popped up everywhere. Automated translations from Japanese are really quite poor. :(
 
>>> And an absurdity in overkill, considering that the CELL is secondary to the nVidia hardware for rendering. Having an uber CPU that so overpowers the GPU would just be wasteful IMO. Like putting a P4 in a machine with a Rage128 video card <<<

With a powerful enough CPU, you wouldn't need 3D acceleration. Software renderers are much, much better than hardware-accelerated ones, anyway.
Ideally, you'd have a CPU (MANY teraflops) capable of rendering complex Mental Ray scenes or RIBs (PRMan) in 1/60th of a second or less, coupled with a 2D graphics chip.
 
TAJ said:
With a powerful enough CPU, you wouldn't need 3D acceleration. Software renderers are much, much better than hardware-accelerated ones, anyway.

Whoa, what? A hardware accelerator is just a special-purpose CPU. And what are you talking about with software renderers being "much, much better than hardware-accelerated ones"? Both a CPU and a hardware accelerator just do the mathematical operations of the graphics pipeline. Perhaps you'd like to clarify what you're talking about?
 
What are we likely to see in the PlayStation 3 CPU, which should be a second-generation implementation of the Cell architecture, compared to the first-generation implementation we're seeing this week?

Just a smaller, slightly faster, tweaked version of the chip we see today, or a new chip with multiple parallel second-generation Cell processors?
 
soundwave05 said:
So is that it for CELL announcements? No fancy Ridge Racer girl or rubber ducky tech demos?


No, that stuff should come in March - kinda like in 1999: at ISSCC we just got a presentation of the Emotion Engine with no demos, then in March the full PS2 chipset was shown with demos.
 
Pimpwerx said:
That's nothing, really. The beauty of the architecture lies in its scalability and software hierarchy. The power gains are meant to come from stacking PEs. So some of the FUD from today is that the PS3 will probably see 2 of these cores (PEs) in its CPU. That would be pretty badass, IMO. But we'll see how they solve the power-handling problems. People wigged out at the 85°C temp, but I thought that was without a fan. I assume there was a heatsink, or something like it, though. We'll see.

Oh yeah, Rambus gets a lot of stick for a company that's still developing great new products. 100 GB/s? :O That's just bananas. External bandwidth is really picking up. :) Can't wait to see what GPU is gonna accompany this monster. With that much bandwidth, they'd better not fuck up again like they did with the GIF bus. PEACE.



I like your post ^___^ sounds good.


But if it's only one PE, they could stuff an extra 8 APUs (now called SPEs) in there, since Cell is meant to be scalable - not just scalable in that you can add more PEs, but scalable in that you can build PEs with more APUs/SPEs.
 
Also, how many PEs are used to make that 16 TFLOPS workstation? Well duh - probably 64 PEs, right?

can I have a PS3Cubed pretty please :D
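
Rough math on the 64-PE guess, treating the figures floating around (8 SPEs per PE, 4 GHz, a 4-wide single-precision multiply-add per cycle, i.e. 8 FLOPS/cycle per SPE) as speculation rather than spec:

Code:
#include <stdio.h>

int main(void) {
    /* Back-of-the-envelope: speculative figures, not official specs. */
    double spes      = 8.0;    /* SPEs per PE                          */
    double ghz       = 4.0;    /* clock quoted at ISSCC                */
    double flops_clk = 8.0;    /* 4-wide single-precision multiply-add */
    double pe_gflops = spes * ghz * flops_clk;       /* = 256          */

    printf("Per PE: %.0f GFLOPS\n", pe_gflops);
    printf("PEs for a 16 TFLOPS workstation: %.1f\n",
           16000.0 / pe_gflops);                     /* = 62.5         */
    return 0;
}

62.5 rounds up to a nice power-of-two 64, which is presumably where that guess comes from.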
 
[kaigai008.jpg: wafer shot of the Cell die]


Looks like Cell is dual-core plus extra (eDRAM?) in this shot. However, that's one focking big-ass die, like 600mm^2-700mm^2. This is ridiculously large, and I'm guessing it's some sort of development or testing chip on the wafer, and the actual production chip is the separate one on top.
 
Ars Technica

http://arstechnica.com/articles/paedia/cpu/cell-1.ars


Introducing the Cell — Part I: the SIMD processing units

By Jon "Hannibal" Stokes
Introduction

The Cell processor consists of a general-purpose PowerPC processor core connected to eight special-purpose DSP cores. These DSP cores, which IBM calls "synergistic processing elements" (SPE), but which I'm going to call "SIMD processing elements" (SPE) because "synergy" is a dumb word, are really the heart of the entire Cell concept. IBM introduced the basic architecture of the SPE today, and they're going to introduce the overall architecture of the complete Cell system in a session tomorrow morning.

In this brief overview, I'm first going to talk in some general terms about the Cell approach — what it is, what it's like, what's behind it, etc. — before doing an information dump at the end of the article for more technical readers to chew on and debate. Once the conference is over and I get back to Chicago and get settled in, I'll do some more comprehensive coverage of the Cell.
Back to the future, or, what do IBM and Transmeta have in common?

It seems like aeons ago that I first covered Transmeta's unveiling of their VLIW Crusoe processor. The idea that David Ditzel and the other Transmeta cofounders had was to try to re-do the "RISC revolution" by simplifying processor microarchitecture and moving complexity into software. Ditzel thought that out-of-order execution, register renaming, speculation, branch prediction, and other techniques for latency hiding and for wringing more instruction-level parallelism out of the code stream had increased processors' microarchitectural complexity to the point where way too much die real estate was being spent on control functions and too little was being spent on actual execution hardware. Transmeta wanted to move register renaming, instruction reordering and the like into software, thereby simplifying the hardware and making it run faster.

I have no doubt that Ditzel and Co. intended to produce a high-performance processor based on these principles. However, moving core processor functionality into software meant moving it into main memory, and this move put Transmeta's designs on the wrong side of the ever-widening latency gap between the execution units and RAM. TM was notoriously unable to deliver on the initial performance expectations, but a look at IBM's Cell design shows that Ditzel had the right idea, even if TM's execution was off.

IBM's Cell embodies many of the "RISC redivivus" principles outlined above, but it comes at these concepts from a completely different angle. Like TM, IBM started out with the intention of increasing microprocessor performance, but unlike TM, simplifying processor control logic wasn't the magic ingredient that would make this happen. Instead, IBM attacked from the very outset the problem that TM ran headlong into: the memory latency gap. IBM's solution to the memory latency problem is at once both simple and complex. In its most basic form IBM's Cell does what computer architects have been doing since the first cache was invented — Cell moves a small bit of memory closer to the execution units, and lets the processor store frequently-used code and data in that local memory. The actual implementation of this idea is a bit more complicated, but it's still fairly easy to grasp.
Eliminating the Instruction Window

If you've read my series on the Pentium and the PowerPC line or my introduction to basic computer architecture fundamentals, then you're familiar with the concept of an instruction window. I don't want to recap that concept here, so check out this page if you're not familiar with it before moving on.

[figure2.gif: the three phases of microprocessor development]


The diagram above shows the development of the microprocessor divided into three phases. The first phase is characterized by static execution, where instructions are issued to the execution units in the exact order in which they're fed into the processor. With dual-issue machines like the original Pentium, two instructions that meet certain criteria can execute in parallel, and it takes a minimal amount of logic to implement this very simple form of out-of-order execution.

In the second phase, computer designers included an instruction window, increased the number of execution units in the execution core, and increased the cache size. So more code and data would fit into the caching subsystem (either L1 or L1 + L2), and the code would flow into the instruction window where it would be spread out and rescheduled to execute in parallel on a large number of execution units.

The third phase is characterized by a massive increase in the sizes of the caches and the instruction window, with some modest increases in execution core width. In this third phase, memory is much farther away from the execution core, so more cache is needed to keep performance from suffering. Also, the execution core has been widened slightly and its units have been more deeply pipelined, with the result that there are more execution slots per cycle to fill.

This increased number of execution slots per cycle means that the processor has to find yet more instruction-level parallelism in the code stream, a necessity that gives rise to a massively increased instruction window (i.e., rename registers, reorder buffer entries, and reservation stations). Now take a look at the diagram below. Notice how all of that control logic associated with the instruction window makes up a huge proportion of the logic in the processor.

[figure3.gif: instruction-window control logic as a proportion of the processor]


Such control logic took up a vanishingly small amount of space in the early static-issue RISC designs like the PPC 601. Of course, back when RISC was first introduced, "control logic" meant "decode logic," since there was no instruction window on those early designs. So RISC reduced the amount of control logic by simplifying the instruction decoding process; this left more room for execution hardware and storage logic in the form of on-die L1 cache.

The end result is that there is this massive amount of control logic that now sits between the processor's cache and its execution core, just as there is a massive amount of latency that sits between the cache and main memory. This control logic eats up a lot of die space and adds pipeline latency, in return for extracting extra parallelism from the code stream.

Now let's switch gears a moment and look at the issues I raised in my recent Moore's Spring post. The diagram below represents fundamentally the same phenomenon as the diagram in that post, but from a perspective that should look familiar to you.

[figure5-small.gif: memory moving further from the execution hardware as processor counts grow]


The evolution charted above shows how memory moves further and further away from the execution hardware, while the amount of execution hardware increases (in the form of added processors). What I've tried to illustrate with this diagram and the preceding ones is that there is a homology between the growth of on-die control logic that intervenes between the cache and the execution core and the growth of memory latency. The result is that a trend at the system level is somewhat replicated at the level of the microprocessor. Now let's take a look at a single Cell SPE.


[figure6.gif: block diagram of a single Cell SPE]


As you can see, IBM has eliminated the instruction window and its attendant control logic, in favor of adding more storage space and more execution hardware. A Cell SPE doesn't do register renaming or instruction reordering, so it needs neither a rename register file nor a reorder buffer. The actual architecture of the Cell SPE is a dual-issue, statically scheduled SIMD processor with a large local storage (LS) area. In this respect, the individual SPUs are like very simple, PowerPC 601-era processors.

The main differences between an individual SPE and an early RISC machine are twofold. First, and most obvious, is the fact that the Cell SPE is geared for single-precision SIMD computation. Most of its arithmetic instructions operate on 128-bit vectors of four 32-bit elements. So the execution core is packed with vector ALUs, instead of the traditional fixed-point ALUs. The second difference, and this is perhaps the most important, is that the L1 cache has been replaced by 256K of locally addressable memory. The SPE's ISA, which is not a VMX/Altivec derivative (more on this below), includes instructions for using the DMA controller to move data between main memory and local storage. The end result is that each SPE is like a very small vector computer, with its own "CPU" and RAM.
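
To make "operates on 128-bit vectors of four 32-bit elements" concrete, here's a generic C sketch of the bread-and-butter operation (these are not actual SPE intrinsics, which haven't been published; it's just the shape of the computation):

Code:
/* One 128-bit SIMD register worth of data: four 32-bit floats. */
typedef struct { float f[4]; } vec128;

/* Fused multiply-add: rt = ra * rb + rc, applied across all four
   lanes at once. On real SIMD hardware this is a single instruction,
   not a loop - the loop here just spells out the lane-wise math. */
vec128 vmadd(vec128 ra, vec128 rb, vec128 rc) {
    vec128 rt;
    for (int i = 0; i < 4; i++)
        rt.f[i] = ra.f[i] * rb.f[i] + rc.f[i];
    return rt;
}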

This RAM functions in the role of the L1 cache, but the fact that it is under the explicit control of the programmer means that it can be simpler than an L1 cache. The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design.
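
In practice, "the burden of managing the cache has been moved into software" means the programmer writes something like the double-buffering sketch below. The dma_get/dma_wait calls and the tag scheme are hypothetical stand-ins for whatever the real SDK ends up exposing:

Code:
/* Double-buffered streaming through a 256K local store (sketch).
   While the core crunches one buffer, the DMA engine fills the other,
   hiding main-memory latency behind computation - the job an L1
   cache's tag/prefetch hardware would otherwise do. */
enum { CHUNK = 16 * 1024 };
static float buf[2][CHUNK / sizeof(float)];   /* lives in local store */

void process(float *p, int n);                            /* the actual math */
void dma_get(void *ls, unsigned long ea, int bytes, int tag); /* hypothetical */
void dma_wait(int tag);                                       /* hypothetical */

void stream(unsigned long ea, int nchunks) {
    dma_get(buf[0], ea, CHUNK, 0);            /* prime the pipeline   */
    for (int i = 0; i < nchunks; i++) {
        int cur = i & 1, nxt = cur ^ 1;
        if (i + 1 < nchunks)                  /* kick off next fetch  */
            dma_get(buf[nxt], ea + (unsigned long)(i + 1) * CHUNK, CHUNK, nxt);
        dma_wait(cur);                        /* wait for current one */
        process(buf[cur], CHUNK / sizeof(float));
    }
}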

The SPE's very simple front end can take in two instructions at a time, check to see if they can operate in parallel, and then issue them either in parallel or in program order. These two instructions then travel down one of two pipes, "even" or "odd," to be executed. After execution, they're put back in sequence (if necessary) by the very simple commit unit, and their results are written back to local memory. The individual SPUs can throw a lot overboard, because they rely on a regular, general-purpose PowerPC processor core to do all the normal kinds of computation that it takes to run regular code. The Cell system features eight of these SPUs all hanging off a central bus, with one 64-bit PowerPC core handling all of the regular computational chores. Thus all of the Cell's "smarts" reside on the PPC core, while the SPUs just do the work that's assigned to them.
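
Conceptually, that pairing check reduces to something like the toy model below (the real pipe-assignment rules and encodings are specific to the hardware; this is just the logic in miniature):

Code:
/* Toy model of a dual-issue check: two instructions go out together
   only if they want different pipes and the second one doesn't read
   the first one's result. */
typedef struct {
    int even_pipe;        /* 1 = even pipe, 0 = odd pipe */
    int target;           /* destination register        */
    int src[3];           /* source registers            */
} insn;

int can_pair(insn a, insn b) {
    if (a.even_pipe == b.even_pipe)
        return 0;                     /* both want the same pipe */
    for (int i = 0; i < 3; i++)
        if (b.src[i] == a.target)
            return 0;                 /* b depends on a's result */
    return 1;                         /* issue both this cycle   */
}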

To sum up, IBM has sort of reapplied the RISC approach of throwing control logic overboard in exchange for a wider execution core and a larger storage area that's situated closer to the execution core. The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU takes up the slack, doing the kind of scheduling and resource-allocation work that the control logic used to do.
The technical dirt

Now that the big picture is out of the way, here's the raw technical info for those who care. (Note that I'm following the order of the abstract from the program, which will hopefully become accessible on the web sometime soon.) The 256K LS on the SPUs is just a very simple, flat address space with no multiuser support built in. So there's no way to segregate out pages for use by users with different levels of access. This helps simplify the LS design by keeping complexity to a minimum. The LS is accessed in 16-byte or 128-byte lines, and instructions are fetched from it in groups of 32 four-byte instructions.

The various clients for the LS use a cycle-by-cycle arbitration scheme, where DMA takes first priority, loads and stores take second priority, and instruction fetch is third. The instruction format is a 32-bit fixed-length format, with up to three sources and one target. Here's a sample opcode layout for a floating-point multiply-add:

OP | RT | RB | RA | RC
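
With a 128-entry register file, each register field needs 7 bits, so a 4-bit opcode plus four 7-bit register fields fills the 32-bit word exactly. A decoding sketch in C - the field widths here are inferred from the register count, not taken from a published encoding:

Code:
#include <stdint.h>

/* Decode a 32-bit multiply-add word laid out as OP | RT | RB | RA | RC.
   4 + 7 + 7 + 7 + 7 = 32 bits; field order per the layout above,
   field widths inferred (128 registers => 7 bits each). */
typedef struct { unsigned op, rt, rb, ra, rc; } madd_insn;

madd_insn decode(uint32_t w) {
    madd_insn d;
    d.op = (w >> 28) & 0xF;   /* 4-bit opcode      */
    d.rt = (w >> 21) & 0x7F;  /* target register   */
    d.rb = (w >> 14) & 0x7F;  /* source B          */
    d.ra = (w >> 7)  & 0x7F;  /* source A          */
    d.rc =  w        & 0x7F;  /* source C (addend) */
    return d;
}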

Once the instructions are in the SPE, the SPE's control unit can issue up to two instructions per cycle, in order. The SPE has a 128-entry register file (128 bits per entry) that stores both floating-point and integer vectors. As stated above, there are no rename registers. All loop unrolling is done by the programmer/compiler using this very large register file.
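
Unrolling against a big register file looks like this in miniature (plain C): the independent partial sums let a statically scheduled core keep its pipelines full without any reorder hardware.

Code:
/* Sum an array, unrolled 4x. The four partial sums live in four
   separate registers and don't depend on one another, so an in-order
   core can overlap them freely. With 128 registers you can unroll
   much deeper than this. */
float sum_unrolled(const float *a, int n) {   /* n divisible by 4 */
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int i = 0; i < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}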

Note also that the register file has six read ports and two write ports. The SPEs can do forwarding and bypass the register file when necessary. The SPE has a DMA engine that handles moving data between main memory and the register file. This engine is under the control of the programmer as mentioned above. Each SPE is made of 21 million transistors: 14 million SRAM and 7 million logic. Finally, the instruction set for the SPEs is not VMX compatible or derivative, because its execution hardware doesn't support the range of instructions and instruction types that VMX/Altivec does.

Conclusion

There's a whole lot more to say about Cell, but that will have to wait until later. Tomorrow, after the next Cell session, I'll cover more of the Cell's basic architecture, including the mysterious 64-bit PowerPC core that forms the "brains" of this design.
 
HyperionX said:
[kaigai008.jpg: wafer shot of the Cell die]


Looks like Cell is dual-core plus extra (eDRAM?) in this shot. However, that's one focking big-ass die, like 600mm^2-700mm^2. This is ridiculously large, and I'm guessing it's some sort of development or testing chip on the wafer, and the actual production chip is the separate one on top.

Aren't you the guy who thought there were only 16 possible outputs for the PSP analog nub? :)
 
Holy shit my brain is melting. Someone please translate this into "Toy Story graphics" or whatever it means :P
 
PS3 won't be doing Toy Story graphics - not in its wildest dreams. Try thinking more along the lines of PS1/PSone prerendered CG FMV scenes. At best.
 
This RAM functions in the role of the L1 cache, but the fact that it is under the explicit control of the programmer means that it can be simpler than an L1 cache. The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified.

The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU takes up the slack, doing the kind of scheduling and resource-allocation work that the control logic used to do.

All loop unrolling is done by the programmer/compiler

Hmm... also I think they are making the compiler open source so that people can alter it to make it better. I am not sure if that is a good sign.

It sounds like they stripped a lot of things out to make it fast, but really dumped much of the complexity onto the OS/compiler people, which will ultimately push it onto the console programmer.
 
Hey guys, I just got a free iPod! One of my friends told me about this site, and I signed up for it, http://www.freeiPods.com/?r=14754629 and referred a few friends and did one of their offers. Then about 3 weeks later (due to a backup at their storage facility), I received a FREE IPOD! Unbelievable! You guys got to try this.

- Mike
 
For chrissakes, aren't you mods gonna stop him? Although it was funny when he requested someone to make a thread for him, hahah
 
Sony chip to transform video-game industry

TECHNOLOGY ENVISIONS ALL-IN-ONE BOX FOR HOME

By Dean Takahashi

Mercury News

Sony's next-generation video-game console, due in just two years, will feature a revolutionary architecture that will allow it to pack the processing power of a hundred of today's personal computers on a single chip and tap the resources of additional computers using high-speed network connections.

If key technical hurdles are overcome, the "cell microprocessor" technology, described in a patent Sony quietly secured in September, could help the Japanese electronics giant achieve the industry's holy grail: a cheap, all-in-one box for the home that can record television shows, surf the Net in 3-D, play music and run movie-like video games.

Besides the PlayStation 3 game console, Sony and its partners, IBM and Toshiba, hope to use the same basic chip design -- which organizes small groups of microprocessors to work together like bees in a hive -- for a range of computing devices, from tiny handheld personal digital assistants to the largest corporate servers.

If the partners succeed in crafting such a modular, all-purpose chip, it would challenge the dominance of Intel and other chip makers that make specialized chips for each kind of electronic device.

"This is a new class of beast," said Richard Doherty, an analyst at the Envisioneering Group in Seaford, N.Y. "There is nothing like this project when it comes to how far-reaching it will be."

Game industry insiders became aware of Sony's patent in the past few weeks, and the technology is expected to be a hot topic at the Game Developers Conference in San Jose this week. Since it can take a couple of years to write a game for a new system, developers will be pressing Sony and its rivals for technical details of their upcoming boxes, which are scheduled to debut in 2005.

Ken Kutaragi, head of Sony's game division and mastermind of the company's last two game boxes, is betting that in an era of networked devices, many distributed processors working together will be able to outperform a single processor, such as the Pentium chip at the heart of most PCs.


With the PS 3, Sony will apparently put 72 processors on a single chip: eight PowerPC microprocessors, each of which controls eight auxiliary processors.

Using sophisticated software to manage the workload, the PowerPC processors will divide complicated problems into smaller tasks and tap as many of the auxiliary processors as necessary to tackle them.

"The cell processors won't work alone," Doherty said. "They will work in teams to handle the tasks at hand, no matter whether it is processing a video game or communications."

As soon as each processor or team finishes its job, it will be immediately redeployed to do something else.

Such complex, on-the-fly coordination is a technical challenge, and not just for Sony. Game developers warn that the cell chips do so many things at once that it could be a nightmare writing programs for them -- the same complaint they originally had about the PlayStation 2, Sony's current game console.

Tim Sweeney, chief executive of Epic Games in Raleigh, N.C., said that programming games for the PS 3 will be far more complicated than for the PS 2 because the programmer will have to keep track of all the tasks being performed by dozens of processors.

"I can't imagine how you will actually program it," he said. "You do all these tasks in parallel, but the results of one task may affect the results of another task."


But Sony and its partners believe that if they can coordinate those processors at maximum efficiency, the PS 3 will be able to process a trillion math operations per second -- the equivalent of 100 Intel Pentium 4 chips and 1,000 times faster than the processing power of the PS 2.

That kind of power would likely enable the PS 3 to simultaneously handle a wide range of electronic tasks in the home. For example, the kids might be able to race each other in a Grand Prix video game while Dad records an episode of "The Simpsons."

"The home server and the PS 3 may be the same thing," said Kunitake Ando, president and chief operating officer of Sony, at a recent dinner in Las Vegas.

Sony officials said that one key feature of the cell design is that if a device doesn't have enough processing power itself to handle everything, it can reach out to unused processors across the Internet and tap them for help.

Peter Glaskowsky, editor of the Microprocessor Report, said Sony is "being too ambitious" with the networked aspect of the cell design because even the fastest Internet connections are usually way too slow to coordinate tasks efficiently.

The cell chips are due to begin production in 2004, and the PS 3 console is expected to be ready at the same time that Nintendo and Microsoft launch their next-generation game consoles in 2005.

Nintendo will likely focus on making a pure game box, but Microsoft, like Sony, envisions its next game console as a universal digital box.

A big risk for Sony and its allies is that in their quest to create a universal cell-based chip, they might compromise the PS 3's core video-game functionality. Chips suitable for a handheld, for example, might not be powerful enough to handle gaming tasks.

Sony has tried to address this problem by making the cell design modular; it can add more processors for a server, or use fewer of them in a handheld device.

"We plan to use the cell chips in other things besides the PlayStation 3," Ando said. "IBM will use it in servers, and Toshiba will use it in consumer devices. You'd be surprised how much we are working on it now."

But observers remain skeptical. "It's very hard to use a special-purpose design across a lot of products, and this sounds like a very special-purpose chip," Glaskowsky said.

The processors will be primed for operation in a broadband, Net-connected environment and will be connected by a next-generation high-speed technology developed by Rambus of Los Altos.

Nintendo and Microsoft say they won't lag behind Sony on technology, nor will they be late in deploying their own next-generation systems.

While the outcome is murky now, analyst Doherty said that a few things are clear: "Games are the engine of the next big wave of computing. Kutaragi is the dance master, and Sony is calling the shots."


Overall, I'm quite excited about Cell and how it relates to scientific computing, but it seems it will be a nightmare to write software for due to the massive parallelism. As far as PS3 goes, how many GFLOPS it has won't matter either way, since we've seen again and again that the average consumer doesn't give a shit about how technically impressive a game is. I do hope that the gap won't be as big as it was this gen, though.
 
Alright, so the biggest part of this unveiling goes as such:

1. The chick's got a mustache, fucking gross.

2. PS3's gonna have some major G-FlopS. You know you like that

3. The chip is a shiny multicolored thingy.

4. Lastly, there's a GPU, intersecting with a CPU, creating a shift in the continuum, processing several particular nuclehydroxytampon rectulytes in the, and this is what really has me splooging, vehectular vagina...

excuse me, that's storage logic. not vagina... I always get them confused.
 