Official *** CELL processor announcements *** Thread

fugimax said:
First, you're assuming. Never do that...especially not with Sony. :)
fugimax said:
I respect nVidia, but definitely not Sony when it comes to developer support -- and I doubt they are the ones in charge.
Didn't take you long to go against your own admonishment.
 
Didn't take you long to go against your own admonishment.
My assumption is grounded in more than hope. It's not nVidia PS3, it's Sony PS3. There is no reason for nVidia to be heading up the development of CPU tools.
 
My assumption is grounded in more than hope
So is his (Sony's developer support has been very good lately, and there have already been several examples given in the last couple of posts). On the other hand, your assumptions (why do you even assume stuff about Sony, when you said never assume? :P) are based on dwelling on things long past.

So PS3 will be easy to code for as long as you have someone else doing the dirty work with all the PE. Woooo !
Well, yes. Thing is, that someone won't be game programmers (unless they really want to) but library programmers, already experienced with that kind of stuff.
 
gofreak said:
Just one little bit of info that some may have been wondering about (well, I know I was): it's been confirmed that the Power core is clocked at the same speed as the APUs/SPEs. So 4+GHz. The core is a from-scratch design, not derived from any other PowerPC chips - it's quite lean.

link: http://www.realworldtech.com/forums...ostNum=3098&Thread=13&entryID=46031&roomID=13

Some other interesting info in that discussion for real tech heads..
Interesting. It also "validates" the possibility of 3.5 Ghz for the Xenon CPU cores.
 
Blimblim said:
Interesting. It also "validates" the possibility of 3.5 Ghz for the Xenon CPU cores.

Depending on the core being used, yes...not all Power cores could hit these speeds (or, I should say, are hitting these speeds now).
 
fugimax said:
My assumption is grounded in more than hope. It's not nVidia PS3, it's Sony PS3. There is no reason for nVidia to be heading up the development of CPU tools.

Considering that nVidia designed the core around which *rendering* takes place, I would think they'd be very much in a leading role on the developer relations front and hand in hand with Sony on anything presented about the game development possibilities on the platform. Some people here have commented that OpenGL ES is to be an option in the toolchain, and while I think that a full OpenGL stack would make more sense, it should be noted that nVidia already has materials for developing games with OpenGL ES. nVidia already has an SDK for building games. nVidia was very involved in things related to the original Xbox as well, so the idea that they would somehow just disappear, leaving behind their volumes of experience with the core and with building games around their technology, is delusional.

On top of that, I can say having worked first hand with SCEA that they aren't exactly throwing CDs over the fence to developers and leaving them to their own devices. At the beginning of the PS2's cycle Sony held a variety of camps to explain the architecture and help people understand how to do things with the VUs. A huge problem for them, however, was that (at least at the camp I went to) they had an audience that largely hadn't dealt with constrained-memory devices, hardware registers, and 'to the metal' programming. Building technology around those things took time, even for Sony's own internal teams - especially having had to 'go it alone'. This time Sony has a wonderful team in IBM and nVidia who will be able to assist with knowledge transfer and do so in the United States (which was a huge problem with the information that WAS transferred from the PS2 developers - it wasn't actually usable/understandable right away).

It is highly likely that there will be C/C++ level SDKs available with the hardware platforms that transparently abstract away the inner workings of the hardware for developers and 3rd-party engine makers, given the people behind the platform.
 
BlimBlim said:
Interesting. It also "validates" the possibility of 3.5 Ghz for the Xenon CPU cores.
As well as validating the rest of the info that leaks have stated about X-Cores.
If you look at the other info, the two PPC cores (XCore and Cell PPC) are awfully similar (VMX, dual issue, 2-way multithreaded, in-order execution).
 
It also "validates" the possibility of 3.5 Ghz for the Xenon CPU cores.
3.5GHz triple-core is definitely possible, but I'm not sure if IBM is going to be able to pull off a triple core at that speed by July.

I think Apple only just got 2.x GHz dual-cores last November.
 
Fafalada said:
As well as validating the rest of the info that leaks have stated about X-Cores.
If you look at the info, the two PPC cores (XCore and Cell PPC) are awfully similar (VMX, dual issue, 2-way multithreaded, in-order execution).

True, they could be very similar cores, though I doubt they're the exact same. We shall see.
 
The quoted GFLOPS figures assume maximum utilization of all the parallel execution units. I have no doubt that the PS3 will, in many instances, be operating well below its theoretical capacity.

EDIT: It will take some pretty clever software engineering (Xenon and N360 may also feature multicore designs). Still, this is the programmer's job and it isn't going to deter anyone, IMO. In fact, I can imagine many are excited about the possibilities. Not everyone even remotely despises 'coding to the metal'. Indeed, some would prefer doing that to making API calls all day.
 
Blimblim said:
Interesting. It also "validates" the possibility of 3.5 Ghz for the Xenon CPU cores.

On the other hand, it also means that they're not as impressive as first thought since these new cores appear to be a lot "leaner" than the PPC970.
 
Fafalada said:
As well as validating the rest of the info that leaks have stated about X-Cores.
If you look at the info, the two PPC cores (XCore and Cell PPC) are awfully similar (VMX, dual issue, 2-way multithreaded, in-order execution).
Interesting. Does the Cell core feature Altivec too?
 
Diffense said:
The quoted GFLOPS figures assume maximum utilization of all the parallel execution units. I have no doubt that the PS3 will, in many instances, be operating well below its theoretical capacity.

Of course. What else can we talk about? There's no performance benchmarks. Theoretical peaks are given for every chip; this is this PE's particular figure (or its SPEs' performance, since we're ignoring that VMX unit :P).

Hmm...I'm :( that we haven't heard anything from this morning's presentation yet, seemingly. Perhaps everyone is too busy attending other presentations to report back yet (?).
 
Of course. What else can we talk about? There's no performance benchmarks.
Bingo.

I went to a talk by someone on the G5 hardware team at Apple and he did a demo where he gradually tweaked an open source program using their performance analysis tools. By the time he was done, it was over 120x faster.

Utilizing the power of a multi-cored machine is not easy at all. In fact, it's usually a pain. Look at even very technical games/engines like Doom3...it's not coded to use more than one processor. It just results in a lot of hassle and extra work that I think developers will be feeling this gen. Will they stand up and decide to do that extra work? Probably...but probably not to the full extent that is possible.
 
On the other hand, it also means that they're not as impressive as first thought since these new cores appear to be a lot "leaner" than the PPC970.
They are also targeted primarily at different applications than the PPC970. You have to remember Xenon is a console, not an office PC ;)

BlimBlim said:
Interesting. Does the Cell core feature Altivec too?
Yes the ISSCC core also has a VMX unit (which is IBM's naming for Altivec basically).

Anyway, I wasn't suggesting the cores are the same thing btw (I'm pretty sure the XCPU "VMX" equivalents are highly customized, for one) - just that they definitely share some design concepts.
 
Fafalada said:
Yes the ISSCC core also has a VMX unit (which is IBM's naming for Altivec basically).

Anyway, I wasn't suggesting the cores are the same thing btw (I'm pretty sure the XCPU "VMX" equivalents are highly customized, for one) - just that they definitely share some design concepts.
That's more or less what I was expecting, thanks!
 
Yes the ISSCC core also has a VMX unit (which is IBM's naming for Altivec basically).
As a complete side note, thank Apple for this. It was a requirement they put on IBM before they'd buy PPC chips off of them again (i.e. the G5). I think it's a great addition to the processor, and when used properly is crazy powerful.
 
fugimax said:
My assumption is grounded in more than hope. It's not nVidia PS3, it's Sony PS3. There is no reason for nVidia to be heading up the development of CPU tools.
But I thought we were *never* supposed to assume, according to you? Besides, that wasn't the only assumption you made. You also assumed that Sony wouldn't be able to change in any way to alter your perceptions of their developer relations. The only way you could make such an assumption is if you have no intention of judging their efforts fairly :)
 
fugimax said:
Bingo.

I went to a talk by someone on the G5 hardware team at Apple and he did a demo where he gradually tweaked an open source program using their performance analysis tools. By the time he was done, it was over 120x faster.

Utilizing the power of a multi-cored machine is not easy at all. In fact, it's usually a pain. Look at even very technical games/engines like Doom3...it's not coded to use more than one processor. It just results in a lot of hassle and extra work that I think developers will be feeling this gen. Will they stand up and decide to do that extra work? Probably...but probably not to the full extent that is possible.

All I'll say is, I'm glad I took concurrent & distributed programming classes this year ;)

This will likely be an issue for all the consoles, though. PS3 might just be the most parallel of them all.

edit - and Faf, just to clarify, it's one VMX unit in the PE Power Core, not two, right?
 
Also, even while knowing very little about them, I'll say that I expect Xenon and Revolution to perform competitively. Sony's preference for providing a really powerful general purpose processor is interesting though.
 
I think it's a great addition to the processor, and when used properly is crazy powerful.
I know, and in this case it would also increase the peak rating of the Cell chip by another 32GFlops, wonder why they didn't include it in the numbers :P
I actually didn't expect the PPC core to have its own VMX - not when you have 8 SPUs on board already. Though I am always happy to get more :D

Then again who knows what exactly the one going into PS3 will be like.
 
fugimax said:
Bingo.

I went to a talk by someone on the G5 hardware team at Apple and he did a demo where he gradually tweaked an open source program using their performance analysis tools. By the time he was done, it was over 120x faster.

Utilizing the power of a multi-cored machine is not easy at all. In fact, it's usually a pain. Look at even very technical games/engines like Doom3...it's not coded to use more than one processor. It just results in a lot of hassle and extra work that I think developers will be feeling this gen. Will they stand up and decide to do that extra work? Probably...but probably not to the full extent that is possible.

Sorry but no.

First, Shark (more than likely the tool you're talking about), along with the OpenGL Profiler, is easy to use, but you have to know what you're doing. I can profile an application with Shark on a dual G5 (like the one I'm working on right now) and with the OpenGL Profiler (like the application running in a window next to this one) and see the hotspots. Dealing with the hotspots will take a while, but dealing with a multiprocessor machine and dealing with a multicore machine are actually VERY similar. You mention Doom3 and talk about coding it for more than one processor. Let me just say that there are a few answers to this.

First, companies such as Intel (who have been able to identify this problem and approach it with Hyper-Threading) have introduced hardware solutions to make applications that support out-of-order execution of code work with little to no additional effort from the developer.

Second, the problem has a lot to do with how your compiler can break up the instructions into something more easily digestible by multiple processor cores.

Third, having multiple processes/cores simply means that certain tasks can be spawned off as threads or lightweight processes on those processors/cores. Xbox2 (which is rumored to have multiple cores as well) will, of course, have the same problems if it does indeed have multiple cores.

Fourth, most games are written for one processor because the vast majority of personal computers in the world are sold with exactly one processor (and no option to add a second). Spreading functionality over to additional processors isn't life-shattering, and most competent programmers know something about threads, semaphores, and other mechanisms to offload work units.

The whole key to why it's almost embarrassingly easy to start moving work to multiple CPUs/cores is that the languages we use today (C/C++/Java/etc.) understand the concept of threads, and thread schedulers can transparently offload work themselves to these various work units. This functionality is often, though not always, a service of the operating system (no matter how lightweight) which runs on the processors.

Today's games have to deal with multiple texture units, multiple shader units, multiple game components, etc. But you don't see people crucifying themselves about it. Why? Because it's all hidden behind the OS, SDKs, etc. I highly doubt that Sony, Microsoft, Nintendo or anyone else is going to ship you a board, the machine's instruction set, and a development guidelines specification and have you start coding to the metal.
 
Also, even while knowing very little about them, I'll say that I expect Xenon and Revolution to perform competitively. Sony's preference for providing a really powerful general purpose processor is interesting though.

Define "competitively"...lots of Formula One teams have been competing against Ferrari for the last several years...doesn't mean they are in the same ballpark performance-wise, however...
 
Siboy at B3D has reported back on today's presentation: not a lot of new info, apparently. No live demos ;)

The overall CELL paper this morning was a little disappointing from a disclosure standpoint. They stuck pretty much to the written paper which I already posted some info on.

The package is a 42.5x42.5mm flip chip BGA. There is going to be a paper at ECTC later this year which is supposed to go into more of the packaging side.

90nm SOI process with 8 layers of metal (copper interconnect).

They mentioned 20% of the power was due to leakage and another 20% due to clock tree power, but wouldn't give the absolute numbers (which tells you something...).

The device taped out in January 2004, about 10 months after the high-level architecture was completed.

The EIB (bus interconnect) contains 4 128-bit rings with a 64-bit tag. No clue on the actual configuration. The EIB runs at HALF the PPE/SPE clock rate (so 2GHz-ish). The EIB can move 96 bytes total per cycle.

Everything connected to the EIB (I listed this earlier) can each individually move 16 bytes per cycle in to/out of the EIB, except for the FlexIO interface which can move twice this much. When they say "per cycle" I'm not sure if they're referring to the EIB half-rate cycle or the PPE/SPE full-rate cycle.

The PPE and SPE, local 256KB SRAMs, etc. are all on one clock network (same frequency, sorry AutomatedMech). The EIB is another clock network, and the external memory interface a third.

Incidentally, the SPU/SPE paper referred to 3 other papers submitted to the IEEE Symposium on VLSI Circuits for June '05 (describing physical design details, the fixed point unit of the SPE and the floating point unit of the SPE).

One last detail from the SPE/SPU talk I forgot to list:

The even pipeline contains the simple fixed point and SP float instructions, shifts/rotates, integer multiply-acc, byte operations (pop count, absolute differences, byte average, byte sum).

The odd pipeline contains the permute, load/store, channel read/write (built-in blocking message passing interface supported by 3 instructions: channel read, channel write and read channel capacity) and branch instructions.

http://www.beyond3d.com/forum/viewtopic.php?t=19815&start=240

Yeah, quite technical. Interesting that the first tape-out was in Jan 04 though.
 
The whole key to why it's almost embarrassingly easy to start moving work to multiple CPUs/cores is that the languages we use today (C/C++/Java/etc.) understand the concept of threads, and thread schedulers can transparently offload work themselves to these various work units. This functionality is often, though not always, a service of the operating system (no matter how lightweight) which runs on the processors.

DING DING DING!

The bitch will be the allocation of such threads and whether Sony provides a performance analyzer along with the OS.
 
Programs have a network of data dependencies. If task A depends on the result of task B, you simply can't start A before B is finished, even if there's another processor idle. So programming for these systems will involve analysis to see what tasks can be executed in parallel. Furthermore, let's say tasks X, Y and Z are independent but task Q depends on the result of all three. Q will have to wait for max(X, Y, Z), which means that processors that finish first will be idle. Of course, you can always invent non-critical work for the idle processors to do (since there's power to burn), but if Q must absolutely be completed each frame then you must take max(X, Y, Z) + Q time and no less.

You can't be more parallel than the dependency graph of your application allows, simply because...well, you can't put on your socks and shoes at the same time even if you had 10 servants eager to do it for you. Therefore (among other reasons), N processors doesn't mean a factor-of-N increase in performance.

Good libraries definitely make things easier but may hide opportunities for optimization. I don't think we should overstate the programming and engineering challenges of the next generation, but I don't think we should understate them either. Also, only benchmarks/demos/games will put the 256 GFLOPS figure in context for me, because you might not be able to put all the processors to work on the task you most desperately want to complete.
 
IJoel said:
DING DING DING!

The bitch will be the allocation of such threads and whether Sony provides a performance analyzer along with the OS.

A performance analyzer is pretty much a lock. They've been producing those at least since the PS1. The one for the PS1 was a big piece of hardware, but hopefully they will have something more 'software' like for the PS3 :)

Much of what we are discussing is going to be difficult to quantify without knowing the OS that sits atop the platform. Without an OS, hell, the AMD64 would be a bitch to program for :)
 
fugimax said:
First, you're assuming. Never do that...especially not with Sony.

fugimax said:
IBM + Microsoft seems like a better team than IBM + Sony/nVidia, yes.

So, others can't assume things in regards to Sony, but YOU can assume negative things, especially when it comes to spitting on non-existent Sony, IBM and nVidia tools that no one knows anything about. And don't call that bias and incoherence...


Edit: oops, kaching beat me to it ;)
 
Fafalada said:
I know, and in this case it would also increase the peak rating of the Cell chip by another 32GFlops, wonder why they didn't include it in the numbers :P
I actually didn't expect the PPC core to have its own VMX - not when you have 8 SPUs on board already. Though I am always happy to get more :D

Then again who knows what exactly the one going into PS3 will be like.

I do not think that PlayStation 3's CPU will have less cache, a less powerful PU, etc. compared to the 1+8 configuration that was presented at ISSCC, considering that PlayStation 3 will not go on sale before Q4 2005 at the earliest, and it is not excluded that it might ship with 65 nm chips (also, at the 90 nm node the chip is already smaller than the first EE revision that the first Japanese PlayStation 2 consoles shipped with, and now they have bigger wafers too).
 
The article at Ars Tech sums it up in four succinctly written paragraphs:

“The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design…

To sum up, IBM has sort of reapplied the RISC approach of throwing control logic overboard in exchange for a wider execution core and a larger storage area that's situated closer to the execution core. The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU do the kind of scheduling and resource allocation work that the control logic used to do….

Once the instructions are in the SPE, the SPE's control unit can issue up to two instructions per cycle, in-order. The SPE has a 128-entry register file (128-bits per entry) that stores both floating-point and integer vectors. As stated above, there are no rename registers. All loop unrolling is done by the programmer/compiler using this very large register file…

Note also that the register file has six read ports and two write ports. The SPEs can do forwarding and bypass the register file when necessary. The SPE has a DMA engine that handles moving data between main memory and the register file. This engine is under the control of the programmer as mentioned above. Each SPE is made of 21 million transistors: 14 million SRAM and 7 million logic. Finally, the instruction set for the SPEs is not VMX compatible or derivative, because its execution hardware doesn't support the range of instructions and instruction types that VMX/Altivec does.”

To summarize:

1) Xenon will offer better online play, revolutionary downloadable content schemes to removable media, and generally a feature-rich gaming experience compared to Sony’s obvious ultimate vision of controlling format and delivery, which is closer to what you would imagine a scary, monopolistic company would want. Microsoft is empowering developers to make gamers happy and buy more games. It’s the same strategy they use with other software development platforms, and it works. #1) Make the developers happy, #2) make the gamers happy.
2) Sony is heading in the same direction (proprietary technology, ignoring Western game cos) which Nintendo was 10 years ago, and it’s not going unnoticed; you will see surprising defections to the MS camp because of this. Sony is doing nothing but hype hype hype, trying to position themselves for financial windfalls from royalties and the poor Japanese who are going to pay their life savings away to own their 49900 yen PS3.
3) Without a doubt MS has Sony beat in dev support, today, right now, and likely for the next 10 years. It’s black and white. They don’t seem to understand the concept of offering a helping hand. Defend them all you want, they make great games and have awesome marketing/hype machines. However, even Nintendo has Sony beat in dev support for crying out loud. DS has development tools which poop on PSP, and this talk of open source is making me groan. Open source tools are made by the comic book dude from the Simpsons, who would rather scratch his butt than lift a finger to write a legible and useful piece of documentation.
4) My money is on Xenon to have more main memory, maybe even twice as much (something any game programmer will be crying with joy about), and additionally larger general purpose cache than PS3. It’s going to flow naturally from the difference in necessities of memory bandwidth. Basically, Xenon will be kicking butt out of the box, you won’t have to tell it what to do, it knows what to do, and it has plenty of space to do it in. It’s a programmer friendly architecture – what needs to be cached is cached and this works well for anything pushing lots of data back and forth, which you know…all freaking games do, especially living breathing worlds like Fable and what you’ll be seeing on Xenon.

Memory latency is overblown into tying our hands behind our backs; all I want to do is store X and retrieve X when I need it, not store X, Y, Z because I know I’m going to need Z, Y, X in that order, for every object, for every possible combination of stores. It’s not something as trivial as working it into the compiler. All games store and load data - BUT paying attention to order, temporality, and cache size should be ALL YOU NEED to do to optimize, not attempting to predict what’s going to happen when you start pushing data through the CELL. It’s turning the CPU into a chess partner you have to predict moves with...while this may seem fun to some, you can have it; I just want something fast enough to do rigid body and AI at the same time, and that is something that is not bottlenecked by memory latency...

CELL is going to be nice for math heavy computation and making Gran Turismo 5 look and feel great. Would it be good for the massive content games which Microsoft are brewing up? No. I’m seeing a divergence in not only architecture but games themselves. Western games are going to become all about what you can do, what you can experience, what you can change, Japanese games are going to continue to be stuff which is all about the moment, all about the CG…that’s fine and dandy, I’m sure we’ll be wowing about some new Zone of Enders in Fall 2006 with lots of pretty robots and pixel effects, but long term, long run, the difference in architecture is not as important as the difference in game development support and things like mature online play support systems.

Having an OS would NOT abstract any of the CELL workload away...what you do with the data is purely a game-to-game, line-of-code-to-line-of-code decision. More importantly, is it possible to work CELL’s architecture into middleware so that it’s easy to make better looking games? The answer is no; middleware would mainly make porting from Xenon to PS3 easier, and without a miraculous predict-the-future Turing machine embedded into CELL, there’s no way to make it go fast without lots more work and non-stop line-to-line, game-to-game consternation.
 
Vortac said:
To summarize:

1) Xenon will offer better online play, revolutionary downloadable content schemes to removable media, and generally a feature-rich gaming experience compared to Sony’s obvious ultimate vision of controlling format and delivery, which is closer to what you would imagine a scary, monopolistic company would want. Microsoft is empowering developers to make gamers happy and buy more games. It’s the same strategy they use with other software development platforms, and it works. #1) Make the developers happy, #2) make the gamers happy.
2) Sony is heading in the same direction (proprietary technology, ignoring Western game cos) which Nintendo was 10 years ago, and it’s not going unnoticed; you will see surprising defections to the MS camp because of this. Sony is doing nothing but hype hype hype, trying to position themselves for financial windfalls from royalties and the poor Japanese who are going to pay their life savings away to own their 49900 yen PS3.
3) Without a doubt MS has Sony beat in dev support, today, right now, and likely for the next 10 years. It’s black and white. They don’t seem to understand the concept of offering a helping hand. Defend them all you want, they make great games and have awesome marketing/hype machines. However, even Nintendo has Sony beat in dev support for crying out loud. DS has development tools which poop on PSP, and this talk of open source is making me groan. Open source tools are made by the comic book dude from the Simpsons, who would rather scratch his butt than lift a finger to write a legible and useful piece of documentation.



:lol :lol :lol
 
Pana, I didn't say I expect it to have less stuff, just that I don't know if it will be the same as ISSCC Cell.

Memory latency is overblown into tying our hands behind our backs; all I want to do is store X and retrieve X when I need it, not store X, Y, Z because I know I’m going to need Z, Y, X in that order, for every object, for every possible combination of stores. It’s not something as trivial as working it into the compiler.
Xenon cores are in-order execution also; you'll deal with these problems on both machines.
 
Vortac said:
Assumptions
Falsehoods
Pointless speculation disguised as inevitable future
Vague but deep-seated, bitter Sony hatred
It's like somebody taught WULFER the English language! :lol
 
Vortac said:
The article at Ars Tech sums it up in four succinctly written paragraphs:

“The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design…

To sum up, IBM has sort of reapplied the RISC approach of throwing control logic overboard in exchange for a wider execution core and a larger storage area that's situated closer to the execution core. The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU do the kind of scheduling and resource allocation work that the control logic used to do….

Once the instructions are in the SPE, the SPE's control unit can issue up to two instructions per cycle, in-order. The SPE has a 128-entry register file (128-bits per entry) that stores both floating-point and integer vectors. As stated above, there are no rename registers. All loop unrolling is done by the programmer/compiler using this very large register file…

Note also that the register file has six read ports and two write ports. The SPEs can do forwarding and bypass the register file when necessary. The SPE has a DMA engine that handles moving data between main memory and the register file. This engine is under the control of the programmer as mentioned above. Each SPE is made of 21 million transistors: 14 million SRAM and 7 million logic. Finally, the instruction set for the SPEs is not VMX compatible or derivative, because its execution hardware doesn't support the range of instructions and instruction types that VMX/Altivec does.”

Why does that not sound impressive? Did I miss something?
 
To summarize:

1) Xenon will offer better online play, revolutionary downloadable content schemes to removable media, and generally a feature-rich gaming experience compared to Sony’s obvious ultimate vision of controlling format and delivery, which is closer to what you would imagine a scary, monopolistic company would want. Microsoft is empowering developers to make gamers happy and buy more games. It’s the same strategy they use with other software development platforms, and it works. #1) Make the developers happy, #2) make the gamers happy.
2) Sony is heading in the same direction (proprietary technology, ignoring Western game cos) that Nintendo was 10 years ago, and it’s not going unnoticed; you will see surprising defections to the MS camp because of this. Sony is doing nothing but hype, hype, hype, trying to position themselves for financial windfalls from royalties and from the poor Japanese who are going to pay their life savings away to own their 49,900 yen PS3.
3) Without a doubt MS has Sony beat in dev support, today, right now, and likely for the next 10 years. It’s black and white. They don’t seem to understand the concept of offering a helping hand. Defend them all you want, they make great games and have awesome marketing/hype machines. However, even Nintendo has Sony beat in dev support for crying out loud. DS has development tools which poop on PSP, and this talk of open source is making me groan. Open source tools are made by the comic book dude from the Simpsons, who would rather scratch his butt than lift a finger to write a legible and useful piece of documentation.

:lol :lol :lol


This is a WONDERFUL post (for me to poop on)
 
Define "competitively"....lots of Formula One teams have been competing against Ferrari for the last several years.....doesn't mean they are in the same ballpark performance-wise, however...

When everything pans out, I expect the relative technological landscape among the consoles will look a lot like it does today. I certainly don't expect PS3 to be in a different 'ballpark'.

This is just a guess mind you...we don't even know what the final machines look like.
 
Amir0x said:
Now this thread is going to get awesome.
I dunno. He will probably just get laughed out like with his earlier bit about Xbox 2 offering more features. I was going to respond seriously, but it just seemed like too much work. It would have been full of "non stop line to line" repudiations, addressing every little bit....as long, frustrating, and pointless as programming for the evil "proprietary" Cell architecture ;)
 
border said:
I dunno. He will probably just get laughed out like with his earlier bit about Xbox 2 offering more features. I was going to respond seriously, but it just seemed like too much work. It would have been full of "non stop line to line" repudiations, addressing every little bit....as long, frustrating, and pointless as programming for the evil "proprietary" Cell architecture ;)

:lol :lol

Ah, it's good to be back.

Also, because I'm so stupid... what exactly does this news tell us in the simplest terms possible? Like, if I was mentally handicapped (which I very well may be!) and you had to explain what the implications of this announcement were, what would you say?

Points for being condescending!
 
It would have been full of "non stop line to line" repudiations, addressing every little bit....as long, frustrating, and pointless as programming for the evil "proprietary" Cell architecture ;)

:lol
 
Vortac said:
Having an OS would NOT abstract any of the CELL workload away…what you do with the data is purely a game-to-game, line-of-code-to-line-of-code decision. More importantly, is it possible to work CELL’s architecture into middleware so that it’s easy to make better looking games? The answer is no; without a doubt, middleware would mainly make porting from Xenon to PS3 easier, and without a miraculous predict-the-future Turing machine embedded into CELL, there’s no way to make it go fast without lots more work and non-stop line-to-line, game-to-game consternation.

You obviously don't know anything about programming computing clusters or multi-programming. Why did I even bother? You don't know anything...


Amir0x said:
:lol :lol

Ah, it's good to be back.

Also, because I'm so stupid... what exactly does this news tell us in the simplest terms possible? Like, if I was mentally handicapped (which I very well may be!) and you had to explain what the implications of this announcement were, what would you say?

Points for being condescending!


To be honest there really isn't any reason to read this thread if you want concrete info on the PS3.
 
marsomega said:
To be honest there really isn't any reason to read this thread if you want concrete info on the PS3.

Wrong answer, mars! You must forcibly suck out every detail to mold together something concrete about PS3! If you do not, you lose your techno-geek license for all time.
 
Vortac said:
1) Xenon will offer better online play, revolutionary downloadable content schemes to removable media, and generally a feature-rich gaming experience compared to Sony’s obvious ultimate vision of controlling format and delivery, which is closer to what you would imagine a scary, monopolistic company would want. Microsoft is empowering developers to make gamers happy and buy more games. It’s the same strategy they use with other software development platforms, and it works. #1) Make the developers happy, #2) make the gamers happy.
2) Sony is heading in the same direction (proprietary technology, ignoring Western game cos) that Nintendo was 10 years ago, and it’s not going unnoticed; you will see surprising defections to the MS camp because of this. Sony is doing nothing but hype, hype, hype, trying to position themselves for financial windfalls from royalties and from the poor Japanese who are going to pay their life savings away to own their 49,900 yen PS3.
3) Without a doubt MS has Sony beat in dev support, today, right now, and likely for the next 10 years. It’s black and white. They don’t seem to understand the concept of offering a helping hand. Defend them all you want, they make great games and have awesome marketing/hype machines. However, even Nintendo has Sony beat in dev support for crying out loud. DS has development tools which poop on PSP, and this talk of open source is making me groan. Open source tools are made by the comic book dude from the Simpsons, who would rather scratch his butt than lift a finger to write a legible and useful piece of documentation.
4) My money is on Xenon to have more main memory, maybe even twice as much (something any game programmer will be crying with joy about), and additionally larger general purpose cache than PS3. It’s going to flow naturally from the difference in necessities of memory bandwidth. Basically, Xenon will be kicking butt out of the box, you won’t have to tell it what to do, it knows what to do, and it has plenty of space to do it in. It’s a programmer friendly architecture – what needs to be cached is cached and this works well for anything pushing lots of data back and forth, which you know…all freaking games do, especially living breathing worlds like Fable and what you’ll be seeing on Xenon.
:lol :lol :lol :lol
Why is this guy still around here? :lol
 
Vortac said:
The article at Ars Tech sums it up in four succinctly written paragraphs:

“The burden of managing the cache has been moved into software, with the result that the cache design has been greatly simplified. There is no tag RAM to search on each access, no prefetch, and none of the other overhead that accompanies a normal L1 cache. The SPEs also move the burden of branch prediction and code scheduling into software, much like a VLIW design…

You should have trimmed this one out of your post. This is a good thing. By moving this into software (SDK/OS), the developer doesn't have to manage it themselves, which brings ease of use.

To sum up, IBM has sort of reapplied the RISC approach of throwing control logic overboard in exchange for a wider execution core and a larger storage area that's situated closer to the execution core. The difference is that instead of the compiler taking up the slack (as in RISC), a combination of the compiler, the programmer, some very smart scheduling software, and a general-purpose CPU do the kind of scheduling and resource allocation work that the control logic used to do….

Again, this is a good thing. Bolding the programmer and leaving out the compiler, the CPU, and the scheduler just baffles the mind. This is what you want.

Once the instructions are in the SPE, the SPE's control unit can issue up to two instructions per cycle, in-order. The SPE has a 128-entry register file (128-bits per entry) that stores both floating-point and integer vectors. As stated above, there are no rename registers. All loop unrolling is done by the programmer/compiler using this very large register file…

Did you just read until you saw programmer and stop reading?

Note also that the register file has six read ports and two write ports. The SPEs can do forwarding and bypass the register file when necessary. The SPE has a DMA engine that handles moving data between main memory and the register file. This engine is under the control of the programmer as mentioned above. Each SPE is made of 21 million transistors: 14 million SRAM and 7 million logic. Finally, the instruction set for the SPEs is not VMX compatible or derivative, because its execution hardware doesn't support the range of instructions and instruction types that VMX/Altivec does.”

This doesn't support your position either. The programmer has control of the parts of the system that make sense and the parts that are more problematic for a multicore system are hidden in software.


I don't want to just laugh at your post, but damn... You deleted/omitted so much, perhaps you should just delete your post and start over.
 