Official *** CELL processor announcements **** Thread

Argyle said:
Kinda like how the PS2 without vector units was only about as powerful as a Dreamcast...

Actually it would be slower if you used the scalar FPU alone: 0.6 GFLOPS vs 0.9-1.4 GFLOPS.
 
I have 5 Dreamcasts and 1 PS2.

DC - 900~1400 MFLOPS
PS2 - 6200 MFLOPS

5 DCs (4500~7000 MFLOPS) vs 1 PS2 (6.2 GFLOPS)


who wins? :lol
 
doncale said:
yeah I'll bet most early first-generation PS3 games will only make use of the PU aka PPE (the POWER CPU core) plus the nvidia GPU, avoiding the multiple APUs aka SPUs aka SPEs. using the SPUs to divide up processing tasks is going to be a bitch, no matter what STI says, even if they provide better documentation than Sony did during the early days of PS2.

take away the SPUs and you have, basically, an Xbox2 :lol


[/wild speculation]

I hope you really meant "wild" speculation, because that's bordering on "don't do drugs" speculation ;)
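
To be fair, though, the task-division point is real. In the abstract it looks something like the sketch below: plain POSIX threads standing in for SPUs (purely hypothetical, since there is no public Cell SDK to write against yet), and on a real SPU each worker would also have to DMA its slice into the 256KB local store first, which is where the pain starts.

[code]
/* Hypothetical sketch of "dividing up processing tasks": split one
 * array across several workers, each taking a contiguous slice.
 * Plain POSIX threads stand in for SPUs here. */
#include <pthread.h>
#include <stdio.h>

#define N 1024
#define WORKERS 4

static float data[N];

typedef struct { int start, end; } slice_t;

static void *worker(void *arg) {
    slice_t *s = (slice_t *)arg;
    for (int i = s->start; i < s->end; i++)
        data[i] *= 2.0f;                 /* the "processing task" */
    return NULL;
}

int main(void) {
    pthread_t t[WORKERS];
    slice_t s[WORKERS];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    for (int i = 0; i < WORKERS; i++) {
        s[i].start = i * (N / WORKERS);       /* contiguous slices */
        s[i].end   = (i + 1) * (N / WORKERS);
        pthread_create(&t[i], NULL, worker, &s[i]);
    }
    for (int i = 0; i < WORKERS; i++) pthread_join(t[i], NULL);

    printf("data[100] = %.1f\n", data[100]); /* 200.0 */
    return 0;
}
[/code]

The embarrassingly parallel case above is the easy one; the hard part is when the slices depend on each other.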
 
So is a 4 Processor Element CELL configuration (as in 4 PPEs and 32 SPEs) totally out of the question for PS3???

Could someone explain to me why this is (serious question, no sarcasm here) and what limits it to 1 or 2 PEs?
 
I suppose a 4 PE-Cell CPU is not totally out of the question. it just seems *very* unlikely now.

to put it simply, 4 PEs would make a very, very large chip, even on 65 nm (but maybe not so bad on 45 nm?), probably too large to manufacture at reasonable cost and yield. thus a 1 or 2 PE Cell CPU is the most likely option at this point.

but then, why couldn't we have two separate chips with 2 PEs each? or 4 separate PEs?

give the CPU-system a single chip with 2 PEs on the die and another 2-PE die to be the geometry processor that sits next to the Nvidia GPU(s) like ELAN next to the PowerVR2DCs in Naomi2.

or give the CPU-system 2 separate PEs, and another 2 separate PEs for the front end of the GPU(s)

there are a million ways the PS3 could be ultimately configured.

I'd be slightly disappointed if PS3 only consists of 1 PE (9 processors, 10 threads) in total and 1 fairly conventional (even tho next-gen) Nvidia GPU. but even that could be an absolutely awesome gaming machine if implemented well and taken advantage of by developers (which it would be).


should I quit babbling now :lol
 
Kleegamefan said:
So is a 4 Processor Element CELL configuration (as in 4 PPEs and 32 SPEs) totally out of the question for PS3???

Yes IMHO.

Could someone explain to me why this is (serious question, no sarcasm here) and what limits it to 1 or 2 PEs?

Three reasons:

1.) chip size.

2.) power consumption/heat dissipation.

3.) the extra money is better spent on other components.

Currently you would need an 800+ mm^2 chip to realize a 4-PE + 32-SPU/APU configuration, and maybe an even larger area than that (you need a wide PE-to-PE bus system) using 90 nm technology.

Scaling it to 65 nm could in theory reduce the area down to maybe 400-450 mm^2, but not down to the 200-224 mm^2 level I think SCE wants the CPU to be at to have good yields and not-too-high manufacturing costs.
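
As a sanity check on those numbers (my own idealized math, assuming die area scales with the square of the feature size, which real-world shrinks rarely achieve):

[code]
/* Idealized die-shrink estimate: area scales with (65/90)^2.
 * The 800 mm^2 input is the 4-PE estimate from above. */
#include <stdio.h>

int main(void) {
    double area_90nm = 800.0;                           /* mm^2 at 90 nm */
    double scale = (65.0 / 90.0) * (65.0 / 90.0);       /* ~0.52 */
    printf("~%.0f mm^2 at 65 nm\n", area_90nm * scale); /* ~417 mm^2 */
    return 0;
}
[/code]

That lands right in the 400-450 mm^2 range, so the estimate is at least self-consistent.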

Using an MCM solution would not be cheap either: it lets you put more chips in a package than would fit on a single die, but that does not mean it is free.
 
doncale said:
I suppose a 4 PE-Cell CPU is not totally out of the question. it just seems *very* unlikely now.

to put it simply, 4 PEs would make a very, very large chip, even on 65 nm (but maybe not so bad on 45 nm?), probably too large to manufacture at reasonable cost and yield. thus a 1 or 2 PE Cell CPU is the most likely option at this point.

but then, why couldn't we have two separate chips with 2 PEs each? or 4 separate PEs?

give the CPU-system a single chip with 2 PEs on the die and another 2-PE die to be the geometry processor that sits next to the Nvidia GPU(s) like ELAN next to the PowerVR2DCs in Naomi2.

or give the CPU-system 2 separate PEs, and another 2 separate PEs for the front end of the GPU(s)

there are a million ways the PS3 could be ultimately configured.


Again, the only real answer is: We don't know.

Reasons why it might:
-----------------------------------------
1) Sony said the PS3 would have the Broadband Engine, which was to be composed of 4 Cells. Given that they were right on the mark for the CELL specifications they announced back in 2001, it would seem they are right on track for the plans they announced then.

Reasons why it might not:
------------------------------------------
1) Die Size - At 90nm, the chip looks to be too large, and even at 65nm (which may not be available for the PS3 launch), there are those who still believe it will be too big. I haven't seen any real discussion so far to back up these claims.

2) Cost - If the cost for 1 Cell is high (again, I know it's based on die size and yields), 4 Cells may be completely prohibitive

3) Heat Dissipation - A single CELL is rumored to be running quite warm, so 4 Cells may not be doable in a reasonable and cost-efficient way for the PS3.

4) Misunderstanding - Maybe Sony's comments from years ago regarding the Broadband Engine, the CELL, and the PS3 have been misinterpreted.
 
Panajev2001a said:
Do you realize how much R&D money Sony as a whole has put into Blu-Ray? Including a Blu-Ray ROM drive in PlayStation 3 would help Blu-Ray as a format, give more hype points to PlayStation 3, and reduce the cost of blue-laser solutions for Sony, as it would greatly increase the manufacturing volume of said solutions.

They have already developed an optical pick-up system that combines CD/DVD/Blu-Ray reading capabilities into one single device; they know what they are doing.


ok

Are there any Blu-Ray players out on the market that can read CDs and DVDs? If so, how much do they cost?
 
Doom_Bringer said:
ok

Are there any Blu-Ray players out on the market that can read CDs and DVDs? If so, how much do they cost?

Yes, there are such players, and no, they are not cheap.

Still... do you know how much of a profit they are currently making on them? Do you know exactly how much it will cost in late 2005/early 2006 to include Blu-Ray ROM technology in a high-volume device such as PlayStation 3 (those players we just mentioned are not really produced in very high volumes, which increases the cost per unit manufactured)?

How much were DVD players in 1999? Probably more than $50, but it still made it into PlayStation 2.

The analogy you are trying to draw here is IMHO flawed.
 
Well...how large is Intel's Montecito chip? Because THAT badboy has 1.72B transistors and it too is on the 90nm process...

One CELL is "only" 230M or so transistors, so 4 of them would still undercut Montecito by a wide, wide margin...


Are there any Blu-Ray players out on the market that can read CDs and DVDs? If so, how much do they cost?

Every single BRD product can read CDs and DVDs....most of them do that via 2 optical pickups, but the Samsung BRD recorder is shipping with a single pickup that can read all three formats....those with long memories will note that early-generation DVD players were just like this....they had more than one set of laser optics...

Moreover, CD/DVD BW compatibility is part of the BRD spec...that is, if your product can't read DVDs and CDs, it will not get the BRD license....like the Nelly/Tim McGraw song...this has been stated over and over again...

From what I have been told, HD-DVD will also be like this....single optics will come eventually, but BW compatibility is a go, folks....
 
Kleegamefan said:
Well...how large is Intel's Montecito chip? Because THAT badboy has 1.72B transistors and it too is on the 90nm process...

One CELL is "only" 230M or so transistors, so 4 of them would still undercut Montecito by a wide, wide margin...

I think Montecito is 464 mm^2, but most of its transistors are in very compact SRAM blocks, which helps with the power consumption issues, as SRAM transistors generally consume less power than logic transistors IIRC.
 
The second Cell presentation should be well underway now. Anyone know when it might be finished (and when we can expect some reports back)? Presumably there will be new stuff to tell..I doubt they'll gather people in a room together just to go over everything that's been discussed on the web in the last 24 hours ;)
 
My main question right now is on the compiler. I sure hope they show some examples of this and to what extent the compiler optimizes task handling.
 
Panajev2001a said:
Yes IMHO.



Three reasons:

1.) chip size.

2.) power consumption/heat dissipation.

3.) the extra money is better spent on other components.

Currently you would need an 800+ mm^2 chip to realize a 4-PE + 32-SPU/APU configuration, and maybe an even larger area than that (you need a wide PE-to-PE bus system) using 90 nm technology.

Scaling it to 65 nm could in theory reduce the area down to maybe 400-450 mm^2, but not down to the 200-224 mm^2 level I think SCE wants the CPU to be at to have good yields and not-too-high manufacturing costs.

Using an MCM solution would not be cheap either: it lets you put more chips in a package than would fit on a single die, but that does not mean it is free.


To add to Pana's response, it is also because of the nature of these computing structures. For example, yes, it is capable of 256 GFLOPS, but will we ever see it? Not likely, and you wouldn't be too far out there saying we never will. No matter how many Cells you have in parallel working together, they still can't overcome the nature of what they are computing. It is wishful thinking to say all of it will be used 100% of the time, and naive to believe that just adding up GFLOP numbers will reflect real-world performance. The truth is that processes are dependent on other processes one way or another. Processes either rely on other processes or must wait for access to parts of memory other processes are also writing to. And that's not even getting into critical processes that will lock until their critical task is completed.

In reality, there has to be a balance in the system, because the more Cells you add, the more you emphasize the diminishing returns. With the research and observations I assume Sony has been making, they will come to a point where they say, "OK, that's it; the extra Cell is not worth the cost." Parallel computing is great on paper; in the real world, not everything can be computed that way. Moreover, the rules and restrictions needed to avoid critical errors that bring the system down will prevent you from ever reaching 100% utilization.
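
To put a number on that diminishing return: Amdahl's law says speedup = 1 / ((1 - p) + p/n) for a parallelizable fraction p running on n cores. A quick sketch in C, where the 90% parallel fraction is my own assumption, purely for illustration:

[code]
/* Amdahl's law: speedup = 1 / ((1 - p) + p / n).
 * p = parallelizable fraction (assumed), n = number of cores. */
#include <stdio.h>

int main(void) {
    const double p = 0.90;  /* assume 90% of the work parallelizes */
    for (int n = 1; n <= 32; n *= 2) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d cores -> %.2fx speedup\n", n, speedup);
    }
    return 0;
}
[/code]

Even with 90% of the work parallelized, 32 cores only buy you about a 7.8x speedup; the serial 10% dominates.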

This is purely speaking of Cell and the PS3. IBM's workstations are another story: they are still bound by the same rules and restrictions, but they are not limited to the constraints of a console. And truth be told, there is no perfect solution to process synchronization or scheduling, just probabilities of failure or starvation.

EDIT.

I'm limiting myself to 4 posts a day on Cell threads today. Tee hee. :D
 
Maybe it's been said already, but a GFLOPS rating means nothing without regard to the instruction set, etc. People are talking about the low number of transistors Cell has compared to the next Intel chip, and while some of the criticism of Intel is true, all those extra transistors are performing some logic in a way Cell cannot. Also worth mentioning is the number of collisions Cell is likely to have (based on the rest of the PPC/POWER line of processors).

Ultimately I think Cell's performance will be weaker than most expect right now. Mostly from developers not being able to program for it efficiently. The tools for Xbox2 are much more advanced than anything Sony has ever done.

And just to make sure I don't get flamed for being biased towards either of these "power machines", I'm more excited about Revolution than either PS3 or Xbox.
 
fugimax said:
Maybe it's been said already, but a GFLOPS rating means nothing without regard to the instruction set, etc. People are talking about the low number of transistors Cell has compared to the next Intel chip, and while some of the criticism of Intel is true, all those extra transistors are performing some logic in a way Cell cannot.

Not sure what the thrust of your argument is here, but I'll try and respond in two ways. First, the GFLOP number is of course a theoretical peak, but it is derived from the available instructions. Second, the SPEs aren't general purpose processors - of course the instruction set will be limited compared to a full blown Intel processor. No one ever expected differently (?)
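
For what it's worth, the 256 GFLOPS figure falls straight out of that kind of arithmetic. A back-of-the-envelope sketch, assuming 8 SPEs, 4-wide single-precision SIMD, a fused multiply-add counted as 2 flops, and the 4GHz nominal clock:

[code]
/* Peak-FLOPS arithmetic (my assumptions, not official numbers):
 * 8 SPEs x 4 SIMD lanes x 2 flops per FMADD x 4.0 GHz = 256 GFLOPS. */
#include <stdio.h>

int main(void) {
    const double spes         = 8;   /* SPEs per Cell (assumed) */
    const double simd_width   = 4;   /* 32-bit lanes per 128-bit register */
    const double flops_per_op = 2;   /* multiply + add counted separately */
    const double clock_ghz    = 4.0; /* the rumored "nominal" clock */

    printf("peak = %.0f GFLOPS\n",
           spes * simd_width * flops_per_op * clock_ghz);
    return 0;
}
[/code]

A peak like that assumes every SPE issues a fused multiply-add every single cycle, which no real workload does.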
 
First, the GFLOP number is of course a theoretical peak, but it is derived from the available instructions.
GFLOP is just a generic measure -- what *is* a floating point operation? Well, I can make a 1 TFLOP CPU easily...it can't do much, but it'll be rated as such.

Second, the SPEs aren't general purpose processors - of course the instruction set will be limited compared to a full blown Intel processor. No one ever expected differently (?)
That's not what I meant really. I just meant to say that full-blown Intel chips, using more transistors, can execute and manage more complicated logic. This is true of even the PE, not just the SPEs.
 
[image: cellbefore8wt.jpg - Cell wafer photo]


I'd be smiling too if I made it into history as one of the first people to hold a buttload of Cells :)

Cell +1 for Sony
 
fugimax said:
Ultimately I think Cell's performance will be weaker than most expect right now. Mostly from developers not being able to program for it efficiently. The tools for Xbox2 are much more advanced than anything Sony has ever done.

How do you figure? I would assume that Sony and nVidia are working closely on the whole toolchain and nVidia's developer relations are THE best - without question of any kind. Next, most developers are moving over to a variety of 3rd party rendering/game engines and as such it is more likely that optimal output will be possible through collaboration between nvidia/sony and 3rd party engine developers such as Renderware, NDL, etc. While there will of course be developers who roll their own solutions - the industry as a whole is outsourcing the development of technology to more dedicated engine developers.
 
fugimax said:
GFLOP is just a generic measure -- what *is* a floating point operation? Well, I can make a 1 TFLOP CPU easily...it can't do much, but it'll be rated as such.

I'm pretty sure there's a standard way of counting these things..


fugimax said:
That's not what I meant really. I just meant to say that full-blown Intel chips, using more transistors, can execute and manage more complicated logic. This is true of even the PE, not just the SPEs.

Well a PE has a Power core on there. I don't know how it compares to other Power cores (I don't think anyone knows for sure yet). But some may take issue with the suggestion that Power chips don't do as much stuff as, or aren't as complicated as Intel processors ;)
 
How do you figure? I would assume that Sony and nVidia are working closely on the whole toolchain and nVidia's developer relations are THE best - without question of any kind. Next, most developers are moving over to a variety of 3rd party rendering/game engines and as such it is more likely that optimal output will be possible through collaboration between nvidia/sony and 3rd party engine developers such as Renderware, NDL, etc. While there will of course be developers who roll their own solutions - the industry as a whole is outsourcing the development of technology to more dedicated engine developers.
First, you're assuming. Never do that...especially not with Sony. :)

Second, remember that Microsoft is a software company. From what I've seen/heard, Xbox2's development tools are making developers' jaws drop. The challenge for next-generation tools is making extensive thread-safe libraries that parallelize well. I agree with the idea that developers no longer want to build everything from the ground up. Sony/nVidia's answer seems to be "let 3rd party engine developers do it for them," while Microsoft is saying "here, use this..." Microsoft isn't providing engines or anything, but they are making it very easy to build one.
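
By "thread-safe" I mean primitives like the one sketched below: a minimal mutex-guarded counter in plain POSIX threads, nothing console-specific, just the shape of the problem those libraries have to solve at scale:

[code]
/* Minimal thread-safe counter: many workers update shared state
 * behind a mutex, so the result is deterministic. Plain POSIX threads. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long total = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize the shared update */
        total++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("total = %ld\n", total);   /* 400000 every run, race-free */
    return 0;
}
[/code]

Making a whole engine's worth of libraries behave like that, without the locks eating the parallel speedup, is the hard part.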
 
fugimax said:
First, you're assuming. Never do that...especially not with Sony. :)

Second, remember that Microsoft is a software company. From what I've seen/heard, Xbox2's development tools are making developers' jaws drop. The challenge for next-generation tools is making extensive thread-safe libraries that parallelize well. I agree with the idea that developers no longer want to build everything from the ground up. Sony/nVidia's answer seems to be "let 3rd party engine developers do it for them," while Microsoft is saying "here, use this..." Microsoft isn't providing engines or anything, but they are making it very easy to build one.

Aren't you forgetting that IBM is in the mix? Sony is less experienced, but IBM and NVIDIA can help.

IBM is more experienced in massive parallel systems than Microsoft.
 
I'm pretty sure there's a standard way of counting these things..
For getting on the TOP500 list there is, yes...they use a benchmarking program (Linpack, I think).

I'm pretty sure that's not what Sony is using...but like I said, until they actually run something on it and give us real numbers, 256 GFLOPS doesn't mean much of anything except it *might* be really powerful.

Well a PE has a Power core on there. I don't know how it compares to other Power cores (I don't think anyone knows for sure yet). But some may take issue with the suggestion that Power chips don't do as much stuff as, or aren't as complicated as Intel processors ;)
A POWER4 or POWER5 derivative, I assume...probably actually some hybrid form. The POWER series of chips are great -- my dual 2.5GHz PowerMac screams -- but there are some design flaws with PPC in general to which I'm referring. Mainly prediction/collision issues.
 
Aren't you forgetting that IBM is in the mix? Sony is less experienced, but IBM and NVIDIA can help.

IBM is more experienced in massive parallel systems than Microsoft.
You do realize IBM is helping Microsoft too....yes?
 
First, you're assuming. Never do that...especially not with Sony.
He's not really assuming much. That Renderware-like tools will be used is a given. EA and some other bigger companies have already secured such deals, and I even remember some EA people saying that they don't plan to use Microsoft's XNA, but rather their own tools.
 
Do you mean

1. IBM helps Sony -> Nothing special

2. IBM helps Microsoft -> Everything great
IBM + Microsoft seems like a better team than IBM + Sony/nVidia, yes. IBM's technical knowledge of the POWER core with Microsoft's extensive ability to create developer-friendly tools seems good to me. And like I said, from what I've seen so far, things look good.

On top of this, it's ultimately IBM/Sony's responsibility to put out development tools for Cell. I doubt nVidia will be contributing much in terms of CPU tools, which is going to be the real issue this generation.
 
He's not really assuming much. That Renderware-like tools will be used is a given. EA and some other bigger companies have already secured such deals, and I even remember some EA people saying that they don't plan to use Microsoft's XNA, but rather their own tools.
I was referring to his comment about the toolchain, not that third-party companies will create engines -- of course they will. The only problem there is that developers usually give up control/power to utilize such engines, and games can start to look/feel the same.
 
Dylx said:
[image: cellbefore8wt.jpg - Cell wafer photo]


I'd be smiling too if I made it into history as one of the first people to hold a buttload of Cells :)

Cell +1 for Sony

looks like a standard wafer. did they produce that in 90nm technology???
 
fugimax said:
IBM + Microsoft seems like a better team than IBM + Sony/nVidia, yes. IBM's technical knowledge of the POWER core with Microsoft's extensive ability to create developer-friendly tools seems good to me. And like I said, from what I've seen so far, things look good.

On top of this, it's ultimately IBM/Sony's responsibility to put out development tools for Cell. I doubt nVidia will be contributing much in terms of CPU tools, which is going to be the real issue this generation.

Sony might have been arrogant during the PS2 launch period (raw Japanese-only tools translated to English), but I suppose this round the tool set will come mainly from IBM (and then be translated to Japanese) and NVIDIA (for the 3D tools and the GPU part).

MS has good tools; they can be better, but that doesn't mean the tools from IBM/Sony/NVIDIA will be shit, as your words imply.

Anyway, there is nothing worth elaborating further, as there is no comment from users of the PS3 SDK (since there is no PS3 SDK yet). Everything is an educated guess at best.
 
I was referring to his comment about the toolchain, not that third-party companies will create engines -- of course they will. The only problem there is that developers usually give up control/power to utilize such engines, and games can start to look/feel the same.
Some developers will utilize 3rd party engines, some will make their own, just as they do today. Tools have never proven to be much of a problem, and even the most ridiculous hardware designs (like Saturn's) have been utilized to their fullest. At the end of the day, more powerful hardware will give you the best results, and tools can only improve over time to make it easier.

As for Sony specifically, they have obviously learned their lesson, as evidenced by the PSP devkits and documentation.
 
Sony might have been arrogant during the PS2 launch period (Japanese-only tools translated to English), but I suppose this round the tool set will come mainly from IBM (and then be translated to Japanese) and NVIDIA (for the 3D tools and the GPU part).
I doubt IBM will do much aside from providing a good compiler. Developing libraries is going to be all Sony/nVidia. I respect nVidia, but definitely not Sony when it comes to developer support -- and I doubt they are the ones in charge.
 
I respect nVidia, but definitely not Sony when it comes to developer support
Perhaps if you read some of the developer interviews from the past few years, your opinion could change. SCEA, much like Microsoft, has a dedicated team of people helping developers get the best out of the hardware, and I've seen praise for those teams on many occasions reading Game Developer magazine.
 
fugimax said:
I doubt IBM will do much aside from providing a good compiler. Developing libraries is going to be all Sony/nVidia. I respect nVidia, but definitely not Sony when it comes to developer support -- and I doubt they are the ones in charge.

Well, that's your doubt. If Sony is as arrogant as before, no one can help them. Certainly I hope not, as the competition is not like in the old days.
 
doncale said:
PS3 won't be doing Toy Story graphics, not in its wildest dreams. try thinking more along the lines of PS1/PSone prerendered CG FMV scenes. at best.
Well, I think it will be (Xbox 360 probably too). Toy Story was the first CG movie; its "graphics" aren't nearly as advanced as recent CG movies such as Finding Nemo...
 
Well, that's your doubt.
Look at their past relations with companies. As for the POWER cores in particular, look at their relationship with Apple. They provide the hardware and a compiler. They help by providing info for making performance-analysis tools, but that's about it. I don't expect their relationship with Sony to extend past this.
 
Jeez, judging by some people you'd think 4.6GHz was locked in stone as the PS3's speed... instead of the chip's max production number.

http://ps2.gamespy.com/articles/585/585956p2.html

Interesting article on Gamespy:

"As an ex PC Magazine lab rat, it was a really exciting presentation to watch. For most gamers, though, there were three things that stuck out.

The first point concerns piracy. CELL is a security-enabled architecture, meaning it has security features on chip. Piracy has typically been combated by software solutions. Tom Halfhill of Microprocessor Report says, "A lot of (piracy) techniques rely on one application being able to access the same memory region as another application. With CELL, you can't do that because memory regions are locked down by the application." Publishers are sure to love this feature since it's another way to protect their software, though there might be concern on the development end with how much processing power this takes. For gamers that indulge in pirated goods, it looks like the PlayStation 3 will be the toughest console to crack to date."


Good news if this speculation is correct.
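
For the curious, you can get a feel for "locked down" memory regions with plain POSIX calls; this is only a software analogue of what the article describes (Cell's version is enforced in hardware), using standard mmap/mprotect:

[code]
/* Software analogue of a locked-down memory region: after
 * mprotect(PROT_READ), any write to the page raises SIGSEGV. */
#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long pagesz = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    p[0] = 42;                        /* fine: page is writable */
    mprotect(p, pagesz, PROT_READ);   /* now lock it down */
    printf("p[0] = %d\n", p[0]);      /* reads still work */
    /* p[0] = 43;  <- would now die with SIGSEGV */

    munmap(p, pagesz);
    return 0;
}
[/code]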
 
Actually, top speed is 5GHz or something above that. 4GHz is some kind of 'nominal' speed from what I remember reading, and 2GHz the slowest option.
 
Enigma said:
Jeez, judging by some people you'd think 4.6GHz was locked in stone as the PS3's speed... instead of the chip's max production number.

http://ps2.gamespy.com/articles/585/585956p2.html

Interesting article on Gamespy:

"As an ex PC Magazine lab rat, it was a really exciting presentation to watch. For most gamers, though, there were three things that stuck out.

The first point concerns piracy. CELL is a security-enabled architecture, meaning it has security features on chip. Piracy has typically been combated by software solutions. Tom Halfhill of Microprocessor Report says, "A lot of (piracy) techniques rely on one application being able to access the same memory region as another application. With CELL, you can't do that because memory regions are locked down by the application." Publishers are sure to love this feature since it's another way to protect their software, though there might be concern on the development end with how much processing power this takes. For gamers that indulge in pirated goods, it looks like the PlayStation 3 will be the toughest console to crack to date."


Good news if this speculation is correct.

I believe the security is to prevent other concurrent applications from modifying data in another application's workspace. This doesn't really apply to a single application protecting itself from itself, does it?
 
Enigma said:
Jeez, judging by some people you'd think 4.6GHz was locked in stone as the PS3's speed... instead of the chip's max production number.

Actually, no, they've hit 5.2GHz in terms of clockspeed. That's their max. 4GHz is their "nominal" clockspeed and should be very achievable in PS3 one year from now. The FLOPS rating going around is based on the 4GHz number too.
 
sonycowboy said:
I believe the security is to prevent other concurrent applications from modifying data in another application's workspace. This doesn't really apply to a single application protecting itself from itself, does it?

well, yeah, doing what you're asking (a single application protecting itself from itself) would be somewhat difficult, considering that most applications have to access and modify their own data stored in memory at some point. :D
 
Hmz... PS3 easier to program for than Xbox2/360 and Revolution? To most devs the answer would be yes, according to this GamesIndustry.biz piece... Don't know if it's been posted here though...

However, game developers contacted by GamesIndustry.biz downplayed speculation that the PS3 would be incredibly difficult to program as a result of the new architecture, saying that the main difficulty would be the move to a multi-core system - a design shared by the Xbox 2 and almost certainly by the Nintendo Revolution.

The game development model which is used for PlayStation 2, where a few programmers work directly with the low level code to create libraries for specific functions and other developers simply use those libraries, masking the complexity of the underlying system, is likely to work just as well on PlayStation 3, while the prevalence of middleware such as Criterion's RenderWare or the Havok physics engine will also make the transition less painful.

Another factor fingered by developers is the fact that Sony's PlayStation Portable libraries and documentation have been widely praised by those working on the system, indicating that Sony has learned an important lesson from the PS2 launch - where much of the development difficulty lay not with the system itself, but with poorly translated (or un-translated) documentation and difficult to use libraries.

Whole article can be found here: GamesIndustry.biz Cell article
 
Nerevar said:
well, yeah, doing what you're asking (a single application protecting itself from itself) would be somewhat difficult, considering that most applications have to access and modify their own data stored in memory at some point. :D

Exactly. I was just referring to the fact that on consoles there is usually a single application space, as opposed to a PC, which can have any number of concurrent applications.
 
fugimax said:
I was referring to his comment about the toolchain, not that third-party companies will create engines -- of course they will. The only problem there is that developers usually give up control/power to utilize such engines, and games can start to look/feel the same.

Don't think so. Renderware and NDL (now unfortunately named Gamebryo) have been used in countless games, and those games don't look or feel the same. If you're talking about a specialized engine such as Unreal or Doom, which are engineered specifically for first-person shooters, sure. But most 3rd party engines are nothing more than a facility to hide the complexities of the hardware on which they sit. As such, you the developer, as the writer of all the shaders (which make everything look different), the art packages (which make everything look different), the physics packages (which still make things look different, sometimes embarrassingly so), and the audio packages, have the opportunity to exploit the hardware in significant ways without having to be exposed to the scheduling algorithms, microcache strategies, or interrupts that are happening behind the scenes. During the PS1 days a radical hardware strategy was a sure way to ensure poor development for your platform; these days it's much less so. More and more of a game is abstracted in objects, art assets, and tools, and a whole lot less of it is bit-fiddling code.

While of course you are going to give up some flexibility and some control (if you really want to store textures in audio memory, you're going to have to break your engine's conventions), game development is becoming more of an assembly line of ideas compiled in tools and delivered to consumers. There isn't as much advantage to writing your own technology these days, as it is expensive, unproven, and error-prone. The business side of game development is against 'to the metal' coding unless it is absolutely necessary, because schedule slips and testing cycles require that more time be spent making the game, or else you could significantly eat into or deplete your profits.
 
segasonic said:
Well, I think it will be (Xbox 360 probably too). Toy Story was the first CG movie; its "graphics" aren't nearly as advanced as recent CG movies such as Finding Nemo...

It's all in the lighting/shading. While you will be able to fake much of it on today's hardware, the quality and accuracy that comes in movies is still a bit beyond what today's consoles are capable of. They can produce an 'it's good enough' equivalent (similar to how Sony real-time rendered the Final Fantasy movie with lower detail, worse lighting, etc.), but they will still have a hard time producing a 'subpixel accurate' equivalent.
 
Vashu said:
Hmz... PS3 easier to program for than Xbox2/360 and Revolution? To most devs the answer would be yes, according to this GamesIndustry.biz piece... Don't know if it's been posted here though...

Whole article can be found here: GamesIndustry.biz Cell article
Wow, that was... stupid.
The game development model which is used for PlayStation 2, where a few programmers work directly with the low level code to create libraries for specific functions and other developers simply use those libraries, masking the complexity of the underlying system, is likely to work just as well on PlayStation 3, while the prevalence of middleware such as Criterion's RenderWare or the Havok physics engine will also make the transition less painful.
So they are saying that PS2 (and so PS3 too) is easy to program for because you can use middleware and/or have some low-level programmers do the dirty work for you?
So PS3 will be easy to code for as long as you have someone else doing the dirty work with all the PEs. Woooo!
 