UK studios already working with PlayStation 3 dev kits

http://www.gamesindustry.biz/content_page.php?aid=7525

Rob Fahey 15:46 21/03/2005

Britsoft developers get their first glimpse at Sony's next-gen plans in the flesh

Several studios in the UK are already working with development hardware for the next-generation PlayStation console, GamesIndustry.biz has learned, as Sony continues quietly rolling out dev kits to select partners ahead of the system's E3 unveiling.

A number of Japanese companies have had development hardware for the console for several months, and some are believed to be working on software demos for E3, but today brought the first confirmation that kits have shipped in the UK as well.

So far, however, the roll-out of systems appears to be to a small group of Sony's close partners, and we're only aware of two studios in the country which have hardware - although some others may simply be remaining more tight-lipped about the kits.

Details of what exactly is present in the next-generation PlayStation dev kits are sparse, but according to one development source, "they're more advanced than the PowerMac kits [Microsoft] has given us [for Xenon] - they're still prototypes, but they're closer to what'll be in the final console... The graphics chip isn't there, say, but we can get a pretty good idea by taking an NVIDIA 6800 and saying, okay, it'll be like this but faster."

However, he did note that developers are expecting Microsoft to update the prototype PowerMac-based Xenon kits with more advanced hardware "pretty much any day now" - an important step for the company, since it's still planning to launch the next-generation Xbox before the end of the year, and industry rumours suggest that it may even have recently pulled the launch schedule forward by several weeks.

Sony plans to show the next-generation PlayStation off in public for the first time at its pre-E3 conference in Los Angeles in May, where it will almost certainly debut within a few hours of the public unveilings of Nintendo's Revolution and Microsoft's next-gen Xbox.

However, the system - which is based on a new chip called Cell, which was co-developed by IBM and Sony, and an NVIDIA graphics board - is not expected to start shipping to consumers until the second quarter of 2006 at the earliest.
 
"A number of Japanese companies have had development hardware for the console for several months, and some are believed to be working on software demos for E3, but today brought the first confirmation that kits have shipped in the UK as well."

Reminds me of those comments from Tecmo's president bashing the PS3 because Sony wouldn't give them any info on it, and then a few days later a report broke that Sony had been sending out kits to what they considered to be top Japanese developers. :P
 
Sony's always been very cosy with a lot of the UK devs.

This could, in all probability, be the likes of the fellas at Codemasters, SCi, Eidos' devs, Rockstar North, Core Design, Big Blue, Lionhead, Evolution Studios etc. - the list goes on.

That they have dev kits further along than Microsoft's is either very worrying or favoritism/politics at work on the part of the dev.
Or maybe PS3 is such a bitch that they'll need all the time they can get to figure it out :)
 
Sony could actually have a near-final CPU in their devkit now that they have made first Cell prototypes. MS I think was last distributing devkits with dual G5 CPUs which is not really the same thing that will end up in the final hardware.

Maybe that's why they say PS3 devkit is more advanced, as it's technically more finalized hardware.

They both are lacking final GPUs though. I also suspect that those GPUs won't just be more of the same only faster, but we'll see. For example, MS had in their GDC presentation a slide that described realtime displacement mapping, using mipmap levels, etc. It was a presentation for new Windows 3D development, but still.
 
Marconelly said:
Sony could actually have a near-final CPU in their devkit now that they have made first Cell prototypes. MS I think was last distributing devkits with dual G5 CPUs which is not really the same thing that will end up in the final hardware.

Maybe that's why they say PS3 devkit is more advanced, as it's technically more finalized hardware.

They both are lacking final GPUs though. I also suspect that those GPUs won't just be more of the same only faster, but we'll see. For example, MS had in their GDC presentation a slide that described realtime displacement mapping, using mipmap levels, etc. It was a presentation for new Windows 3D development, but still.

or maybe they are just handing out incomplete dev kits so that the devs don't leak or give the final specs to the competition.
 
radioheadrule83 said:
Sony's always been very cosy with a lot of the UK devs.

This could, in all probability, be the likes of the fellas at Codemasters, SCi, Eidos' devs, Rockstar North, Core Design, Big Blue, Lionhead, Evolution Studios etc. - the list goes on.

That they have dev kits further along than Microsoft's is either very worrying or favoritism/politics at work on the part of the dev.
Or maybe PS3 is such a bitch that they'll need all the time they can get to figure it out :)

Look at it this way: Xenon dev-kits still use the Dual G5 for now (the new chip should arrive quite soon though), while CELL development kits (let's call them that) have been shipping with a CELL chip for a while.

Conversely, I expect Microsoft to be farther ahead on the software side of the SDK issue: XNA, DirectX, Visual Studio .NET 2005, etc...
 
I think this post does a good job of proving my point that no one has final dev kits for the PS3 (and most likely the Xenon as well).

Doom_Bringer said:
or maybe they are just handing out incomplete dev kits so that the devs don't leak or give the final specs to the competition.

I sincerely doubt that would be a reason why final dev kits haven't been released.
 
No one will have final devkits of PS3 before the GPU is finalized at the end of this year, and conversely the Xenon devkit can only be final as of this summer, when its GPU is finalized.
 
Developers are hoping for the final Xenon GPU late summer, so you can take it for granted no one has final development kits. In fact they are still G5s, plus whatever MS tells them is going on spec-wise - as in the 256MB/512MB memory changes that are rumoured to be under consideration. In terms of stuff for E3 it's going to be a 6800 vs an ATI X800. Basically, in terms of graphical quality there's nothing in it; as for "Art" I will let the funny people argue the toss.
 
Pug said:
Developers are hoping for the final Xenon GPU late summer, so you can take it for granted no one has final development kits. In fact they are still G5s, plus whatever MS tells them is going on spec-wise - as in the 256MB/512MB memory changes that are rumoured to be under consideration. In terms of stuff for E3 it's going to be a 6800 vs an ATI X800. Basically, in terms of graphical quality there's nothing in it; as for "Art" I will let the funny people argue the toss.

I'll bet it'll be Nvidia's NV40 refresh (NV48, G70 or whatever) vs ATI R520 Fudo, which should be more than ready to go and closer to the Xenon's final R500 than X800 since R520 has SM3.0.

GeForce 6800 and Radeon X800 are already old technology, mostly completed in 2003, whereas the E3 demos for PS3 and Xenon should be running on late 2004/early 2005 technology (NV48 or G70, and R520) before the final PS3 GPU and final Xenon GPU are ready.
 
Pug said:
Developers are hoping for the final Xenon GPU late summer, so you can take it for granted no one has final development kits. In fact they are still G5s, plus whatever MS tells them is going on spec-wise - as in the 256MB/512MB memory changes that are rumoured to be under consideration. In terms of stuff for E3 it's going to be a 6800 vs an ATI X800. Basically, in terms of graphical quality there's nothing in it; as for "Art" I will let the funny people argue the toss.

Does this article differ much from what's in Edge this month? I wonder if Gamesindustry.biz is just basing this on that and adding in a little bit of their own info (be it made up or actually true ;))?
 
Gofreak, there's very little on the PS3 in EDGE this month - just a piece on a middleware company Sony employed with PS2 who are working on Cell compilers for Sony (and finding it tough). There is an article on XB2 with the specs as leaked - 3 cores, etc. - although it did state 256MB of RAM. xexex, you may be correct on the GPUs, but May is only 2 months or so away and I'm not sure if the R520 is available, and as for the Nvidia part I was just taking the quote from the developer, who seems to be using a 6800.
 
They could technically go with the SLI X800 and 6800 if they wanted, that would bring the performance much closer where it needs to be for the final hardware. Of course, I have no idea if either devkit is doing this.
 
I wonder if they would have mentioned if the distribution of graphics work was different than expected...(i.e. if the CPU handles vertex processing, the GPU only pixel processing) ;) I'm still hoping for that, but one wonders if they're extrapolating functionality based on something like the 6800. In a way, it kind of sounds like there isn't actually a full GPU in the kit at all yet..(i.e. no 6800 - that sounds more like the dev's guess than what's actually in the kit). Perhaps these kits are only exposing the CPU side for now, with a basic graphics chip to output stuff to the screen, if that's possible (?)
 
Gofreak, I think we need Faf in here - I'm sure in another post he said something about knowing what GPU was in the development kits. Maybe I got that wrong, but I'm sure I read it somewhere. At a guess it would make sense to stick a GPU in the kit. As for the vertex processing handled on the APUs, who knows - but does it really make that much sense, given the power of the next-gen GPUs?
 
On the off-chance that this isn't complete BS (which it probably is, given the source), these are likely not full development "kits" with a functional set of development tools and a complete runtime environment. It's probably an early Cell sample (probably not even the final configuration), some Collada stuff and some system architecture info.
 
Nostromo said:
Devkits have a 6800. Don't expect the final GPU to be much more advanced than that features wise.

You sound very assured :) I guess we should take a hint ;) It's believable that functionally, the GPU won't be that much ahead of a 6800 - the 6800s are SM3.0 compliant already, after all..



As for the vertex processing handled on the APUs, who knows - but does it really make that much sense, given the power of the next-gen GPUs?

I think so. Cell has a lot of power there that'd be ideally used for vertex processing, and you could do things more flexibly there than on a shader, I think. They wouldn't be as fast as dedicated hardware, but they could still pump out plenty of polys - enough, imo, as to not make a difference. Beyond a certain point, imo, you really want more pixel shading than vertex shading performance - just dedicating a couple of SPEs to vertex processing would probably result in enough performance to have multiple polygons per pixel (!) Yes, a GPU's vertex shading hardware could probably do more, but when you're at that level..Consider also, that you could dedicate all your GPU logic to pixel shading - it'd be a monster. It'd be like if Xenon could dedicate all its unified shaders to pixel shading (except probably better due to greater efficiencies in dedicated shaders), and was still able to pump through vast numbers of polys.
 
Nostromo said:
Devkits have a 6800. Don't expect the final GPU to be much more advanced than that features wise.

what part of Nvidia's next-gen GPU being the basis for PS3's GPU don't you understand ?

 
sonycowboy said:
:lol :lol

OK.

In terms of shading functionality, why would they be vastly ahead of the 6800? The 6800 is already SM3.0. To get to 3.0+ isn't a big leap, functionally. Of course, the performance difference should be large, due to better architecture, pure clockspeed improvement, more shader pipelines etc.
 
xexex said:
what part of Nvidia's next-gen GPU being the basis for PS3's GPU don't you understand ?
I was not being sarcastic, I'm serious :) If you are expecting some new uber feature, some new shader model, etc..you're going after a big let down..
I'm not saying it will not be very fast and powerful, just don't expect SM4.0 or stuff like that.
 
Nostromo said:
I was not being sarcastic, I'm serious :) If you are expecting some new uber feature, some new shader model, etc..you're going after a big let down..
I'm not saying it will not be very fast and powerful, just don't expect SM4.0 or stuff like that.
Please explain to me what is SM4.0
 
ThirdEye said:
Please explain to me what is SM4.0

The spec's not even locked down, AFAIK.

Nostro is right - people were expecting more than SM3.0+?

If they go with vertex shading on the CPU, that'd be well beyond SM3.0+ btw..maybe more like what SM4.0 will be ;)
 
Nostromo said:
I was not being sarcastic, I'm serious :) If you are expecting some new uber feature, some new shader model, etc..you're going after a big let down..
I'm not saying it will not be very fast and powerful, just don't expect SM4.0 or stuff like that.

Who really cares at this point? The PS2 was lacking a lot of features that have become standard in the PC market, yet it continues to impress. The largest flaws will be non-issues with the PS3 (image quality and texture resolution). I mean, top end PC stuff technically stomps on the best PS2 has to offer...but I still find myself impressed by many PS2 titles and those same titles often display a much higher level of polish.
 
Well, R500 in XBox 2 is supposed to have a shader model improved over the 3.0. Maybe not full 4.0 but something in between. Isn't it supposed to have unified shaders too? Nvidia has also said their next gen GPU will have 3.0 and "everything else they developed after it".

(Usable) Displacement mapping alone would be a huge new function.
 
ThirdEye said:
Please explain to me what is SM4.0

SM4.0 is Shader Model 4.0, which will have Vertex Shader 4.0 and Pixel Shader 4.0.

Currently, we're at Shader Model 3.0 (or 3.0+) with Nvidia (in the GeForce 6800) and Shader Model 2.0+ with ATI (in the Radeon X800), who are soon moving to Shader Model 3.0+ with R520/Fudo, which will probably be called Radeon X900 or something like that.

Xbox2's GPU and PS3's GPU will both be SM3.0++, or like SM3.5 - probably almost-but-not-quite SM4.0.
 
Nostromo said:
Mainly it's about a unified shader model for vertex and pixel shaders.

Which PS3 doesn't need considering it will be much better on this system to have vertex shading on Cell and a lot of specialized and powerful pixel shader units on the GPU.
 
Marconelly said:
Well, R500 in XBox 2 is supposed to have a shader model improved over the 3.0. Maybe not full 4.0 but something in between. Nvidia has also said their next gen GPU will have 3.0 and "everything else they developed after it".

(Usable) Displacement mapping alone would be a huge new function.

This is all possible..encapsulated in that "+" ;)

Displacement mapping isn't allowed by SM3.0?

Functionally, Xenon's GPU and PS3's GPU will be very similar..the difference may be in performance, and how that functionality is implemented. Unified shading is not a feature in and of itself - it's just a different route to roughly the same destination (which is why I'd be surprised if SM4.0 is "simply" a move to unified shading, btw). There are sound reasons for sticking to dedicated, separate shaders - for now - or so NVidia would argue.
 
Rhindle said:
On the off-chance that this isn't complete BS (which it probably is, given the source), these are likely not full development "kits" with a functional set of development tools and a complete runtime environment.
Given that the article didn't say they were full dev kits, what's your point other than to reinforce the fact that you're not a big fan of GI.biz? ;)
 
Marconelly said:
Well, R500 in XBox 2 is supposed to have a shader model improved over the 3.0. Maybe not full 4.0 but something in between. Isn't it supposed to have unified shaders too? Nvidia has also said their next gen GPU will have 3.0 and "everything else they developed after it".

(Usable) Displacement mapping alone would be a huge new function.

PS2 and GC can do displacement mapping.

http://www.ati.com/developer/gdc/GDC2003-DisplacementMappingNotes.pdf
 
gofreak said:
This is all possible..encapsulated in that "+" ;)

Displacement mapping isn't allowed by SM3.0?

Functionally, Xenon's GPU and PS3's GPU will be very similar..the difference may be in performance, and how that functionality is implemented. Unified shading is not a feature in and of itself - it's just a different route to roughly the same destination (which is why I'd be surprised if SM4.0 is "simply" a move to unified shading, btw). There are sound reasons for sticking to dedicated, separate shaders - for now - or so NVidia would argue.

http://msdn.microsoft.com/library/d...ncedtopics/VertexPipe/DisplacementMapping.asp
 
I wonder if they would have mentioned if the distribution of graphics work was different than expected...(i.e. if the CPU handles vertex processing, the GPU only pixel processing) ;) I'm still hoping for that, but one wonders if they're extrapolating functionality based on something like the 6800. In a way, it kind of sounds like there isn't actually a full GPU in the kit at all yet..(i.e. no 6800 - that sounds more like the dev's guess than what's actually in the kit). Perhaps these kits are only exposing the CPU side for now, with a basic graphics chip to output stuff to the screen, if that's possible (?)

Yeah, I noticed that 6800 quote from the developer too.....really didn't seem like there was any GPU part at all in PS3 devkits, the way he was talking....

So it seems that developers haven't received the triple-core XENON devkits, and as of now Xenon developers have been using either a Dual G5 and a Radeon 9800 (early kits) or now a Dual G5 and a Radeon X800 or X850 (a pretty powerful piece of kit if you think about it)... so *IF* those PS3 kits contain the 256Gflop CELLs which were shown @ ISSCC last month, then you have something more "final" than any of the Xenon kits, though a true comparison is a little iffy right now, not to mention quite unfair...

Gofreak, I think we need Faf in here - I'm sure in another post he said something about knowing what GPU was in the development kits. Maybe I got that wrong, but I'm sure I read it somewhere. At a guess it would make sense to stick a GPU in the kit. As for the vertex processing handled on the APUs, who knows - but does it really make that much sense, given the power of the next-gen GPUs?


But imagine if you had something with the power of a next generation GPU (R500 or G80 class) except *all* that power was used just for pixels!!!

Nobody outside of SCEI/nVidia and their NDA-ed developers know for sure but it seems to me 8 SPEs would be able to do a decent amount of AI/Physics/animation operations PLUS do a great number of vertex ops to boot.......

That is just my opinion, though.....


I think so. Cell has a lot of power there that'd be ideally used for vertex processing, and you could do things more flexibly there than on a shader, I think. They wouldn't be as fast as dedicated hardware, but they could still pump out plenty of polys - enough, imo, as to not make a difference. Beyond a certain point, imo, you really want more pixel shading than vertex shading performance - just dedicating a couple of SPEs to vertex processing would probably result in enough performance to have multiple polygons per pixel (!) Yes, a GPU's vertex shading hardware could probably do more, but when you're at that level..Consider also, that you could dedicate all your GPU logic to pixel shading - it'd be a monster. It'd be like if Xenon could dedicate all its unified shaders to pixel shading (except probably better due to greater efficiencies in dedicated shaders), and was still able to pump through vast numbers of polys.

I agree 100%


Nostro is right - people were expecting more than SM3.0+?

If they go with vertex shading on the CPU, that'd be well beyond SM3.0+ btw..maybe more like what SM4.0 will be ;)

I think we should all keep in mind we are NOT talking about PCs, we are talking about the console space and so it is very possible we will see something beyond SM3.0...

Look at the Xbox...although DX8.1 was used, there were also some Xbox-specific stuff on nV2A that extended DX8.1...

You can afford to do this on a console because it *is* a closed environment...

With R500 and PS3 GPU, you will probably see SM3.0 stuff IN ADDITION TO some stuff that you will not see on something like an R600 or G80 because of the closed environment of a console....

You can get away with a lot of crazy shit on a console architecturally you could never do on a PC, do not forget...
 
This bodes well for the PS3 though.

The system has to be at least a year off from launch in Japan and devs in the UK already have kits.

Developers should have a good 18 months to work on launch software for the US launch.
 
Kleegamefan said:
Yeah, I noticed that 6800 quote from the developer too.....really didn't seem like there was any GPU part at all in PS3 devkits, the way he was talking....

[...]

You can get away with a lot of crazy shit on a console architecturally you could never do on a PC, do not forget...


nice post Klee.... I have to agree with most if not all of it :)
 
I can't imagine the CELL even existing if it's not being used for vertex processing. The power of that thing should be able to transform way more polys than any GPU solution out there or planned.

Then you have all your pipes in the GPU dedicated to pixel shading. Prolly use the space you don't need for vertex transforms to slam some eDRAM on there for the framebuffer too.

Its gonna be good.
 
mrklaw said:
I can't imagine the CELL even existing if it's not being used for vertex processing. The power of that thing should be able to transform way more polys than any GPU solution out there or planned.

Not sure...if that 10x the performance rumour is true and applies to vertex shading on Xenon's GPU, it should be capable of ~7bn vertices/sec (theoretical peak). Although I'd say that'd come from using all its unified shading units as vertex shaders, which may not be particularly fair ;)

The guys at Beyond3d guesstimate that an 8-SPE Cell chip @ 4GHz can transform 4bn vertices/sec (theoretical peak).

I think the more important point is that either of those would be MORE than enough. If you put 2 SPEs to work on vertices (which would probably be reasonable, even with fewer SPEs in the PE..), that's a theoretical peak of 1bn vertices/sec. Let's assume real world figures hit 25% of that figure - which would be in line with PS2/Xbox proportions - so that'd be 250m polys/sec. In a 60fps game, that'd be 4.16m polys per frame. If your frame is 1280x720 (720p), then you've got on average ~4.5 vertices per pixel. When you get to that kind of level (sub-pixel polygons) it's arguable that the value of more polys diminishes - pixel shading, however, will retain its value..you won't get enough of that this gen. The bigger argument for using SPEs as vertex shaders is a) their far greater flexibility and b) the use of the entire GPU for pixel shading only.
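For anyone who wants to check it, the back-of-envelope arithmetic above can be sketched out. This is just a sanity check of the post's own assumed figures (Beyond3D's 4bn vertices/sec guesstimate for 8 SPEs, 2 SPEs on vertex duty, 25% real-world efficiency, 60fps at 720p) - none of these are official numbers:

```python
# Sanity check of the vertex-throughput estimate above.
# All inputs are the post's assumptions, not official specs.

PEAK_VERTICES_8_SPES = 4e9   # Beyond3D guesstimate, theoretical peak (vertices/sec)
SPES_FOR_VERTICES = 2        # SPEs dedicated to vertex processing
EFFICIENCY = 0.25            # assumed real-world fraction of peak (PS2/Xbox-like)
FPS = 60
PIXELS_720P = 1280 * 720     # 921,600 pixels per frame

peak = PEAK_VERTICES_8_SPES * SPES_FOR_VERTICES / 8   # 1bn vertices/sec peak
real_world = peak * EFFICIENCY                        # 250m vertices/sec
per_frame = real_world / FPS                          # ~4.17m vertices per frame
per_pixel = per_frame / PIXELS_720P                   # ~4.5 vertices per pixel

print(f"{per_frame / 1e6:.2f}m vertices/frame, {per_pixel:.1f} per pixel")
```

Which lands on roughly 4.17m vertices per frame, or about 4.5 per pixel at 720p - the sub-pixel-polygon territory described above.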

On a slightly off point, if the CPU is to be used for vertex shading, I could see a high proportion of the SPEs being used for just that, in the short term at least - vertex shading would be one of the easier uses of the SPEs, it being trivially parallel etc., so I could see developers flocking toward that usage while they become more accustomed to what they can do with the SPEs in other areas.

That said, even if Cell isn't to be used for vertex shading, it certainly won't go under-utilised, certainly in the medium-to-long term. Devs would just lose one of its easier applications.
 
Wow! What a humourously speculative story! Almost written entirely in the style of the Eidos being bought by Rupert Murdoch piece. Brilliant.
 
Not sure...if that 10x the performance rumour is true and applies to vertex shading on Xenon's GPU, it should be capable of ~7bn vertices/sec
That applies to shader power explicitly though, not polygon throughput. The amount of polygons it can actually draw is much, much lower than that.
The former is more important than pure poly numbers anyway.
 
gofreak said:
Not sure...if that 10x the performance rumour is true and applies to vertex shading on Xenon's GPU, it should be capable of ~7bn vertices/sec (theoretical peak). Although I'd say that'd come from using all its unified shading units as vertex shaders, which may not be particularly fair ;)

The guys at Beyond3d guesstimate that an 8-SPE Cell chip @ 4GHz can transform 4bn vertices/sec (theoretical peak).

There is no way that performance is true. On the one hand, you have (potentially) a chip running 8 cores, at 4GHz, almost solely focussed on vector manipulation. On the other hand you have a bit of a chip that also has to do pixel shading etc.

I just cannot see the T&L section of a GPU outperforming CELL. If that were the case, then PS3 wouldn't need a CELL - it could just use the Nvidia GPU. And why does Xenon have a triple-core CPU? That's wasted if it's just doing a bit of AI and physics (relatively low-power apps compared to transforming millions of polys).
 
Fafalada said:
That applies to shader power explicitly though, not polygon throughput. The amount of polygons it can actually draw is much, much lower than that.
The former is more important than pure poly numbers anyway.

Cheers for the clarification! :)

mrklaw said:
There is no way that performance is true. On the one hand, you have (potentially) a chip running 8 cores, at 4GHz, almost solely focussed on vector manipulation. On the other hand you have a bit of a chip that also has to do pixel shading etc.

Well, I'm imagining they'll count its peak vertex throughput when all shaders are being used for vertices (which of course, really won't happen). R500 will be even MORE flexible in terms of coming up with nice theoretical peaks ;) But yeah, as Faf has pointed out, that 7bn figure isn't accurate..so yeah, the gap apparently narrows in terms of pure throughput between what Cell might do and what a "regular" GPU with vertex shaders might do.


mrklaw said:
And why does Xenon have a triple-core CPU? That's wasted if it's just doing a bit of AI and physics (relatively low-power apps compared to transforming millions of polys).

The Xenon CPU is supposedly used for some geometry manipulation, but not the kind of workhorse processing that SPEs or its own GPU can do. I think with Xenon the CPU and GPU can share the processing to different degrees - the GPU will do most of the work, with the CPU stepping in for those perhaps less frequent instances when you want to do stuff that's not so easy, or not possible, on the GPU.
 