PS3 to transform at least 6.4 GigaPolys/sec? (not rendering speed)

doncale

Banned
6.4 GigaPoly/s!

http://www.beyond3d.com/forum/viewtopic.php?t=19815&postdays=0&postorder=asc&start=320

So can we say the shortest vertex transformation loop is just 5 cycles?
That's 6.4 GigaPoly/s! doubleLOL


6.4 billion polygons per second, and I'm guessing that means calculation/transform speed, using just one Cell processor. Of course, this says nothing about what the Nvidia GPU can rasterize and display on screen. Example: the Nintendo DS can transform 4 million polys (or verts) per second but can only rasterize and display a mere 120,000 polys. Another example: the GScube can transform 1.2 GigaPolys/sec; in the realtime Antz demo it pushed about 65 million polys with texture, lighting, and features on, and the developer, Criterion, said they pushed the GScube to over 300 million polys.

So I expect PS3 will still be transforming billions of polygons, but rendering and displaying 300 million, or maybe several hundred million more, but still under 1 billion.
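For what it's worth, the 6.4 figure only falls out if you assume a specific configuration. Here is a minimal sketch of the arithmetic, assuming the then-rumored 8 SPEs at 4 GHz and one vertex retired per 5-cycle loop iteration (none of which is confirmed anywhere in this thread):

```c
/* Back-of-the-envelope math behind the 6.4 GigaPoly/s figure.
   All three numbers below are assumptions, not confirmed specs:
   8 SPEs, a 4 GHz clock, and one vertex per 5-cycle loop. */
#include <stdio.h>

int main(void)
{
    const double spes              = 8.0;    /* assumed SPE count    */
    const double clock_hz          = 4.0e9;  /* rumored clock speed  */
    const double cycles_per_vertex = 5.0;    /* the "5 cycle loop"   */

    double verts_per_sec = spes * clock_hz / cycles_per_vertex;
    printf("Peak transform rate: %.1f GigaPolys/s\n", verts_per_sec / 1e9);
    return 0;
}
```

Which prints 6.4: a pure peak rate that ignores memory bandwidth, DMA, and everything else raised later in the thread.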
 
Powdered Toastman said:
Great Scott! 1.21 Jigawatts!

[image: future.jpg]
 
Hey doncale, go easy with Beyond3D and other technical boards. Just a suggestion: don't just jump on random numbers because they seem big.
 
I know. 6.4 GPolys means nothing. Like I said (or implied), I expect PS3 to be rendering hundreds of millions of polys in actual games, not billions.
 
That, and unless they're doing something bizarre, the heavy lifting of transforming geometry would rest with the GPU - not the CPU.
 
That's a big jump over this generation's video cards (X800 XT, 6800 Ultra) which do 600-800 million poly/sec.
 
Phoenix said:
That, and unless they're doing something bizarre, the heavy lifting of transforming geometry would rest with the GPU - not the CPU.

They might be doing something bizarre.
 
Izzy said:
They might be doing something bizarre.


Hehe, yeah. I can actually see vertex ops being put on the CPU in PS3, or some of them anyway - given the amount of power that's there, it would be a good idea, freeing up a lot of logic on the GPU side for just pixel ops. It's not that crazy an idea: you'll be able to do all vertex ops on the Xbox2 CPU if you want (the leaked specs described the GPU's ability to read directly from the cores' L2 cache), and then dedicate the GPU shaders to pixel ops (though this may not be as efficient as designing with that particular distribution in mind). Sony may or may not do this, though; it's just an idea.


Where did the term "gigapoly" come from? We never spoke about megapolys before. Billions of polys just sounds more right :P Still, those calcs are probably making assumptions (afaik we don't have the reciprocal latency, for example, and are they taking into account memory bandwidth, communication, etc.?).

I believe those estimates are using a single transformation, the smallest one, with all SPEs doing it. That's the absolute paper max you could consider for the transform rate, and it will probably be less than that once we find out exactly what the chip's instruction set looks like ;) It also wouldn't be very useful - the graphics pipeline includes multiple transformations.
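To make "one transformation, the smallest" concrete, here's a minimal scalar sketch (plain C, no SIMD, and the names are mine) of a single 4x4 matrix applied to one homogeneous vertex. On an SPE this would be done with vector multiply-adds, which is where a loop of roughly 5 cycles per vertex could come from; a real pipeline chains several such matrices (model, view, projection) plus the perspective divide, so the paper rate overstates what a game would see.

```c
/* One 4x4 matrix * vertex: the smallest transform the estimate seems to
   assume. Scalar C for clarity; an SPE would use SIMD multiply-adds. */
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float m[4][4]; } Mat4;   /* row-major */

Vec4 transform(const Mat4 *t, Vec4 v)
{
    Vec4 r;
    r.x = t->m[0][0]*v.x + t->m[0][1]*v.y + t->m[0][2]*v.z + t->m[0][3]*v.w;
    r.y = t->m[1][0]*v.x + t->m[1][1]*v.y + t->m[1][2]*v.z + t->m[1][3]*v.w;
    r.z = t->m[2][0]*v.x + t->m[2][1]*v.y + t->m[2][2]*v.z + t->m[2][3]*v.w;
    r.w = t->m[3][0]*v.x + t->m[3][1]*v.y + t->m[3][2]*v.z + t->m[3][3]*v.w;
    return r;
}
```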
 
I believe those estimates are using a single transformation, the smallest one, with all SPEs doing it. That's the absolute paper max you could consider for the transform rate, and it will probably be less than that once we find out exactly what the chip's instruction set looks like ;)
Well yeah, it's the same kind of max that's used to get 600-800 MPolys for current GPUs.
It was also a joking reference to what we've been figuring out may be the latency for the reciprocal.
 
Hitler Stole My Potato said:
256 Gigafuck polysaturated terra fubernaughts.


The next generation tech talk is pissing me off already.
It's funny when I picture Bob Newhart saying that.
 
gofreak said:
Hehe, yeah. I can actually see vertex ops being put on the CPU in PS3, or some of them anyway - given the amount of power that's there, it would be a good idea, freeing up a lot of logic on the GPU side for just pixel ops.

Considering that the shader pipeline actually needs the vertex coordinates, sure, you could burn all the CPU you want to transform the geometry - you'll STILL have to move it all back through the rendering pipeline (expensive-ass copy operations) to render stuff to the screen.
 
you'll STILL have to move it all back through the rendering pipeline (expensive-ass copy operations) to render stuff to the screen.
You're thinking of this backwards - the APUs would be 'part' of the rendering pipeline; there's no need for copying back and forth through external memory. All you need is a fast interconnect from CPU to GPU (which should be there anyway - fast two-way GPU-CPU communication is important for more things than just geometry).
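A conceptual sketch of the two data flows being argued about (all buffer and function names here are hypothetical; this is not PS3 or RSX API code): (a) transform into system RAM and then copy the results out for the GPU, versus (b) transform straight into a buffer the GPU reads over the CPU-GPU interconnect, with no extra copy.

```c
/* Conceptual illustration only -- hypothetical buffers and functions,
   not real PS3 APIs. Contrasts a "copy back" path with a direct path. */
#include <string.h>

#define N 1024
typedef struct { float x, y, z, w; } Vec4;

static Vec4 sys_ram[N];       /* ordinary system memory               */
static Vec4 gpu_visible[N];   /* stand-in for a GPU-readable buffer   */

static void transform_batch(Vec4 *dst, const Vec4 *src, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i];      /* placeholder for the real matrix math */
}

/* (a) the "expensive copy" path Phoenix describes: an extra pass over
   all the vertex data just to hand it to the GPU */
void path_copy_back(const Vec4 *src)
{
    transform_batch(sys_ram, src, N);
    memcpy(gpu_visible, sys_ram, sizeof(sys_ram));
}

/* (b) the path gofreak describes: write the transformed vertices once,
   and let the GPU read them directly over a fast interconnect */
void path_direct(const Vec4 *src)
{
    transform_batch(gpu_visible, src, N);
}
```

Whether (b) is actually practical depends on hardware details neither side of the thread knew yet.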
 