Detailed overview of PS3 development station vs. PS3 console

j^aws said:
G70 has 16 ROPs not 24 ROPs.

http://www.beyond3d.com/misc/chipco...d=106&orderby=release_date&order=Order&cname=

Its fillrate is 6.88 GPixels/sec, not 10.32 GPixels/sec. I presume the above is a typo, because it assumes the same number of ROPs as fragment pipes (24)...
That's what I thought, Jaws, but it's clear that this is straight from the horse's mouth, so to speak. I mean, I linked straight to nvidia.com. It doesn't get more official than that. I only noticed it b/c I went there to see if they had the official triangle setup rate. But perhaps...just maybe, there's been a change in the final spec? A change for the better? :lol Seriously, is there enough bandwidth to support that? 10.32 GP x 128 bit / 8 = 165 GB/s, or 82.5 GB/s for 64bit color. I assume even with compression, you can't get that fillrate within the bandwidth of the buses on either the G70 or RSX. A theoretical max that's impossible to reach? Help me out here, I just put down my first after-work beer, and my brain's running on alpha hardware at present. :lol

I can't see why nvidia.com would have less accurate info than B3D. So I can only assume there's been a change, b/c both the fillrate AND ROPs figures are off. One typo is a mistake. Two on the same official spec sheet (that should have been proofread) is more than a coincidence IMO. PEACE.
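
As a rough sanity check of that arithmetic, here's a minimal Python sketch of the raw colour-write bandwidth a 10.32 GPixels/sec fillrate would imply, ignoring Z/stencil traffic, blending reads and any colour compression. The bus figure is an assumption based on the retail 7800 GTX's 256-bit GDDR3 at 1.2 GHz effective, not a confirmed RSX number:

```python
# Naive colour-write bandwidth implied by a peak fillrate, vs. an assumed
# G70-class memory bus (7800 GTX board: 256-bit GDDR3 @ 1.2 GHz effective).
GPIXELS_PER_SEC = 10.32e9                 # the disputed 24-ROP figure
BUS_BYTES_PER_SEC = (256 // 8) * 1.2e9    # ~38.4 GB/s, assumed board config

for bits, bytes_per_pixel in ((64, 8), (128, 16)):  # FP16 / FP32 RGBA colour only
    needed = GPIXELS_PER_SEC * bytes_per_pixel
    print(f"{bits}-bit colour: {needed / 1e9:.1f} GB/s needed, "
          f"~{BUS_BYTES_PER_SEC / 1e9:.1f} GB/s available")
# 64-bit colour: 82.6 GB/s needed, ~38.4 GB/s available
# 128-bit colour: 165.1 GB/s needed, ~38.4 GB/s available
```

Even the 64-bit figure is a pure theoretical peak that an uncompressed framebuffer could never sustain over such a bus, which seems to be Pimpwerx's point.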
 
gofreak said:
It doesn't say if that's the setup rate or the transform rate...
It's the setup rate, based on the figures from Anandtech a month or so ago. B3D lists it as the geometry rate, but I don't know if that means setup or just VS power. But even Xenos can transform billions of verts, can't it? I would assume this is setup just b/c it would seem like a low number for 8 VS @ 430 MHz. PEACE.
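
For context on why the "setup" and "transform" figures can differ so much, here's a back-of-envelope sketch. The four-clocks-per-vertex figure assumes a bare 4x4 matrix transform (one vec4 MAD per matrix row) on each of G70's vertex units; it's purely an illustration, not anything from the spec sheet:

```python
# Back-of-envelope peak vertex transform rate for 8 vertex units at 430 MHz,
# assuming ~4 clocks per vertex for a bare 4x4 matrix transform.
VS_UNITS = 8
CLOCK_HZ = 430e6
CLOCKS_PER_VERTEX = 4   # assumed: one vec4 MAD per matrix row

transform_rate = VS_UNITS * CLOCK_HZ / CLOCKS_PER_VERTEX
print(f"~{transform_rate / 1e6:.0f} Mverts/s")   # ~860 Mverts/s
```

Triangle setup is a separate fixed-function stage and is usually quoted well below raw transform throughput, so the two figures can sit far apart even on the same chip.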
 
Pimpwerx said:
That's what I thought, Jaws, but it's clear that this is straight from the horse's mouth, so to speak. I mean, I linked straight to nvidia.com. It doesn't get more official than that. I only noticed it b/c I went there to see if they had the official triangle setup rate. But perhaps...just maybe, there's been a change in the final spec? A change for the better? :lol

Well, I haven't seen anyone else report 24 ROPs so I doubt it's true... ;)

Pimpwerx said:
Seriously, is there enough bandwidth to support that? 10.32 GP x 128 bit / 8 = 165 GB/s, or 82.5 GB/s for 64bit color. I assume even with compression, you can't get that fillrate within the bandwidth of the buses on either the G70 or RSX. A theoretical max that's impossible to reach? Help me out here, I just put down my first after-work beer, and my brain's running on alpha hardware at present. :lol

You must be a black belt at 'drunken math' coz I have no idea what you're trying to do! :)

Pimpwerx said:
I can't see why nvidia.com would have less accurate info than B3D. So I can only assume there's been a change, b/c both the fillrate AND ROPs figures are off. One typo is a mistake. Two on the same official spec sheet (that should have been proofread) is more than a coincidence IMO. PEACE.

If it's not a typo then it must be an attempt at being economical with the truth or they must've used a branch of 'drunken math' too! :)
 
I think you're asking if 64bit HDR and 128bit HDR are possible?

A 720P backbuffer with 4xSSAA, 64bit HDR,

Backbuffer size = 1280 x 720 x 12 bytes* x 4 (SSAA) ~ 42 MBytes

*12 bytes/pixel = 96 bits/pixel = 64bit (FP16, RGBA) + 24bit (Z) + 8bit (Stencil)


Backbuffer bandwidth = 42 MB x 60 FPS x (2 read|write x 5 overdraw) ~ 24.6 Gbytes/sec

I think you're confusing fillrate with bandwidth but I'm not sure? :)
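
To make the factors in that sum explicit, here's a minimal Python restatement; the 2x read/write and 5x overdraw multipliers are j^aws' own assumptions, and no colour or Z compression is counted:

```python
# j^aws' backbuffer sums: 720p, 4x SSAA, 64-bit FP16 HDR colour.
WIDTH, HEIGHT   = 1280, 720
BYTES_PER_PIXEL = 12      # 64-bit FP16 RGBA + 24-bit Z + 8-bit stencil
SSAA            = 4
FPS             = 60
READ_WRITE      = 2       # assumed: each pixel both read and written
OVERDRAW        = 5       # assumed average overdraw

size_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL * SSAA
bandwidth  = size_bytes * FPS * READ_WRITE * OVERDRAW

print(f"backbuffer ~{size_bytes / 2**20:.1f} MB")    # ~42.2 MB
print(f"bandwidth  ~{bandwidth / 2**30:.1f} GB/s")   # ~24.7 GB/s
```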
 
j^aws said:
I think you're asking if 64bit HDR and 128bit HDR are possible?

A 720P backbuffer with 4xSSAA, 64bit HDR,

Backbuffer size = 1280 x 720 x 12 bytes* x 4 (SSAA) ~ 42 MBytes

*12 bytes/pixel = 96 bits/pixel = 64bit (FP16, RGBA) + 24bit (Z) + 8bit (Stencil)


Backbuffer bandwidth = 42 MB x 60 FPS x (2 read|write x 5 overdraw) ~ 24.6 Gbytes/sec

I think you're confusing fillrate with bandwidth but I'm not sure? :)
I highly doubt that anyone would make use of 4x SSAA in a next-gen game... the performance hit is too huge, IMO


although I suppose 4x SSAA may be used in very rare cases...
 
I highly doubt that anyone would make use of 4x SSAA in a next-gen game... the performance hit is too huge, IMO
It was just as big a hit this gen - and it still got used a few times - when there's no other option, you use what there is...

Personally, what I find depressing about SSAA is that since we still don't have actual hardware support for it, all we get is ordered grid, so we piss away a lot of performance for only so-so results.
If I could at least control sample positions, the performance sacrifice would feel less painful...
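
For anyone unfamiliar with the ordered-grid complaint, here's an illustrative sketch; the sample offsets are generic textbook patterns, not anything vendor-specified. With an ordered grid the four samples share just two X (and two Y) positions, so a near-vertical edge can only produce three coverage levels, while a rotated grid gives every sample a unique X and Y, yielding five levels for the same cost:

```python
# Illustrative 4x sample offsets within a pixel (generic, not RSX's pattern).
ordered_grid = [(-0.25, -0.25), ( 0.25, -0.25),
                (-0.25,  0.25), ( 0.25,  0.25)]

rotated_grid = [(-0.375,  0.125), ( 0.125,  0.375),
                ( 0.375, -0.125), (-0.125, -0.375)]

def edge_levels(samples):
    """Coverage levels a near-vertical edge can produce: unique X positions + 1."""
    return len({x for x, _ in samples}) + 1

print(edge_levels(ordered_grid))   # 3
print(edge_levels(rotated_grid))   # 5
```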
 
Fafalada said:
It was just as big a hit this gen - and it still got used a few times - when there's no other option, you use what there is...

Personally, what I find depressing about SSAA is that since we still don't have actual hardware support for it, all we get is ordered grid, so we piss away a lot of performance for only so-so results.
If I could at least control sample positions, the performance sacrifice would feel less painful...

Well, this NV spec suggests that we get RGAA too:

NV spec said:
...
• Gamma-adjusted rotated-grid antialiasing removes jagged edges for incredible image quality...

http://nvidia.com/page/specs_gf7800.html

So it seems you get OGAA and RGAA...?


Wunderchu said:
I highly doubt that anyone would make use of 4x SSAA in a next-gen game... the performance hit is too huge, IMO


although I suppose 4x SSAA may be used in very rare cases...

As Faf mentioned, if it's usable, it'll get used. Anyway, I was just showing an estimate of B/W requirements for a given example, without any bandwidth-saving compression taken into account.

Also, for another rendering option, in the calculation replace 12 bytes/pixel for 64bit HDR (FP16) with 20 bytes/pixel for 128bit HDR (FP32) and remove the 4x SSAA factor! ;)
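
Plugging those substitutions into the same sketch (same assumed 2x read/write and 5x overdraw factors, no compression):

```python
# 720p, no SSAA, 128-bit FP32 HDR: 20 bytes/pixel
# (16 bytes FP32 RGBA + 24-bit Z + 8-bit stencil).
size_bytes = 1280 * 720 * 20
bandwidth  = size_bytes * 60 * 2 * 5

print(f"~{size_bytes / 2**20:.1f} MB, ~{bandwidth / 2**30:.1f} GB/s")  # ~17.6 MB, ~10.3 GB/s
```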
 