
nVidia:"GF 6800U=56Gigaflops

Kleegamefan

So I am reading through the OpenGL .pdfs from GDC2005

http://developer.nvidia.com/object/gdc_2005_presentations.html

and I see this slide:

Motivation: Computational Power
• GPUs are fast…
  – 3 GHz Pentium 4 theoretical: 12 GFLOPS
    • 5.96 GB/sec peak memory bandwidth
  – GeForce FX 5900 observed*: 40 GFLOPS
    • 25.6 GB/sec peak memory bandwidth
  – GeForce 6800 Ultra observed*: 53 GFLOPS
    • 35.2 GB/sec peak memory bandwidth

*Observed on a synthetic benchmark:
• A long pixel shader of nothing but MAD instructions

I thought GF6s (like ATI X800s) were a couple of hundred GFlops??

WTF is the deal?
 
That's pixel shader flops (at least I calculated that as 51.2 GFLOPS peak). Also note the word "observed", aka they actually got that out of a benchmark somewhere ;) Total theoretical flops, counting everything, including all the hard-wired logic, is where you get into hundreds of GFLOPS (or 1 TFLOP for a 6800 Ultra, as apparently NVidia claims). Funnily enough, it's with the hardwired logic where you can probably be most liberal with your counting... ;)
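
Quick sketch of where I got 51.2 from, if anyone cares. The 16 pixel pipes at 400 MHz, one vec4 MAD per pipe per clock, and counting a MAD as 2 flops are my assumptions, not anything the slide states:

```python
# Back-of-envelope peak pixel-shader MAD throughput for a GeForce 6800 Ultra.
# Assumptions (mine, not from the slide): 16 pixel pipelines, 400 MHz core
# clock, one 4-component MAD issued per pipeline per clock, MAD = 2 flops.
pipelines = 16
clock_hz = 400e6
components = 4          # vec4
flops_per_mad = 2       # multiply + add

peak_gflops = pipelines * components * flops_per_mad * clock_hz / 1e9
print(f"Peak pixel-shader MAD throughput: {peak_gflops:.1f} GFLOPS")  # 51.2
```

That comes out a bit under the 53 GFLOPS "observed" figure, so their benchmark is evidently squeezing slightly more out of each pipe than this naive count assumes.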
 
The hundreds-of-GFLOPS / TFLOP+ calculations are all theoretical PR BS spread by IHVs and are unobservable in any real gaming situation. Take all these numbers with a grain of salt. I would also advise against trusting any such misleading dedicated API benchmark figures; they are also very easy to manipulate.

I only take game benchmarks with custom demos seriously these days..
 
A lot of the floating point performance on a GPU isn't programmable.

When NVidia says a GeForce 6800 is hundreds of GFLOPS, they're not lying. They're real FLOPS, not imaginary made-up FLOPS -- they're just not under the developer's programmable control.

Without the fixed-function FLOPS in a GPU, you couldn't even do a basic filtered texture sample (just for example). A GPU just wouldn't work properly without the fixed-function FLOPS that fanboys like to ignore when doing meaningless FLOP comparisons.
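
To put a rough number on it, here's one plausible way to count the flops buried in a single bilinear texture fetch. The per-lerp flop count and the 16 texture units at 400 MHz are my assumptions, not an official figure:

```python
# Rough sketch: flops hiding inside one bilinear-filtered texture sample.
# Assumptions (mine, not NVidia's): 3 lerps per channel (two horizontal,
# one vertical), each lerp a + t*(b - a) counted as 3 flops, 4 color
# channels, and 16 texture units each finishing one sample per 400 MHz clock.
lerps_per_channel = 3
flops_per_lerp = 3        # subtract, multiply, add
channels = 4              # RGBA

flops_per_sample = lerps_per_channel * flops_per_lerp * channels
texture_units = 16
clock_hz = 400e6

fixed_function_gflops = flops_per_sample * texture_units * clock_hz / 1e9
print(f"~{flops_per_sample} flops per bilinear sample")            # ~36
print(f"~{fixed_function_gflops:.0f} GFLOPS of filtering alone")   # ~230
```

And that's just bilinear filtering -- add trilinear/aniso, perspective-correct interpolation, blending and Z, and you can see how the marketing totals climb into the hundreds of GFLOPS.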
 
Kleegamefan said:
I thought GF6s (like ATI X800s) were a couple of hundred GFlops??

WTF is the deal?

Klee, that is not counting the floating point performance of the 6 Geometry Engines (Vertex Shaders) in NV40|GeForce 6800, and maybe also not counting other things.
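
Here's a rough sketch of what the vertex shaders alone would add, counting only one vec4 MAD per unit per clock. That simplification (and the 400 MHz clock) is mine, and NV40's vertex units can reportedly co-issue a scalar op on top of it, which would push the number higher:

```python
# Rough sketch: programmable vertex-shader flops the slide leaves out.
# Assumptions: 6 vertex units, 400 MHz clock, one vec4 MAD per unit per clock.
vertex_units = 6
clock_hz = 400e6
vec4_mad_flops = 4 * 2   # 4 components, MAD = 2 flops

vertex_gflops = vertex_units * vec4_mad_flops * clock_hz / 1e9
print(f"~{vertex_gflops:.1f} GFLOPS from the vertex shaders")  # ~19.2
```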

Btw, from googling, I am seeing GFLOPS figures as low as 40, and often 45, for the GeForce 6800's pixel shader engine(s). Maybe the differences in numbers come from the different variants of the 6800? I guess.
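
That spread roughly lines up with the different variants if you run the same vec4-MAD math over each one. The pipe counts and clocks below are the stock specs as I remember them, so treat them as assumptions:

```python
# Same peak-MAD formula applied to the common GeForce 6800 variants.
# Pipe counts and core clocks are the stock specs as I recall them (assumptions).
variants = {
    "GeForce 6800":       (12, 325e6),   # (pixel pipelines, core clock in Hz)
    "GeForce 6800 GT":    (16, 350e6),
    "GeForce 6800 Ultra": (16, 400e6),
}

for name, (pipes, clock) in variants.items():
    gflops = pipes * 4 * 2 * clock / 1e9   # vec4 MAD = 8 flops per pipe per clock
    print(f"{name}: {gflops:.1f} GFLOPS")
# GeForce 6800: 31.2, GT: 44.8, Ultra: 51.2
```

The GT comes out right around 45, and the Ultra matches the ~51 figure above.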


Nvidia rates the much weaker NV2A in the Xbox at 80 GFLOPS. That is the total, and NV2A does not even have as much performance as the GeForce4 Ti 4600 (NV25), let alone NV40|GeForce 6800.

If Nvidia rates NV2A at 80 GFLOPS total, no doubt they also rate NV40|GeForce 6800 at hundreds of GFLOPS total, since it is MUCH more powerful than NV2A.

but don't look at me as an expert on graphics or flops :)
 