
RSX pics/Next gen Nvidia card benchmarks/info..

Cerebral Palsy said:
Too bad there isn't a single pc game coming out that makes me want to upgrade anytime soon. Definitely won't be upgrading this year at least.

Battlefield 2 :D
 
Blimblim said:
This description makes this sound a lot like the parallax mapping method described by Rare for PDZ.
I don't think it's the same thing, but regardless, it's not something exclusive to the 7800. I think all these new cards can do essentially similar things with shaders; the question is only how fast. On the other hand, the effective speed of execution might actually affect the complexity of shaders, I guess.
 
Blimblim said:
This description makes this sound a lot like the parallax mapping method described by Rare for PDZ.

I'm no expert on this, but IIRC, relief mapping is a higher-quality implementation aimed at the same thing. Parallax mapping is something of a hack, since ordinarily relief mapping would be too expensive to use in realtime.

I'm not sure if NVidia is specifically accelerating this or if it's something that's simply become possible as a natural consequence of more power.
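The "hack" part is easy to see in the math. A minimal sketch of the parallax-mapping texture-coordinate offset, in Python for clarity — the function name and the scale factor are illustrative, not from any real shader:

```python
# Parallax mapping in a nutshell: instead of tracing a ray into the surface
# (as relief mapping does), shift the texture lookup along the view direction
# by an amount proportional to one sampled height value. Cheap, but inexact.
def parallax_offset_uv(u, v, height, view_dir, scale=0.04):
    """Offset (u, v) along the view vector's tangent-plane component.

    view_dir is a normalized (x, y, z) vector in tangent space, z pointing
    out of the surface. One sample, one shift -- that's the whole trick.
    """
    vx, vy, vz = view_dir
    u += height * scale * vx / vz
    v += height * scale * vy / vz
    return u, v

# Grazing views (small vz) shift the lookup further, which is what sells
# the depth illusion -- and also where the approximation breaks down.
print(parallax_offset_uv(0.5, 0.5, 1.0, (0.6, 0.0, 0.8)))
```

Relief mapping replaces that single shift with an iterative search along the view ray through a depth map, which is why it was considered too expensive for realtime until recently.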
 
So this card isn't really that much use given the games out right now :p

Unless you want more fps at an insane resolution.

I'll wait for the games that use all these new killer effects before splashing out.

I still want this.
 
Deg said:
So this card isn't really that much use given the games out right now :p

Unless you want more fps at an insane resolution.

I'll wait for the games that use all these new killer effects before splashing out.

I still want this.

Exactly.

This card gives you something like 80 fps in BF2. 6800 gives something like 40-50 fps in BF2.
No reason to upgrade.

Wait for games that will achieve around 20 fps on 6800 and then upgrade.

But yeah, I want this as well.
 
Borys said:
Exactly.

This card gives you something like 80 fps in BF2. 6800 gives something like 40-50 fps in BF2.
No reason to upgrade.

Wait for games that will achieve around 20 fps on 6800 and then upgrade.

But yeah, I want this as well.


Yeah, maybe if you have a high-end CPU. I mean, will this card turn my P4 2.6 into a monster? That's the only way I see this price being even remotely feasible.
 
This card seems designed more for the new effects than for one-upping current games' performance, so I wouldn't say it's not worth it. Sounds like this card will last you quite a long time. :)

Oh and download the 720p vids. They are killer. I want to see those effects in games.
 
Doc Holliday said:
Yeah, maybe if you have a high-end CPU. I mean, will this card turn my P4 2.6 into a monster? That's the only way I see this price being even remotely feasible.

Nope, sadly, there's no magic in GTX :(

But a 2.6 GHz P4 isn't a bad CPU itself.
 
Doc Holliday said:
Yeah, maybe if you have a high-end CPU. I mean, will this card turn my P4 2.6 into a monster? That's the only way I see this price being even remotely feasible.

No, you'd likely be better off with a cheaper card. At least with today's games, this won't offer much better performance since the CPU would be holding it back. Maybe later with games that are very shader heavy and so forth, the bottleneck might migrate back to your GPU and give you some gains.

Unless of course you'd like transparent texture AA right now ;)

Speaking of which, the Anandtech review confirms that TMSAA is virtually free.
 
gofreak said:
I'm no expert on this, but IIRC, relief mapping is a higher-quality implementation aimed at the same thing. Parallax mapping is something of a hack, since ordinarily relief mapping would be too expensive to use in realtime.

Can you source this information?
 
Here's some info on how parallax and relief mapping are related.

http://www.inf.ufrgs.br/~oliveira/pubs_files/Policarpo_Oliveira_Comba_RTRM_I3D_2005.pdf

This paper opens with:

This paper presents a technique for mapping relief textures onto arbitrary polygonal models in real time. In this approach, the mapping of the relief data is done in tangent space. As a result, it can be applied to polygonal representations of curved surfaces producing correct self-occlusions, interpenetrations, shadows and per-pixel lighting effects. The approach can be used to consistently add surface details to geometric models undergoing deformations, such as in the case of animated characters commonly found in games.
And then talks about some related work, including parallax mapping:

Parallax mapping [Kaneko et al. 2001] uses textures augmented with per-texel depth. In this approach, the texture coordinates along the view direction are shifted based on the depth values using an approximate solution. While this technique can produce interesting results at very low cost, it is only appropriate for noisy irregular bumps, as the surfaces are inaccurately and dynamically deformed as the viewing position changes. No support for shadows has been demonstrated for parallax mapping.

There's also a demo here that you can use to compare parallax mapping with relief mapping (you need SM3.0 hardware, I think; I can't view the relief mapping at all, unfortunately. It's still interesting to note the difference between normal mapping and parallax mapping, and relief mapping is presumably a further improvement):

http://fabio.policarpo.nom.br/files/reliefmap3.zip

I think NVidia is pushing it to the fore as a demo for G70 because, I guess, it's an ideal candidate to showcase improvements in dynamic branching performance, which NVidia claims now runs very well indeed on G70 (relief mapping is branch-heavy, IIRC).

edit - that Mad Mod Mike demo is very impressive! I think I might have liked it more than Luna at the PS3 conference... but I guess it's very NVidia-specific (basically an ad for their PC parts ;)).
 
I know little about GPUs, only the basics on the hardware side.

But I have the sense that Xenos and RSX are equal in the shading area, while RSX is better in fillrate and geometry performance. That's the impression I have after reading some posts on the topic.
 
Beginning to wonder if that Chinese site's chart with RSX etc. was actually from NVidia. There is hardly a peep about RSX in any of the G70 previews I've read thus far. Perhaps we'll have to wait a while longer for official info :( I'm not sure why, unless they just don't want to talk about something that's still effectively in development.
 
Nightbringer said:
I know little about GPUs, only the basics on the hardware side.

But I have the sense that Xenos and RSX are equal in the shading area, while RSX is better in fillrate and geometry performance. That's the impression I have after reading some posts on the topic.
Don't know why it would have a geometry edge, given that Xenos can devote all its units to that if needed, whereas RSX can only devote 8 pipes.
 
dorio said:
Don't know why it would have a geometry edge, given that Xenos can devote all its units to that if needed, whereas RSX can only devote 8 pipes.

I think he means in terms of peak setup and so forth (500m vertices vs, likely, 1.1bn on RSX, if they can be scaled up from the G70 figures).

In terms of processing, I don't think you'll ever see a situation where Xenos is only working on vertices, in most games the load is in fact weighted toward pixel shading. No games will only need vertex shading ;) So in terms of theoretical performance with the whole chip working on vertices, yes, you're correct, but in terms of practical performance in actual games, the comparison is muddier. And that's also looking at the GPUs in isolation vs the systems as a whole (which I do think may be relevant wrt to vertex shading at least).
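For what it's worth, the "scaled up from the G70 figures" guess is simple arithmetic. A sketch, assuming setup rate scales linearly with core clock, both parts handle two vertices per clock (an assumption chosen to match the quoted figures), a 430 MHz G70 and the announced 550 MHz RSX clock:

```python
# Back-of-envelope scaling of G70's vertex setup rate to RSX.
# Assumption: setup scales linearly with core clock, 2 vertices per clock.
def setup_rate_bn_per_s(clock_mhz, verts_per_clock=2):
    return clock_mhz * 1e6 * verts_per_clock / 1e9  # billions of verts/sec

g70_rate = setup_rate_bn_per_s(430)  # 7800 GTX core clock
rsx_rate = setup_rate_bn_per_s(550)  # RSX's announced clock
print(g70_rate, rsx_rate)  # 0.86bn/s vs 1.1bn/s
```

Peak numbers only, of course — actual throughput depends on how non-trivial the vertex work is.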
 
Forsete said:
Jesus, that Mad Mod Mike demo was amazing! Almost Pixar quality (well, the character at least .. :) ).
I think the other demo, with the subsurface scattering, was more impressive. I wonder what the cost of that effect is. We always get these incredible demos when these cards come out but hardly ever see that quality in actual games. I'm still waiting for a game that looks as impressive as NVidia's forest demo from a few years back.
 
gofreak said:
Beginning to wonder if that Chinese site's chart with RSX etc. was actually from NVidia. There is hardly a peep about RSX in any of the G70 previews I've read thus far. Perhaps we'll have to wait a while longer for official info :( I'm not sure why, unless they just don't want to talk about something that's still effectively in development.

I do not think it is, as the chart states frame buffer + PCI Express bandwidth is 44.4GB/s. Maybe I am mistaken, but PCI Express bandwidth is 4GB/s and FB bandwidth is 38.4GB/s, totalling 42.4.
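The arithmetic is easy to sanity-check. A sketch, assuming a 256-bit GDDR3 bus at 600 MHz DDR for the frame buffer and the standard 4GB/s for a PCIe x16 link in one direction:

```python
# Sanity-check the chart's "frame buffer + PCI Express" total.
# Assumptions: 256-bit GDDR3 bus at 600 MHz (1200 MT/s effective),
# PCIe x16 at 4 GB/s per direction.
bus_bytes = 256 // 8            # 256-bit bus -> 32 bytes per transfer
transfers_per_s = 600e6 * 2     # DDR: two transfers per clock
fb_bw_gb = bus_bytes * transfers_per_s / 1e9
pcie_bw_gb = 4.0
print(fb_bw_gb, fb_bw_gb + pcie_bw_gb)  # 38.4 and 42.4
```

Which gives 38.4 + 4 = 42.4GB/s, not the chart's 44.4 — consistent with the post's doubt about the chart.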
 
gofreak said:
I think he means in terms of peak setup and so forth (500m vertices vs, likely, 1.1bn on RSX, if they can be scaled up from the G70 figures).

In terms of processing, I don't think you'll ever see a situation where Xenos is only working on vertices, in most games the load is in fact weighted toward pixel shading. No games will only need vertex shading ;) So in terms of theoretical performance with the whole chip working on vertices, yes, you're correct, but in terms of practical performance in actual games, the comparison is muddier. And that's also looking at the GPUs in isolation vs the systems as a whole (which I do think may be relevant wrt to vertex shading at least).

Gofreak, I must say I really enjoy reading your posts... they are always a valuable addition to the tech discussions...

I also like that your posts seem to be not only very informative but pretty balanced, and you are open to corrections...

Thanks for your contributions :)
 
dorio said:
I think the other demo, with the subsurface scattering, was more impressive. I wonder what the cost of that effect is. We always get these incredible demos when these cards come out but hardly ever see that quality in actual games. I'm still waiting for a game that looks as impressive as NVidia's forest demo from a few years back.

Which one is the forest demo you speak of ?
(I've got all nVidia tech demos installed on my disk)

The best forest scene I've seen is the one in 3DMark05, and Oblivion is a Far Cry (haha, get it?) from that.
 
Would I be correct in assuming that RSX and Xenos are not likely to be CPU-bound, given the FP performance of both Cell and the tri-core?


Assuming that, the console devs should get as close as anyone to extracting the maximum possible performance from their respective graphics setups?

Graphics Wizards pls comment
 
Borys said:
Which one is the forest demo you speak of ?
(I've got all nVidia tech demos installed on my disk)

The best forest scene I've seen is the one in 3DMark05, and Oblivion is a Far Cry (haha, get it?) from that.
You're right it was 3dMark.
 
So what about any memory bandwidth issues? The 7800 is a pretty good card (wanna see R520 so damn bad now!!!). But with the PS3 having a 128-bit memory bus compared to the 7800's 256-bit, what do you think the constraints would be like?

I mean that's like half the bandwidth taken away right there basically....
 
gofreak said:
I think he means in terms of peak setup and so forth (500m vertices vs, likely, 1.1bn on RSX, if they can be scaled up from the G70 figures).

In terms of processing, I don't think you'll ever see a situation where Xenos is only working on vertices, in most games the load is in fact weighted toward pixel shading. No games will only need vertex shading ;) So in terms of theoretical performance with the whole chip working on vertices, yes, you're correct, but in terms of practical performance in actual games, the comparison is muddier. And that's also looking at the GPUs in isolation vs the systems as a whole (which I do think may be relevant wrt to vertex shading at least).

Z Only Rendering Pass.
 
Tenacious-V said:
I mean that's like half the bandwidth taken away right there basically....
You're forgetting the FlexIO. I'm not going to make any assumptions about when RSX might or might not be bandwidth-limited, but it does have more bandwidth available than a G70 part using the same 700MHz GDDR3.

YellowAce said:
Z Only Rendering Pass.
99% trivial transforms, generally completely vertex-setup limited. Xenos or not, this is just an example of a situation where most shader ALUs will be sitting idle.
 
Shompola said:
holy shit a new pacman level!! :lol
[image: rsx21qy.jpg]

[image: level1.jpg]

Hey, you're right!
 
YellowAce said:
Z Only Rendering Pass.

I'm thinking in terms of games that are only ever going to use vertices ;) Not specific instances within games..certainly there are some cycles where it may only be working on vertices, like with a z-pass. On a higher level with any game it's going to be dealing with some combo of vertices and pixels though, and usually pixel shading is a bigger load.
 
Gofreak, the vertex setup rate on the G70 isn't known; 1.1 billion verts can be calculated, but remember ATI has stated that Xenos can reach 500 million "triangles" with shaders. The numbers can't really be equated because we don't really know how they are calculated. Also, on Xenos you could, if you wished, calculate up to 6B triangles through the ALUs. Obviously that won't happen, but the possibility is there.

As I see it, it's becoming more evident that RSX is really a G70 with a few modifications and a high-speed link to Cell (not that that's a bad thing), but it's made me even more convinced than I was previously that on screen, PS3 and 360 are going to be very similar in terms of output. This is further confirmed when reading DeanoC and EDR on B3D, both of whom are developing on PS3 and both have highly praised Xenos on a technical level.

Anyway, numbers are great, but the games are the real numbers, and I can't wait to see the output of the top developers for both machines.
 
gofreak said:
I'm thinking in terms of games that are only ever going to use vertices ;) Not specific instances within games..certainly there are some cycles where it may only be working on vertices, like with a z-pass. On a higher level with any game it's going to be dealing with some combo of vertices and pixels though, and usually pixel shading is a bigger load.

Sure, your example shows how the unified shaders don't help. And from the raw numbers, RSX has almost double the pixel shading power.

But the Z-Pass is there for a reason, and could be one of the biggest features of Xenos. I'd expect it to be used as standard in games.

It is also really tailored to a unified approach. You do a vertex heavy pass (highly favours Xenos compared to RSX), calculating occluded faces so you have much less to actually texture. With a lot of overdraw, Xenos could save a lot of pixel shading and so 'half the power' could end up being *more* powerful.

The question is - what average overdraw is there in modern games, and what culling techniques can be used by RSX/CELL to close the gap?
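To put rough numbers on that question, a toy model — purely illustrative, and it assumes the z-only prepass itself costs essentially no pixel-shader work and that early-z then rejects every occluded fragment:

```python
# Toy model of pixel-shader work with and without a z-only prepass.
# Assumptions: without a prepass, every covered fragment runs the pixel
# shader (overdraw factor x visible pixels); with one, each visible pixel
# is shaded exactly once. Real renderers fall somewhere in between.
def shaded_pixels(visible_pixels, overdraw, z_prepass):
    if z_prepass:
        return visible_pixels          # early-z rejects occluded fragments
    return visible_pixels * overdraw   # shade everything, visible or not

pixels = 1280 * 720
for od in (2, 3, 4):
    saved = 1 - shaded_pixels(pixels, od, True) / shaded_pixels(pixels, od, False)
    print(f"overdraw {od}x: z-prepass avoids {saved:.0%} of pixel shading")
```

So at 2x average overdraw the prepass halves the pixel-shading load, and the saving grows with overdraw — which is exactly why the answer hinges on what average overdraw modern games actually have.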
 
slightly off-topic for a moment: I recall Playstation2 had some massive advantages over Dreamcast in raw numbers.

DC -- PS2

textured pixel fillrate:
100M vs 1200M

transformed or calculated polygons:
10M vs 66M

textured, shaded polygons:
3-5M vs 25M

main memory bandwidth:
0.8 GB/sec vs 3.2 GB/sec

graphics memory bandwidth:
0.8 GB/sec vs 48 GB/sec

memory amount:
26 MB vs 40 MB

floating point operations per second
1.4 Gflops peak vs 6.2 Gflops peak

CPU transistor count:
3-4M vs 13M

graphics chip transistor count:
10M vs 43M

the PS2 totally creamed Dreamcast in raw performance.

any advantages that RSX has over Xenos, and that PS3 has as a whole over Xbox 360 as a whole, will be far less than the advantages that PS2 had over DC.

RSX and PS3 win in some areas - Xenos and Xbox360 win in other areas.

the rest, they are roughly the same. it will all come down to what developers DO with the new 'canvases' they have been given.
 
I loved the Dreamcast, so I'm going to resist the obvious comment about PS2 actually creaming it in both sales and performance.

Instead I'll agree that simple numbers aren't the only indicator of actual performance, the actual output difference may or may not be noticeable, and your home may be at risk if you fail to keep up repayments.
 
I love the Dreamcast also. I have freaking five of them, compared to one PS2 :lol

I am actually biased towards Dreamcast and Sega, and against PS2 and Sony :lol

but I wanted to point out that the huge raw-numbers advantages PS2 had on paper did not translate proportionally into games, much less so than the numbers would make you believe, and the raw-numbers difference between Xbox 360 and PS3 is much, much smaller, which should result in the games being very similar, given equal developer effort.
 
Yeah, agree. Although I do get annoyed at people stating the difference will be minimal when neither is out yet and we don't have hard facts on many elements of both machines.

I only have two DCs and two PS2s, but I think I preferred the DC. Might have been my lifestyle changing with kids, but since getting a PS2 I've played less games.


Still, can't resist ringside seats for the next round :)
 
[image: 1119063771Y3O0GyEDBw_4_9_l.jpg]


pre-emptive nvidia damage control? :lol :lol

4xAA is free as long as the games are CPU-bound, of course, which can be seen by comparing the 1024x768 framerates and the 1600x1200 framerates, which are for the most part the same.
[image: 1119063771Y3O0GyEDBw_4_10_l.jpg]


4xAA is hardly "free".
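The "free when CPU-bound" point can be made concrete with a trivial calculation — the fps numbers below are made up for illustration, not taken from those charts:

```python
# If framerate barely moves between 1024x768 and 1600x1200, the game is
# CPU-bound, and turning on 4xAA shows up as nearly "free". Once the GPU
# is the bottleneck, the real cost of AA appears. Illustrative numbers only.
def aa_cost(fps_no_aa, fps_4x_aa):
    """Fraction of framerate lost to enabling AA."""
    return 1 - fps_4x_aa / fps_no_aa

print(f"CPU-bound case: {aa_cost(80, 78):.0%} cost")   # AA looks free
print(f"GPU-bound case: {aa_cost(80, 56):.0%} cost")   # AA clearly isn't
```

Which is why quoting "free 4xAA" from CPU-bound benchmarks is misleading: the claim only holds while something other than the GPU is the bottleneck.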
 
The USA architecture is the big X-factor. ATI had better hope they did their homework, because this will make or break the performance of the 360. I'm concerned that we're not hearing developers coming out and saying good things about performance now that they have beta hardware. If I were MS, the moment developers got near-final hardware I would have released one of those PRs full of quotes from developers raving about the hardware. I'm also concerned that they can't get their Ruby demo running at 60 fps or add AA, which should be trivial for a developer.
 
Funny how NVidia has the most powerful GPU out early for the PC (indicating they are ahead of ATI in the dev cycle), and has MORE time than ATI on the console end to push things even further. RSX will be a monster, and in the right hands (konami, square-enix, polyphony, ND, namco, gorilla? ;p...) I'm willing to bet the difference will be very noticeable.
 
thorns said:
[image: 1119063771Y3O0GyEDBw_4_9_l.jpg]


pre-emptive nvidia damage control? :lol :lol

4xAA is free as long as the games are CPU-bound, of course, which can be seen by comparing the 1024x768 framerates and the 1600x1200 framerates, which are for the most part the same.
[image: 1119063771Y3O0GyEDBw_4_10_l.jpg]


4xAA is hardly "free".
Yeah, that's pretty bad. They are suggesting that G70 ≈ RSX by even making that comparison. Why else would you list 4xAA performance at HD resolutions for a PC card?

in the right hands (konami, square-enix, polyphony, ND, namco, gorilla? ;p...) I'm willing to bet the difference will be very noticeable.
It's not exactly like Bungie, Rare, Bizarre Creations, Epic and Bioware are slouches on the 360 side. They've been known to push a pretty pixel or two.
 