
ATI interview on the new consoles.

midnightguy said:
RSX almost definitely has an advantage over Xenos in pure fillrate:

RSX: 13.2 Gpixels/sec or 13,200 Mpixels/sec (24 * 550) vs Xenos: 4 Gpixels/sec or 4,000 Mpixels/sec (8 * 500)

however if RSX has to do 4x FSAA that high fillrate is going to drop like a ROCK.

that is why Xenos has the 'equivalent' of 16 Gpixels/sec or 16,000 Mpixels/sec, which comes out on top of RSX when anti-aliasing is figured into the equation. this will give Xenos the advantage in anti-aliasing, but not pure fillrate, where RSX should be able to apply its strength. there have to be other areas where RSX has advantages over Xenos, and where Xenos has advantages over RSX.

neither GPU is going to shit on the other.
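The fillrate arithmetic above can be sanity-checked in a few lines. Note these pipe counts and clocks are the thread's rumored figures, not confirmed specs:

```python
# Rumored specs from the thread -- not confirmed figures.
RSX_PIPES, RSX_CLOCK_MHZ = 24, 550     # 24 pipes at 550 MHz
XENOS_ROPS, XENOS_CLOCK_MHZ = 8, 500   # 8 ROPs at 500 MHz

rsx_fill = RSX_PIPES * RSX_CLOCK_MHZ          # Mpixels/sec
xenos_fill = XENOS_ROPS * XENOS_CLOCK_MHZ     # Mpixels/sec

# With 4x multisampling resolved in eDRAM, each Xenos ROP effectively
# handles 4 samples per pixel, giving the 'equivalent' figure quoted above.
xenos_equiv_4xaa = xenos_fill * 4

print(rsx_fill)          # 13200 -> 13.2 Gpixels/sec
print(xenos_fill)        # 4000  -> 4.0 Gpixels/sec
print(xenos_equiv_4xaa)  # 16000 -> 16.0 Gpixels/sec
```

This is just the raw arithmetic; real sustained fillrate depends on bandwidth and workload, as later posts in the thread point out.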


Yeah, but keep in mind PS3 is more than just RSX.....you can do Pixel/Vertex shading on CELL in addition to RSX.....both can work together on graphics....


Capable perhaps, but still a lot less powerful than X360 and PS3, which is what this ATI guy is confirming (though Miyamoto practically confirmed it ages ago in that E3 IGN interview). And with no HDTV support, there's little left to argue concerning Revolution's graphical power against the other consoles.

Keep in mind that by forgoing HD resolution they don't NEED the HW power of the PS3/X360 to do competitive graphics....in fact they only need to push 1/3 the pixels of 720p...

Unless you have an HD monitor, you will not see the difference anyway...
 
Kleegamefan said:
Yeah, but keep in mind PS3 is more than just RSX.....you can do Pixel/Vertex shading on CELL in addition to RSX.....both can work together on graphics....


how many developers will be able to take advantage of that? Very few...
 
Operations said:
Capable perhaps, but still a lot less powerful than X360 and PS3, which is what this ATI guy is confirming (though Miyamoto practically confirmed it ages ago since that E3 IGN interview). And with no HDTV support, there's little left to argue concerning Revolution's grahical power against the other consoles.

A lot less powerful is stretching it. I am expecting that the Rev will at least be half as powerful as the ps3 and Xbox, unlike being a quarter as powerful as the Xbox and half as powerful as the ps3. I am talking about pure polygon power. But in terms of efficiency, I would say that Nintendo's console is going to be better in terms of final output versus its specs.
 
Monk said:
Until the middleware comes with it...

You have to pay $$$ for the middleware right?

Anyway this ATI chip sounds fantastic! Will ATI use the same technology for their next gen PC GPU or will this be 360 exclusive for a while?
 
I'm simply saying there are ways of managing your mix to a certain degree..you don't have to throw your game blindly at the hardware.
Sure, but I'd much rather developers spend less time/money worrying about pixel/vertex compromises to get their art vision turned into digital reality, and instead spend those resources either adding new features or getting the game done earlier/cheaper.
 
I agree with you, but graphics and sequels is what tends to sell in large numbers, so I am not
holding my breath waiting for a large influx of innovative games with any of these new consoles...
 
but still a lot less powerful than X360 and PS3, which is what this ATI guy is confirming


I disagree, and disagree.

the ATI guy did not confirm that Hollywood (Revolution) is a lot less powerful than Xenos (Xbox 360) and RSX (PS3).

you are twisting words.

we know almost nothing about Hollywood - it's still in development now, whereas Xenos is finished and gearing up for manufacturing. Xenos has a different way of doing things than Hollywood is likely to. just because Hollywood does not do something the way Xenos does it is no indication that Xenos is so much more powerful. Hollywood is going to have some things that Xenos does not, no doubt. Hollywood will have its own strong points. Nintendo would not be stupid enough to order a GPU that gets the shit kicked out of it by Xbox 360 and PS3.
 
I think Revolution will surprise with its true unveiling. E3 and subsequent interviews have seemed half baked or otherwise insane. Even though the shell rocked, I'm not convinced we know everything yet.

People expecting a relative 20% performance out of Rev should hopefully be pleasantly surprised with ease.
 
radioheadrule83 said:
People expecting a relative 20% performance out of Rev should hopefully be pleasantly surprised with ease.
I think people hoping (praying) for 20% of the 360's performance won't be "pleasantly surprised"...more like morbidly suicidal.
 
all this talk of USA is interesting. I mean, if it is 40% or so more efficient and the raw power drop-off is only marginal (10-20%), then it will obviously be the way to go..

however, the thing that will help Xenos more than anything is free FSAA... even if the GPUs are a wash (or even if Xenos is a bit underpowered), the IQ of Xenos will be better than RSX's... put the two side by side in a store, and even if PS3 is pushing a few more polys and slightly better textures, the jaggies would make it look worse IMO. If you do 4x FSAA in your game for the PS3, then performance takes a big hit, and while the IQ may be similar, the Xenos would be pushing more polys, textures, shaders, etc.

Obviously we will know more in a few months, plus Cell is the wildcard, because if it is used to do some GPU functions, it throws a huge wrench in this debate.
 
the thing is, I think RSX will be able to do 4x FSAA. why not, since Nvidia GPUs already can do 8x FSAA. but on Nvidia GPUs, the FSAA is NOT for free. so RSX will have to trade its higher fillrate for 4x FSAA.

in the end, they're both going to end up really close
 
You know, this thread got me thinking. Why the fuck was Nintendo talking with IBM for better output on existing televisions? Even if you consider that Nintendo may have been trying to get AA and anisotropic filtering for free, it would be a strange choice talking with IBM. A separate processor for AA and anisotropic filtering?
 
midnightguy said:
the thing is, I think RSX will be able to do 4x FSAA. why not, since Nvidia GPUs already can do 8x FSAA. but on Nvidia GPUs, the FSAA is NOT for free. so RSX will have to trade its higher fillrate for 4x FSAA.

in the end, they're both going to end up really close

that's my point: even if RSX is a bit more powerful, the FSAA will be an equalizer of sorts. either the dev doesn't use it and the game is jaggy as hell, or they do use it and things are even. Assuming the RSX is more powerful, which is a big assumption given we know little about it.
 
the thing is, I think RSX will be able to do 4x FSAA. why not, since Nvidia GPUs already can do 8x FSAA. but on Nvidia GPUs, the FSAA is NOT for free. so RSX will have to trade its higher fillrate for 4x FSAA.

in the end, they're both going to end up really close

If RSX can't do 4X FSAA at good framerates, something is wrong...

4X FSAA on GF6800U usually has a big framerate hit, but 2X FSAA is a very small hit with most games....
 
Monk said:
A lot less powerful is stretching it. I am expecting that the Rev will at least be half as powerful as the ps3 and Xbox, unlike being a quarter as powerful as the Xbox and half as powerful as the ps3. I am talking about pure polygon power. But in terms of efficiency, I would say that Nintendo's console is going to be better in terms of final output versus its specs.

nice try at attempting to suggest PS3 is twice as powerful as Xbox 360 :lol


(even if Cell is able to calculate 2x as many polygons as Xbox 360, that doesn't mean PS3 can display that many)
 
It should have been...

A lot less powerful is stretching it. I am expecting that the Rev will at least be half as powerful as the ps3 and Xbox360, unlike being a quarter as powerful as the Xbox and half as powerful as the ps2. I am talking about pure polygon power. But in terms of efficiency, I would say that Nintendo's console is going to be better in terms of final output versus its specs.
 
Superior is not always more powerful...and it won't matter to gamers. MS has done exactly what they wanted to do: make a console powerful enough to hang with ps3, and they will have it to market at least 8 months prior to sony in the US and over a year earlier in Europe.

Now the question is....will this plan work? The games will tell.
 
if you wanted to do graphics work on the xbox 360 cpu, you can do that too. You can do geometry on a single core and run it over to the gpu. no problem.

it's not a new concept. not a new one at all.
 
Monk said:
But isn't it against the trend to do that? What would be the benefit in doing so?

Imagine you had your 3 xbox 360 cores: you dedicate 2 to geometry and 1 to everything else. Now those 2 process the geometry, which then goes to the gpu, which you can have just do pixel shading (one of the major pluses of a unified system is balancing :) )

That way, if your cpus can output a lot of untextured geometry (afaik, cpus can normally do this fairly well) and you have the gpu doing all the other stuff, you can technically maximise your game for texturing/shading.

With PS3 it's not the same, as the RSX is more fixed-pipe. You could offload more geometry work to cell, but dedicating RSX to pixel shading wouldn't make much sense.

Sony might be pimping the idea more, but it seems that Xbox 360 is actually more designed around the concept just because of the way the gpu is (even though the bus between the gpu/cpu isn't as fast as PS3's). (all in my opinion, as always) Either way it will be interesting to see what developers do to max these systems out :)
 
DopeyFish said:
Imagine you had your 3 xbox 360 cores: you dedicate 2 to geometry and 1 to everything else. Now those 2 process the geometry, which then goes to the gpu, which you can have just do pixel shading (one of the major pluses of a unified system is balancing :) )

That way, if your cpus can output a lot of untextured geometry (afaik, cpus can normally do this fairly well) and you have the gpu doing all the other stuff, you can technically maximise your game for texturing/shading.

With PS3 it's not the same, as the RSX is more fixed-pipe. You could offload more geometry work to cell, but dedicating RSX to pixel shading wouldn't make much sense.

Sony might be pimping the idea more, but it seems that Xbox 360 is actually more designed around the concept just because of the way the gpu is (even though the bus between the gpu/cpu isn't as fast as PS3's). (all in my opinion, as always) Either way it will be interesting to see what developers do to max these systems out :)

this is mostly true I think.

however, even though RSX is not a unified shader architecture, it reportedly has 24 pixel pipelines and 8 vertex shaders, so most of it is going towards pixel processing & pixel shading, anyway.
 
Kleegamefan said:
This is a key point that I am very much wondering about myself....

Unified shaders can perform operations on either vertices or pixels...for this flexibility you sacrifice performance, which is why nVidia claims they didn't want to go with unified shaders at this time...

ATI is talking big about 100% efficiency with the USA but what they are *not* talking about is how fast the USA is and how good its performance will be vs. a vertex/pixel shading architecture...

If the Xenos USA has ~100% efficiency but is only 50% as fast as a GPU with dedicated pixel/vertex shaders then it is a wash....

Unfortunately, ATI isn't giving performance numbers of how Xenos USA compares to even their own internal ATI PC cards (something they could easily do) so the "100% efficiency™"-card is useless within the context of how it performs vs., say RSX....



That is only one side of the coin....it's not just about efficiency but speed/performance...if general-purpose USAs only have 50-60% of the performance of dedicated Pixel/Vertex shaders, then that would nullify most of the advantage...

ATI wont do that comparison......not even against their own PC cards!!!!


Speaks volumes, IMO....
I don't know where you're getting all of that. Microsoft has already stated they will render 500 million polygons with non-trivial shaders.
 
midnightguy said:
500 million polygons with non-trivial shaders :)

what is the most polys per second the Xbox wound up pushing? I know this is more than 2x the raw limits of Xbox... but I'm curious to see what the actual difference is.
 
They are the same kind of numbers as that spec sheet. Though in reality, after everything is said and done, I don't know, 30mpps? Just a wild guess.

But I believe that the Xbox 360 will be more than 4x as powerful, since they decided to go for a more efficient machine this time, i.e. push more polys at the end in comparison to the raw specs.
 
"If the Xenos USA has ~100% efficiency but is only 50% as fast as a GPU with dedicated pixel/vertex shaders then it is a wash...."

What do you mean by FAST? If you mean clockspeed then the Xenos is 500MHz and RSX 550MHz, if they can still get there. There's hardly much difference in speed.


Put it this way....you can run Xenos with 8 vertex shaders and 24 pixel shaders constantly at 500MHz, just like the RSX at 550MHz, and still have 16 shaders to spare.....


Also, to whoever said 4x FSAA is the most important thing about Xenos....did you know about the following?

Let me list some features of Xenos that are quite a bit more interesting, and I believe quite a bit more important, than 4X FSAA.

1. first USA (unified shader architecture) - it's been talked about enough already

2. eDRAM - high-speed on-chip memory designed to save main bandwidth and allow for some really cool features. it will store small particle effects and objects such as leaves, grass, water, etc. which otherwise would have kept going back and forth across main ram....expect to see a lot more environments that mix trees and water with lots of other geometry, because now these features won't take up as much bandwidth.

3. tessellation unit - instead of creating rounded objects from the start and the GPU having to process everything on the higher-polygon model....it is capable of accepting triangles, rectangles and quads as primitives (ex: instead of a rounded rim on a car, a rim that looks like an octagon), performing all operations, and tessellating them at the end (making the wheel rounded after it has been textured, filtered, etc).

4. displacement mapping - can now be achieved quite easily thanks to the tessellation unit and USA architecture. displacement mapping is a better and more evolved version of bump-mapping. bump-mapping only creates the illusion of bricks sticking out of a wall (no geometry is actually created)...displacement mapping actually CREATES the bricks sticking out of the wall, and the geometry created can cast shadows and be affected just like any other real object in the game.

5. HDR lighting in basically one pass.

6. MEMEXPORT - in short, it allows the GPU to do what a CPU can do: access vertices directly from RAM and create its own geometry. So if the 360 CPU is too busy with physics and can't handle any more geometry, developers can have the GPU pick up the slack. At a cost, of course; this is not free.

7. more stuff.....
 
Guys, don't tell me some of you are using the GPU to determine polygon performance of a system.

I might be totally off here, but I believe that is still, and for the immediate future will remain, the job of the CPU. The GPU only ACTS on the polygons created by the CPU. It does display them (after they have been created in cyber world by the CPU), so it's true that it has to be able to display all the polygons created by the CPU....but if the CPU can only do 200 million polygons...there are not going to be any extra polygons on screen whether the GPU can display 600 or 200 million. (except for Xenos of course, which will be able to create its own geometry...but this is not something typical of current GPUs).

Think of the CPU, bandwidth and GPU as the water company, the water pipeline and the faucet on your sink. The amount of water that will ever come into your house is always going to be decided primarily by the pipe itself. It doesn't matter what kind of faucet you have. You won't get any more water than the water pipe and company can give you.
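The water-pipe analogy boils down to a min() over the pipeline stages: the slowest stage caps what you see on screen. A toy illustration (all rates invented for the example):

```python
def visible_polys_per_sec(cpu_setup_rate, bus_rate, gpu_draw_rate):
    """The slowest stage caps polygon throughput, just like the
    narrowest pipe caps water flow -- a fancier faucet doesn't help."""
    return min(cpu_setup_rate, bus_rate, gpu_draw_rate)

# Invented numbers: CPU sets up 200M polys/s, the bus could carry 400M,
# and the GPU could draw 600M -- the CPU is the bottleneck.
print(visible_polys_per_sec(200e6, 400e6, 600e6))  # 200000000.0
```

Xenos's MEMEXPORT complicates this picture, since the GPU can generate some of its own geometry, but for a traditional pipeline the min() holds.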
 
Good posts, jimbo.

I don't know how the revolution will turn out power wise, but things should be very interesting next gen for programming.

the PS3 for its raw power.
360 for the versatility and great amount of customization it offers.
Revolution because of the new input method. It also should be quite easy to code for and to create more advanced code on, since it shares a lot with the GC's API, so developers are already familiar with programming for it.

Most likely, the PS3 will offer the most possibilities, while the 360 will offer the greatest flexibility, and the Revolution will offer the greatest ease of use, giving each system a certain strength for programmers.

In fact, going off on that, (and this is getting slightly off topic), I'd say all three consoles are going to differentiate themselves a bit next gen, maybe moreso than consoles have ever been before.

Sony seems to be trying to differentiate the Ps3 from the Revolution and the 360 with its raw power, especially its use of blu-ray for full support of "true" high definition entertainment.

MS seems to be trying to differentiate the Xbox 360 from the Revolution and PS3 with the Xbox Live community, the heavy emphasis on an online infrastructure supported by every game on the system. (Of course, this whole community aspect could end up being copied and/or challenged by Nintendo or Sony. We don't know what kind of communities they'll be supporting.)

Nintendo has been the most obvious company trying to differentiate itself from the competition, and it plans on doing so with a unique new input scheme (and perhaps a new online infrastructure, which has been highly praised in a very select few instances by other developers, including Square Enix).


I think that next-gen's consoles will differ from each other more than in any previous generation, and, coupled with the rise of game development costs and the need for more developers to make more games multiplatform, there will be a lot of untapped potential in each of the three consoles, even into 2008 or 2009.
 
1) ATI claims 100% efficiency, so all pipes are used all the time. But what if those pipes aren't as good at doing the job (jack of all trades) as dedicated pixel/vertex pipes? Say they are 30% slower? Suddenly you have the same effective performance as a traditional approach running at 70% efficiency. (more pipes full, but work doesn't go through the pipes as quickly)

2) Xenos 'free' AA is a great thing, no doubt about it. But real fillrate is only 4GPixels/s. RSX is around 13GPixels/s, which is almost enough to give you the same 4xAA for 'free' on top of 4GP of real fillrate. But RSX can also forego AA and use the full 13GP as real fillrate: lots and lots of passes for post-scene processing etc. Less simple, but potentially a lot more flexible than Xenos.
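Point (1) is just a product of utilisation and per-pipe throughput, so the break-even is easy to check. A quick sketch with hypothetical numbers (the 30%-slower figure is the poster's what-if, not a measured spec):

```python
def effective_throughput(num_units, utilisation, per_unit_speed):
    # Effective work done = how many units are busy * how fast each runs.
    # Units are arbitrary (normalised to 1.0 = a dedicated pipe at full speed).
    return num_units * utilisation * per_unit_speed

# Hypothetical: 48 unified pipes 100% busy but 30% slower per pipe,
# vs 48 dedicated pipes at full speed but only 70% utilised.
unified = effective_throughput(48, 1.00, 0.70)
dedicated = effective_throughput(48, 0.70, 1.00)
print(round(unified, 2), round(dedicated, 2))  # 33.6 33.6 -- a wash
```

So the whole debate hinges on two unknowns: how much slower (if at all) a unified ALU really is, and how badly utilised dedicated pipes really are in practice.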
 
"1) ATI claims 100% efficiency, so all pipes are used all the time. But what if those pipes aren't as good at doing the job (jack of all trades) as dedicated pixel/vertex pipes? Say they are 30% slower? Suddenly you have the same effective performance as a traditional approach running at 70% efficiency. (more pipes full, but work doesn't go through the pipes as quickly)"

Actually, in the Beyond3D article ATI claims the Xenos ALUs are 33% MORE effective than current vertex and pixel shaders...so there goes that theory.

"2) Xenos 'free' AA is a great thing, no doubt about it. But real fillrate is only 4GPixels/s. RSX is around 13GPixels/s, which is almost enough to give you the same 4xAA for 'free' on top of 4GP of real fillrate. But RSX can also forego AA and use the full 13GP as real fillrate: lots and lots of passes for post-scene processing etc. Less simple, but potentially a lot more flexible than Xenos."

Only? That's still quite a bit (remember it only has to do 720p and 1080i (which is actually 540 lines)). But yes, the RSX no doubt has an advantage in this department, and this is why the PS3 is going to be able to do true 1080p. I don't know how many games will do that, but the ones that do will no doubt look simply incredible. Can't wait.
 
mrklaw said:
But real fillrate is only 4GPixels/s. RSX is around 13GPixels/s, which is almost enough to give you the same 4xAA for 'free' if you use 4GP real fillrate.

The difference is it is unlikely you will hit 13 gpix/s in the real world with the memory bandwidth that RSX has. Assuming it has 24 pixel pipes and you're only using 32-bit pixels/z, you will need 160 GB/s in framebuffer bandwidth alone to hit 13 gp/s. If you turn on blending, that jumps to 211 GB/s.

Obviously bandwidth saving techniques like z and colour compression will help some, but in the end, bandwidth is bandwidth.

Lack of memory bandwidth is the same reason xbox NV2A doesn't get 1 gpix/s except under ideal benchmark conditions -- NV2A chugs on the tanker intro in MGS2:Substance, despite having in theory almost as much textured fillrate as PS2's GS (1.2 gpix/s) because it just runs out of memory bandwidth and can't keep up.
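The 160 GB/s and 211 GB/s figures above check out if you assume each drawn pixel costs a 32-bit colour write plus a 32-bit Z read and write, with blending adding a 32-bit colour read. That per-pixel cost model is an assumption about what the poster counted, not a stated spec:

```python
# Rumored RSX fillrate: 24 pipes * 550 MHz.
pixels_per_sec = 24 * 550e6  # 13.2 Gpixels/sec

# Assumed per-pixel framebuffer traffic (32-bit colour, 32-bit Z):
# 4B colour write + 4B Z read + 4B Z write.
bytes_no_blend = 4 + 4 + 4
# Alpha blending is read-modify-write, adding a 4B colour read.
bytes_blend = bytes_no_blend + 4

print(pixels_per_sec * bytes_no_blend / 1e9)  # 158.4 GB/s (quoted as ~160)
print(pixels_per_sec * bytes_blend / 1e9)     # 211.2 GB/s (quoted as 211)
```

Either figure dwarfs any plausible memory bandwidth for the console, which is the poster's point: the theoretical 13.2 Gp/s is bandwidth-bound long before the ROPs saturate.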
 
jimbo said:
"If all you're looking at is the ocean, and a LODed low poly skyline, I don't think either chip would have any trouble from a vertex shading POV. Your ocean would also be LODed, btw.

If you wanted to do stuff with VS beyond what the GPU was capable of, there's a piece of kit at the other end of a very fat pipe that may be useful..

As above, I completely get what you're trying to say, and I'm sure there are situations that would illustrate the point, I'm just not sure if this example is the best."


Ok well let me rephrase and make it more general then.

Say you have a game level split into 3 parts, A, B and C: beginning, middle and end.

Part A of that level would be extremely vertex-heavy, utilizing 80% of all shaders on Xenos as vertex shaders. Part B is a transition point (where you could split the work between vertex and pixel shaders in the old traditional way) which would also act as a visual barrier/constraint between A and C (put a hallway, for example, between A and B and between B and C) so a gamer could never have both A and C in the same view. Even simpler, part B could just be an L-shaped hallway. And part C you would make extremely pixel-shader-heavy.

This is where first of all you can see that for the first time developers have the freedom of being able to design something like this.

And to make my point: on traditional cards with dedicated vertex and pixel shaders....you simply could never do parts A and C of that level. All three parts would have the same maximum amount of vertex and pixel information at any one point. You could have less, yes, but not more.

USA is going to allow for some pretty amazing level designs on top of everything else.
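The A/B/C idea can be sketched as a per-section shader budget. On a unified pool the vertex/pixel split is free to move per section; on a fixed design it is capped by the hardware ratio. All unit counts here are the thread's rumored figures, and the function is purely illustrative:

```python
TOTAL_UNIFIED_ALUS = 48             # Xenos-style unified pool (rumored)
FIXED_VERTEX, FIXED_PIXEL = 8, 24   # RSX-style fixed split (rumored)

def unified_split(vertex_fraction):
    """Unified pool: any fraction of the ALUs can do vertex work."""
    v = round(TOTAL_UNIFIED_ALUS * vertex_fraction)
    return v, TOTAL_UNIFIED_ALUS - v  # (vertex units, pixel units)

# Section A: vertex-heavy (80% vertex); Section C: pixel-heavy (10% vertex).
print(unified_split(0.8))  # (38, 10) -- far beyond a fixed 8-vertex-unit design
print(unified_split(0.1))  # (5, 43)  -- far beyond a fixed 24-pixel-unit design
```

On the fixed design, both sections are held to at most 8 vertex and 24 pixel units regardless of the scene, which is exactly the constraint the post describes.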

In a situation where you have a closed level and can control to some degree what the camera sees - like above, where you know the camera is only going to see a room, then a hall, then a room - yes, the switching of proportions between vertex and pixels can be handled well, and you can spend gains in one over the other.

However, thinking more about your earlier examples with an open scene, where the camera can be looking at anything at any moment, I'm not sure you can take much advantage of your better utilisation. You will get better utilisation, but what does it buy you? For example, if you're looking at one part of the scene with high vertex and low pixel loads, and then suddenly switch to viewing a part of the scene with low vertices and high pixel loads, how can you take advantage of that? You can't suddenly start piling in more pixel shading for that frame - adjusting the loads on a frame-by-frame basis is basically the same as what you thought I was suggesting earlier with regard to managing proportions per frame for dedicated architectures - not very feasible, and it kinda goes against ATi's mantra of the dev not having to worry about that kind of thing. You'd also have very disturbing switches in shading quality from frame to frame, e.g. that wall looked a whole lot better when I was just looking at it a moment ago!

So what does it buy you? That frame may be computed more quickly, but in terms of visual fidelity it may not get you much. When you can't assume anything about what the user will be looking at at any one point in time, your gains aren't going to come in terms of increased pixel or vertex shading so much as increased framerate. And vs a dedicated architecture, that framerate increase is as likely to be an excess (e.g. 90fps vs 60fps) as it is to make the game go from rocky to smooth, since your minimum framerate is still determined by the worst-case scenario - the most the viewer could look at at any one time - and that scene may be handled as well on a non-unified chip as on a unified one.

Truth is, I think devs could to some degree control the mix of instructions being sent to their cards - you could theoretically have a LOD system that dynamically alters pixel or vertex loads frame-by-frame: in Xenos's case to take advantage of that better utilisation for more than just framerate, and in RSX's case to keep the fixed vertex and pixel shaders humming at high utilisation. That is, however, a lot of work; it may not even work well, and without a high level of granularity in your shaders you'd still be losing some power anyway (in Xenos's case to increased framerate, in RSX's case...possibly to increased framerate, but possibly not as much of an increase as Xenos).

Frame-by-frame adjustments aside, yes, a USA buys you flexibility with your scene design to weight vertices and pixels against each other however you wish, pretty much. However, if you can weight proportions like that in a scene, then for RSX you simply also weight your proportions, but this time to match the hardware and gain utilisation. Mapping to the hardware is doable in a closed environment. That was the higher-level control I was talking about earlier.

jimbo said:
And let's keep in mind USA is just ONE of Xenos's advantages over traditional cards. Its eDRAM .........is a whole different story too. I'm really excited about what this GPU is going to be able to do for scenes like forests and grass once developers learn how to use it. Because now that strand of grass and leaf doesn't have to be moved back and forth from main RAM to the GPU, sucking up bandwidth...you can just store it right in the eDRAM and use it freely, keeping your bandwidth free and clear of excess and reusable data (the GC is already doing this btw, so it's nothing new; it's just a lot more evolved).

I'm not sure how much the eDRAM will be used to store data beyond the framebuffer. Gains elsewhere by doing that would result in losses in framebuffer efficiency. There'd be a tradeoff there, and if MS is insisting on a certain level of AA, it might not be possible in all situations. In the cases you gave - grass and trees - geometry instancing means you wouldn't have to keep shuffling that data back and forth from memory anyway for each individual blade or tree.

jimbo said:
Yes exactly and I don't think they'll ever be able to get it running at 700Mhz-1000Mhz depending on its efficiency to come up with that firepower

Where are you pulling these numbers from?

jimbo said:
There's really only one reason why Nvidia hasn't come out guns blazing to blast Xenos: because they know they'll eventually use this in their own chips. Just like how they took the lead with pixel and vertex shaders and set the standard, now ATI is setting a new standard. I wouldn't doubt that Nvidia can come back and use this same technology to blow ATI's card out of the water, but as of right now, ATI's got the edge here. It's just too bad ATI can't use it in PCs just yet.

Again, using them later doesn't mean it's better to use them now. And a small point, but ATi took the lead with pixel and vertex shaders? The Geforce 3 was the first card with pixel and vertex shaders, and ATi wasn't taking the lead in terms of shader technology with the last set of cards (NVidia moved to SM3.0+, ATi stayed at SM2.0+).

edit - I see you may be talking about NVidia there, wasn't very clear.

dorio said:
Maybe that's what I don't quite see. If you have a closed environment to work in like consoles are then what's the timing issue for nvidia? Why is this design bad now but good in the future?

They say that SM3.0+ level shaders are still too different to unify well. So since we're still at that level of hardware with these consoles, it may not make a lot of sense as far as they're concerned - it may not even make sense at SM4.0 from their perspective - but eventually, as the pipeline matures further, they see it happening.

jimbo said:
Of course it's been said before and I will say it again. Before us Xenos lovers get too excited, it still remains to be seen just how INEFFICIENT current graphics cards are. If they're only 50%-70%, then that would bode VERY well for Xenos, as it could take twice the speed of current hardware to match it. If they're more like 80%, it wouldn't take all that much.

OK, now I can see where you were coming from with your 700-1000Mhz figures. OK, first of all, don't expect the man to understate utilisation problems with current chips since he's trying to promote a product that apparently solves that. Again, if you asked someone else you may get a different answer. Second, even if he was correct, he's referring to utilisation on PC cards - as we all know, PC games don't take advantage of the hardware in the same way as games on consoles do. You're designing with many cards in mind, so you can never take advantage of just one card, which will obviously affect utilisation. Furthermore, consider what I was saying earlier about what utilisation buys you, and the control you have over pixel/vertex proportions on a scene-by-scene basis (not frame-by-frame). How would utilisation be if you took a game and designed it for that hardware?

Anyway, assuming a 100% increase in utilisation was possible, saying another chip would require a 100% increase in clockspeed to match is assuming far too much. You're assuming all else is equal. If you look at what Xenos's utilisation gains have to be offset against specifically in terms of RSX, the list isn't trivial: an unknown loss of efficiency inside the shader ALUs, a 10% greater clockspeed, and an unknown, but likely greater, increase in terms of raw shading logic. It's not as simple as you're suggesting.
 
gofreak said:
Again, using them later doesn't mean it's better to use them now. And a small point, but ATi took the lead with pixel and vertex shaders? The Geforce 3 was the first card with pixel and vertex shaders, and ATi wasn't taking the lead in terms of shader technology with the last set of cards (NVidia moved to SM3.0+, ATi stayed at SM2.0+).


Hm
take a look at BF2's compatibility list. the r8500, which was GF3's direct competitor, is compatible.
Geforce 3 and 4 aren't (iirc it's their lack of 1.4 ps or vs). i'd say ATI was actually ahead at that time
 
you should really stop the damage control for a while, Gofreak.
You keep saying there *HAVE* to be tradeoffs to ATI's approach. The fact is that until you've actually coded for it, you won't know.
just wait until you have some more concrete details on RSX, or until (if) developers start complaining about these tradeoffs you keep talking about
 
Hajaz said:
Hm
take a look at BF2's compatibility list. the r8500, which was GF3's direct competitor, is compatible.
Geforce 3 and 4 aren't (iirc it's their lack of 1.4 ps or vs). i'd say ATI was actually ahead at that time

That was not my point, simply who was first. ATi came later with SM1.4. I believe I misread jimbo there, however; I think he was referring to NVidia being first after all.

Hajaz said:
you should really stop the damagecontrol for a while Gofreak.
You keep saying their *HAVE* to be tradeoffs to ATI's aproach. The fact is that untill youve actually coded for it, you wont know.
just wait untill you have some more conrete details on RSX , or untill (if) developers start compaining about these tradeoffs you keep talking about

I believe we're having a technical discussion, but disappointingly you insist that one "side" is "damage controlling" - typically the "side" you don't agree with. If you wish to disagree with something I've said, feel free; I'd actually appreciate feedback on anything I've said, since I'm open to correction. But crying damage control isn't a very credible response.

I think I've also sufficiently illustrated that there are gaps in knowledge which preclude anyone coming to conclusions yet - I'm certainly not! Some of the discussion and points being made here are based around incomplete information, I think that much should be obvious.

There are tradeoffs with EVERY architecture with EVERY system. You don't even need to know anything about any chip to know it won't be perfect.
 
OK, I won't call it damage control. Right now no one knows what the tradeoffs of either architecture might be. There's not enough info on RSX.

The only quote I've seen from a dev is the one posted last night.
 
That all means that the Xbox 360 runs at 100% efficiency all the time, whereas previous hardware usually runs at somewhere between 50% and 70% efficiency. And that should mean that, clock for clock, the Xbox graphics chip is close to twice as efficient as previous graphics hardware.
xbox360 is only twice as powerful as Xbox. So what's the big deal about Revolution being "only" 2-3 times as powerful as Gamecube?


;P
 
Capable perhaps, but still a lot less powerful than X360 and PS3, which is what this ATI guy is confirming

You're imagining things. Please quote this guy from ATI (who BTW is utterly clueless - I can pick his interview to pieces on a technical level if you want) saying that Revolution is a lot less powerful than anything.
 
Put it this way....you can run Xenos with 8 vertex shaders and 24 pixel shaders at 500MHz constantly, just like the RSX at 550MHz, and still have 16 shaders to spare.....

But unified shaders, even at the same clock speed, != the performance of dedicated shaders!!


HERE IS THE TRADEOFF AS I UNDERSTAND IT

Dedicated pixel/vertex shaders = faster clock for clock than unified shaders, but they can only do pixel or vertex work, so a lot of efficiency is lost


USA = can do either vertex or pixel work, which is highly efficient, *BUT* they are not as fast clock for clock/cycle for cycle as dedicated pixel or vertex shaders at those operations...


Even if you use your example and set Xenos's ALUs up to do an 8 vertex/24 pixel mix at the same clock rate, they would not have the same speed/performance as a normal architecture with 8 dedicated vertex shaders and 24 dedicated pixel shaders..


To use an analogy, let's say you have 3 cars: a Porsche Cayenne SUV (USA), a Lamborghini Gallardo (pixel shader) and a Hummer H2 (vertex shader)

The Cayenne can perform better than an H2 on a racetrack and can run rings around a Gallardo off-road, but it cannot do both (i.e. outrun the Gallardo on the racetrack AND the Hummer off-road)

This is the cost/benefit ratio of USAs.....we know how efficient USAs are, but what we need to find out is just how fast they are (Cayenne 3.2 V6? Cayenne S? Cayenne Twin Turbo?) and ATI ain't fessin' up yet...


The tradeoff in *per cycle* performance of USAs is what nVidia claims is one of the reasons they haven't gone with USAs yet...

Again, there is not enough information from either nVidia or ATI to come to a conclusion yet....
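To put the tradeoff in toy-model form - and every number here is hypothetical, especially the 80% unified speed figure, which is a pure guess, since like I said ATI ain't fessin' up:

```python
# Toy model of unified vs dedicated shaders. A scene's work arrives as
# some mix of vertex and pixel jobs; a fixed 8/24 split idles units when
# the mix skews, while a unified pool (assumed slower per unit) can keep
# every unit busy. All numbers are hypothetical, for illustration only.

def dedicated_throughput(vertex_work, pixel_work, v_units=8, p_units=24):
    # each dedicated unit does 1 job/cycle, but only jobs of its own kind
    return min(vertex_work, v_units) + min(pixel_work, p_units)

def unified_throughput(vertex_work, pixel_work, units=32, speed=0.8):
    # unified units do either kind, at an assumed 80% of dedicated speed
    return min(vertex_work + pixel_work, units) * speed

for v, p in [(8, 24), (2, 30), (20, 12)]:
    print(v, p, dedicated_throughput(v, p), unified_throughput(v, p))
```

On the ideal 8/24 mix the dedicated design wins (32 vs 25.6), but as soon as the mix skews the unified pool pulls ahead - which is why the per-unit speed penalty is the number that decides everything.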

xbox360 is only twice as powerful as Xbox. So what's the big deal about Revolution being "only" 2-3 times as powerful as Gamecube?

Because it is not twice as powerful, it is twice as efficient, which is not the same thing.....that is what everybody is mixing up here....
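The distinction, with made-up numbers: effective output is raw capacity times utilisation, and utilisation can never exceed 100%, so "twice as efficient" tells you nothing about raw capacity:

```python
# Effective output = raw capacity * utilisation. Numbers are
# hypothetical, purely to illustrate that the two multipliers
# are different claims.

def effective_power(raw_units, efficiency):
    return raw_units * efficiency

traditional = effective_power(100, 0.5)  # lots of raw units, half idle
unified = effective_power(100, 1.0)      # same raw units, fully busy
bigger = effective_power(200, 0.5)       # twice the raw units, half idle

print(traditional, unified, bigger)      # 50.0 100.0 100.0
```

Note that the efficiency route caps out - you can't go past 100% utilisation - while raw capacity can keep growing, which is why "twice as efficient" is not the same claim as "twice as powerful".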

All this talk of USA is interesting. I mean, if it is 40% or so more efficient and the raw power drop-off is only a marginal 10-20%, then it will obviously be the way to go..

StoOgE "gets" it and I agree with him 100%, if ATI can get the performance of the ALUs to be even 60-70% that of dedicated pixel/vertex shaders then it is safe to say they have the best solution....I won't disagree with you there..

Actually, in the Beyond3D article ATI claims the Xenos ALUs are 33% MORE effective than current vertex and pixel shaders...so there goes that theory.

Here is the exact quote:

Additional to the 48 ALUs is specific logic that performs all the pixel shader interpolation calculations, which ATI suggests equates to about an extra 33% of pixel shader computational capability

Not only doesn't it refer to vertex shaders AT ALL, but they also don't compare the 33% number to anything......to me it sounds like, in addition to the ALUs, there is "specific logic" that gives them a 33% boost over what they would have without it, so there goes that theory :)
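Reading the quote as straight arithmetic (illustrative only - the 33% figure is ATI's, everything else is just multiplying it out):

```python
# The Beyond3D quote, as arithmetic: interpolator logic adds roughly a
# third of extra *pixel-shader* capability on top of the 48 ALUs. It
# says nothing about vertex work, and compares the 33% to nothing.

alus = 48
interp_bonus = 0.33            # ATI's figure, pixel shading only
pixel_equiv = alus * (1 + interp_bonus)
print(pixel_equiv)             # ~63.8 ALU-equivalents for pixel work
```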
 
Most powerful console in 5 years? How much did MS pay ATI to say that BS? As it stands, the RSX simply kills the Xenos in brute force. Let's use a metaphor to describe the differences:
The XB360 gives you a total of twenty fruit. You can have 10 apples and 10 oranges, 15 apples and 5 oranges, or 0 apples and 20 oranges. Basically, it's more "efficient" because when you're not eating apples, you've got all these oranges at your disposal.

The PS3, on the other hand, has 20 apples and 20 oranges. You can't go over 20, though.. that's the maximum, because they're in separate baskets.

So, the way I understand it, the "advantage" in efficiency that ATI was blabbing about is completely nullified by the PS3's brute power.

Basically the Volkswagen might get better mileage, but the Ferrari will whoop it on the track.
 
cobragt3 said:
Most powerful console in 5 years? How much did MS pay ATI to say that BS? As it stands, the RSX simply kills the Xenos in brute force. Let's use a metaphor to describe the differences:
The XB360 gives you a total of twenty fruit. You can have 10 apples and 10 oranges, 15 apples and 5 oranges, or 0 apples and 20 oranges. Basically, it's more "efficient" because when you're not eating apples, you've got all these oranges at your disposal.

The PS3, on the other hand, has 20 apples and 20 oranges. You can't go over 20, though.. that's the maximum, because they're in separate baskets.

So, the way I understand it, the "advantage" in efficiency that ATI was blabbing about is completely nullified by the PS3's brute power.

Basically the Volkswagen might get better mileage, but the Ferrari will whoop it on the track.
Some people should only be able to play the games and not talk about the hardware.
 
cobragt3 said:
Most powerful console in 5 years? How much did MS pay ATI to say that BS? As it stands, the RSX simply kills the Xenos in brute force. Let's use a metaphor to describe the differences:
The XB360 gives you a total of twenty fruit. You can have 10 apples and 10 oranges, 15 apples and 5 oranges, or 0 apples and 20 oranges. Basically, it's more "efficient" because when you're not eating apples, you've got all these oranges at your disposal.

The PS3, on the other hand, has 20 apples and 20 oranges. You can't go over 20, though.. that's the maximum, because they're in separate baskets.

So, the way I understand it, the "advantage" in efficiency that ATI was blabbing about is completely nullified by the PS3's brute power.

Basically the Volkswagen might get better mileage, but the Ferrari will whoop it on the track.


Yes and no...

We really don't have enough information yet....we know very general things about RSX (can get pixel/vertex assist from Cell, large pipe between Cell/RSX, traditional vertex/pixel architecture, probably no eDRAM)

And although we know a lot more about the efficiency of Xenos, we don't know how fast it performs vs. traditional vertex/pixel architectures....not even against ATI's own PC cards...

Until we can get some comparative benchmarks and/or performance figures, Xenos's USA could just be a "jack of all trades, master of none" type tradeoff for all we know....

There are no perfect solutions and RSX or Xenos are no different...

For example with RSX:

Not as efficient as Xenos

Doesn't seem to have enough bandwidth to do 128-bit HDR, 1080p and FSAA simultaneously (perhaps it can get some assist from Cell, which would introduce other tradeoffs)

Seems to be less customized than Xenos, which was designed for a console on day one

No eDRAM, so bandwidth-demanding ops like AA will take a bigger hit than with Xenos..
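To put rough numbers on the bandwidth point (back-of-envelope only, assuming "128-bit HDR" means 16 bytes per colour sample and that blending reads and writes each sample once) - this is the colour-buffer traffic that Xenos's eDRAM absorbs on-die, but which RSX would have to push over its memory bus on top of textures, vertices and Z:

```python
# Back-of-envelope colour-buffer traffic estimate. Assumptions:
# "128-bit HDR" = FP32 RGBA = 16 bytes/sample; rw_factor=2 models
# read-modify-write blending. Illustrative, not measured.

def colour_traffic_gb_s(width, height, bytes_per_sample, aa_samples,
                        fps, rw_factor=2):
    """GB/s of colour-buffer traffic for one rendered stream."""
    samples = width * height * aa_samples
    return samples * bytes_per_sample * fps * rw_factor / 1e9

print(colour_traffic_gb_s(1920, 1080, 16, 4, 60))  # 1080p, FP32, 4xAA
print(colour_traffic_gb_s(1280, 720, 4, 4, 60))    # 720p, 8-bit, 4xAA
```

The 1080p/HDR/4xAA case alone comes out near 16 GB/s before you've fetched a single texture - which is why combining all three at once looks dubious without either eDRAM or help from Cell.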


I could go on, but we have plenty of members who are much better at pointing out the weakness of PS3 than I ;)


On the surface it does seem that nVidia (once again) has taken a brute-force approach and ATI has taken a nimble one....that is not to say Xenos isn't powerful or RSX is inefficient....we just don't know enough of the picture yet...

One thing we have seen is rumors of 3rd-party developers coming flat out and saying PS3 is more powerful than X360, and while that is a flawed comparison with the early dev kits and all, it is all we have to go on comparatively right now...

I haven't seen an independent developer/publisher who actively works with both PS3/X360 state X360 is more powerful than PS3, which is probably why you are seeing all the MS damage control/misinformation these days :D
 
cobragt3 said:
Most powerful console in 5 years? How much did MS pay ATI to say that BS? As it stands, the RSX simply kills the Xenos in brute force. Let's use a metaphor to describe the differences:
The XB360 gives you a total of twenty fruit. You can have 10 apples and 10 oranges, 15 apples and 5 oranges, or 0 apples and 20 oranges. Basically, it's more "efficient" because when you're not eating apples, you've got all these oranges at your disposal.

The PS3, on the other hand, has 20 apples and 20 oranges. You can't go over 20, though.. that's the maximum, because they're in separate baskets.

So, the way I understand it, the "advantage" in efficiency that ATI was blabbing about is completely nullified by the PS3's brute power.

Basically the Volkswagen might get better mileage, but the Ferrari will whoop it on the track.

I feel... enlightened.
 
cobragt3 said:
Most powerful console in 5 years? How much did MS pay ATI to say that BS? As it stands, the RSX simply kills the Xenos in brute force. Let's use a metaphor to describe the differences:
The XB360 gives you a total of twenty fruit. You can have 10 apples and 10 oranges, 15 apples and 5 oranges, or 0 apples and 20 oranges. Basically, it's more "efficient" because when you're not eating apples, you've got all these oranges at your disposal.

The PS3, on the other hand, has 20 apples and 20 oranges. You can't go over 20, though.. that's the maximum, because they're in separate baskets.

So, the way I understand it, the "advantage" in efficiency that ATI was blabbing about is completely nullified by the PS3's brute power.

Basically the Volkswagen might get better mileage, but the Ferrari will whoop it on the track.


Bullshit, RSX is *NOT* double Xenos in brute power or in number of on-chip processing units, also known as 'functional units'. If RSX were as you are saying, it would have significantly more transistors, like 400 to 500 million. But guess what, it does not. It has slightly over 300 million.

It is more like: the *modest* advantages that RSX has in *some* areas are nullified by Xenos' very significant efficiency. On top of that, aside from efficiency, there are areas where Xenos outright beats RSX.

They are both different GPUs that accomplish things in different ways, but both will arrive at roughly the same 'plane' that will be what we come to know as 'next-gen' console graphics.
 