
ATI interview on the new consoles.

Kleegamefan said:
It does seem that way, and that is what I like most about the X360 hardware.....very efficient, it seems...

It is also obvious (to me) that Nintendo is sandbagging Rev. performance....2-3X GCN performance???

Nah, that would make it the smallest ever performance jump between generations of any Nintendo console product!!!

It will be similar in performance to X360, I would think.....
I know this is really twisting moore's law, since it doesn't specifically state performance doubles every two years, but according to Moore's law, that would make the GC tech 2006-tech in terms of power. (the cube was 2000 tech)
 
WordofGod said:
I have known for more than 1 year that the Revolution GPU will only be 20% the power of the Xbox GPU. Nintendo went to ATI with the plan of producing the new hardware very cheaply, just like all of their previous consoles. =[



while I am *not* saying that I agree that Hollywood (Revolution's GPU) will only be 20% the power of Xbox 360's Xenos GPU, we've known for almost 2 years that they both have different graphics-capability requirements

http://www.extremetech.com/article2/0,3973,1220430,00.asp

August 14, 2003:
Don't expect the graphics capabilities of future Nintendo and Microsoft products to be exactly the same, however, the ATI spokesman said. "Yes, we have different design teams working on them, with different requirements and different timetables," the spokesman said.

and the careful wording of my post does not itself rule out Hollywood / Revolution being more powerful
than Xenos / Xbox 360
 
GaimeGuy said:
I know this is really twisting moore's law, since it doesn't specifically state performance doubles every two years, but according to Moore's law, that would make the GC tech 2006-tech in terms of power. (the cube was 2000 tech)

Moore's law states that transistor counts (or "performance", as popularly interpreted) double every 18 months, not 2 years. Which, if you misguidedly took it to mean performance, would make Rev 2002-2003 tech, assuming a 3x leap ;)
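Quick back-of-envelope (treating Moore's law naively as a performance doubling, taking the Cube as "2000 tech" and a 3x jump; the figures are illustrative, nothing official):

```python
import math

# Naive Moore's-law-as-performance calculation (illustrative only).
gc_year = 2000                 # "the cube was 2000 tech"
doubling_period_years = 1.5    # 18 months
performance_jump = 3.0         # take the high end of the claimed 2-3x over GCN

doublings = math.log2(performance_jump)                  # ~1.58 doublings
years_of_progress = doublings * doubling_period_years    # ~2.4 years

print(f"{doublings:.2f} doublings ~= {years_of_progress:.1f} years of progress")
print(f"so roughly {gc_year + years_of_progress:.0f}-era tech")   # ~2002
```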

I think Nintendo's being conservative however with regard to performance claims.
 
gofreak said:
Moore's law states that transistor counts (or "performance", as popularly interpreted) double every 18 months, not 2 years. Which, if you misguidedly took it to mean performance, would make Rev 2002-2003 tech, assuming a 3x leap ;)

I think Nintendo's being conservative however with regard to performance claims.

Oops. :)



Well, of course they're being conservative. They always have. You had Sony and MS saying PS2 and Xbox could do 100+ million polys per second (and they never got close to that. Not even close to 50 million, I believe). Meanwhile, you had little Nintendo saying "um, the cube can do 14 million polygons per second. " and then Rogue Leader: Rogue Squadron II (a game developed from start to finish, including concept, in eight months) came out at GC launch, and it pushed over 18 million polygons per second.
 
Yeah....I miss the days when MS said XBOX could do Toy Story 2 in realtime and could push 300M PPS :lol :lol :lol


And Sony???

Still waiting for those movie-like graphics Ken :lol
 
GaimeGuy said:
Meanwhile, you had little Nintendo saying "um, the cube can do 14 million polygons per second. "

They actually claimed 6-12m ;)

Sony and MS quoted theoretical peaks vs Nintendo's very conservative "in-game" figures, though in fairness to Sony they also gave out numbers with effects turned on (16m polys/sec with various effects turned on, IIRC), but everyone focussed on just the big theoretical numbers.
 
Kleegamefan said:
Yeah....I miss the days when MS said XBOX could do Toy Story 2 in realtime and could push 300M PPS :lol :lol :lol


And Sony???

Still waiting for those movie-like graphics Ken :lol

Or how Nintendo said the GC would sell 50 million and MS said Xbox would sell 100 million.


here we go again with Nintendo and MS saying they'll sell a billion (!) next gen. :lol
 
"That all means that the Xbox 360 runs at 100% efficiency all the time, whereas previous hardware usually runs at somewhere between 50% and 70% efficiency. And that should means that clock for clock, the Xbox graphics chip is close to twice as efficient as previous graphics hardware. "


This is probably the most important part of that whole interview, and everyone needs to start taking notes. Now, 100% is saying A LOT, and it probably won't be quite 100%; in the Beyond3D article it was more like 95%, and that's still theoretical, so it remains to be seen.

However, if the 360 GPU can do that, even if you have another GPU that theoretically EQUALS Xenos but is based on traditional methods, like the RSX, you're talking about a HUGE difference in efficiency. Efficiency has always been the biggest bottleneck for chips, so if ATI can pull this off, then they're probably right in saying it's going to be the most powerful chip out of anything that comes out in the near future (though I doubt 5 years).
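To put rough numbers on that (throwaway figures; the 60% and 95% utilisation values just echo the ranges quoted in the interview and the Beyond3D piece, they're not benchmarks):

```python
# Two hypothetical GPUs with the SAME theoretical peak, differing only in how
# much of that peak their shaders actually keep busy. All numbers are made up.
peak = 100.0                   # arbitrary units of theoretical shader throughput

traditional_utilisation = 0.60 # middle of the quoted "50-70%" range
unified_utilisation = 0.95     # the ~95% figure from the Beyond3D article

traditional_effective = peak * traditional_utilisation   # 60 units
unified_effective = peak * unified_utilisation            # 95 units

print(f"traditional: {traditional_effective:.0f}  unified: {unified_effective:.0f}"
      f"  -> {unified_effective / traditional_effective:.2f}x")   # ~1.6x (1.9x at 50%)
```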

This remains to be proven, but if it turns out right... Xenos is going to be mentioned quite a bit in future GPU articles. But don't expect it in PCs any time soon. Because the 360 is a closed platform, ATI was able to design a chip and implement their USA (unified shader architecture) without risk, for the first time since they started talking about it.

It's actually quite similar to CELL: the way CELL was designed for raw FLOPs, this chip was designed for making console graphics shine.
 
"You had Sony and MS saying Ps2 and Xbox could do 100+ million polys per second (and they never got close to that. Not even close to 50 million, I believe). "

Dude, those were RAW figures and yes, they both CAN do that (you'd just have games with PSX textures, or none at all actually). They were talking about RAW horsepower. Not fully lit, textured and everything else (they never claimed that, which is what you're talking about). In other words, check out Jak and Daxter for the PS2 or JSRF for the Xbox. They both use very few actual textures in favor of simple hand-drawn ones... which allows for an incredible amount of geometry.

And btw, in the tech world, it's quite a bit easier to just go ahead and list RAW horsepower than to figure out equations for all of the chip's bottlenecks and give polygon numbers under different conditions ("well, if our game had two textures, lights, etc..."); you would never get a good understanding of what that would be.
 
jimbo said:
"That all means that the Xbox 360 runs at 100% efficiency all the time, whereas previous hardware usually runs at somewhere between 50% and 70% efficiency. And that should means that clock for clock, the Xbox graphics chip is close to twice as efficient as previous graphics hardware. "


This is probably the most important part of that whole interview, and everyone needs to start taking notes. Now, 100% is saying A LOT, and it probably won't be quite 100%; in the Beyond3D article it was more like 95%, and that's still theoretical, so it remains to be seen.

However, if the 360 GPU can do that, even if you have another GPU that theoretically EQUALS Xenos but is based on traditional methods, like the RSX, you're talking about a HUGE difference in efficiency.

His efficiency figures for "previous hardware" are questionable. I don't know any different, but I wouldn't blame him for overstating the situation..if you asked NVidia how efficient their cards are, you'd probably get a different answer. Or even from ATi themselves when they're promoting their PC cards ;)

In a closed system you will have games moulded around the hardware. Xenos will be more efficient with more arbitrary mixes of instructions, but programmers have control over the instructions going in (even though this varies from frame to frame, they do have some high level control), and thus utilisation can be influenced to some degree. With a closed system, you can design for the hardware specifically in front of you, unlike in PC land. I'd wonder how utilisation would look on a game designed for a specific "traditional" card's hardware distribution..

Also remember that "traditional architectures" aren't arbitrarily designed..the balance between pixel shaders and vertex shaders is aimed at the most common cases.

There are a number of other issues to look at if you wish to make comparisons also. They've aimed for efficiency on one level - the level of shader utilisation - but what about efficiency inside the shader? On that level, dedicated shaders should outperform unified shaders and would be more efficient. To what degree, we don't know, but that'll be a key point in determining if the tradeoff was worthwhile.

Furthermore you'd have to consider apparent differences between raw performance, although this is a murky point at this stage until we know more about RSX. We do know there's a 10% clock advantage, and possibly more shader logic however.
 
No way. I mean you honestly believe that? I'm not even going to comment on the fact that the Revolution would have to have Liquid Nitrogen in it to cool this GPU in that tight of a space.

He's talking about R&D budgets not silicon budgets. In other words ATI has said that Nintendo and MS have given them a similar amount of money to develop each chip. That doesn't mean each chip will be as expensive to produce.

OMFG.... PS3 graphics, ATI's own Revolution graphics: OWNED

ATI are under a non-disclosure agreement ATM for Revolution, therefore they can't actually talk about its features. They can only talk about 360 ATM. Also, as the guy said, each team working on each chip doesn't share info. So I doubt he knows much about Revolution's GPU anyway.
 
"It's impossible to be 100% efficient, realistically."

No, not 100%, but the Xenos pixel and vertex shaders can get very close. Like I said, it was mentioned in the Beyond3D article at around 95% efficiency. I've been doing some reading, so let me explain why, from what I understand. What keeps traditional chips down is the fact that pixel and vertex shaders have to sit around and wait doing nothing, because they are designed for strictly one thing. So think about it... at this point in the rendering stage you have a bunch of vertex information but almost no pixel shader information; well, the vertex shaders are going to be busy crunching numbers while the pixel shaders are just going to sit around and wait until data for them is available.

With the Xenos shaders... they can process whatever needs to be done right then, and alternate every other clock cycle. So say right now you just have vertex information and no pixel information (an unlikely scenario, but simple to use for an example)... you tell ALL of your shaders to do vertex work, therefore they are ALL working at once... the next cycle you might have 80% pixel and 20% vertex... well, the chip automatically assigns 80% of its shaders to pixel and 20% of its shaders to vertex processing.

So YES it can get REALLY close to 100%. There will be some things that will keep it from achieving that, there always are...but its efficiency is incredibly high.


Put it this way: for a current graphics card to do the same thing as Xenos, the developers would have to design a game where the amount of vertex and pixel information to be processed was split equally at any one point, ALL the time.
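Here's a toy sketch of what I mean (my own simplification; the unit counts and per-cycle work mixes below are made up for illustration, not real specs):

```python
def dedicated_utilisation(vertex_work, pixel_work, n_vertex=8, n_pixel=24):
    """Fixed-split design: a unit can only work on its own kind of job."""
    busy = min(vertex_work, n_vertex) + min(pixel_work, n_pixel)
    return busy / (n_vertex + n_pixel)

def unified_utilisation(vertex_work, pixel_work, n_units=32):
    """Unified design: same total unit count, any unit takes either job."""
    busy = min(vertex_work + pixel_work, n_units)
    return busy / n_units

# Three hypothetical per-cycle work mixes: vertex-heavy, "designed-for", pixel-heavy.
for vtx, pix in [(28, 4), (8, 24), (2, 30)]:
    print(f"mix {vtx:2d}v/{pix:2d}p   dedicated {dedicated_utilisation(vtx, pix):4.0%}"
          f"   unified {unified_utilisation(vtx, pix):4.0%}")
```

Same total number of units in both cases; the only thing that changes is whether a unit is locked to one kind of work, which is the whole point of the unified approach.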
 
Would it be tough for Nintendo to incorporate eDRAM into the system like the X360 GPU at this stage of the game? The eDRAM gives free 2xAA and only like a 5 percent hit for 4xAA, right?

It won't be tough, no; Revolution's GPU has been planned to have eDRAM since before 360's GPU even went into development :)

MoSys confirmed in a conference call that they have licensed their new 1T-SRAM-Q memory (1/4th the density of the embedded RAM in GC) to NEC, to be used on a 90nm embedded process for Revolution.
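And for a feel of why embedded framebuffer memory makes the "free AA" idea plausible, a rough back-of-envelope (every figure below is an assumption for illustration, not an official spec):

```python
# Why keeping the multisampled framebuffer on-die takes pressure off the main
# memory bus. Every number below is an assumption for illustration only.
width, height = 1280, 720     # 720p render target
samples = 4                   # 4x multisampling
bytes_per_sample = 4 + 4      # colour + depth/stencil (assumed)
fps = 60
overdraw = 3.0                # assumed average times each pixel gets touched

bytes_per_frame = width * height * samples * bytes_per_sample * overdraw
print(f"~{bytes_per_frame / 1e6:.0f} MB of sample traffic per frame")
print(f"~{bytes_per_frame * fps / 1e9:.0f} GB/s at {fps} fps")
# If that traffic stays inside a dedicated framebuffer memory, the shared
# system bus never sees it; on a conventional card it competes with textures
# and geometry for the same bandwidth.
```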
 
jimbo said:
Put it this way: for a current graphics card to do the same thing as Xenos, the developers would have to design a game where the amount of vertex and pixel information to be processed was split equally at any one point, ALL the time.

No, not equally. Dedicated "split" architectures are biased towards pixel shading because that is what's most demanded. E.g. the upcoming PC cards have 8 vertex shaders and maybe 24 or 32 pixel shaders. Again, that isn't a plucked-from-the-air split; it's done that way for a reason, that is, to accommodate the most common case.

Also, the point about consoles is they are a closed system, and thus you can design around the specific hardware. In fact you must, to get the best performance. Of course, managing precise instruction mixes from frame-to-frame isn't particularly feasible, but you can manage things on other levels to increase utilisation for most cases. On Xenos, however, that doesn't matter as much; you should be able to throw any mix at the hardware and let it handle it (although I'd wager some mixes would still fare better than others, even if not by the same margin as different mixes on dedicated hardware might).
 
gofreak said:
They actually claimed 6-12m ;)

Sony and MS quoted theoretical peaks vs Nintendo's very conservative "in-game" figures, though in fairness to Sony they also gave out numbers with effects turned on (16m polys/sec with various effects turned on, IIRC), but everyone focussed on just the big theoretical numbers.


The main difference is, Nintendo's first games (the first Rogue Leader game on GC) immediately shattered that figure (RL pushed 20 million with all sorts of effects going on), while it took Sony ages to get anywhere near the figures they'd given (GT3 pushed 9 million IIRC).
 
I think the Xenos GPU is the most interesting design. This chipset could be something completely revolutionary or it could be a big flop. Or maybe something in between. It's definitely one of the most intriguing GPUs I've seen in a while. I hope it pans out.

I am interested in the shader efficiency of the RSX. This area is going to play a huge role next gen. I'd say better shader efficiency for RSX would serve a much better purpose than being 50MHz faster (which really isn't much when you are comparing a 500MHz part to a 550MHz part). If Xenos ends up being the better shader card, this would be a huge advantage for the X360.
 
gofreak said:
His efficiency figures for "previous hardware" are questionable. I don't know any different, but I wouldn't blame him for overstating the situation..if you asked NVidia how efficient their cards are, you'd probably get a different answer. Or even from ATi themselves when they're promoting their PC cards ;)

In a closed system you will have games moulded around the hardware. Xenos will be more efficient with more arbitrary mixes of instructions, but programmers have control over the instructions going in (even though this varies from frame to frame, they do have some high level control), and thus utilisation can be influenced to some degree. With a closed system, you can design for the hardware specifically in front of you, unlike in PC land. I'd wonder how utilisation would look on a game designed for a specific "traditional" card's hardware distribution..

Also remember that "traditional architectures" aren't arbitrarily designed..the balance between pixel shaders and vertex shaders is aimed at the most common cases.

There are a number of other issues to look at if you wish to make comparisons also. They've aimed for efficiency on one level - the level of shader utilisation - but what about efficiency inside the shader? On that level, dedicated shaders should outperform unified shaders and would be more efficient. To what degree, we don't know, but that'll be a key point in determining if the tradeoff was worthwhile.

Furthermore you'd have to consider apparent differences between raw performance, although this is a murky point at this stage until we know more about RSX. We do know there's a 10% clock advantage, and possibly more shader logic however.

Errr, are you suggesting that game designers design environments in such a way that vertex shaders and pixel shaders always get the right amount of work? That's impossible; there are too many variables in a real-time game environment to accomplish that.

You can't twist the unified shaders thing as if this would all of a sudden make NV come out on top.
A lot of developers have elaborated on the waste of power in current architectures in the past.
Unified shaders *are* the way forward, as indicated by the WGF2 spec; the only problem so far has been pulling it off.
There's always gotta be someone who manages it first, and by the looks of it, it was ATI.
 
I don't see how you can design your game very effectively around the pixel/vertex shader split in current hardware. The issue isn't really total vertices vs. pixels in a scene, it's what the camera is doing at any given clock cycle. There would be situations where the camera might only have to render a very large polygon because the camera is really zoomed in, and when it's far away the reverse situation is the case, where you start having to render a ton of vertices. I don't see how a developer can design for those situations.
 
Hajaz said:
The main difference is, Nintendo's first games (the first Rogue Leader game on GC) immediately shattered that figure (RL pushed 20 million with all sorts of effects going on), while it took Sony ages to get anywhere near the figures they'd given (GT3 pushed 9 million IIRC).

Oh I completely agree, in fact I was helping Nintendo's case by pointing out their claimed figure was actually lower than 14m.

Mrbob said:
I think the Xenos GPU is the most interesting design. This chipset could be something completely revolutionary or it could be a big flop. Or maybe something in between. It's definitely one of the most intriguing GPUs I've seen in a while. I hope it pans out.

I am interested in the shader efficiency of the RSX. This area is going to play a huge role next gen. I'd say better shader efficiency for RSX would serve a much better purpose than being 50MHz faster (which really isn't much when you are comparing a 500MHz part to a 550MHz part). If Xenos ends up being the better shader card, this would be a huge advantage for the X360.

I agree, it's definitely the most novel design. As for shader efficiency, remember that RSX should be more efficient on one level (inside shaders), Xenos on another (between shaders). The picture is much more complex than a simple clockspeed difference.

Hajaz said:
Errr, are you suggesting that game designers design environments in such a way that vertex shaders and pixel shaders always get the right amount of work? That's impossible; there are too many variables in a real-time game environment to accomplish that.

dorio said:
I don't see how you can design your game very effectively around the pixel/vertex shader split in current hardware. The issue isn't really total vertices vs. pixels in a scene, it's what the camera is doing at any given clock cycle. There would be situations where the camera might only have to render a very large polygon because the camera is really zoomed in, and when it's far away the reverse situation is the case, where you start having to render a ton of vertices. I don't see how a developer can design for those situations.

As I said, you can't manage things frame by frame. You can't account for the case where the camera is looking at one polygon, for sure. But you can manage things on a higher level in terms of scenes and shaders used, and examine what the typical weightings are and then optimise for your hardware. I'm certainly not suggesting you'll get 100% utilisation all the time, I'm simply saying there are ways of managing your mix to a certain degree..you don't have to throw your game blindly at the hardware.

Hajaz said:
You can't twist the unified shaders thing as if this would all of a sudden make NV come out on top.
A lot of developers have elaborated on the waste of power in current architectures in the past.
Unified shaders *are* the way forward, as indicated by the WGF2 spec; the only problem so far has been pulling it off.
There's always gotta be someone who manages it first, and by the looks of it, it was ATI.

Unified shading is the future, but that doesn't necessarily mean it makes sense today. (With WGF2.0, the requirement for hardware unified shaders has been removed IIRC, at NVidia's request - they have to appear unified to the software, but on a hardware level they can be dedicated). ATi argues the time is right, NVidia argues it isn't - that in a SM3.0(+) model, pixel and vertex shaders are still too different. Who's right? Who's wrong? We'll have to wait and see.
 
Anyone expecting Ati to trash the Xbox in order to promote the Revolution (which Nintendo isn't ready to talk about) should just end their life for the good of mankind. When the Revolution is shown off, expect the same kind of interviews from Nintendo's partners.
 
"No, not equally. Dedicated "split" architectures are biased towards pixel shading because that is what's most demanded. E.g. the upcoming PC cards have 8 vertex shaders and maybe 24 or 32 pixel shaders. Again, that isn't a plucked-from-the-air split, it's done that way for a reason, that is to accomodate the most common case."

True, but that's still not going to come close to real-world needs.

Think about it. In the real world, the player controls what he's looking at with just his thumb. The developer has control, but not complete control, because you are also the other half of the puzzle that tells your system what to render at any one point.

So say you have a scene where you have a character sitting at the edge of a city looking over an ocean. At that point, while he's staring at that ocean, you have a bunch of geometry far away that is low-textured (boats, a refinery, etc.) because you will never get close enough to see it. So right now the vertex shaders are working pretty hard, but your pixel shaders aren't. All of a sudden your player turns and looks at the buildings next to him. In that second the amount of workload needed for the pixel shaders shoots up.

So in the real world, with dedicated vertex and pixel shaders, you still have to develop for the common denominator (no more vertex information than your vertex shaders allow at any one time, no more pixel information than allowed at any one time).

On Xenos, on the other hand... you can now have 3 or 4 refineries on that ocean and 3 times more boats and even more realistic-looking waves... because your pixel shaders can do the same thing your vertex shaders can do. When you turn around, the job gets split automatically again. That's the beauty of this system. It's not only efficient, it's also more or less automatic.


"The issue isn't really total vertices vs. pixels in a scene, it's what a camera is doing at any given clock cycle. There would be situations where the camera might have to only render a very large polygon because the camera is really zoomed in and when its far away then the reverse situation is the case where you start having to render a ton of vertices. I don't see how a developer can design for those situations."


Thank you. You said what I wanted to say in a lot fewer words :)



Another thing: with the USA in Xenos vs RSX, it's worth pointing out that the CELL in the PS3 is supposedly able to help out with vertex work... so that might help even things out in cases like my example above. But how well that's going to work, and how or if it will be implemented in the real world, remains to be seen.
 
" agree, it's definitely the most novel design. As for shader efficiency, remember that RSX should be more efficient on one level (inside shaders), Xenos on another (between shaders). The picture is much more complex than a simple clockspeed difference."

What do you mean by inside the shaders? Oh and I agree clockspeed difference is hardly going to make a difference in this case.
 
jimbo said:
" agree, it's definitely the most novel design. As for shader efficiency, remember that RSX should be more efficient on one level (inside shaders), Xenos on another (between shaders). The picture is much more complex than a simple clockspeed difference."

What do you mean by inside the shaders? Oh and I agree clockspeed difference is hardly going to make a difference in this case.
I think his assumption is that a general shader is less powerful than a specialized one.
 
jimbo said:
So say you have a scene where you have a character sitting at the edge of a city looking over an ocean. At that point, while he's staring at that ocean, you have a bunch of geometry far away that is low-textured (boats, a refinery, etc.) because you will never get close enough to see it. So right now the vertex shaders are working pretty hard, but your pixel shaders aren't.

If you're looking at an ocean with a city far in the distance, I'm not sure how your vertex shaders are working overtime. The city far away would have been chopped down by LOD, the ocean would be the bulk of the work, but I don't think that'd be stressing the vertex shaders.
jimbo said:
All of a sudden your player turns and looks at the buildings next to him. In that second the amount of workload needed for the pixel shaders shoots up.

Again, I don't think your example works very well here. Water rendering is very pixel-shader heavy. I think switching to buildings would actually be a relief to the pixel shaders.

As for staring at one polygon, yeah, your vertex utilisation goes through the floor, but what do you expect Xenos to do in that situation? Start arbitrarily pumping up pixel detail? That's not going to happen, the polygon or whatever isn't going to start changing in appearance to look better. Both chips would go underutilised in such a situation.

Despite the examples, I get your point, I completely agree with that - Xenos adapts to arbitrary instruction mixes much better. However, my point is that many questions remain regarding its relative performance. Better utilisation on that level does not necessarily imply better performance. Perhaps if all else were equal it would, but all else is not necessarily equal as I've outlined before (the question of internal shader efficiency, clock speed, raw shading logic etc.). Sometimes the relatively "dumb" approach can be better.
 
Hajaz said:
http://www.beyond3d.com/articles/xenos/index.php?p=09



Looks to me like it can do both, way beyond SM3.0 spec, so I don't see what the problem is tbh

Well you don't work for NVidia evidently ;) They do see a problem. ATi doesn't.

My (+) was there for a reason btw - and that NVidia pressed for the removal of the unified hardware requirement from WGF2.0 suggests they see issues even at SM4.0.

ATi could have struck on the magic sauce to make everything unified fine and dandy, or NVidia may be correct (and NVidia do have more real-product experience with 3.0+ architectures, so perhaps they had a little clearer vision as to any issues with unifying it). We'll have to wait and see how it pans out.
 
"Again, I don't think your example works very well here. Water rendering is very pixel-shader heavy. I think switching to buildings would actually be a relief to the pixel shaders."

You're probably still thinking in terms of the way current water is being done on today's consoles. Let's strap a gooey looking texture on top of a polygon and give it some reflective qualities. Then make it move a little. I'm talking about deformation, waves, physics...the way it's supposed to be done...this is VERTEX TRANSFORMATION and DEFORMATION and it does require a lot of vertex processing.

This is why I picked an ocean and not a lake.
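To show the kind of per-vertex work I mean, here's a toy version of sine-wave ocean displacement (purely illustrative; the wave parameters and grid size are made up):

```python
import math

def ocean_height(x, z, t, waves):
    """Toy deforming ocean: sum of sine waves displacing each vertex's height.

    waves: list of (amplitude, wavelength, speed, dir_x, dir_z) tuples.
    This is per-VERTEX work; a flat quad with a scrolling 'gooey' texture
    does none of it.
    """
    y = 0.0
    for amp, length, speed, dx, dz in waves:
        k = 2.0 * math.pi / length
        y += amp * math.sin(k * (dx * x + dz * z) + speed * t)
    return y

# Every frame, every vertex of the water mesh gets re-evaluated:
waves = [(0.5, 8.0, 1.2, 1.0, 0.0), (0.2, 3.0, 2.0, 0.7, 0.7)]
grid = [(x, z) for x in range(256) for z in range(256)]     # ~65k vertices
heights = [ocean_height(x, z, 0.0, waves) for x, z in grid]
print(f"{len(heights)} vertices displaced this frame")
```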
 
jimbo said:
"That all means that the Xbox 360 runs at 100% efficiency all the time, whereas previous hardware usually runs at somewhere between 50% and 70% efficiency. And that should means that clock for clock, the Xbox graphics chip is close to twice as efficient as previous graphics hardware. "


This is probably the most important part of that whole interview, and everyone needs to start taking notes. Now, 100% is saying A LOT, and it probably won't be quite 100%; in the Beyond3D article it was more like 95%, and that's still theoretical, so it remains to be seen.

However, if the 360 GPU can do that, even if you have another GPU that theoretically EQUALS Xenos but is based on traditional methods, like the RSX, you're talking about a HUGE difference in efficiency. Efficiency has always been the biggest bottleneck for chips, so if ATI can pull this off, then they're probably right in saying it's going to be the most powerful chip out of anything that comes out in the near future (though I doubt 5 years).

This remains to be proven, but if it turns out right... Xenos is going to be mentioned quite a bit in future GPU articles. But don't expect it in PCs any time soon. Because the 360 is a closed platform, ATI was able to design a chip and implement their USA (unified shader architecture) without risk, for the first time since they started talking about it.

It's actually quite similar to CELL: the way CELL was designed for raw FLOPs, this chip was designed for making console graphics shine.

When Richard said 5 years, he was talking in terms of game console lifecycles.
 
jimbo said:
"Again, I don't think your example works very well here. Water rendering is very pixel-shader heavy. I think switching to buildings would actually be a relief to the pixel shaders."

You're probably still thinking in terms of the way current water is being done on today's consoles. I'm talking about deformation, waves, physics...the way it's supposed to be done...this is VERTEX TRANSFORMATION and it does require a lot of vertex processing.

If all you're looking at is the ocean, and a LODed low poly skyline, I don't think either chip would have any trouble from a vertex shading POV. Your ocean would also be LODed, btw.

If you wanted to do stuff with VS beyond what the GPU was capable of, there's a piece of kit at the other end of a very fat pipe that may be useful..

As above, I completely get what you're trying to say, and I'm sure there are situations that would illustrate the point, I'm just not sure if this example is the best.
 
Mrbob said:
I think the Xenos GPU is the most interesting design...
Well, I'll bet the Nintendo Rev. GPU ends up as the most interesting design. FASCINATING, in fact! But not in a good way.
 
gofreak said:
However, my point is that many questions remain regarding its relative performance. Better utilisation on that level does not necessarily imply better performance. Perhaps if all else were equal it would, but all else is not necessarily equal as I've outlined before (the question of internal shader efficiency, clock speed, raw shading logic etc.). Sometimes the relatively "dumb" approach can be better.
Agreed, but I would hope that ATI did enough profiling with real world apps to see if they aren't tackling a problem that doesn't exist. I think Nvidia has said that they will eventually go with a unified design so they must see some benefits of it or else they would stay with their current designs.
 
dorio said:
Agreed, but I would hope that ATI did enough profiling with real world apps to see if they aren't tackling a problem that doesn't exist. I think Nvidia has said that they will eventually go with a unified design so they must see some benefits of it or else they would stay with their current designs.

There is a problem, but this may not be solving it for now. Or it may be solving it but creating others.

Adopting it in the future says nothing about its suitability now. Everyone's moving toward more general chips, and unifying the pipe is a part of that, but evidently not everyone agrees on the timing.

There's a lot to consider if comparisons are to be made, particularly between chips that are so different. No design is perfect, everything has its ups and downs, and it's a tricky job weighing those off against one another.
 
Got an interesting quote from oa for you guys btw


"Got to talk to an old friend of mine from HS. Hes an AI programmer. He designed the orginal Splinter Cell's AI and the AI for Deus Ex 2 and Thief 3. Bastard even had a level from Splinter Cell named after him. Long story short, hes currently working at Midway on next gen stuff. Hes got working specs and kits on both X360 and PS3. He says X360 will hold up just fine against PS3. It even edges it out in some categories. But all in all he says its gonna be extremely difficult to see a difference in the systems. He says the only thing that may bite Sony in the "graphical" ass is the lack of anti-aliesing. Other than that hes under strict "we will fire your ass if you talk" NDA about what hes working on so don't ask."


We do easily forget about the free AA thing. If Xenos is twice as efficient as a normal GPU AND it doesn't take a 20% hit on AA like normal GPUs do, then NV will have to come up with a lot of "dumb" firepower to match it.
 
"If all you're looking at is the ocean, and a LODed low poly skyline, I don't think either chip would have any trouble from a vertex shading POV. Your ocean would also be LODed, btw.

If you wanted to do stuff with VS beyond what the GPU was capable of, there's a piece of kit at the other end of a very fat pipe that may be useful..

As above, I completely get what you're trying to say, and I'm sure there are situations that would illustrate the point, I'm just not sure if this example is the best."


Ok well let me rephrase and make it more general then.

Say you have a game level split into 3 parts, A, B and C... beginning, middle and end.

Part A of that level would be extremely vertex-heavy, utilizing 80% of all shaders on Xenos as vertex shaders. Part B is a transition point (where you could split up the work between vertex and pixel shaders like the old traditional way), which would also act as a visual barrier/constraint between C and A (put a hallway, for example, between A and B and between B and C) so a gamer could never have both A and C in the same view. Even simpler, part B could just be an L-shaped hallway. And part C you would make extremely pixel-shader heavy.

This is where you can see that, for the first time, developers have the freedom to design something like this.

And to make my point, on traditional cards with dedicated vertex and pixel shaders... you simply could never do parts A and C of that level. All three parts would have the same maximum amount of vertex and pixel information at any one point. You could have less, yes, but not more.

USA is going to allow for some pretty amazing level designs on top of everything else.

And let's keep in mind USA is just ONE of Xenos's advantages over traditional cards. Its eDRAM... is a whole different story too. I'm really excited about what this GPU is going to be able to do for scenes like forests and grass once developers learn how to use it. Because now that strand of grass or leaf doesn't have to be moved back and forth from main RAM to the GPU, sucking up bandwidth... you can just store it right in the eDRAM and use it freely, and keep your bandwidth free and clear of excess and reusable data (the GC is already using this btw, so it's nothing new, it's just a lot more evolved).

The more I learn about this chip the more excited I get about it.

"we do easily forget about the free AA thing. if Xenos is twice as efficient as a normal gpu AND it doesnt take a 20% hit on AA like normal GPU's do, then NV wil have to come with a lot of "dumb" firepower to match it"

Yes, exactly, and I don't think they'll ever be able to get it running at 700MHz-1000MHz (depending on its efficiency) to come up with that firepower, especially with the current yields and heating problems they're having with the G70. There's really only one reason why Nvidia hasn't come out guns blazing to blast Xenos... because they know they'll eventually use this in their own chips. Just like they took the lead with pixel and vertex shaders and set the standard, now ATI is setting a new standard. I wouldn't doubt it if Nvidia can come back and use this same technology to blow ATI's card out of the water, but as of right now, ATI's got the edge right here. It's just too bad ATI can't use it in PCs just yet.
 
gofreak said:
Adopting it in the future says nothing about its suitability now. Everyone's moving toward more general chips, and unifying the pipe is a part of that, but evidently not everyone agrees on the timing.
Maybe that's what I don't quite see. If you have a closed environment to work in like consoles are then what's the timing issue for nvidia? Why is this design bad now but good in the future?
 
This thread lost me about halfway through page 1. I'm dumb. Sounds interesting though!
 
"Maybe that's what I don't quite see. If you have a closed environment to work in like consoles are then what's the timing issue for nvidia? Why is this design bad now but good in the future?"

I believe there are at least two reasons we know of. One... it's still an unproven chip; the 360 will decide its fate. Two, I've read there are compatibility issues with games and programs designed for traditional graphics cards, as well as other functions the GPU performs that aren't necessarily related to graphics in games. Remember, Xenos is designed as a graphical beast for console games. Before they can even begin to make a chip like Xenos for the desktop, they would probably have to design a transition chip (or add transitional elements to this chip) that will allow programs designed in the traditional ways to run smoothly on something like this.

Right now it's just a lot easier to simply beef up speed and horsepower on GPUs and leave efficiency where it is.

Which brings me to another point. If Xenos can do everything it's hyped up to do, then what we might see here is that console games might have a graphical edge over PC games for a lot longer than we are used to (only a couple of months). With the first Xbox it was quite easy for PCs to take the lead again. All it took was a faster version of the current chips, or the one that was in the Xbox. With Xenos, faster speeds might just not be enough to take it in the first 6 months.

Of course it's been said before and I will say it again. Before us Xenos lovers get too excited, it still remains to be seen just how INEFFICIENT current graphics cards are. If they're only 50%-70%, then that would bode VERY well for Xenos, as it could take twice the speed of current hardware to match it. If they're more like 80%, it wouldn't take all that much.

This is another thing that has to be witnessed before they all quickly change to USA... just how much more efficient is Xenos? No benchmark tests yet.
 
Gahiggidy said:
Well, I'll bet the Nintendo Rev. GPU ends up as the most interesting design. FASCINATING, in fact! But not in a good way.


Heh, you seem to be taking the Revolution non-HD era a little hard. I almost feel sorry for you. ALMOST. Then I remembered you are a Nintendo fan and should be desensitized to the pain by now. :D
 
jimbo said:
"Maybe that's what I don't quite see. If you have a closed environment to work in like consoles are then what's the timing issue for nvidia? Why is this design bad now but good in the future?"

I believe there are at least two reasons we know of. One... it's still an unproven chip; the 360 will decide its fate. Two, I've read there are compatibility issues with games and programs designed for traditional graphics cards, as well as other functions the GPU performs that aren't necessarily related to graphics in games. Remember, Xenos is designed as a graphical beast for console games. Before they can even begin to make a chip like Xenos for the desktop, they would probably have to design a transition chip (or add transitional elements to this chip) that will allow programs designed in the traditional ways to run smoothly on something like this.

Right now it's just a lot easier to simply beef up speed and horsepower on GPUs and leave efficiency where it is.

Which brings me to another point. If Xenos can do everything it's hyped up to do, then what we might see here is that console games might have a graphical edge over PC games for a lot longer than we are used to (only a couple of months). With the first Xbox it was quite easy for PCs to take the lead again. All it took was a faster version of the current chips, or the one that was in the Xbox. With Xenos, faster speeds might just not be enough to take it in the first 6 months.

Of course it's been said before and I will say it again. Before us Xenos lovers get too excited, it still remains to be seen just how INEFFICIENT current graphics cards are. If they're only 50%-70%, then that would bode VERY well for Xenos, as it could take twice the speed of current hardware to match it. If they're more like 80%, it wouldn't take all that much.

This is another thing that has to be witnessed before they all quickly change to USA... just how much more efficient is Xenos? No benchmark tests yet.
But those 2 points are irrelevant in a console environment, which is why I question why Nvidia didn't do a unified design for the PS3. Every console chip is unproven. That's what testing is for, plus you're designing all your software for one target, so those incompatibilities are not the issue.
 
There are a number of other issues to look at if you wish to make comparisons also. They've aimed for efficiency on one level - the level of shader utilisation - but what about efficiency inside the shader? On that level, dedicated shaders should outperform unified shaders and would be more efficient. To what degree, we don't know, but that'll be a key point in determining if the tradeoff was worthwhile.


This is a key point that I am very much wondering myself....

Unified shaders can perform operations on either vertices or pixels...for this flexibility you sacrifice performance, which is why nVidia claims they didn't want to go with unified shaders at this time...

ATI is talking big about 100% efficiency with the USA but what they are *not* talking about is how fast the USA is and how good its performance will be vs. a vertex/pixel shading architecture...

If the Xenos USA has ~100% efficiency but is only 50% as fast as a GPU with dedicated pixel/vertex shaders then it is a wash....
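To put hypothetical numbers on that "wash" scenario (the utilisation and per-shader speed figures below are assumptions, not anything ATI or NVIDIA have published):

```python
# The "wash" scenario with made-up numbers: effective throughput is roughly
# (utilisation) x (relative per-shader speed), with the shader count held equal.
unified   = 1.00 * 0.50   # ~100% utilisation, but each shader only half as fast
dedicated = 0.60 * 1.00   # ~60% utilisation, full-speed dedicated shaders

print(f"unified:   {unified:.2f}")
print(f"dedicated: {dedicated:.2f}")
# At a 50% per-shader penalty the utilisation win is wiped out (a wash, or worse);
# at, say, 90% per-shader speed the unified part would still come out well ahead.
```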

Unfortunately, ATI isn't giving performance numbers of how Xenos USA compares to even their own internal ATI PC cards (something they could easily do) so the "100% efficiency™"-card is useless within the context of how it performs vs., say RSX....

Of course it's been said before and I will say it again. Before us Xenos lovers get too excited, it still remains to be seen just how INEFFICIENT current graphics cards are. If they're only 50%-70%, then that would bode VERY well for Xenos, as it could take twice the speed of current hardware to match it. If they're more like 80%, it wouldn't take all that much.

That is only one side of the coin... it's not just about efficiency but speed/performance... if general-purpose USAs only have 50-60% of the performance of dedicated pixel/vertex shaders, then that would nullify most of the advantage...

ATI won't do that comparison... not even against their own PC cards!!!!


Speaks volumes, IMO....
 
"Got to talk to an old friend of mine from HS. Hes an AI programmer. He designed the orginal Splinter Cell's AI and the AI for Deus Ex 2 and Thief 3. Bastard even had a level from Splinter Cell named after him. Long story short, hes currently working at Midway on next gen stuff. Hes got working specs and kits on both X360 and PS3. He says X360 will hold up just fine against PS3. It even edges it out in some categories. But all in all he says its gonna be extremely difficult to see a difference in the systems. He says the only thing that may bite Sony in the "graphical" ass is the lack of anti-aliesing. Other than that hes under strict "we will fire your ass if you talk" NDA about what hes working on so don't ask."


O.K.... let's take a look-see here...


So his buddy was the AI programmer for the original Splinter Cell, Deus Ex 2 and Thief 3... is this correct??


So the original Splinter Cell (Ubisoft Montreal) shipped 11/17/02....

12 1/2 months later (12/02/03), Deus Ex: Invisible War shipped...

FIVE MONTHS LATER (5/25/04), Thief: Deadly Shadows shipped...

All three titles were in development for more than 2 years each, *YET* Mr. Super AI Programmer did SIX man-years of AI design (3 games * 2 years in development) in 18 MONTHS (11/17/02-5/25/04).

That is impressive beyond belief!! :lol :D :lol :D
 
I feel sorry for the people expecting the Rev GPU to be more powerful than Xenos. Seriously, we are talking tech that has to be small enough to fit into the Rev and be able to sell at $200. And most likely Nintendo will try not to lose money on it, unlike MS, who will allow a loss of up to $100 on each console.

And yet somehow Nintendo will magically have better graphics. :/
 
"But those 2 points are irrevelant in a console environment which is why I question why nvidia didn't do a unified design for the ps3. Every console chip is unproven. That's what testing is for plus you're designing all your software for one target so those incompatibilities are not the issue."

Oh, well, I didn't know you were asking me about the PS3; I thought you were asking why not in PCs. First of all, ATI is just further along with the USA architecture, has a perfect opportunity to be the first to make it work, AND has been working on this chip for a longer time. Nvidia only started working on the PS3 chip last year, and just like with the original Xbox, time constraints are what's making them just use a derivative of their G70, just like the NV2A was a derivative back in the day for the Xbox.

That's like asking why ATI weren't the first ones to use pixel and vertex shaders... Nvidia just did it first. ATI are simply taking a chance on something new which could be the next big thing, and Nvidia just didn't have the luxury of having a risk-free environment like the 360 to make something like this from the beginning... or simply didn't think of it first. They have to crunch out something in a year that they know will be good and hold up for the PS3. So they went with reliability. ATI's been testing this for years. They were able to take the risk.
 
RSX almost definitely has an advantage over Xenos in pure fillrate:

RSX: 13.2 Gpixels/sec or 13,200 Mpixels/sec (24 * 550) vs Xenos: 4 Gpixels/sec or 4,000 Mpixels/sec (8 * 500)

However, if RSX has to do 4x FSAA, that high fillrate is going to drop like a ROCK.

That is why Xenos has the 'equivalent' of 16 Gpixels/sec or 16,000 Mpixels/sec, which comes out on top of RSX when anti-aliasing is figured into the equation. This will give Xenos the advantage in anti-aliasing, but not pure fillrate, where RSX should be able to apply its strength. There have to be other areas where RSX has advantages over Xenos and where Xenos has advantages over RSX.
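The rough arithmetic behind those figures, treating the pipe counts and clocks above as given and assuming the naive model where 4x multisampling simply divides a conventional part's usable fillrate by 4:

```python
# Back-of-envelope on the fillrate figures above (pipe counts and clocks taken
# as given from this thread, and a deliberately naive AA cost model).
rsx_pipes, rsx_mhz = 24, 550
xenos_rops, xenos_mhz = 8, 500
aa_samples = 4

rsx_fill = rsx_pipes * rsx_mhz / 1000.0        # 13.2 Gpixels/s
xenos_fill = xenos_rops * xenos_mhz / 1000.0   # 4.0 Gpixels/s

rsx_4xaa = rsx_fill / aa_samples               # 3.3 Gpixels/s if AA eats fillrate
xenos_equiv = xenos_fill * aa_samples          # the "equivalent" 16 Gsamples/s

print(f"RSX   no AA {rsx_fill:.1f}   with 4xAA {rsx_4xaa:.1f} Gpixels/s")
print(f"Xenos no AA {xenos_fill:.1f}   4xAA 'equivalent' {xenos_equiv:.1f} Gsamples/s")
```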

Neither GPU is going to shit on the other.
 
God's Hand said:
Laugh now, but Revolution is quite a capable machine. Can't wait till it's all revealed before year's end.
Capable perhaps, but still a lot less powerful than X360 and PS3, which is what this ATI guy is confirming (though Miyamoto practically confirmed it ages ago in that E3 IGN interview). And with no HDTV support, there's little left to argue concerning Revolution's graphical power against the other consoles.
 