
ATI interview on the new consoles.

Scrow said:
xbox360 is only twice as powerful as Xbox. So what's the big deal about Revolution being "only" 2-3 times as powerful as Gamecube?


;P


Xbox 360 is more than twice as powerful as Xbox.


in some areas, Xbox 360 is 3 to 5 times as powerful as Xbox
(GPU raw specs and main memory bandwidth.)

in other areas, Xbox 360 is 7 to 14 times as powerful as Xbox.
most comments on Xbox 360 vs Xbox fall into this range.


in CPU power, Xbox 360 would appear to be *dozens* of times more powerful than Xbox
(115 Gflop XeCPU vs 1.5 or 3 Gflops Intel CPU ).

on top of this raw power increase (which is more than 2x) the Xbox 360 GPU is *up to*
twice as efficient, which magnifies the sheer power increase that Xbox 360 already has over Xbox.


with that said, Xbox 360 is not quite as much of a leap over Xbox as GameCube was over Nintendo 64,
or as PlayStation 2 was over PlayStation.
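That "dozens of times" figure is just the ratio of the peak numbers quoted above; a quick sanity check, treating both figures as the rough marketing-style peaks they are:

```python
# Rough peak-FLOPS ratios using the figures quoted above (marketing-style peaks,
# not sustained performance).
xecpu_gflops = 115.0           # quoted XeCPU peak
xbox_cpu_gflops = (1.5, 3.0)   # quoted range for the original Xbox CPU

for gflops in xbox_cpu_gflops:
    print(f"XeCPU / Xbox CPU: ~{xecpu_gflops / gflops:.0f}x")
# -> ~77x and ~38x, i.e. 'dozens of times', on paper at least
```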
 
midnightguy said:
bullshit, RSX is *NOT* double Xenos in brute power or in the number of on-chip processing units, also known as 'functional units'. If RSX were as you are saying, it would have significantly more transistors, like 400 to 500 million. but guess what, it does not. it has slightly over 300 million.

it is more like, the *modest* advantages that RSX has in *some* areas are nullified by Xenos' very significant efficiency. on top of that, aside from efficiency, there are areas where Xenos outright beats RSX.

they are both different GPUs that accomplish things in different ways, but both will arrive at roughly the same 'plane' that will be what we come to know as 'next-gen' console graphics.


Yeah, on the surface, I do agree with you...

RSX, I would think, CAN'T be double the performance of XENOS...that would be a huge jump...

I also think that you can't really look at XENOS or RSX alone as they seem to be designed with "systems thinking" in mind and are only one part of the graphic puzzle...

Besides the traditional CPU/GPU relation you will have the very real option of XeCPU+Xenos and CELL+RSX solutions to graphics, which is very exciting to me...

Unfortunately, this just introduces another confusing element to the puzzle:

Cell is more powerful than XeCPU, but when used with RSX will that flexibility be enough to compensate for the efficiency of Xenos?

How effective will XeCPU+Xenos operation be when only 1MB of L2 cache has to be shared between 3 CPU cores AND Xenos, which might lock part of the L2 cache, starving the 3 cores even more?

Will Cell be effective enough at pixel shading assist to be worth the trouble for developers?

Compared to dedicated pixel/vertex shaders how fast *are* those ALUs in Xenos, really?


These are just some of the questions a dumbass like me can come up with.....I am sure this is just the tip of the iceberg of some of the tradeoffs to both of these consoles....we will just have to wait and see :/
 
I haven't seen an independent developer/publisher who actively works with both PS3/X360 state X360 is more powerful than PS3, which is probably why you are seeing all the MS damage control/mis-information these days

Quite a few have said one will not have a noticeable graphical advantage over the other.
 
TheDuce22 said:
Quite a few have said one will not have a noticeable graphical advantage over the other.


Yeah but not one has said X360 is more powerful than PS3.....most give PS3 the edge...

midnightguy said:
Xbox 360 is more than twice as powerful as Xbox.


in some areas, Xbox 360 is 3 to 5 times as powerful as Xbox
(GPU raw specs and main memory bandwidth.)

in other areas, Xbox 360 is 7 to 14 times as powerful as Xbox.
most comments on Xbox 360 vs Xbox fall into this range.


in CPU power, Xbox 360 would appear to be *dozens* of times more powerful than Xbox
(115 Gflop XeCPU vs 1.5 or 3 Gflops Intel CPU ).

on top of this raw power increase (which is more than 2x) the Xbox 360 GPU is *up to*
twice as efficient, which magnifies the sheer power increase that Xbox 360 already has over Xbox.


with that said, Xbox 360 is not quite as much of a leap over Xbox as GameCube was over Nintendo 64,
or as PlayStation 2 was over PlayStation.

IAWTP
 
TheDuce22 said:
Quite a few have said one will not have a noticeable graphical advantage over the other.

I would suggest this is mostly due to people not knowing enough about both machines, and hoping to shut people up.


For me, RSX with fixed pipelines *is* limited. I don't know if they've announced how it's split, but if it's similar to their other setups, then I can't help but think that

a) their vertex pipes will not be able to transform the kinds of numbers we will be looking at next gen (i.e. you'll need/want CPU assistance), and

b) given (a), you'd have made better use of the silicon making it a pixel shader only chip.
 
Kleegamefan said:
Yeah but not one has said X360 is more powerful than PS3.....most give PS3 the edge...



IAWTP


comments based on what? funny that when those power comments were made, both PS3 and Xbox 360 devkits were based on hardware which won't be in the consoles at all.
 
I just hope we don't get into a polygon-pushing pissing contest again when deciding which GPU is 'better'. With all the extra hardware effects being used you can make more detailed models with fewer polygons.

If anything, reading about Xenos and RSX makes me a little sad that high-end PC cards are sold for $500+ and will never be truly pushed, whereas Xenos and RSX will be far more fully utilized.
 
midnightguy said:
bullshit, RSX is *NOT* double Xenos in brute power or in the number of on-chip processing units, also known as 'functional units'. If RSX were as you are saying, it would have significantly more transistors, like 400 to 500 million. but guess what, it does not. it has slightly over 300 million.

it is more like, the *modest* advantages that RSX has in *some* areas are nullified by Xenos' very significant efficiency. on top of that, aside from efficiency, there are areas where Xenos outright beats RSX.

they are both different GPUs that accomplish things in different ways, but both will arrive at roughly the same 'plane' that will be what we come to know as 'next-gen' console graphics.
Actually I was talking about the PS3 as a whole, since the PS3 can do 2 teraflops, that's 2 times more powerful than the 360. The RSX has more advantages, you have dot product, the number of shader ops that can be done, the RSX simply is more powerful than the Xenos. The only thing the Xenos has over the RSX, "maybe," is being easy to work with
 
cobragt3 said:
Actually I was talking about the PS3 as a whole, since the PS3 can do 2 teraflops, that's 2 times more powerful than the 360. The RSX has more advantages, you have dot product, the number of shader ops that can be done, the RSX simply is more powerful than the Xenos. The only thing the Xenos has over the RSX, "maybe," is being easy to work with

chuga chuga chuga chuga choo choo!

all aboard!
 
Actually I was talking about the PS3 as a whole, since the PS3 can do 2 teraflops, that's 2 times more powerful than the 360. The RSX has more advantages, you have dot product, the number of shader ops that can be done, the RSX simply is more powerful than the Xenos. The only thing the Xenos has over the RSX, "maybe," is being easy to work with

I hate it when people pretend they know what they are talking about.
 
Kleegamefan said:
Yes and no...

We really don't have enough information yet....we know very general things about RSX (can get pixel/vertex assist from cell, large pipe between Cell/RSX, traditional vertex/pixel architecture, probably no eDRAM)

And although we know a lot more about the efficiency of Xenos, we don't know how fast it performs vs. traditional vertex/pixel architectures....not even against ATI's own PC cards...

Until we can get some comparative benchmarks and/or performance figures, the Xenos USA could just be a "jack of all trades, master of none" type tradeoff for all we know....

There are no perfect solutions, and RSX and Xenos are no different...

For example with RSX:

Not as efficient as Xenos

Doesn't seem to have enough bandwidth to do 128-bit HDR, 1080p and FSAA simultaneously (perhaps it can get some assist from CELL, which would introduce other tradeoffs; see the back-of-envelope sketch after this list)

Seems to be less customized than Xenos, which was designed for a console on day one

No eDRAM, so bandwidth-demanding ops like AA will take a bigger hit than with Xenos..
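Rough back-of-envelope for why that 128-bit HDR + 1080p + FSAA combination looks doubtful; the target size, bytes per pixel, sample count and 256MB pool are all assumptions for illustration, not confirmed RSX figures:

```python
# Framebuffer footprint for 1080p + 128-bit color + 4xAA (illustrative assumptions,
# not confirmed RSX specs).
width, height = 1920, 1080
bytes_per_pixel = 16          # 128-bit floating-point color
aa_samples = 4                # 4x multisampling keeps 4 color samples per pixel

color_mb = width * height * bytes_per_pixel * aa_samples / (1024 ** 2)
print(f"color samples alone: ~{color_mb:.0f} MB")      # ~127 MB

assumed_gddr3_mb = 256        # commonly cited RSX-attached GDDR3 pool (assumption)
print(f"share of a {assumed_gddr3_mb} MB pool: {color_mb / assumed_gddr3_mb:.0%}")
```

And that is before Z/stencil or the bandwidth needed to actually blend into such a buffer, which is the tradeoff being pointed at above.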


I could go on, but we have plenty of members who are much better at pointing out the weakness of PS3 than I ;)


On the surface it does seem that nVidia (once again) has taken a brute-force approach and ATI has taken a nimble approach....that is not to say Xenos isn't powerful or RSX is inefficient....we just don't know enough of the picture yet...

One thing we have seen is rumors of 3rd party developers coming flat out and saying PS3 is more powerful than X360, and while that is a flawed comparison with the early dev kits and all, it is all we have to go on comparatively right now...

I haven't seen an independent developer/publisher who actively works with both PS3/X360 state X360 is more powerful than PS3, which is probably why you are seeing all the MS damage control/mis-information these days :D

At the resolutions that PS3 and Xbox 360 will be running, it will be difficult to really notice any AA problems.

Also, Xbox 360 does in fact have an edge when it comes to AA, because it has 10MB of eDRAM, which is used to buffer frames for AA etc. But that is pretty much irrelevant. With all the power these consoles have, it's pretty much unnecessary, it just makes it a little bit easier to code for.

You might notice a lot of people talking about the Xbox 360's huge bandwidth, I believe the 256GB/s+ of bandwidth, but that ONLY applies to the GPU-eDRAM link, nothing else.

For instance, the bandwidth between the Cell and RSX is larger than that between the Xbox 360's GPU and its CPU.
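For reference, the link figures being compared here, using the numbers commonly cited from the 2005 spec sheets (treat them as assumptions rather than measured results):

```python
# CPU<->GPU link bandwidths as commonly quoted in 2005 spec sheets
# (aggregate of both directions; assumptions, not measurements).
links_gb_per_s = {
    "Cell <-> RSX (FlexIO)":            20.0 + 15.0,   # ~20 GB/s write + ~15 GB/s read
    "XeCPU <-> Xenos (front-side bus)":  10.8 * 2,      # ~10.8 GB/s each way
}
for name, bandwidth in links_gb_per_s.items():
    print(f"{name}: ~{bandwidth:.1f} GB/s aggregate")
```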
 
anyway you're wrong about it being twice as powerful. even Sony-loyal tech heads like Panajev have stated that the twice-as-powerful estimate Sony gave was far from true
 
midnightguy said:
it is more like, the *modest* advantages that RSX has in *some* areas are nullified by Xenos' very significant efficiency. on top of that, aside from efficiency, there are areas where Xenos outright beats RSX.

they are both different GPUs that accomplish things in different ways, but both will arrive at roughly the same 'plane' that will be what we come to know as 'next-gen' console graphics.

But by the way you phrased it, any and all "modest" advantages of the RSX will be nullified by ATI's USA, and Xenos will outright beat RSX in other areas. That doesn't sound like roughly the same plane to me. That sounds like Xenos has a solid gap. It'll have its cake and eat it too if we're to assume you are correct.

Given how little information has been released on RSX (or its possibilities in collaboration with Cell for that matter) I fail to see how one could come to such broad sweeping conclusions either way, but it's interesting stuff nonetheless.
 
Kleegamefan said:
But unified shaders, even at the same clock speed, != the performance of dedicated shaders!!

USA = can do either vertex or pixel work, which is highly efficient, *BUT* they are not as fast clock for clock/cycle for cycle as dedicated pixel or vertex shaders at those operations...

Where did you hear that unified shaders aren't as powerful? Link, please.

Sony seems to be trying to differentiate the PS3 from the Revolution and the 360 with its raw power, especially its use of Blu-ray for full support of "true" high definition entertainment.
Blu-ray will have no effect on what we see on the screen in games.
 
Deg said:
What do you guys think of Samsung's 90nm XDR DRAM in PS3? :)


Very low latency.....much less than GDDR3, it seems...Nice that RSX can use it instead of/in addition to GDDR3..

I am sure that shit (XDRAM) is expensive, though.....
 
Kleegamefan said:
I have heard comments to that effect from nVidia (biased source) and nA0 and DeanoC (both developers) over at Beyond3D....

any of the beyond 3d guys have any guesses as to how much slower the USAs are? If we are talking 5-10% slower, then the USA seems like it would be a success; if we are talking 30% or so, it seems like a wash.

I would assume that ATI wouldn't push into risky new territory like this if they didn't think there would be an overall gain for doing so. Unless they are using MS's bankroll to play with new toys, which I seriously doubt would happen.
 
Wait, isn't this guy in charge of the X360 GPU, and didn't he state that his team has little if any contact with the ArtX team? I am not saying he doesn't know, but maybe he doesn't know anything about "Hollywood."
 
The problem is that little idiots on message boards are expecting ATI to use interviews to hype up the Revolution when the Revolution hasn't been announced (for real) and the Xbox 2 is coming out this year. It is stupid to think that ATI will say much of anything about the power of the Revolution when they are in the middle of hyping up the product that will be released this year.

When Nintendo shows off the Revolution, expect the same type of interviews from Nintendo's partners.
 
Ulairi said:
The problem is that little idiots on message boards are expecting ATI to use interviews to hype up the Revolution when the Revolution hasn't been announced (for real) and the Xbox 2 is coming out this year. It is stupid to think that ATI will say much of anything about the power of the Revolution when they are in the middle of hyping up the product that will be released this year.

When Nintendo shows off the Revolution, expect the same type of interviews from Nintendo's partners.

exactly.. and if ATI has an agenda with USAs as far as PC cards go, that is another reason they would be pushing Xenon. Though, I wouldn't be shocked if their focus for Hollywood was a combo of power/size/heat that may wind up limiting the card overall.

The point is, as of right now, we know nothing about Hollywood, ATI can't even talk about it, and they are *not* going to say "get an Xbox, Revolution will suck" nor will they say "Hey, this Xenos thing is all right, but just wait till you see Hollywood". Even when both chips are out, expect a lot of comparisons to PS3, and a whole lot of "Well, Hollywood and Xenos are good at different things, and fit their product perfectly"
 
dorio said:
Interesting, I hadn't heard that.
Why do you think Xenos is classified as a 48-pipe GPU when it has 48 ALUs? Unified shaders are a blend of VS and PS and in Xenos, they're fragment shaders. I'm sure some of the logic has been moved to eDRAM, and other stuff was changed as well.

Anyway, as I've contended for a while, Xenos is a great chip, but elegance doesn't necessarily trump brute force. 90% of 4GP (Xenos) is still gonna be less than "50-70%" of 8.8GP (RSX). That's assuming 16 ROPs for RSX. It could be more...it could be less. And speculation so far is that RSX will do 128-bit blends, which might make that 128-bit HDR more of a reality.

Personally, I'd like to think that a closed-box design like a console lets devs know exactly what's going on in the hw, so they can write much more efficient code. I don't know what GPU usage was like on purpose-built Xbox games, but I'd like to think it was more than 70%. But I don't know. With most titles going to the PS3, I'd assume they would maximize that Cell<->RSX bond. But even then, I don't see why RSX shouldn't be the more powerful part. PEACE.
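Plugging in the numbers from the fillrate comparison above (Xenos ~4 GP/s at ~90% utilization vs RSX ~8.8 GP/s at 50-70%; all of them the poster's speculative figures, not confirmed specs):

```python
# Effective fillrate = peak fillrate * assumed utilization, using the speculative
# figures from the post above (not confirmed hardware numbers).
xenos_peak_gp, xenos_util = 4.0, 0.90     # GPixels/s, assumed efficiency
rsx_peak_gp = 8.8                         # GPixels/s, assumes 16 ROPs

print(f"Xenos: {xenos_peak_gp * xenos_util:.1f} GP/s effective")
for rsx_util in (0.5, 0.7):
    print(f"RSX at {rsx_util:.0%} utilization: {rsx_peak_gp * rsx_util:.2f} GP/s effective")
# -> 3.6 GP/s vs 4.4-6.16 GP/s under these assumptions
```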
 
dorio said:
Interesting, I hadn't heard that.


Yeah...this stuff is new technology....I am still learning myself..

BTW, here is a link that talks a little about unified shaders:

http://www.atomicmpc.com.au/article.asp?SCID=14&CIID=22720&p=2

Today's top of the line GPUs contain six vertex processors to manipulate geometry and 16 pixel processors to work on pixels. Both types of processors can do similar work and indeed have very similar instruction sets. With the next generation of Windows, Longhorn and the introduction of Windows Foundation Graphics 2.0, pixel and vertex shaders may very well be unified.

While a common piece of hardware with the same instruction set is nice and elegant, it's not clear that it's the best solution. NVIDIA is not convinced. Their Chief Scientist David Kirk called such shader hardware 'Jack of all trades, master of none.' When we spoke to John Montrym, NVIDIA's Chief Architect, he also gave a lukewarm assessment of unified shader hardware. ATI on the other hand has made it clear that its future GPUs will employ unified shader architecture.

So who's right? It's a delicate balancing of issues. A specialised vertex or pixel shader will always be faster than a 'generalised' one. So individually, unified architecture will be at a disadvantage. On the whole, however, a GPU using generalised shaders may prove to be more efficient than one without.

Because pixel processing can only occur after geometry processing, if one of the two stages takes too long, the other sits idle. For example, a vertex shader intensive game with simple pixels operations will overload the six vertex engines while leaving much of the 16 pixel pipelines sitting still. Conversely, a game that's low in geometry but applies a dozen effects to everything will choke the pixel pipelines while leaving the vertex hardware unused.

Unified shader hardware won't have such a problem. If a game is very well balanced between vertex and pixel shading, a unified shader GPU with 32 general shader units will split them evenly for vertex and pixel shading. If a game is geometry dominant, more units will be allocated to do just that while the remaining will work on pixels. As a game's content changes from frame to frame, such a GPU will be able to intelligently allocate its shader resources to best draw the picture.

While such a scheme sounds almost too good to be true, the alternative is by no means doomed. NVIDIA has a track record of making very efficient hardware. Current architectures have fragment buffers between the vertex and pixel pipelines. This alleviates much of the work balancing problem by providing a constant pool of pixels for the pixel shaders to work on. NVIDIA will most likely provide the same instruction set for both vertex and pixel shaders in future GPUs but still use different hardware for both. That being said, in the very long run, NVIDIA may eventually move to a unified architecture.

So it seems that at this point unified shaders are slower than dedicated shaders *in theory*

However, ATI claims to have lots of specialized logic within the ALUs that speed them up 33% on pixel ops according to them.....

We still need to see some comparative benchmarking to see what the tradeoffs are, but ATI are only talking about the pros of USAs, which is understandable....
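The article's load-balancing argument is easy to picture with a toy model: a fixed 6+16 split idles whichever side is under-loaded, while a unified pool of the same 22 units just follows the workload. A minimal sketch with made-up unit counts and per-frame workloads, purely to illustrate the idea:

```python
# Toy model of fixed vs unified shader allocation. All numbers are invented
# to illustrate the load-balancing argument from the quoted article.
def fixed_split(vertex_work, pixel_work, vs_units=6, ps_units=16):
    # Each pool only processes its own kind of work; surplus units sit idle.
    return min(vertex_work, vs_units) + min(pixel_work, ps_units)

def unified_pool(vertex_work, pixel_work, units=22):
    # A unified pool spends every unit on whatever work exists this frame.
    return min(vertex_work + pixel_work, units)

for v, p in [(3, 19), (15, 7), (6, 16)]:   # geometry-light, geometry-heavy, balanced
    print(f"work v={v:2d} p={p:2d}:  fixed={fixed_split(v, p):2d}  unified={unified_pool(v, p):2d}")
```

Under these toy numbers the fixed split only keeps up when the workload happens to match its 6/16 ratio; whether that wins overall then hinges on the per-unit speed penalty discussed in the rest of this post.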

StoOgE said:
any of the beyond 3d guys have any guesses as to how much slower the USAs are? If we are talking 5-10% slower, then the USA seems like it would be a success; if we are talking 30% or so, it seems like a wash.

I would assume that ATI wouldn't push into risky new territory like this if they didn't think there would be an overall gain for doing so. Unless they are using MS's bankroll to play with new toys, which I seriously doubt would happen.


The X360 is the first ever implementation of USAs, so no one outside of a GPU vendor like ATI (who is actually releasing USA hardware) or nVidia (who have experimented with USAs but have not released any products yet) knows for sure...

Any performance guesses one way or the other are just that...guesses..

ATI could really help us out here by comparing Xenos to their PC GPUs but they are choosing not to at this time....which could mean something, or it may not....

We will have to rely on multi-platform developers like the EAs, the Konamis and whatnots to give us a solid answer.....

So far, EA in particular has said PS3 is more powerful, but that is not a fair assessment as neither RSX+CELL nor Xenos+XeCPU devkits are widely available :/
 
Pimpwerx said:
Why do you think Xenos is classified as a 48-pipe GPU when it has 48 ALUs? Unified shaders are a blend of VS and PS and in Xenos, they're fragment shaders. I'm sure some of the logic has been moved to eDRAM, and other stuff was changed as well.
That's all just speculation on your part though. No one has come out and said that the USA-designed GPU in Xenos has reduced shading power compared to other next-gen cards. I'll believe it when someone breaks down the numbers in terms of shader ops etc.
 
"any of the beyond 3d guys have any guesses as to how much slower the USA's are? If we are talking 5-10% slower than the ASU sees like it would be a success, if we are talking 30% or so, it seems like a wash."



From beyond3D:

"Additional to the 48 ALU's is specific logic that performs all the pixel shader interpolation calculations which ATI suggests equates to about an extra 33% of pixels shader computational capability."
http://www.beyond3d.com/articles/xenos/index.php?p=07



Also I really don't understand why some of you are comparing this chip with the RSX the way you are, by simply using peak numbers when we have NO idea how the chip actually works and what OTHER features Nvidia may include before it's all ready to be manufactured. We need an in-depth look at the RSX like we have had at Xenos before we will be able to compare it correctly. Right now it's too early to compare numbers. You really just ought to look at features.
 
Since that 33% number doesn't compare to anything else, it is just that, an assumption....


And read it again, in addition to the ALUs is "specific logic" that gives it a 33% boost compared to an ALU without the additional logic....

And why do you ADD in the fact that it speeds up vertex ops? (2nd time you have done this)...it only specifies pixels....doesn't say anything at all about vertex performance boost...
 
Are you talking to me? If so, where did I specifically say it would speed up vertex ops? And keep in mind, this is a USA architecture. In normal architectures you have a smaller number of vertex shaders compared to pixel shaders ANYWAY, so it's not much of a stretch to simply dedicate more ALUs to vertex shading as needed.
 
Hey Jimbo, you forgot to edit this post on the previous page.....I saw you remove the vertex part of your last post AFTER THE FACT... very nice....and yes I am talking to you....

jimbo said:
"1) ATI claims 100% efficiency, so all pipes are used all the time. But what if those pipes aren't good at doing the job (jack of all trades) as dedicated pixel/vertex pipes? Say they are 30% slower? Suddenly you have the same effective performance as a traditional approach running at 70% efficiency. (more pipes full, but they don't go through the pipes so quick)"

Actually in the beyond3D article ATI claims the Xenos ALUs are 33% MORE effective than current vertex and pixel shaders...so there goes that theory.

"2) Xenos 'free' AA is a great thing, no doubt about it. But real fillrate is only 4GPixels/s. RSX is around 13GPixels/s, which is almost enough to give you the same 4xAA for 'free' if you use 4GP real fillrate. But then RSX can then forego AA and use 13GP for real fillrate. Lots and lots of passes for post scene processing etc. Less simple, but potentially a lot more flexible than Xenos"

Only? That's still quite a bit (remember it only has to do 720p and 1080i, which is actually 540 lines). But yes, the RSX no doubt has an advantage over it in this department and this is why the PS3 is going to be able to do true 1080p. I don't know how many games will do that, but the ones that do will no doubt look simply incredible. Can't wait.

Compare this to the actual quote of the article:
"Additional to the 48 ALU's is specific logic that performs all the pixel shader interpolation calculations which ATI suggests equates to about an extra 33% of pixels shader computational capability."

That DOES NOT SAY the ALUs are "33% MORE effective than current vertex and pixel shaders"....it DOES say that in addition to the ALUs is other logic that speeds up pixel shader ops by 33%....


BTW, did you read this link at all?? it talks a little bit about the pros and cons of USAs:

http://www.atomicmpc.com.au/article.asp?SCID=14&CIID=22720&p=2
 
I used it in a general sentence because USA shaders are both vertex and pixel shaders. But yes, you are correct, they are talking just about pixel shaders; at the same time, you make it sound like I went out of my way to make it sound as if I was talking about vertex shaders. I wasn't.

And AGAIN this is USA architecture. The only way you're going to get faster vertex processing on a traditional card is if ALL of your vertex shaders are being used 100% of the time (which is actually one of the main reasons USA was even invented), while the USA architecture is split EXACTLY the same way as traditional means....which, once again, is the reason it was designed. So you can assign more vertex shaders as needed.

You have a total of 8 vertex shaders running 20% faster. I have 16. You tell me which one is going to be faster overall?
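That question reduces to multiplying unit count by per-unit speed; a quick check using the hypothetical numbers being tossed around in this exchange (none of them confirmed hardware figures):

```python
# Aggregate vertex throughput ~ number of units * relative per-unit speed.
# Unit counts and speed-ups are the hypothetical values from this exchange.
cases = [
    ("8 dedicated @ 1.2x vs 16 unified", 8 * 1.2, 16 * 1.0),
    ("8 dedicated @ 1.3x vs 20 unified", 8 * 1.3, 20 * 1.0),
]
for label, dedicated, unified in cases:
    print(f"{label}: {dedicated:.1f} vs {unified:.1f}")
# The larger unified pool wins on these assumptions, *if* it can actually devote
# that many units to vertex work and the per-unit penalty really is that small.
```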
 
The real question is, which CPU/GPU configuration will most accurately simulate a jiggling pair of jugs? The gentle bob and sway, the texture of goosebump-covered skin as a breeze blows by a pair of playfully erect nipples, the reflective sheen of high-gloss oil applied to thousand-polygon knockers... these will be determining factors in the coming console wars, and the primary motivator for the jump to HD.
 
I used it in a general sentence because USA shaders are both vertex and pixel shaders. But yes, you are correct, they are talking just about pixel shaders; at the same time, you make it sound like I went out of my way to make it sound as if I was talking about vertex shaders. I wasn't.

O.K....sorry about that..

And AGAIN this is USA architecture. The only way you're going to get faster vertex processing on a traditional card is if ALL of your vertex shaders are being used 100% of the time (which is actually one of the main reasons USA was even invented), while the USA architecture is split EXACTLY the same way as traditional means....which, once again, is the reason it was designed. So you can assign more vertex shaders as needed.

You have a total of 8 vertex shaders running 20% faster. I have 16. You tell me which one is going to be faster overall?


I can see how you would think that, but here is the only source I have found so far:

http://www.atomicmpc.com.au/article.asp?SCID=14&CIID=22720&p=2

So who's right? It's a delicate balancing of issues. A specialised vertex or pixel shader will always be faster than a 'generalised' one. So individually, unified architecture will be at a disadvantage. On the whole, however, a GPU using generalised shaders may prove to be more efficient than one without.

So all I can say to you right now is: everything I have seen so far points to USAs being at a performance disadvantage (notice I didn't say efficiency, which is not the same thing) compared to traditional vertex/pixel shader architectures of the same generation.....

ATI *could* clear all this up by giving us some performance numbers of Xenos vs. X800 or something but they choose not to do this....why, I don't know...
 
for the 5 trillionth time... it's Microsoft who holds the reins of Xenos, NOT ATI.

ATI would probably be fully willing to pimp Xenos but it's Microsoft. Microsoft will release information when the time is right.
 
Yes it's true that traditional vertex shaders could be faster but even that doesn't go against my example.

If you have a vertex shader that runs twice as fast as mine, but I have 8 and you have 4, we're going to get that information processed at the same time.

All that's saying is one dedicated vertex shader is faster than one USA shader assigned as a vertex shader. Yes that's probably true. But OVERALL is where you need to be looking. Because we're not comparing one vertex shader to another, but rather the OVERALL vertex and shading capabilities.

If you have a Ferrari and need to get 4 people across town, you'll still need to make two trips (and that's assuming you stuffed one in the gas tank...damn I need more sleep :lol ...my math is just not there today). I can do it in just one with my Honda Accord.

In that one example, it's a disadvantage, but that's not how it will be used in real gaming applications. That's assuming you would never use more than the number of vertex shaders that a traditional card has dedicated. But why, when you can? Because OTOH, if I want to dedicate 20 vertex shaders, I don't care if yours are 30% faster, it's still not going to process more vertex info than what my 20 can do. I will be able to do things with 20 that you will simply never be able to do with 8. That's the beauty of USA.

And notice how we haven't even mentioned efficiency here. Because in the real world those dedicated vertex shaders and pixel shaders will be sitting there doing nothing a lot of times.
 
jarrod said:
But Xbox 360 = Xbox 1.5... does that mean GameCube 1 > Xbox 1? :lol

check your math again.

if Xbox 360 = Xbox × 1.5 = GameCube × 2.5

x × 1.5 = g × 2.5

x = g 1.666666666666666666666666666666666666666666666666666666666666666666666666666666666667
 
If you have a vertex shader that runs twice as fast as mine, but I have 8 and you have 4, we're going to get that information processed at the same time.
That would be true, but are Xenos' ALUs 50% as fast at vertex ops as an X800 VS??? Who knows??

ATI/MS isn't saying...

All that's saying is one dedicated vertex shader is faster than one USA shader assigned as a vertex shader. Yes that's probably true. But OVERALL is where you need to be looking. Because we're not comparing one vertex shader to another, but rather the OVERALL vertex and shading capabilities.

I'm sure you would agree with me in saying we CANNOT look at the overall picture within the context of RSX and Xenos....

Xenos we know a lot more about, but we don't know performance figures benchmarked against other platforms....RSX we know even less about...and even then, CELL/XeCPU can assist RSX/Xenos in graphics-related functions, so if you are a fan of looking at the overall picture, perhaps you should look at XeCPU/Cell and what they can offer to Xenos/RSX....but right now you can't really do that...

If you have a Ferrari and need to get 4 people across town, you'll still need to make two trips (and that's assuming you stuffed one in the gas tank...damn I need more sleep :lol ...my math is just not there today). I can do it in just one with my Honda Accord.

In that one example, it's a disadvantage, but that's not how it will be used in real gaming applications. That's assuming you would never use more than the number of vertex shaders that a traditional card has dedicated. But why, when you can? Because OTOH, if I want to dedicate 20 vertex shaders, I don't care if yours are 30% faster, it's still not going to process more vertex info than what my 20 can do. I will be able to do things with 20 that you will simply never be able to do with 8. That's the beauty of USA.

We do not know yet....as you (should) know, elegance doesn't always beat brute force if there is enough of it (brute force, that is)

And notice how we haven't even mentioned efficiency here. Because in the real world those dedicated vertex shaders and pixel shaders will be sitting there doing nothing a lot of times.

That would depend on the effectiveness of the fragment buffers in RSX:

While such a scheme sounds almost too good to be true, the alternative is by no means doomed. NVIDIA has a track record of making very efficient hardware. Current architectures have fragment buffers between the vertex and pixel pipelines. This alleviates much of the work balancing problem by providing a constant pool of pixels for the pixel shaders to work on.

http://www.atomicmpc.com.au/article.asp?SCID=14&CIID=22720&p=2
 
Kleegamefan said:
We do not know yet....as you (should) know, elegance doesn't always beat brute force if there is enough of it (brute force, that is)

When it comes to performance per dollar it always does, though. I mean, for example, the GC only has half the brute power of the PS2, but it delivered better graphics when it came to multiplatform titles.
 
bishoptl said:
Who are you and what have you done with Gahiggidy??!




1. Don't bullshit
2. Don't quote bullshit
3. Don't quote oa bullshit

See you later.
It's Matt, probably. He locked gahiggidy up in his basement. :(
 
jimbo said:
...you'll still need to make two trips (and that's assuming you stuffed one in the gas tank...damn I need more sleep :lol ...my math is just not there today). I can do it in just one with my Honda Accord.

And applying efficiency statements, let's allow the Accord to achieve 95% of its performance potential, and the Ferrari to achieve 70% of its potential.

I'm betting on the Ferrari.

Plus as was mentioned, those efficiency numbers are PC based, and are likely to be higher in a controlled console environment.

In that one example, it's a disadvantage, but that's not how it will be used in real gaming applications. That's assuming you would never use more than the number of vertex shaders that a traditional card has dedicated. But why, when you can? Because OTOH, if I want to dedicate 20 vertex shaders, I don't care if yours are 30% faster, it's still not going to process more vertex info than what my 20 can do. I will be able to do things with 20 that you will simply never be able to do with 8. That's the beauty of USA.

But that's not how it will be used in real gaming applications either. GPUs have a split for a reason, and that's to cover most cases. CELL/XCPU is there to help out when those cases are stretched. CELL in particular can help with pixel shaders or vertex calculations.
 
Why don't we break down the Nvidia 7800GTX once information is available? Isn't this the same chipset RSX is based upon?
 
Mrbob said:
Why don't we break down the Nvidia 7800GTX once information is available. Isn't this the same chipset RSX is based upon?

I think they'll share a lot of technology, so from that perspective I think the G70's unveiling should be useful, but I think RSX will be closer to an Ultra version of that technology than the GTX. Then again it may be closer to the G80 (less likely, of course). The precise relationship between RSX and G70 is currently unclear. I'm expecting some differences, for example, RSX will have 128-bit framebuffers and blending, partially to accommodate data exchange with Cell I imagine, something I'd be surprised to see in their desktop parts.

So G70 should be useful, but I think many questions will still remain.

All of the above aside, I'm more hopeful that once G70 is unveiled, we won't be left waiting long, if at all, for specific details on RSX. I think a lot of the information will be common to the PC parts, but it'd be nice to get specific details on it, to keep us from guessing if nothing else. I think the only reason we've had little to no detail on it is because of NVidia's desire to protect competitive secrecy for their PC parts, but once they're unveiled, there's little reason for them not to talk about RSX too. Unless they've a policy of waiting till a chip has taped out or is finished, which may not have happened yet in RSX's case.
 
gofreak said:
RSX will have 128-bit framebuffers and blending

I think in the real world 128-bit rendertargets and blending will have to be used sparingly and carefully on RSX.

With the limited memory bandwidth available to it and blending enabled, RSX's fillrate will drop like a rock considering just framebuffer reads and writes (not counting Z, texture, or anything else).

This may of course not mean anything if you have really long shaders (that don't need to access a lot of memory) -- but the same advantage applies to Xenos, plus Xenos doesn't have to worry about framebuffer bandwidth ever, even in FP modes higher than FP10 (which it does support).
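A rough sense of "drop like a rock", assuming something like 22.4 GB/s of GDDR3 bandwidth on RSX and a 128-bit render target with blending (a 16-byte read plus a 16-byte write per pixel); both inputs are assumptions for illustration only:

```python
# Bandwidth-limited pixel rate for 128-bit blending, counting framebuffer traffic
# only (no Z, no textures). Both inputs are assumptions for illustration.
assumed_gddr3_gb_per_s = 22.4          # commonly cited RSX GDDR3 figure (assumption)
bytes_per_blended_pixel = 16 + 16      # 128-bit read + 128-bit write

pixels_per_s = assumed_gddr3_gb_per_s * 1e9 / bytes_per_blended_pixel
print(f"~{pixels_per_s / 1e9:.2f} GPixels/s")   # ~0.7 GP/s, far below the quoted peaks
```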
 
aaaa0 said:
With the limited memory bandwidth available to it and blending enabled, RSX's fillrate will drop like a rock considering just framebuffer reads and writes (not counting Z, texture, or anything else).
In Faf's dreamland RSX will have 512KB-1MB of SRAM shared for FB and texel cache, so all we'll need to do is be a little careful on access coherency like we did on GS to keep it happy.
... back in the real world, :(... I doubt 128bit FP has much purpose for rendering. It could be there mainly for helping RSX<->Cell communication, or in the worst case it's a marketing ploy similar to XeCPU's FP ratings... Though that'd be a very expensive ploy, transistor wise... :(

xenos doesn't have to worry about framebuffer bandwidth ever, even in FP modes higher than FP10 (which it does support).
Actually Xenos only has the bandwidth to sustain half the fillrate with FP16. Of course the same shader-heavy argument applies that you already pointed out.
And the few rendering portions that might saturate Xenos peak fillrate won't need FP buffers to render into. :)
 