
Xenos GPU to be a 332 million transistor part?

Blimblim

The Inside Track
Now that's more like it, that 150 million number really surprised me
Another bit of information sent our way is the final transistor count for Xbox 360's graphics subset. The GPU totals 332 million transistors, which is split between the two separate dies that make up the part. The parent die is the "main" piece of the GPU, handling the large bulk of the graphics rendering, and is comprised of 232 million transistors. The daughter die contains the system's 10MB of embedded DRAM and its logic chip, which is capable of some additional 3D math. The daughter die totals an even 100 million transistors, bringing the total transistor count for the GPU to 232 million.
Notice the nice error at the end :D
http://xbox360.ign.com/articles/617/617951p3.html
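For anyone skimming, a quick back-of-the-envelope check of the article's own numbers (my throwaway sketch, nothing from IGN):

```python
# Sanity check of the quoted IGN figures (millions of transistors).
parent_die = 232    # "main" die: bulk of the graphics rendering
daughter_die = 100  # 10MB embedded DRAM plus its logic chip

print(parent_die + daughter_die)  # 332 -- so the closing "232 million" is the slip
```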
 
1. Take Sony's numbers
2. Add something to it
3. Release numbers to the press

I'll just wait till they release a paper where the CPU is clocked at 3.3GHz. :lol

Fredi
 
McFly said:
1. Take Sony's numbers
2. Add something to it
3. Release numbers to the press

I'll just wait till they release a paper where the CPU is clocked at 3.3GHz. :lol

Fredi

You mean, what Sony did at their press conference?
 
Amir0x said:
Oh man, Ghaleon, you need some sleep. Your baby is makin' you damn near incoherent :P

No kidding. But I'm trying to get my wife another few hours of sleep, so I've got to hold out a little longer.

Do you still not get the fact that I don't trust the MS spin? Has it gone over your head that the ONLY reason I post on these threads (which I really am not all that interested in) is to keep myself awake? I think you are the tired one. :)
 
GhaleonEB said:
No kidding. But I'm trying to get my wife another few hours of sleep, so I've got to hold out a little longer.

Do you still not get the fact that I don't trust the MS spin? Has it gone over your head that the ONLY reason I post on these threads (which I really am not all that interested in) is to keep myself awake? I think you are the tired one. :)

Well I had THOUGHT you were being sarcastic (check the other thread), but I'll admit these last few posts have thrown me off :P
 
Amir0x said:
Well I had THOUGHT you were being sarcastic (check the other thread), but I'll admit these last few posts have thrown me off :P

Let me be clear:

I will be surprised if the PS3 is not more powerful. I think it IS.

But I find irony in so many people buying into the hype that Sony cranked out, then when MS does the same thing (after the fact), they call it BS. I see hypocrisy there, and it amuses me at this late hour.

I also found that Microsoft, of all companies, was the honest one at E3. They could have shown nothing but CG if they wanted, but they showed games as they stood, warts and all. That took balls, and they are taking a beating in some corners for it. But the beating Sony is taking for showing what amounted to a render reel, and then nothing else, is piling up as well.

I think if Sony really has hardware 2x more powerful, they would not have had to try so hard.
 
GhaleonEB said:
Let me be clear:

I will be surprised if the PS3 is not more powerful. I think it IS.

But I find irony in so many people buying into the hype that Sony cranked out, then when MS does the same thing (after the fact), they call it BS. I see hypocrisy there, and it amuses me at this late hour.

I also found that Microsoft, of all companies, was the honest one at E3. They could have shown nothing but CG if they wanted, but they showed games as they stood, warts and all. That took balls, and they are taking a beating in some corners for it. But the beating Sony is taking for showing what amounted to a render reel, and then nothing else, is piling up as well.

I think if Sony really has hardware 2x more powerful, they would not have had to try so hard.

All I can say is it'll be interesting to see how closely people match these "tech demos" by the end of PS3 and Xbox360's life.
 
Amir0x said:
All I can say is it'll be interesting to see how closely people match these "tech demos" by the end of PS3 and Xbox360's life.

Agreed. With the PS2, it took until the middle of the generation. We'll see this time out.

But, not to sound too much like Allard, what sold me on the 360 was the Media Center, Guide and LIVE functionality demonstrations. It's an amazing package they have put together. I think Sony's machine is packed with potential, but they are so hardware oriented I don't see them wrapping it in software and services like MS is doing. Compare LIVE to the Sony online situation, and that's the kind of gap I'm expecting next gen in this area as well.

But the games.....this gen is gonna rock. :)
 
haha, Xbox360 Xenos GPU has 332M transistors. PS3 RSX GPU only has 300M transistors.


PS3 d00000med!

:lol


and PS3 is coming out later? Nvidia better work up an SLI'd RSX with 600 million transistors, or otherwise it's game over for Sony. welcome to a Microsoft dominated videogame industry :lol
 
I told you it couldn't have just 150 Mtransistors, even without eDRAM. :)
eDRAM should take as much as 80-90 Mtransistors, so the logic count should top out at 240-250 Mtransistors.
Since RSX doesn't have eDRAM, we can say RSX has more transistors devoted to logic (shaders!) than R500, even if it doesn't mean RSX is more powerful than R500.
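If anyone wants to gut-check that 80-90M figure, here's a rough sketch assuming the textbook 1 transistor per DRAM bit (ignoring the capacitor and any decode/redundancy logic, which is why it lands under the daughter die's 100M total):

```python
# Rough eDRAM transistor estimate, assuming 1 transistor per DRAM bit (1T1C cell).
MB = 1024 * 1024
edram_bits = 10 * MB * 8    # 10 MB of embedded DRAM, 8 bits per byte

print(edram_bits / 1e6)     # ~83.9M transistors, in line with the 80-90M figure
```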
 
midnightguy said:
haha, Xbox360 Xenos GPU has 332M transistors. PS3 RSX GPU only has 300M transistors.


PS3 d00000med!

:lol


and PS3 is coming out later? Nvidia better work up an SLI'd RSX with 600 million transistors, or otherwise it's game over for Sony. welcome to a Microsoft dominated videogame industry :lol

Of the 332M, 100M is the smart/3D eDRAM part. I think that 100M is the real key to 360's GPU supremacy over the RSX. More and more, the RSX part in PS3 looks like a rush job. The only real advantage for PS3 will be the additional overhead in CPU processing for things like physics and AI, but since a chunk of the Cell's SPEs will always be doing vertex calcs, I'm wondering just how much of an advantage that will be for Sony.
 
Check out this sequence of quotes from B3D (Here's the thread) on CELL vs XeCPU

Sony interview on London Demo
For example, we showed the demo that renders the city of London; it's not rendered on the GPU, but CELL does the lighting and texture processing and then outputs it to the frame buffer. Even without a GPU, CELL alone can create good enough 3D graphics.

Response by Shifty Geezer
Well, the fact this scene was produced on Cell has already been mentioned on this forum. As regards whether it's possible on a tri-core Xenon CPU, I wouldn't have thought so based on real-world performance of similar cores. Cell is designed for this sort of data manipulation and works more akin to a GPU than a CPU. I'm kinda curious now. I haven't seen a software renderer on a PC in ages and don't know what they can manage.

Response by DeanoC, developer on Heavenly Sword for PS3
XeCPU has 3 VMX128 cores at 3.2GHz; at worst it could get within 30% of Cell doing the same job, and because the cores of Xenon are more flexible than SPUs, in all likelihood it could achieve results much closer than that.

Cell has a FLOP advantage, XeCPU has a flexibility advantage... I suspect a tuned advanced software engine for both would be within 70-80% of each other. I'm not even sure that, if you have lots of vertex and texture data, XeCPU would lose...

Pretty interesting... if the XeCPU can hold its own like this and Xenos matches up well w/ the RSX, the differences between the consoles shouldn't be too big.
 
I'll take that as a firm vote for both being very close to each other on the CPU end, then. DeanoC and his team were going to bring Heavenly Sword to X360 before Sony Europe picked them up for the title's exclusive release on PS3. He seems to be the most knowledgeable person on the subject of the two systems, having a fair amount of familiarity with the XCPU and now the PS3's Cell-based CPU. He seemed to really like Xenon/X360, so I'm guessing that after HS ships, there's a pretty decent chance his team might work with the finished product, having worked on the alpha for Xenon/X360 a while back. If HS does well, though, and I really hope it does, it might mean his team's locked onto Sony's stuff for a follow-up. HS is definitely one of the titles I'm excited about for PS3.
 
:lol It might happen. Of course, everyone's subject to possible upgrading or downgrading in specs before they hit manufacturing. Sony would definitely have time to do so, as would Nintendo. I just don't think Sony cares about MS' specs now. They've already started what will be a massive hype-campaign that will serve them well past launch.
 
Bauer Action Hour said:
Pretty interesting... if the XeCPU can hold its own like this and Xenos matches up well w/ the RSX, the differences between the consoles shouldn't be too big.

He's comparing one type of task (general software rendering); he's not making a general statement about performance. Shall we post some of his other comments about certain other tasks? ;)
 
gofreak said:
He's comparing one type of task (general software rendering); he's not making a general statement about performance. Shall we post some of his other comments about certain other tasks? ;)

Please do, I must have missed them. Thanks. :)
 
Of the 332M, 100M is the smart/3D eDRAM part. I think that 100M is the real key to 360's GPU supremacy over the RSX. More and more, the RSX part in PS3 looks like a rush job. The only real advantage for PS3 will be the additional overhead in CPU processing for things like physics and AI, but since a chunk of the Cell's SPEs will always be doing vertex calcs, I'm wondering just how much of an advantage that will be for Sony.
Shog

Fuck me shog u r usually better than this. U know u cannot possibly claim anything of the sort based on the fact we know sweet fuck all about the RSX and only a little more about the R5360.

Given that the UE3.0 demo was running on a dev RSX basically (the RSX and the SPE i gather from reading that interview with Mark Rein) the RSX must be capable of at least some vertex calcs and shading by itself u'd think. How many and whatever we have no clue really because as above we know fuck all about it.

Holy cow i've had to stick u in the xbox fanboi camp now.
 
seanoff said:
Shog

Fuck me shog u r usually better than this. U know u cannot possibly claim anything of the sort based on the fact we know sweet fuck all about the RSX and only a little more about the R5360.

Given that the UE3.0 demo was running on a dev RSX basically (the RSX and the SPE i gather from reading that interview with Mark Rein) the RSX must be capable of at least some vertex calcs and shading by itself u'd think. How many and whatever we have no clue really because as above we know fuck all about it.

Holy cow i've had to stick u in the xbox fanboi camp now.

and U get stuck in the Prince wanna B camp
 
OOHHH i'd love to have even some of that talent.

If you would like me to write perfect English whilst posting here can we hold everyone else to the same standard. Most cannot spell to save themselves, incorrect use of verbs, tenses and just about every other English error.

I get a bit sick of writing carefully crafted English as my job is basically that for an Information Technology department and my studies over the years with a Masters in E-Commerce so forgive my use of some small shorthand when posting here.


I'm just sick of people comparing things they know little about, while those that do know claim they have no information to do a sensible comparison. :)
 
seanoff said:
OOHHH i'd love to have even some of that talent.

If you would like me to write perfect English whilst posting here can we hold everyone else to the same standard. Most cannot spell to save themselves, incorrect use of verbs, tenses and just about every other English error.

I get a bit sick of writing carefully crafted English as my job is basically that for an Information Technology department and my studies over the years with a Masters in E-Commerce so forgive my use of some small shorthand when posting here.


I'm just sick of people comparing things they know little about, while those that do know claim they have no information to do a sensible comparison. :)

C'mon, that stuff is one step away from 1337, or whatever. It's distracting from your point, however well articulated otherwise. Write in full words and it just sends a more coherent message. (Like the one you just wrote, though it smacks of bragging. I'll think of you while I help roll up Intel's financials. :)
 
seanoff said:
Shog

Fuck me shog u r usually better than this. U know u cannot possibly claim anything of the sort based on the fact we know sweet fuck all about the RSX and only a little more about the R5360.

Given that the UE3.0 demo was running on a dev RSX basically (the RSX and the SPE i gather from reading that interview with Mark Rein) the RSX must be capable of at least some vertex calcs and shading by itself u'd think. How many and whatever we have no clue really because as above we know fuck all about it.

Holy cow i've had to stick u in the xbox fanboi camp now.

I don't mind being called a fanboi too much. :)

Seriously though, I'm sure that RSX has some vertex units. But I don't think it's got all the vertex units from G70 carried over, since I really don't believe the single general-purpose PPE unit can do emulation of the EE and the GS at a mere 3.2GHz. IMH and probably kerazy O, to make PS2 BC completely work (without the X360esque problems with Xbox emulation), I believe RSX is G70 minus some vertex portions offloaded to the Cell, and then the remaining space filled with EE and GS cores, or portions of them that cannot be emulated with the PPE (I might be wrong, but the nature of SPEs doesn't lend itself to such emulation work).

Supporting this crazy theory of mine are the reports that RSX and G70 both have about the same transistor count (300M). If Cell with RSX can't emu PS2 as in my reasoning, then it only makes sense that Sony asked nVidia to make room in the die of RSX for the EE and GS by removing some of the vertex units that would be pretty much redundant due to the monster vertex calc capabilities of the SPEs.




Or I could just be completely sauced from repeated viewings of Sega's next-gen demos the last two days. :D
 
MightyHedgehog said:
Actually, you should, Gofreak. :) I don't like sifting through that site for info anyway. Too much tech babble there...

Well, for one, there was such a comment in his posts following those quoted:

In the example of a software rasteriser, it's clear to anybody who has ever written one that the memory architecture of Cell is gonna hurt. The reason is simple: for rasterisation, FLOP count is largely irrelevant; it's lots of data movement and random access to memory. Now if we want to talk about procedural synthesis that creates procedural textures, then Cell is gonna whip XeCPU into touch in most cases...

He also goes on to talk about lighting calcs being a good fit for the SPEs in Cell.

No one is talking about using Cell for general software rendering; someone brought it up hypothetically. Cell is unlikely to be rasterizing in a game. Leveraging Cell for "graphics" means using it for, or using it to help the GPU with, those things which make sense on it and that it would be good at, but that doesn't include doing everything, aka software rendering. Lighting, shadowing, vertex processing, procedural data generation etc. would all work great on Cell, either in their entirety, or in collaboration with the GPU on much bigger things (e.g. Cell works on one part of a much bigger lighting equation and moves its results to the GPU, which works on the rest, as one example) :)
 
Nostromo, how about all the SRAM logic on the RSX? I don't know how much that is, but at a guess 30-50 million transistors? The GPUs are basically similar in transistor counts, but they work in such a different way it's not even worth comparing numbers. I still think the visual difference will be minimal between the 2 systems no matter what the numbers slanging war turns up.
 
NO not bragging just saying that i write enough proper english to occasionally write in shorthand and why. U (You) is about the only thing i shorthand though. And most of what i have to write is total dribble, to make things sound useful. :D And I'm sure you don't come home and start doing consolidation journals etc for fun

Good luck rolling up the financials. What do you use? I come from an Accounting Systems background so its interesting to find out what people use and why.

Do you work for Intel or external to them. And what system do they use?? SAP, Oracle, JD Edwards, Peoplesoft (I know the last 3 are now one) or something else.


Had to really stop myself here from writing in very shorthand but that would cause me to open bits of my brain that *like the weekend off. *(keyboard sucks)
 
Nostromo said:
I told you it couldn't have just 150 Mtransistors, even without eDRAM. :)
eDRAM should take as much as 80-90 Mtransistors, so the logic count should top out at 240-250 Mtransistors.


Wow, you're right.

The daughter die totals an even 100 million transistors, bringing the total transistor count for the GPU to 232 million.

It's not an error, then.
 
Shogmaster said:
I don't mind being called a fanboi too much. :)

Seriously though, I'm sure that RSX has some vertex units. But I don't think it's got all the vertex units from G70 carried over, since I really don't believe the single general-purpose PPE unit can do emulation of the EE and the GS at a mere 3.2GHz. IMH and probably kerazy O, to make PS2 BC completely work (without the X360esque problems with Xbox emulation), I believe RSX is G70 minus some vertex portions offloaded to the Cell, and then the remaining space filled with EE and GS cores, or portions of them that cannot be emulated with the PPE (I might be wrong, but the nature of SPEs doesn't lend itself to such emulation work).

Look, something is taking 300+ Mtransistors on RSX.

It more than likely is not a Programmable Tessellating unit (which some other GPU has... and that takes some space, since it is pretty fast).

8 ROPs take about 20 Mtransistors (going by the Xenos daughter chip's transistor count), so if you take 16 ROPs for RSX you still only account for 40 Mtransistors (likely we might be talking about simpler ROPs, but we cannot be sure yet, so I take the highest number I can get my hands on to be on the safe side).

The transistor count difference between Xenos and RSX is 68 Mtransistors, but if I take into account the transistors used for the ROPs in the Xenos GPU then the difference goes down to 48 Mtransistors.

Out of 300 Mtransistors, RSX likely uses 260 Mtransistors for shading logic + "other stuff" (that is, stuff which is not part of the ROPs and the rasterization logic). Xenos uses 232 Mtransistors for the same shading logic + "other stuff". A difference of 28 Mtransistors. This transistor count gap widens if we think that the work of a PPP-like (Programmable Primitive Processor) unit is more than likely done on the CELL-based Broadband Engine CPU and not on the RSX GPU.

Also, can you tell me how they did the Unreal Engine 3 tech demo, since it was only able to use the PPE CPU (likely they only compiled code without doing much hand-optimization with the VMX unit the PPE has) and the RSX GPU?

Did they use this:

I believe RSX is G70 minus some vertex portions offloaded to the Cell, and then the remaining space filled with EE and GS cores

?

:lol.
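For what it's worth, here's the budget math above condensed into a throwaway Python sketch (the 20M-per-8-ROPs cost is an estimate from the daughter die, and 16 ROPs on RSX is an assumption, so treat the outputs accordingly):

```python
# The transistor-budget math above, in one place (millions of transistors).
rsx_total = 300
xenos_parent = 232        # Xenos shading logic; its ROPs sit on the daughter die
rops_8 = 20               # estimated cost of 8 ROPs, from the daughter die count
rsx_rops = 2 * rops_8     # assuming 16 ROPs on RSX -> ~40M

print(rsx_total - xenos_parent)             # 68M raw gap
print(rsx_total - xenos_parent - rops_8)    # 48M once 8 ROPs' worth is discounted
print(rsx_total - rsx_rops)                 # ~260M for RSX shading + "other stuff"
print(rsx_total - rsx_rops - xenos_parent)  # ~28M shading-logic gap
```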
 
Pug said:
Nostromo, how about all the SRAM logic on the RSX? I don't know how much that is, but at a guess 30-50 million transistors?

256 KB * 8 SPEs + 512 KB L2 = 2560 KB of static RAM.

A single bit of static RAM costs 6 transistors, so we have 2560 * 1024 * 8 * 6 = 125+ Mtransistors.
Half of CELL's transistor budget is devoted to caches and local stores!

EDIT: Sorry Pug, I misread your post :( I thought you were talking about CELL, not RSX!
I really don't know how many transistors are devoted to caches on a GPU.
Usually L1 texture caches are really small (4/8 KB).
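Since we're doing cache math anyway, here's that 6T SRAM estimate spelled out as a quick sketch (6 transistors per bit is the standard 6T SRAM cell; CELL's ~234M total is the commonly reported figure):

```python
# 6T SRAM transistor estimate for CELL's on-chip memory.
KB = 1024                        # bytes per KB
local_stores = 8 * 256 * KB      # eight SPEs with a 256 KB local store each
l2_cache = 512 * KB              # the PPE's L2
bits = (local_stores + l2_cache) * 8

print(bits * 6 / 1e6)            # ~125.8M -- roughly half of CELL's ~234M transistors
```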
 
*PS3 more powerful than Xbox 360*

Playstation fans: WOOT
Xbox fans: Omgawd it's gameplay that matters!!

*Xbox 360 more powerful than PS3*

Playstation fans: Gameplay matters and it's not that much of a difference, so...
Xbox fans: OWNED :lol

*Nintendo Revolution least powerful next-gen console*

Everyone: Nintendo sucks.

:(
 
wow, microsoft does so much damage control, i guess sony will just add another cell chip to shut them up

(j/k, i don't give a shit)
 
Let's have some speculation about those transistors then. Taking eDRAM out of the equation (it seems to have been stated that RSX has no eDRAM), you have 300M vs 150M transistors for the processing part.

That's a big difference in a GPU. You think Nvidia might have gone dual core? Or simply brute force? They say they can do 136 shader ops per cycle vs Xenos' 96, which is less than double for double the transistor count. So they can be simultaneously less efficient but more powerful.
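To put that efficiency point in numbers (a quick sketch using the claimed shader-op figures and the early 150M logic-only count, all of which are unconfirmed):

```python
# Shader ops per cycle vs. transistor count, per the figures in this thread.
rsx_ops, rsx_trans = 136, 300     # RSX: claimed ops/cycle, Mtransistors
xenos_ops, xenos_trans = 96, 150  # Xenos logic, using the early eDRAM-less figure

print(rsx_ops / rsx_trans)        # ops per Mtransistor, RSX
print(xenos_ops / xenos_trans)    # ops per Mtransistor, Xenos
print(rsx_ops / xenos_ops)        # ~1.42x the ops for 2x the transistors
```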
 
mrklaw said:
Let's have some speculation about those transistors then. Taking eDRAM out of the equation (it seems to have been stated that RSX has no eDRAM), you have 300M vs 150M transistors for the processing part.

You can really only take out the memory portion of the eDRAM, not the whole module (the eDRAM logic will have counterpart logic in RSX, so it should be included). That won't be 150M transistors. Maybe 100M? 50-75M? I'm hearing different things from different people.
 
Shogmaster said:
Of the 332M, 100M is the smart/3D eDRAM part. I think that 100M is the real key to 360's GPU supremacy over the RSX. More and more, the RSX part in PS3 looks like a rush job. The only real advantage for PS3 will be the additional overhead in CPU processing for things like physics and AI, but since a chunk of the Cell's SPEs will always be doing vertex calcs, I'm wondering just how much of an advantage that will be for Sony.

Ken Kutaragi said:
Because of it, we allied ourselves with IBM, which knows supercomputers, co-developed CELL with Toshiba in the 3-company alliance, and created the new GPU with nVIDIA. We could especially sympathize with nVIDIA, and I and Jen-Hsun (nVIDIA CEO) drew the future roadmap. The entrance to this roadmap is RSX. Those who aren't in the know seem to think it's an off-the-shelf PC GPU, but in reality they are totally different in their architectures. Including Dr. Kirk (nVIDIA architect), all the people at nVIDIA are visionaries, actually. In that regard too we sympathize with each other, and we are in talks to do new things in the future. OTOH, nVIDIA's Shader can exploit past assets, as it has compatibility with various shading programs in the PC world.

Ken Kutaragi said:
RSX is not a variant of nVIDIA's PC chips. CELL and RSX have a close relationship, and both can access the main memory and the VRAM transparently. CELL can access the VRAM just like the main memory, and RSX can use the main memory as a frame buffer. They are just separated by their main usage, and do not really have a distinction.

This architecture was designed to kill wasteful data copying and calculation between CELL and RSX. RSX can directly refer to a result simulated by CELL, and CELL can directly refer to the shape of a thing RSX has added shading to (note: CELL and RSX have independent bidirectional bandwidths, so there is no contention). That's impossible with shared memory, no matter how beautiful the rendering and complicated the shading shared memory can do.

Link
 
Dave Baumann from B3D says the R500 core is 232 Mtransistors and that 150 Mtransistors is probably the eDRAM + ROPs transistor count.
 
mrklaw said:
Let's have some speculation about those transistors then. Taking eDRAM out of the equation (it seems to have been stated that RSX has no eDRAM), you have 300M vs 150M transistors for the processing part.

That's a big difference in a GPU. You think Nvidia might have gone dual core? Or simply brute force? They say they can do 136 shader ops per cycle vs Xenos' 96, which is less than double for double the transistor count. So they can be simultaneously less efficient but more powerful.


More than likely brute force. ATI usually goes for streamlined powerful elegance while Nvidia goes for hulking power. In this aspect, I think Nvidia is a great match with Sony! :lol
 
Nostromo said:
Dave Baumann from B3D says the R500 core is 232 Mtransistors and that 150 Mtransistors is probably the eDRAM + ROPs transistor count.

That's true, nAo, but Major Nelson claims otherwise. And who are we to question Major Nelson's judgment?!? :)
 
Mrbob said:
More than likely brute force. ATI usually goes for streamlined powerful elegance while Nvidia goes for hulking power. In this aspect, I think Nvidia is a great match with Sony! :lol
Yeah, that's why NV40 is most of the time clock-for-clock faster than R420, LOL :lol
Urban legends never die..
 
:D Pana, I'm putting all my chips down on this EE/GS-in-RSX prediction in hopes that this extreme long shot comes true and I get a HUGE payoff from it and gain outrageous fame and fortune!!! :lol :lol


mrklaw said:
I thought it was 232 overall, with a portion of that being the eDRAM?

No, it's 100M for the 3D/smart eDRAM daughter die and 232M for the main GPU core.
 
Blimblim said:
Now that's more like it, that 150 million number really surprised me

This has become intrigue central.

Why would ATI have said that their GPU was 150M transistors initially? I can concede not counting the transistors in the DRAM module, but why understate the GPU by ~33%?
 
What you guys seem to forget is that these are not PCs, but consoles.

On PC, all these comparisons make some sense, at least for all the power whores. You can take one game and compare it ATI vs. nVidia. Then you get 74.3fps vs. 73.8fps and buy the better card, go home and run all the benchmarks to show how fast your machine is.

Probably doesn't leave much time for actually playing games, but that's PC.

On consoles, all that means shit. Some genius developer pulls off an absolutely unbelievable game on the weaker console, while on the more powerful machine some lazy developers just pull some lame ports out of their asses.

OK, you'll still have your beloved head-to-heads, but wait, those are usually completely inconsistent; sometimes one console wins, then the other. Not because of the number of transistors, obviously, just because some programmer had a bad day or someone cut the budget for one of the consoles.

And even worse, it depends on what development kits are available, what middleware is being used, and how much the developers make use of that or try to dive deeper into the machine and actually use all the features available.

And after all, the machines will have different lineups, especially different in number of titles available. One machine will have the greatest-looking game of all time, but unfortunately it's a genre you hate, while the other console will have dozens of games you like which unfortunately look 10% worse.
 
sonycowboy said:
This has become intrigue central.

Why would ATI have said that their GPU was 150M transistors initially? I can concede not counting the transistors in the DRAM module, but why understate the GPU by ~33%?

Who said ATI told IGN 150M? All this original 150M BS came from one stinkin' single source, IGN. Obviously, IGN heard wrong.
 