
Xbox "Advanced Tech Engineers" discuss tech specs

Vince said:
While on the topic of this "extremely slow" DP operation in the CELL architecture, let it be noted that a single PS3-CELL, which will retail for $300, is 3 (THREE) times faster at DP Floats than the NEC SX-6, a custom IC designed for use in the Earth Simulator -- the 30TFlop supercomputer.

C'mon man. I won't pretend to be a Super Tech Expert(TM), but a simple Google search shows the NEC SX-6 to run at 8 GFLOPS (http://www.metoffice.com/corporate/scitech0203/5_computing/about_the_nec.html). It was also apparently made in 2002 (what did your beloved Sony have that was so amazing in 2002? Emotion Engine?? That didn't even have a DP unit :lol ) and runs at like 500MHz, and you're comparing it to a design in 2006? How many freaking NEC SX-6s have to be put together to make a 30TFLOPS machine? Did you tell them that the Earth Simulator uses 5,120 CPUs and takes up the size of 4 tennis courts? Were you trying to make it seem to the uninformed that the Cell beats the Earth Simulator supercomputer in DP floats? Is this really comparable at all in any real way that matters to this discussion? How many times over can the XCPU beat the NEC chip from 2002 that runs at 8 GFLOPS?

To me your statement is just as much marketing crap as what you're getting so pissed off about with MS/ATI. Flawed, biased, and misleading. Oh yeah, and I also think you kind of smell bad.
 
gmoran said:
I think their breakdown of the RSX architecture is logical, all the math seems to add up and it looks to me to be a good bet?

They're making assumptions about RSX in the absence of facts. There's no architectural details out there to break down, no information to examine. It's pointless.

All of this is pointless. It's just MS spin. Debating about this just isn't useful, just as it wouldn't be useful to discuss what Sony engineers think about X360.

Just give us the details, the facts, and let us form our own analysis and opinions on it, please (sony + ms). In the mean time, I'll look forward to analysis from trusted independents.
 
GhaleonEB said:
I call Sony's stuff BS because their entire approach has caused them to lose credibility in my eyes. They put out only a handful of off-the-wall specs, then show a ton of rendered footage deliberately to give the impression that it's realtime. 90% of what they SHOWED was bullshit, so I sure as hell don't trust what they are telling me.

They JUST lost credibility in your eyes? Hell, they did the same thing with the last TWO console launches...
 
Amir0x said:
Your alternative IS Xbox, and you spin for it. That's fine, but don't try to claim otherwise. Which is, once more, the point - you're muddying the waters and you're not doing a convincing job.


GhaleonEB, you should have admitted this several posts ago.
 
briefcasemanx said:
C'mon man. I won't pretend to be a Super Tech Expert(TM), but a simple Google search shows the NEC SX-6 to run at 8 GFLOPS (http://www.metoffice.com/corporate/scitech0203/5_computing/about_the_nec.html). It was also apparently made in 2002 (what did your beloved Sony have that was so amazing in 2002? Emotion Engine?? That didn't even have a DP unit :lol ) and runs at like 500MHz, and you're comparing it to a design in 2006? How many freaking NEC SX-6s have to be put together to make a 30TFLOPS machine? Did you tell them that the Earth Simulator uses 5,120 CPUs and takes up the size of 4 tennis courts? Were you trying to make it seem to the uninformed that the Cell beats the Earth Simulator supercomputer in DP floats? Is this really comparable at all in any real way that matters to this discussion? How many times over can the XCPU beat the NEC chip from 2002 that runs at 8 GFLOPS?

To me your statement is just as much marketing crap as what you're getting so pissed off about with MS/ATI. Flawed, biased, and misleading. Oh yeah, and I also think you kind of smell bad.

It doesn't matter; the idea is that a custom-built vector CPU that was designed for DP computing is getting beat by a factor of 3 by a commodity CPU being put into an f-ing PlayStation game console. That's the point, you ass (which a simple Google search of ISSCC-CELL would show you).

Or, instead of using the custom vector approach that needed 5K CPUs, want to look at BlueGene, which uses 32K processors? Or what about the NASA-AMES cluster of 10K processors? Your argument is getting weaker on a normalized, per-CPU level, dumbass.

In case you haven't noticed, the trend in supercomputing is to use commodity CPUs en masse, with specific and redundant network topologies, as the supercomputing system. What was done in the Earth Simulator, the last of the full-custom vector processing clusters, could be done with a third as many processors using PS3-CELL, and they could get them at $300 a pop -- try that with the custom SX-6.

The only thing flawed is your quick-to-the-trigger response, which has about as much intelligence put into it as a blonde puts into her SAT. Next time, try comprehending what I stated before responding and help conserve bandwidth.
 
Vince said:
It doesn't matter; the idea is that a custom-built vector CPU that was designed for DP computing is getting beat by a factor of 3 by a commodity CPU being put into an f-ing PlayStation game console. That's the point, you ass (which a simple Google search of ISSCC-CELL would show you).

Or, instead of using the custom vector approach that needed 5K CPUs, want to look at BlueGene, which uses 32K processors? Or what about the NASA-AMES cluster of 10K processors? Your argument is getting weaker on a normalized, per-CPU level, dumbass.

In case you haven't noticed, the trend in supercomputing is to use commodity CPUs en masse, with specific and redundant network topologies, as the supercomputing system. What was done in the Earth Simulator, the last of the full-custom vector processing clusters, could be done with a third as many processors using PS3-CELL, and they could get them at $300 a pop -- try that with the custom SX-6.

The only thing flawed is your quick-to-the-trigger response, which has about as much intelligence put into it as a blonde puts into her SAT.

Again, and your comparison has to do with this discussion why (I mean, other than the fact that you copied it, and most of what you were saying, almost verbatim from one of the articles I found with a simple Google search)? It was made in 2002 on a 0.15 micron process. Technology keeps getting better and better -- wow, what an astonishing conclusion! I ask, can you compare the SX-6 and the Cell using only DP performance as a metric? Is that the only thing that needs to be considered? If they remade the Earth Simulator today using custom chips on CURRENT PROCESSES with CURRENT KNOWLEDGE, do you think those chips would still only be getting 8 GFLOPS performance? I ask again, how is comparing the Cell to a custom chip in an outdated model/trend for supercomputing systems relevant to the discussion?

The madder you get the more I can smell you. It's kind of like a burnt rubber smell mixed with really bad B.O. I honestly didn't think you could smell things through the internet- maybe you revealed the TRUE POWER OF THE CELL ARCHITECTURE!!!
 
briefcasemanx said:
(I mean, other than the fact that you copied it, and most of what you were saying, almost verbatim from one of the articles I found with a simple Google search)?

Really? Where?

It was made in 2002 on a 0.15 micron process. Technology keeps getting better and better -- wow, what an astonishing conclusion! I ask, can you compare the SX-6 and the Cell using only DP performance as a metric? Is that the only thing that needs to be considered? If they remade the Earth Simulator today using custom chips on CURRENT PROCESSES with CURRENT KNOWLEDGE, do you think those chips would still only be getting 8 GFLOPS performance? I ask again, how is comparing the Cell to a custom chip in an outdated model/trend for supercomputing systems relevant to the discussion?

Using your own postulated circumstances, how much would each custom processor cost, and what performance would it net?

150nm -> 90nm is a 4X increase in transistor density.

8 DP GFLOPS (150nm SX-6), scaled to 90nm, yields O[32 DP GFLOPS] for a 90nm SX-x.

ISSCC-CELL yields 26 DP GFLOPS in a commodity processor sold in a game console for $300. Once again, attempt to think before posting; I do believe it's a prerequisite.
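That scaling arithmetic can be checked in a few lines. This is only a rough sketch: it assumes performance scales linearly with ideal transistor density (real designs rarely achieve that), and a pure geometric 150nm -> 90nm shrink gives about 2.8X, not the rounded 4X figure above. The GFLOPS numbers are the ones quoted in the thread.

```python
# Rough sketch of the process-scaling estimate debated above.
# Assumption: DP throughput scales linearly with transistor density,
# which is the ideal case and ignores wiring, power, and design limits.

def density_scaling(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-density gain from a linear feature-size shrink."""
    return (old_nm / new_nm) ** 2

sx6_dp_gflops_150nm = 8.0  # NEC SX-6 DP figure quoted in the thread

ideal = density_scaling(150, 90)      # ~2.78x, vs. the 4x used above
scaled = sx6_dp_gflops_150nm * ideal  # ~22 DP GFLOPS at 90nm, ideally
print(f"ideal density gain: {ideal:.2f}x -> ~{scaled:.0f} DP GFLOPS")
```

By this ideal-scaling estimate a 90nm SX-6 would land around 22 DP GFLOPS, in the same ballpark as the ~26 DP GFLOPS quoted for the ISSCC Cell; the 32-GFLOPS figure comes from rounding the density gain up to 4X.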
 
Vince said:
Really? Where?



Using your own postulated circumstances, how much would each custom processor cost, and what performance would it net?

Simple geometry: 150nm -> 90nm is a 4X increase in transistor density.

8 DP GFLOPS (150nm SX-6), scaled to 90nm, yields O[32 DP GFLOPS] for a 90nm SX-x.

ISSCC-CELL yields 26 DP GFLOPS in a commodity processor sold in a game console for $300. Once again, attempt to think before posting, I do believe it's a prerequisite.

Oh cool, so you admit that if it had been made now it would beat Cell (in the DP metric), even if it's not a "commodity processor" -- and that's ONLY taking into account the change to a current process, which is not the only way chip technology changes. Yet knowing this, you still tried to mislead everyone with your bolded 3x statement. You're admitting it was a flawed comparison..... I understand. The Cell is not the DP monster you tried to make it out to be with your flawed comparisons. You said yourself that using custom designs is outdated, so what's the point of bringing up some outdated model, other than for PR purposes?

I did think before posting, and you haven't proved me wrong on a single point. You have dodged just about all of my questions though.....I'm still waiting for answers.
 
briefcasemanx said:
Oh cool, so you admit that if it had been made now it would beat Cell (in the DP metric), even if it's not a "commodity processor" -- and that's ONLY taking into account the change to a current process, which is not the only way chip technology changes. Yet knowing this, you still tried to mislead everyone with your bolded 3x statement. You're admitting it was a flawed comparison..... I understand. You said yourself that using custom designs is outdated, so what's the point of bringing up some outdated model, other than for PR purposes?

How is it flawed? Is the ISSCC-CELL processor 3X faster in DP math than the SX-6 custom vector processor in the Earth Simulator or not? Yes or no?

The point, and I'd assume you'd be smart enough to figure it out yourself, is that the NEC design is one of the last (if not the last) of the custom supercomputing clusters designed around non-commodity processors. It achieved with 5K processors what takes comparable commodity clusters around 2X the number of processors to achieve (based off LLNL scaled to ES performance). Thus, on a per-IC level, the SX-6 is highly efficient in terms of area and performance.... but this is all lost on you.

I did think before posting, and you haven't proved me wrong on a single point. You have dodged just about all of my questions though.....I'm still waiting for answers.

(a) I just don't think you get it. (b) Answers to what?
 
tetsuoxb said:
I'm calling bullshit on briefcasemanx until he posts the article Vince copied from that he says he found on Google.

I rescind my comment on that. I do *THINK* he *MAY* have copied some/most of his info from an article, but it wasn't verbatim. I will accuse Vince of copy-catting no further.
 
(I mean, other than the fact that you copied it, and most of what you were saying, almost verbatim from one of the articles I found with a simple Google search)?

You can't rescind that. You were pretty concrete in accusing him of plagiarism. It was the crux of claiming he had no credibility and he was wrong. Now you are saying you *think*.... not cool.

Briefcasemax
-----------------
Intelligence = 3.
Credibility = 0.
Deception = 20.
Fanboy = 40000123513984572034785230974052.
 
Vince said:
How is it flawed? Is the ISSCC-CELL processor 3X faster in DP math than the SX-6 custom vector processor in the Earth Simulator or not? Yes or no?

The point, and I'd assume you'd be smart enough to figure it out yourself, is that the NEC design is one of the last (if not the last) of the custom supercomputing clusters designed around non-commodity processors. It achieved with 5K processors what takes comparable commodity clusters around 2X the number of processors to achieve (based off LLNL scaled to ES performance). Thus, on a per-IC level, the SX-6 is highly efficient in terms of area and performance.... but this is all lost on you.



(a) I just don't think you get it. (b) Answers to what?

How is that comparison relevant to the discussion? Maybe MS should compare the XeCPU to a supercomputer, or to the processor in a supercomputer from the '80s, and say "Our CPU kills this supercomputer's CPU in performance" or "our design is WAY more efficient than that!!!". The only thing that proves is that TECHNOLOGY AND IDEAS GET BETTER AND BETTER. A chip with anything like the performance of Cell would have cost WAYYYYYY more than 300 dollars back when the Earth Simulator was made. All your comment served to do was mislead people who are uninformed. Yes, the ISSCC Cell is better than the SX-6. It's an unfair comparison, and it has nothing to do with this thread, which is XeCPU vs. Cell, not Cell vs. SX-6. Is the ISSCC Cell even the version that's going to be in the PS3?
 
gofreak said:
They're making assumptions about RSX in the absence of facts. There's no architectural details out there to break down, no information to examine. It's pointless.

All of this is pointless. It's just MS spin. Debating about this just isn't useful, just as it wouldn't be useful to discuss what Sony engineers think about X360.

Just give us the details, the facts, and let us form our own analysis and opinions on it, please (sony + ms). In the mean time, I'll look forward to analysis from trusted independents.

Their assumptions are something like this:

PS3: total system dot product: 51 billion. Check
Cell: 7 dot products per cycle * 3.2 GHz = 22.4 billion. Check
System-Cell: 28.6 billion dot products per second. Check
28.6 billion dot products per second / 550 MHz = 52 GPU ALU ops per clock. Assumption but reasonable.
24 pixel shading pipes and 4 vertex shading pipes. Assumption quite reasonable.
24 pixel pipes * 2 issued per pipe + 4 vertex pipes = 52 dot products per clock in the GPU. Sounds right, maths works with above.
Each pixel pipe = 4 ALU ops + texture op. Assumption
Each vector pipe= 4 scalar ops. Assumption
For a total of 24 * (4 + 1) + (4*4) = 136 operations per cycle or 136 * 550 = 74.8 GOps per second. Adds up to Sony's figures.

It's not definitive, but it is quite impressive because the results match back to what we know. That doesn't make it true, so I agree with you there; but I do suspect they are on the right track.
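The breakdown above can be written out as a quick arithmetic check. All the pipe counts and per-pipe op counts are the poster's assumptions from the thread, not confirmed RSX specs:

```python
# Reproducing the back-of-envelope RSX math from the post above.
# Pipe counts and per-pipe op counts are assumptions, not confirmed specs.

CELL_CLOCK_HZ = 3.2e9
GPU_CLOCK_HZ = 550e6

cell_dots = 7 * CELL_CLOCK_HZ                 # 22.4 billion dot products/s
system_dots = 51e9                            # Sony's total-system figure
gpu_dots = system_dots - cell_dots            # 28.6 billion left for the GPU
dots_per_gpu_clock = gpu_dots / GPU_CLOCK_HZ  # 52 dot products per clock

pixel_pipes, vertex_pipes = 24, 4
assert pixel_pipes * 2 + vertex_pipes == dots_per_gpu_clock  # 52, matches

# Total shader ops: 4 ALU ops + 1 texture op per pixel pipe, 4 per vertex pipe
ops_per_clock = pixel_pipes * (4 + 1) + vertex_pipes * 4  # 136
gops = ops_per_clock * GPU_CLOCK_HZ / 1e9                 # 74.8 GOps/s
print(ops_per_clock, gops)
```

The arithmetic is internally consistent with Sony's 51-billion and 74.8-GOps figures, which is the whole reason the post finds the guess plausible.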
 
tetsuoxb said:
You can't rescind that. You were pretty concrete in accusing him of plagiarism. It was the crux of claiming he had no credibility and he was wrong. Now you are saying you *think*.... not cool.

Briefcasemax
-----------------
Intelligence = 3.
Credibility = 0.
Deception = 20.
Fanboy = 40000123513984572034785230974052.

Fanboy of what? The Earth Simulator?
Yeah, I guess I am!

I was wrong in making that accusation from memory without proof. I admit it 100%. Everyone has made a wrong accusation about someone. The difference between a fanboy and everyone else is that a fanboy can't admit when they are wrong. I was wrong, and I apologize to you, Vince.
 
DopeyFish said:
Personally, if I was a coder, I'd loathe OoOE and would rather have IOE.

I just loathe execution windows. Block fills up, execute, repeat = blah in my mind. I like giving a command and having it execute the exact time I issue it. But maybe that's just me.

I take it that you do all of your coding in assembly language? The better justification for in-order on a console is that the platform is closed and the compiler can do the instruction reordering at compile time. Now we just need sufficiently good compilers ;)
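As a toy illustration of why compile-time scheduling matters on an in-order core: the machine model below is entirely made up (one issue per cycle, invented latencies, nothing like the actual XeCPU or Cell pipelines), but it shows how hoisting independent loads hides latency that an in-order pipeline would otherwise stall on.

```python
# Toy in-order pipeline: one instruction issues per cycle, and issue
# stalls until all source operands are ready. Latencies are invented.

def run_in_order(program):
    """program: list of (dest, sources, latency). Returns completion cycle."""
    ready = {}   # register -> cycle at which its value is available
    cycle = 0    # next free issue slot
    done = 0
    for dest, sources, latency in program:
        start = max([cycle] + [ready[s] for s in sources])  # stall if needed
        cycle = start + 1                  # in-order: later instrs wait too
        ready[dest] = start + latency
        done = max(done, ready[dest])
    return done

LOAD, ADD = 3, 1   # invented latencies, in cycles
naive = [("a", [], LOAD), ("b", ["a"], ADD),    # each use follows its load
         ("c", [], LOAD), ("d", ["c"], ADD)]
scheduled = [("a", [], LOAD), ("c", [], LOAD),  # loads hoisted by "compiler"
             ("b", ["a"], ADD), ("d", ["c"], ADD)]

print(run_in_order(naive), run_in_order(scheduled))  # prints: 8 5
```

An out-of-order core finds this overlap in hardware at runtime; on a closed platform the compiler can find it once, ahead of time, which is the argument being made above.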
 
briefcasemanx said:
How is that comparison relevant to the discussion?

It's relevant because the one Microsoft "expert" from their ATG stated that CELL has poor|weak DP floating-point performance. Which isn't true when you compare the CELL against other ICs, of which the custom SX-6 is one of the highest-performing.

Its DP-FP is only relatively weak when compared against its enormous SP computational power, but that's like saying $1 million isn't a lot compared with most people's money because Billy Gates has an extraordinarily high peak of $50 billion.

Bud, it's time to just give up.
 
Vince:
I find this ironic considering ATI's current support of a [proprietary] 10-10-10-2 [HDR] format in the X360 GPU. Why would they chose some non-standard FP10 format when nVidia uses FP16 and FP32?

Oh, right... because this is nothing but marketing bullshit. In both cases, ATIs FP10 and STI forgoing IEEE754 blending, they chose the correct and logical design.
Vince, or anyone who knows - how good a decision is it to use this FP10 blending? How much faster would it be than Nvidia's FP16 on RSX, and how much less precise? The only comment I ever found that mentions this is from DeanoC, who thought FP10 wouldn't be precise enough for real HDR because, in his experience, FP16 (which he uses now) can even lose precision in heavily light-saturated situations.

4) The biggest problem is that instead of a cache they have a Local Memory

Uh, no shit, Sherlock. This is basically necessitated by the above SPE choices we've talked about and the desire to have general computation on the SPEs. Besides, this cache (especially given what a console dev environment affords) is an extremely inefficient proposition:
Also to note: if needed, SPEs on Cell can use a caching scheme over their local memory using some rather simple software algorithms. This was also mentioned by someone at B3D.
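The software-caching idea can be sketched as a tiny direct-mapped cache managed in ordinary code. This is purely illustrative: real SPE code would DMA lines from main memory into the local store, not call a Python function, and the sizes here are arbitrary.

```python
# Illustrative direct-mapped software cache over a small "local store".
# On a real SPE the miss path would be a DMA from main memory; here it is
# just a stand-in function. All sizes are arbitrary.

LINE_WORDS = 4   # words per cache line
NUM_LINES = 8    # lines in the local-store region we reserve for caching

tags = [None] * NUM_LINES   # which main-memory line each slot currently holds
lines = [[0] * LINE_WORDS for _ in range(NUM_LINES)]
misses = 0

def main_memory_fetch(line_addr):
    """Stand-in for a DMA read of one line from main memory."""
    return [line_addr * LINE_WORDS + i for i in range(LINE_WORDS)]

def cached_read(addr):
    global misses
    line_addr, offset = divmod(addr, LINE_WORDS)
    slot = line_addr % NUM_LINES          # direct-mapped placement
    if tags[slot] != line_addr:           # tag check done in software
        lines[slot] = main_memory_fetch(line_addr)
        tags[slot] = line_addr
        misses += 1
    return lines[slot][offset]

values = [cached_read(a) for a in range(16)]  # 16 reads touch 4 lines
print(values, misses)  # sequential reads hit after each line fill
```

The point being argued is only that such a scheme is possible in software when the workload wants cache-like behavior; whether it performs well is a separate question.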



briefcasemanx:
C'mon man. I won't pretend to be a Super Tech Expert(TM), but a simple Google search shows the NEC SX-6 to run at 8 GFLOPS
Isn't that exactly what he said? I think Cell's DP performance is ~25 GFLOPS for double-precision math. (I'm assuming that NEC chip's spec is for DP math, BTW; otherwise, if it's SP math, that's really pretty bad...)

It was also apparently made in 2002 (what did your beloved Sony have that was so amazing in 2002? Emotion Engine?? That didn't even have a DP unit
The EE, I think, is a 1999 design, and as mentioned already, it really didn't need any DP math, as opposed to a supercomputer design that really needs it. Same for Cell: it is actually a good choice not to have a very efficient DP design on a chip that is going to be used for realtime calculations in consoles and home appliances. Such a device should only excel in SP math, because that's pretty much the main/only thing needed for its purpose.

That is exactly why there was no need to bring up this DP issue to begin with (which is what the tech people did at that blogcast). I think the main thing Vince did in his post was to call them on that fact. It really doesn't matter how many times Cell is or is not faster than the NEC supercomputer chip; it's a nice thing that it even supports DP to the degree that it does, and saying that it's somehow inefficient in that area is really pretty irrelevant.

I see that you were quick to jump on Vince's comment, but I think it was there just to give an illustration of what that particular comment in the blogcast did, and to call them on the irrelevance of bringing such a point up.
 
briefcasemanx said:
Everyone has made a wrong accusation about someone.

No, I understand the done thing is to check your sources before you make the accusation, lest you end up looking stupid.
 
Marconelly said:
Vince, or anyone who knows - how good a decision is it to use this FP10 blending? How much faster would it be than Nvidia's FP16 on RSX, and how much less precise? The only comment I ever found that mentions this is from DeanoC, who thought FP10 wouldn't be precise enough for real HDR because, in his experience, FP16 (which he uses now) can even lose precision in heavily light-saturated situations.

I'd defer to the Faf, but IMHO it's likely an excellent choice for inclusion. I'd assume it will have minimal impact on transistor|area requirements, and its benefit in bandwidth and onboard resource savings, during the times that full FP16 or FP32 isn't needed, will likely be noticeable (I assume the computation costs are all fixed). I heard that FP10 will likely be utilized extensively, but Dean is much better versed than I am.

The irony is that ATI created one heck of a cool GPU, the kind that we all find interesting due to its novel design. FP10 is one such example, but in reality it's likely that the RSX and its [rumored] highly improved FP32 support could just beat out the R500 by straight-up overpowering it in such intense ops. But that's only my 2 cents.
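For a sense of why a packed 10-10-10-2 render target saves bandwidth, simple storage arithmetic is enough. This says nothing about the formats' dynamic range or precision trade-offs (the actual substance of the FP10-vs-FP16 debate above), and the 1280x720 resolution is just an example:

```python
# Bytes per framebuffer for the render-target formats discussed above,
# at an example 1280x720 resolution. Storage size only; precision and
# dynamic range are a separate (and harder) question.

def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

FP10_BPP = 10 + 10 + 10 + 2  # packed RGBA in one 32-bit word
FP16_BPP = 16 * 4            # four FP16 channels in 64 bits

fp10 = framebuffer_bytes(1280, 720, FP10_BPP)  # 3,686,400 bytes
fp16 = framebuffer_bytes(1280, 720, FP16_BPP)  # 7,372,800 bytes
print(fp16 / fp10)  # the FP16 target is 2x the size (and blend traffic)
```

Halving the bytes per pixel halves the read-modify-write traffic for every blend, which is where the bandwidth and on-chip resource savings claimed above would come from.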
 
gmoran said:
Their assumptions are something like this:

PS3: total system dot product: 51 billion. Check
Cell: 7 dot products per cycle * 3.2 GHz = 22.4 billion. Check
System-Cell: 28.6 billion dot products per second. Check
28.6 billion dot products per second / 550 MHz = 52 GPU ALU ops per clock. Assumption but reasonable.
24 pixel shading pipes and 4 vertex shading pipes. Assumption quite reasonable.
24 pixel pipes * 2 issued per pipe + 4 vertex pipes = 52 dot products per clock in the GPU. Sounds right, maths works with above.
Each pixel pipe = 4 ALU ops + texture op. Assumption
Each vector pipe= 4 scalar ops. Assumption
For a total of 24 * (4 + 1) + (4*4) = 136 operations per cycle or 136 * 550 = 74.8 GOps per second. Adds up to Sony's figures.

It's not definitive, but it is quite impressive because the results match back to what we know. That doesn't make it true, so I agree with you there; but I do suspect they are on the right track.


Nice. If this is correct, how does it compare to Xenos on an objective, not Major Nelson, level? I can only see 136 vs. 96 shader ops, but I have no idea what that means, or what kind of shader op.
 
Vince said:
aaaaa0, why does it matter? We both know the answer. Does it change what I've posted or the positions I've taken? Does my working in the medical field and doing research with specialization in biochemistry and neurology change my ability to comprehend and articulate this?

As I'm sure you're aware, there's a difference between theory and actual experimentation (read: real-world experience).


Vince said:
I think not. Instead of attacking who I am, which you don't know (my nickname is Vince, BTW), attack what I said... or at least how I said it. :)

uhh:

Vince said:
This is nothing but pathetic, and that guy's voice is still f-ing gay.
 
The most important thing that MS (IMO) achieved with all this is to get the public to believe that the 360 is as powerful as the PS3. Sony has pulled out every stop in their quest to convince the world of how much of a quantum leap in graphics the Cell was going to enable the PS3 to provide.

Say what you will about MS, but going into this next gen I was under the impression that the PS3 was leaps and bounds more powerful than the 360. And their respective press conferences did nothing but reinforce that view.

Now I'm starting to think that maybe there is not going to be that much difference between the two as far as graphics are concerned.

And here is my whole take on this (and maybe the general public's; again, I said maybe). Sony came out and lied about how much more powerful the PS3 would be over the 360, and to combat this perception, MS has come out and lied that they were, after all, actually more powerful than Sony. And what we are left with is the perception that they are actually almost equal in graphics capability.

Which could end up benefiting MS: if all things are equal, why should I wait 6 months for the PS3 when I can get the same experience from the 360 this Christmas?
 
GhaleonEB said:
Actually, I don't really buy the story either side is telling; I'm not nearly enough of a techie to decipher it myself, and I don't presume to be.

I call Sony's stuff BS because their entire approach has caused them to lose credibility in my eyes. They put out only a handful of off-the-wall specs, then show a ton of rendered footage deliberately to give the impression that it's realtime. 90% of what they SHOWED was bullshit, so I sure as hell don't trust what they are telling me.

Meanwhile, MS comes to E3 and says, "Our system is really powerful, but we've only got alpha hardware out there so far. Here's what we've got." It's an entirely more HONEST approach.

That said, as I've said over and over and over and over, I'll be really surprised if the PS3 isn't the more powerful machine. If it's not, not only did MS pull off a miracle, but lots of (incompetent) heads are gonna roll over at Sony.

I'm reserving judgment on either system's power until folks like Pana and sites like Ars Technica put out their analysis.

I can agree with your perspective. I don't understand any of this technical babble or jargon, but I feel that Sony's E3 conference was mostly style and no substance. They threw a lot of numbers around and showed CG "representations" with no basis or backing whatsoever, other than "this is what our system should do based on these specs." The stuff on the show floor from M$ was at least running on alpha kits, so I have an idea what to expect from their console. Seriously, we have no idea if the PS3 games will even resemble ANY of the CG stuff that was shown. Are the hardware or dev kits even that far along? I'm not knocking them; I'm just saying at least I have an idea of what I'm getting into with the Xbox 360. Sony's ploy just reeks of the PS2 'smoke & mirrors' all over again. I'm sorry, but I just can't get past that. If the games look and play like the CG demos, then I'll shut my mouth, keep the peace, and acknowledge the console's attributes. But Sony has to show me something.

I was present at the E3 when Nintendo pulled the same tactic with the N64, showing demos running on Silicon Graphics workstations and claiming they were representative of what the N64 could do. I'm just saying I've seen this trick too many times before. We'll all know what's what by Fall 2006.........
 
Vince said:
It's relevant because the one Microsoft "expert" from their ATG stated that CELL has poor|weak DP floating-point performance. Which isn't true when you compare the CELL against other ICs, of which the custom SX-6 is one of the highest-performing.

Its DP-FP is only relatively weak when compared against its enormous SP computational power, but that's like saying $1 million isn't a lot compared with most people's money because Billy Gates has an extraordinarily high peak of $50 billion.

Bud, it's time to just give up.

Again, they were discussing Cell vs. XeCPU, a fair comparison, not Cell vs. a four-year-older design, an unfair comparison made to confuse uninformed people. The XeCPU beats supercomputer designs from the '80s. Big deal. You had your tail between your legs with regard to Cell's DP power, so you had to make an unfair comparison to try to boost your own morale as well as sway other, less informed people's opinions. I don't disagree with most of the stuff you said about Cell, I think Cell will be more powerful than the XeCPU for gaming, and I'm getting a PS3, so you don't have to worry; I just feel your comparison is very unfair.
 
iapetus said:
No, I understand the done thing is to check your sources before you make the accusation, lest you end up looking stupid.

And I ended up looking stupid in that respect. But no? You've never made an accusation about someone that was wrong or unprovable?
 
Nice. If this is correct, how does it compare to Xenos on an objective, not Major Nelson, level? I can only see 136 vs. 96 shader ops, but I have no idea what that means, or what kind of shader op.
Well, if you multiply Xenos' clock by its 96 shader ops/cycle:

96 * 500 = 48 GOps, which is exactly what the ATI people said (or at least I think so).
I'm also not sure, but what was the 100 GOps number quoted by Sony supposed to be? Total system shader-op power (Cell + RSX)?


Meanwhile, MS comes to E3 and says, "our system is really powerful, but we've only got Alpha hardware out there so far. Here's what we've got." It's an entirely more HONEST approach.
Note that they also used CGI footage where needed. I wouldn't call that honest either. Just to name a few that come to mind: remember the PD0 Joanna art fiasco, where everyone thought that pretty picture was supposed to be realtime? Or the PGR3 footage, which consisted of pre-rendered stuff as much as or more than realtime footage?

Again, they were discussing Cell vs. XeCPU, a fair comparison, not Cell vs. a four-year-older design,
OK, so they said the DP performance of Cell is not that great. But so what? Why even bring up such an irrelevant point? Did they say what the DP performance of the XeCPU is? Did they even mention what DP math is used for?
 
briefcasemanx said:
You had your tail between your legs with regard to Cell's DP power, so you had to make an unfair comparison to try to boost your own morale as well as sway other, less informed people's opinions.

WTF? Are you delusional? I made that comparison since it's in the same ballpark -- DP floating point is generally used in the realm of scientific or other critical applications. Please, explain to us how powerful a Pentium 4 or the XCPU is in comparison. I don't feel you have a clue what you're talking about.
 
briefcasemanx said:
And I ended up looking stupid in that respect. But no? You've never made an accusation about someone that was wrong or unprovable?

Not without looking stupid, which is why I try to avoid it where possible. :D
 
Marconelly said:
OK, so they said the DP performance of Cell is not that great. But so what? Why even bring up such an irrelevant point? Did they say what the DP performance of the XeCPU is? Did they even mention what DP math is used for?

This line of argument just highlights how desperate they are to paint Cell in a negative light. The X360 CPU's DP performance would be much lower, and DP is pretty much irrelevant for games. The odd algorithm in a physics engine might use it, but even then it'd be a rare luxury. The only possible reason they'd bring it up is to make Cell look less powerful in the eyes of the ignorant - but they'd never prey on ignorance, would they? ;)
 
Marconelly said:
Well, if you multiply Xenos' clock by its 96 shader ops/cycle:

96 * 500 = 48 GOps, which is exactly what the ATI people said (or at least I think so).
I'm also not sure, but what was the 100 GOps number quoted by Sony supposed to be? Total system shader-op power (Cell + RSX)?


But if it's that simple, and RSX is 78 GOps/s and Xenos is 48 GOps/s, then the PS3 CPU is nearly 2x the XeCPU, and RSX is 50% faster than Xenos.

How do MS spin that? I'm probably reading the wrong numbers in the wrong order..
 
PhatSaqs said:
Both sides are playing big PR jokes to make the other guy look bad. Shocking....
It doesn't help either that fanboys aren't trying to find out which is the more powerful system, but are siding with one and trying to demonstrate its superiority.

It is kind of interesting to see how these threads are revealing people's biases.
 
mrklaw said:
But if it's that simple, and RSX is 78 GOps/s and Xenos is 48 GOps/s, then the PS3 CPU is nearly 2x the XeCPU, and RSX is 50% faster than Xenos.

They started counting ops other than vector and scalar ops to bring up their figures, made assumptions about RSX's figures based on the 6800 Ultra architecture, and suggested that Nvidia's figure incorporated all the ops they were now counting on Xenos. But we've no idea what that 136 op/cycle figure is made up of.

On a more general note, I agree that Sony plays the PR game all too well too, but MS is taking it to a new level here in terms of credibility and "dirty tactics".
 
mrklaw said:
But if it's that simple, and RSX is 78 GOps/s and Xenos is 48 GOps/s, then the PS3 CPU is nearly 2x the XeCPU, and RSX is 50% faster than Xenos.

How do MS spin that? I'm probably reading the wrong numbers in the wrong order..



PS3 GPU (RSX)
136 ops per clock * 550MHz = 74.8 billion ops per second

Xbox 360 GPU (R500)
96 ops per clock * 500MHz = 48 billion ops per second
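The ops-per-second arithmetic in those quoted numbers is trivial to check; here's a quick sketch (note the ops/clock figures are the contested MS/ATG numbers, not confirmed specs):

```python
# Ops/clock figures as quoted in the MS breakdown; the RSX number (136)
# is their assumption about Sony's math, not a confirmed spec.
rsx_ops_per_clock, rsx_clock_hz = 136, 550e6
xenos_ops_per_clock, xenos_clock_hz = 96, 500e6

rsx_gops = rsx_ops_per_clock * rsx_clock_hz / 1e9
xenos_gops = xenos_ops_per_clock * xenos_clock_hz / 1e9

print(f"RSX:   {rsx_gops:.1f} GOps/s")    # RSX:   74.8 GOps/s
print(f"Xenos: {xenos_gops:.1f} GOps/s")  # Xenos: 48.0 GOps/s
print(f"Ratio: {rsx_gops / xenos_gops:.2f}x")  # ~1.56x
```

So even by these figures the paper advantage for RSX is closer to 56% than 50% - whether either ops/clock number is the right thing to count is the whole argument.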
 
::Cues Days of our Lives theme music::

Like sands through the hourglass, so are the days of our lives…

I don't know why anyone would get in such a huff about this stuff really. Sony has their 'hype machine' and MS has their 'hype machine'. I don't know about y'all, but I'm gonna sit back and enjoy the ride. :)

It's like soap opera for geeks, I'm tellin' ya'. :lol
 
But if it's that simple, and RSX is 74.8 GOps/s and Xenos is 48 GOps/s, then the PS3 CPU is nearly 2x the XeCPU, and RSX is ~50% faster than Xenos.
Well, two things
- they (ATG) are playing up the bandwidth that is available to Xenos. Even though Xbox 360's total main memory bandwidth is much smaller than PS3's, Xenos has a chunk of EDRAM available that is a very good bandwidth-saving measure. How that will compare to the raw higher bandwidth available on PS3 is very, very hard to tell, and no one has been able to do that thus far. The ONLY valid measurement will be to see which one performs better in engines like U3 or Renderware, comparing the exact same game on both. Also, note that they flat-out lied about Xenos' 256GB/s bandwidth to EDRAM. That number is only valid for the internal EDRAM logic; the connection from Xenos to that EDRAM is actually much slower (30GB/s or so write, 16GB/s read). That, and saying that alpha devkits have an R300 in them (they have an R420), makes me think these people don't actually know as much as they make it seem - even about Xbox 360 (some of the things they say about PS3 can obviously be put down to lack of knowledge about it)

- second, they assumed Nvidia counted their shader ops in a different way than ATI did, so basically they are adding those extra ops into their Xenos calculation to bring the number up. It's a huge assumption on their part, but no one can say whether it's true or not.

Personally, I wouldn't be surprised if RSX can execute (a lot) more shader ops. After all, it has a big advantage in transistors in its core (300M vs. 230M), and those transistors are certainly not sitting there idle.
 
[11:16] <EAsp0rsk> lets talk about specs bay be
[11:16] <EAsp0rsk> lets talk about the p s 3
[11:17] <EAsp0rsk> lets talk about all the good things and the bad things about the 3 six ty
[11:17] <EAsp0rsk> lets talk aboooooout specs
 
I love the argument, "I don't understand all this tech talk, but... I'm gonna inject my uninformed opinion on it anyway." :lol I'm not well versed on the programming side, which would be a HUGE advantage in understanding the inner workings of a GPU - in particular, figuring out how shader ops will affect your fillrates and bandwidth and so on. Much of this can be figured out if you take the time to look up the definitions of certain terms, and make the effort to ask questions for clarification. But if you have no clue, don't come in here spouting off about "well Sony did this at their presser" and "MS did that last gen". That's fanboy nonsense and adds nothing at all to this numbers game.

That's what this is: a cold, calculating numbers game. When we find out the architecture of the RSX, we'll be able to figure out bandwidth, FLOPS, shader ops and basic featureset info. Most of us can do that much. With the help of some of the actual coders, it's easy to run "test scenarios" to try and extrapolate "real-world" performance. Note, I'm putting those in quotes as they are always prone to error. But without access to a dev kit, this is all you have to work with.

So knock off this prattle about who lied and who didn't lie. It has nothing at all to do with it. Last gen, the tech analysis all but ignored the demos. It ignored the PR speak. It ignored all of that. What's used is the spec sheets provided and other facts and figures gleaned from interviews and intermediate conferences/presentations. Put it all together in a stew and let all the different minds dig in. I try my best to label my opinion as such in these discussions. If you don't understand something, ask a question, or just lurk until you have a better grasp of things. Most of this thread is fanboy junk. Lies and excuses are the copouts of people who bought into bullshit. If you understood what the PS2 and XB could do, it wouldn't have been possible to be lied to. There's a reason you don't see a lot of these attitudes on the more technical boards, and there are plenty of fanboys among the Nvidia and ATI camps.

gmoran said:
PS3: total system dot product: 51 billion. Check
Cell: 7 dot products per cycle * 3.2 GHz = 22.4 billion. Check
System-Cell: 28.6 billion dot products per second. Check
28.6 billion dot products per second / 550 MHz = 52 GPU ALU ops per clock. Assumption but reasonable.
24 pixel shading pipes and 4 vertex shading pipes. Assumption quite reasonable.
24 pixel pipes * 2 issued per pipe + 4 vertex pipes = 52 dot products per clock in the GPU. Sounds right, maths works with above.
Each pixel pipe = 4 ALU ops + texture op. Assumption
Each vertex pipe = 4 scalar ops. Assumption
For a total of 24 * (4 + 1) + (4*4) = 136 operations per cycle or 136 * 550 = 74.8 GOps per second. Adds up to Sony's figures.

Yup, like I said, the extrapolated data is interesting (looks like a higher-clocked G70 based on # of pipes), but that's still inconclusive. For one, 7 dot products for the Cell seems to have left the PPE out of the equation (I assume 1 dot per SPE). Then again, the PPE was left out of Cell's GFLOPs rating at ISSCC as well. So, is it included in the final figure or not? The way things tie up nicely here would suggest it's not, but we don't know. Plus the number of pipes doesn't really tell us anything about implementation. But you know this, I'm just repeating it for those who might choose to run with this. I hope the rumors that there's better HDR implementation are true. Or some other way to "fix" the concerns some have over bandwidth. The waaaaaaiiiiiiting is the hardest part. ;) PEACE.
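For what it's worth, the arithmetic in that quoted breakdown does hang together. A quick sketch checking it step by step (all figures come straight from the quoted posts; the pipe split and per-pipe op counts are still just ATG's assumptions, not confirmed RSX specs):

```python
# All figures are from the quoted breakdown; the pipe split and per-pipe
# op counts are that breakdown's assumptions, not confirmed RSX specs.
total_dots = 51.0e9                 # claimed PS3 total dot products/sec
cell_dots = 7 * 3.2e9               # 7 dots/cycle * 3.2 GHz = 22.4e9
gpu_dots = total_dots - cell_dots   # 28.6e9 left over for the GPU
dots_per_clock = gpu_dots / 550e6   # = 52 dot products per GPU clock

pixel_pipes, vertex_pipes = 24, 4   # assumed 24 pixel + 4 vertex split
# 2 dots issued per pixel pipe + 1 per vertex pipe should reproduce 52
assert pixel_pipes * 2 + vertex_pipes == round(dots_per_clock)

# Assumed op counts: 4 ALU + 1 texture op per pixel pipe, 4 scalar ops per vertex pipe
ops_per_clock = pixel_pipes * (4 + 1) + vertex_pipes * 4
print(ops_per_clock)                # 136
print(ops_per_clock * 550e6 / 1e9)  # 74.8 GOps/s, matching Sony's figure
```

The numbers close the loop back to 74.8 GOps/s, which is why the extrapolation looks plausible, but a consistent chain of assumptions still isn't evidence.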
 
Why don't you read some of the posts over at B3D where DaveBaumann has interviewed the ATI team responsible for the 360 GPU? He hasn't written up his article yet but has released a few tidbits. DeanoC also says you really need to wait for Dave's article, but the R500 is "special": R500 can write to EDRAM or GDDR (actually both at the same time...)
The GDDR writes are directly controlled by the ALUs (memexport); it's unrelated to the ROPs' output though...

Also, Izzy, check some of the shader ops calculations in the same thread.

http://www.beyond3d.com/forum/viewtopic.php?t=23232&postdays=0&postorder=asc&start=80
 
Pug said:
Why don't you read some of the posts over at B3D where DaveBaumann has interviewed the ATI team responsible for the 360 GPU? He hasn't written up his article yet but has released a few tidbits. DeanoC also says you really need to wait for Dave's article, but the R500 is "special": R500 can write to EDRAM or GDDR (actually both at the same time...)
The GDDR writes are directly controlled by the ALUs (memexport); it's unrelated to the ROPs' output though...

Also, Izzy, check some of the shader ops calculations in the same thread.

http://www.beyond3d.com/forum/viewtopic.php?t=23232&postdays=0&postorder=asc&start=80

Interesting thread! Many seem to say it is a revolutionary GPU; I'm really interested to see how it will be taken advantage of.
 
I feel sorry for Major Nelson. He seems like a good guy and seems to want to get the right information out, but I don't think he understands the tech side very well. He's at the mercy of biased MS engineers who are trying to justify their design and downplay the advantages of PS3.
 
Pimpwerx said:
It has nothing at all to do with it. Last gen, the tech analysis all but ignored the demos. It ignored the PR speak. It ignored all of that. What's used is the spec sheets provided and other facts and figures gleaned from interviews and intermediate conferences/presentations. Put it all together in a stew and let all the different minds dig in.

I'd tend to agree, and this is what I really dislike about these Major Nelson articles, interviews, etc. It's thinly veiled PR trying to hijack the normal "independent" debate on this kind of stuff, trying to inject their own viewpoint and their own bias into the argument in as discreet a way as possible. They should just release their specs, then shut up and let us come to our own conclusions. Same for Sony, who should really reveal more of their spec than they have thus far.
 
Also, Izzy, check some of the shader ops calculations in the same thread.
The calculation he made is based on the specs provided by ATI; it's not something speculative like it seems to be in that thread.
 
Their assumptions are something like this:

PS3: total system dot product: 51 billion. Check
Cell: 7 dot products per cycle * 3.2 GHz = 22.4 billion. Check
System-Cell: 28.6 billion dot products per second. Check
28.6 billion dot products per second / 550 MHz = 52 GPU ALU ops per clock. Assumption but reasonable.
24 pixel shading pipes and 4 vertex shading pipes. Assumption quite reasonable.
24 pixel pipes * 2 issued per pipe + 4 vertex pipes = 52 dot products per clock in the GPU. Sounds right, maths works with above.
Each pixel pipe = 4 ALU ops + texture op. Assumption
Each vertex pipe = 4 scalar ops. Assumption
For a total of 24 * (4 + 1) + (4*4) = 136 operations per cycle or 136 * 550 = 74.8 GOps per second. Adds up to Sony's figures.

It's not definitive, but it is quite impressive because the results match up with what we know. That doesn't make it true, so I agree with you there; but I do suspect they are on the right track.
Btw, just saw this, and the assumptions seem to break down at the point where they say RSX will have 24 pixel and 4 vertex pipes. It's been "known" (well, rumored from several sources) that G70 will have 24+8 pipes. I don't think they would scale that down for RSX, as the chip seems to be an improvement in many ways over G70 otherwise.
 
GhaleonEB said:
If MS had said, "this is the visual target we are shooting for" and ran the clip, that would have been fine. Again, my issue is not with rendered movies per se. It is with presenting them in the context of an actual game. With PGR3, I learned two days later, in an interview with J Allard, that it was rendered. I sure hope they hit that target, but I didn't appreciate being implicitly deceived. It's the same thing Sony did - realtime, realtime, movie, movie, movie, etc. without much, if any, clarification as to what is what.

You keep saying you saw Sony distinguish - again, I didn't see it, and didn't see anyone else at the time who did. I found it deceptive.
You mean the PGR3 clip they showed at the MS press conference? It was all of a 30-second teaser at best; it didn't show any playable camera angles, HUD or interface, etc., etc., etc. The format of MS's PC is a little more ambiguous in how it represents such footage, but you're still filling in the blanks yourself to assume that's realtime.

Sony's PC is much less ambiguous - they do a series of interactive tech demos on stage, leading the audience through them step by step, explaining what hardware is being used and how. They make it clear that this portion is all realtime. Then they shift gears to discuss what kind of content to expect on the PS3 and introduce a non-interactive video reel of clips from various pubs/devs as a "glimpse of the future". Why the fuck would they have had Phil Harrison spend several minutes dicking around with a bunch of ducks in a tub of water if the likes of Killzone, Tekken or Heavenly Sword were realtime playable?!?

The only thing that Sony and MS are stating explicitly is a statement of intent. Beyond that, anything you're doing to claim they were implicitly "deceiving" us into believing that some of this was realtime is based on your own faulty assumptions and logic.

I hope the systems hit those target films - that would rock. I hope this gen is as good as it could be - but when publishers have to show rendered movies to convince me, I'm just not sold. As the cliche goes, show me the money.
It's fine if you want to reserve judgment, but that's not what you've been doing. You've been jumping to conclusions and proclaiming "bullshit" before all the facts are in.
 
I think you all need to take a step back, now inhale slowly, and exhale slowly. Now look at what u guys are arguing over lolol.
 
Marconelly said:
Btw, just saw this, and the assumptions seem to break down at the point where they say RSX will have 24 pixel and 4 vertex pipes. It's been "known" (well rumored from several sources) that G70 will have 24+8 pipes. I don't think they would scale that down for RSX, as the chip seems to be an improvement in many ways over G70 otherwise.
Dumb question. What's the difference between 24+8 pipes and 32 pipes?
 
sp0rsk said:
[11:16] <EAsp0rsk> lets talk about specs bay be
[11:16] <EAsp0rsk> lets talk about the p s 3
[11:17] <EAsp0rsk> lets talk about all the good things and the bad things about the 3 six ty
[11:17] <EAsp0rsk> lets talk aboooooout specs
of all my illegitimate children, you are my favourite :lol
 