Pedigree Chum
Banned
Mrbob said:It'll be 2009 before we see a huge difference in PS3/Xbox 360 games! I doubt it will be as big of a gap as we see between XBox and PS2 right now.....
Not if Kojima has anything to say about it.
Mrbob said:It'll be 2009 before we see a huge difference in PS3/Xbox 360 games! I doubt it will be as big of a gap as we see between XBox and PS2 right now.....
Mrbob said:MGS4 will be countered by Ninja Gaiden 2.
Team Ninja vs Team Kojima, FIGHT.
UT 2007 vs Gears of War
Halo 3 vs I-8.
It'll be a graphical orgy.
I'm pretty excited about the potential on both consoles.
Mrbob said:MGS4 will be countered by Ninja Gaiden 2.
I seriously cannot imagine what NG2 will be like. If this game disappoints me in any aspect I'll probably quit gaming forever.
sonycowboy said:I'm pretty sure somebody at Sony is aware of these little tête-à-têtes that are being made about system power (especially about Real-Time vs Renders), but I would hardly say that Sony has failed to "prove" their point. We're 72 hours after the press conference and they're not really posting on message boards to win spec wars with pissants like us, as I think they're probably pretty busy @ E3.
So, when random posters on various internet sites challenge a Sony metric, is it Sony fans' job to prove it incorrect, or else it's assumed to be true? That certainly seems to be the modus operandi so far this E3. It's like Deadmeat has disciples all over the place and once a bogus calculation is dreamt up, it's taken as gospel, even though the best hardware analysis web sites screw up calculations or assume some aspect of a chip's architecture incorrectly.
It just gets crazy sometimes. Oh well, that's what makes the internet wonderful, I guess. Unsupportable comments that are taken as fact and spread like wildfire. THURSDAYTON!!
GhaleonEB said:You're just as bad as GoBG lately.
MightyHedgehog said:I agree with Ghaleon, Bob, and Chum. It's gonna be so viciously good this coming gen.
Onix said:ATI's GPU might be the most revolutionary, that doesn't mean it will be the most powerful.
A great philosopher once said: "I believe nothing, I know nothing, but I AM everything".
Otherwise, I agree with Ghaleon, Bob, and Chum. It's gonna be so viciously good this coming gen. There won't be the bullshit gripes about system power half as much as there are now and it'll just be toe-to-toe beat-down and drag-out fights between the best devs using the best games they can muster against each other on two well-matched consoles. SNES vs. Gen won't have shit on this.
Ghost of Bill Gates said:A great philosopher once said: "I believe nothing, I know nothing, but I AM everything".
GhaleonEB said:Which one said that? Just curious. (Seriously.)
Ghost of Bill Gates said:Greek philosopher Socrates
Nostromo said:I already wrote something about shading operations here and on B3D, but I'm going to repeat it one more time.
Shader ops are a MEANINGLESS unit of measure, 'cause every hw vendor has different definitions of shader ops, even between different GPU generations from the same vendor!
If we want to try to compare different GPUs' shading power we should count floating point operations instead of shader operations.
Regarding R500: each ALU can do a vec4 operation and a scalar operation per clock cycle.
ATI says those are 2 shader operations (even if those 2 ops are COMPLETELY different things from a computational standpoint!), so 48 ALUs * 2 shader ops = 96 shader ops per cycle.
But we're smarter than them so we're going to count floating point operations per clock cycle.
R500's ALU does 10 floating point operations per cycle (8 ops from a vec4 multiply-add and 2 ops from a scalar multiply-add), so it's rated at 10*48*500 MHz = 240 Gigaflop/s (this is a lot!)
What about RSX? Well..we don't know much about it. Nvidia released a couple of numbers:
1) 136 shader ops per cycle
2) 51 Giga dot products per second.
The first number is useless 'cause we don't know RSX's ALUs, and we don't know how Nvidia counts shader ops (remember: each vendor has its own shader op definitions).
The second number is somewhat interesting: it tells us RSX does 51*10^9 / 550*10^6 ≈ 92 dot products per clock cycle.
R500 ALUs should be able to do one dot product per clock cycle, so RSX is almost 2x faster than R500 in this (frequently used) mathematical operation.
A dot4 takes 7 floating point ops, so we can tell RSX is rated at least at 350 Gigaflop/s,
but we can expect each RSX ALU to be able to do a dot product or an fmadd instruction (this is a very common thing in modern GPUs), so RSX's rating goes up to 92*8*550 MHz ≈ 405 Gigaflop/s..wooow!
Disclaimer: I'm not saying those numbers are correct, 'cause I extrapolated a lot of things and made assumptions here and there, but please...just stop using shader ops as an indicator of how powerful a GPU is!
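Nostromo's back-of-envelope arithmetic can be sketched in a few lines. This is only a rough check of the extrapolation above, not a performance model: the 48-ALU R500 layout, the 550 MHz RSX clock, and the 51 GDot/s figure are all vendor/marketing numbers quoted in this thread.

```python
# Back-of-envelope GPU flop ratings from the numbers quoted in the thread.
# Every constant here is a vendor claim, not a measured figure.

R500_ALUS = 48
R500_CLOCK_HZ = 500e6
FLOPS_PER_R500_ALU = 10          # vec4 fmadd (8 flops) + scalar fmadd (2 flops)

r500_gflops = R500_ALUS * FLOPS_PER_R500_ALU * R500_CLOCK_HZ / 1e9
print(f"R500: {r500_gflops:.0f} GFLOP/s")                 # 240

RSX_CLOCK_HZ = 550e6
RSX_DOTS_PER_SEC = 51e9          # Nvidia's "51 billion dot products per second"
dots_per_cycle = int(RSX_DOTS_PER_SEC / RSX_CLOCK_HZ)     # truncates 92.7 -> 92
print(f"RSX: {dots_per_cycle} dot products per cycle")

FLOPS_PER_DOT4 = 7               # a dot4 is 4 multiplies + 3 adds
rsx_min_gflops = dots_per_cycle * FLOPS_PER_DOT4 * RSX_CLOCK_HZ / 1e9
print(f"RSX lower bound: {rsx_min_gflops:.0f} GFLOP/s")   # 354

# If each unit can issue a full 8-flop fmadd per cycle instead of a dot4:
rsx_fmadd_gflops = dots_per_cycle * 8 * RSX_CLOCK_HZ / 1e9
print(f"RSX with fmadd: {rsx_fmadd_gflops:.0f} GFLOP/s")  # 405
```

The rounding choices (truncating 92.7 to 92, dot4 = 7 flops) follow the post; different rounding shifts the final figures by a few GFLOP/s either way.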
Panajev2001a said:To be fair, I do not think all of those Dot Products come from the GPU, they were quoted in the section "system performance".
Still, it would only mean there are other tricks up RSX's sleeves: no PPP, no Hardware Sound&Video Encoding/Decoding engine in the GPU, etc... they used those 300+ MTransistors for something, don't you think?
Nostromo said:No, it's wrong, just re-read my post
A vector op is eight floating point ops, and a scalar op is two floating point ops.
ALUs can do vector and/or scalar FMADD in one cycle.
I restate the obvious: shader ops are a meaningless metric.
3rdman said:I don't get it...You're saying in one sentence that it can do 96bn shader ops per second (500MHz x 48 x 4 = 96 billion shader ops) and in the next sentence you say that it's 48...is this a case of marketing numbers?
http://techreport.com/etc/2005q2/xbox360-gpu/index.x?pg=1
Sorry to be dense, but I want to understand the discrepancy. Why is that considered a "different metric"?
MightyHedgehog said:Depends on the game, IMO. What they're doing with the PS3 CPU will determine whether or not you'll be using that extra horsepower to compensate for things you don't do on the GPU on the X360.
gofreak said:I don't think there'll be much or anything you "can't do" on RSX vs Xenos. But yes, you can leverage Cell to help out with certain things..
I've seen little that changes things versus before these articles. RSX still looks more powerful based on the paper claims they've made. ATi are just talking in terms that let them use bigger numbers, but the same could be done with RSX. Of course, power is differently used between both chips.
DeanoC said:DaveBaumann said:tEd said:Is it true that they only have 4 texture units? I was a little surprised, to say the least
No, it's 4 groups of 4. They are grouped in fours as these are the most common sampling requirements.
Xenon has 32 memory fetch units, 16 have filtering and address logic (textures) and 16 just do a straight lookup from memory (unfiltered and no addressing modes, AKA vertex fetch).
Unification means that any shader can use either type (filtered or unfiltered) as it sees fit (no concept of dependent reads or otherwise). This means that the XeGPU has an almost CPU-like view of memory.
KingV said:I remember reading somewhere about the idea that you'd be able to get game invites and emails while watching TV over Xbox Live on the 360. I figure they probably need a TV encoder to do that. Not sure why they would NOT include some TIVO functionality if that is indeed the case.
Razoric said:I never got to read your overall impression of the PS3. What do you think of the specs so far? What do you suspect the video card will have up its sleeve? How does the PS3 stack up against 360 in your opinion?
Panajev2001a said:I have not posted my over-all impression yet.
Obviously though, I think PlayStation 3 compares very well next to Xbox 360: it will be a nice generation to watch unfold.
sonycowboy said:My god. You're like the #1 Xbot aren't you. You simply aren't going to let it go that the PS3 is the more powerful system are you? Even after being owned over and over and over again?
Honestly, at this point, we don't truly know enough about either system to say, other than by specs, the PS3 is 2x Xbox360 and for months, various print and media outlets have been saying the PS3 is more powerful based on anonymous comments from developers (EGM several times actually).
The PS3 is coming out AFTER the Xbox360. By standard rules of Moore's law, the PS3 is going to be the more powerful system, but clearly you will grasp at any and all straws desperately trying to convince yourself that it simply isn't so. Even when Microsoft themselves defer the power advantage to the PS3.
I'll admit, I don't know crap about hardware internals beyond what we see posted here, even though I, in fact, have been following it pretty closely. And if, by some chance, when the dust settles, the Xbox360 ends up being more powerful, either because Sony makes a miscalculation in what they were aiming for or Microsoft hits a serious home run, I'll be the first to congratulate them.
But, you, in spite of overwhelming evidence (yes, it's all paper numbers and anonymous quotes at this point), you simply cannot allow for the possibility that the Xbox360 will be a weaker system.
I bow to your indomitable spirit. Never surrender.
Shogmaster said:DUDER, BC via GS on RSX die!! What do you think?!? huh?huh? huh? huh?![]()
Panajev2001a said:I think yer nuts.
GhaleonEB said:Seriously, man, GET OFF YOUR HIGH HORSE. You're just as bad as GoBG lately.
:lol Yeah, actually, GAF has been pretty bad lately. This place is probably going to be unreadable until late 2006. I'm gonna have to read news and exit even more often than I do now...or amass an ignore list to rival TToB's.
Nostromo said:Yes, you're right, maybe they added CELL numbers too.
A 7-SPE CELL running at 3.2 GHz can do 25 GDot/s; RSX almost triples NV40's dots-per-second figure.
We know RSX is derived from G70 and we know G70 is derived from NV40, so we don't expect RSX to have a PPP (and CELL certainly is a wonderful PPP) but we still don't know if they had the time to remove the video processor and other not needed stuff.
Vertex shaders are still there nonetheless..
gofreak said:I'm still confused by where the four ALUs..per ALU..figure is coming from. Or 4 flops per cycle.
Or do those four ALUs make up the one vector ALU - one ALU for each component? 2 flops per component from each ALU?
Nostromo said:A R500 ALU has got 2 units, the first one works on 4D vectors, the second one works on 1D vectors (a scalar)
Both these units are capable of ONE floating point multiply-add (fmadd) operation per clock cycle.
This means every clock cycle a single ALU can do one vec4 fmadd and one scalar fmadd.
A vec4 fmadd is composed of 4 multiplications and 4 adds, a scalar fmadd is composed of 1 multiplication and 1 add -> 4 + 4 + 1 + 1 = 10 floating point operations per clock cycle.
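The per-ALU count above works out as follows; a minimal arithmetic check, assuming the vec4+scalar co-issue layout the thread attributes to ATI's design:

```python
# Flops per clock for one R500 ALU, per the breakdown above.
# The vec4+scalar co-issue layout is the thread's assumption, not ATI's spec sheet.

vec4_fmadd = 4 + 4      # 4 multiplies + 4 adds
scalar_fmadd = 1 + 1    # 1 multiply + 1 add
flops_per_alu = vec4_fmadd + scalar_fmadd
print(flops_per_alu)                                  # 10

# Scaled across 48 ALUs at 500 MHz:
print(flops_per_alu * 48 * 500e6 / 1e9, "GFLOP/s")    # 240.0 GFLOP/s
```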
Nostromo said:Pana: what interview are you talking about?
There will definitely be some differences between the RSX GPU and future PC GPUs, for a couple of reasons:
1) NVIDIA stated that they had never had as powerful a CPU as Cell, and thus the RSX GPU has to be able to swallow a much larger command stream than any of the PC GPUs as current generation CPUs are pretty bad at keeping the GPU fed.
2) The RSX GPU has a 35GB/s link to the CPU, much greater than any desktop GPU, and thus the turbo cache architecture needs to be reworked quite a bit for the console GPU to take better advantage of the plethora of bandwidth. Functional unit latencies must be adjusted, buffer sizes have to be changed, etc...
gofreak said:I get that, but I'm still confused by the mention of "4 ALUs". Is that four ALUs for each component within the vec4 ALU?
I never mentioned '4 ALUs'. You can also say an R500 ALU contains some simpler ALUs, as ALU is a soft definition.
I thought nVidia has been working for a good 2 years with Sony.
That's only when they started talking and opening up negotiations. nVidia had already been developing the graphics processor as part of their next-generation GeForce line, so the chip's design wasn't done hand-in-hand with Sony for the PS3 for too much of that time nor too much of its total development.
For one, no one knows what the internal makeup of the RSX is.
nVidia has already detailed some of the major changes they made that account for differences from past GeForces while revealing similarities that show it's scaled from past PC chips in significant ways.
RSX is a Sony chip with NVidia IPs in it.
Not at all. The graphics processor was largely designed by nVidia and based off of their architecture, and Sony is contributing mostly on the implementation and integration sides.
I'm still hoping for some Chaperone stuff, or something like it, b/c that's essentially what ATI put in Xenos, some self-shadowing and AA hw that frees up bandwidth and the GPU.
The purpose of those approaches is to move deferred rendering for visible surface determination and scene division for data size manageability closer to the device and further from the game software, getting some of the benefits PowerVR enjoys like fast stencil for shadows, fast AA, and fast Z check.
That eDRAM on Xenos is easily the coolest thing about the architecture IMO.
Embedded RAM set-ups like that have some philosophical similarities to TBDR.
From that Sony presser, it would seem that RSX was built from the ground up to do HDR.
It includes it, but such a conventional architecture was definitely not built from the ground up for it. The memory requirements are better suited to a processor with low bandwidth requirements.
we don't truly know enough about either system to say, other than by specs, the PS3 is 2x Xbox360
2x is a completely arbitrary measure without specific conditions being compared. It's like a FLOPS number without qualifying how that performance can be applied.
I'm shocked at how well designed the 360 is given Sony/Toshiba's experience.
The cost/capability of ATi's designs has usually been ahead of Sony/Toshiba's and nVidia's graphics chips.
Contrast this debut to the DC/PS2 unveiling. The specs for the PS2 were light years ahead of the DC
Depends on which specs. Anyway, this example is the perfect reason that no part of this X360/PS3 situation should come as a surprise, since it took three times the amount of silicon cost, a newer fabrication process, and a lot of time for the PS2 to beat the DC's design as it did. When the launch conditions are brought much closer, as with X360 and PS3, the more cost-effective design really begins to close the gap.
Embedded RAM set-ups like that have some philosophical similarities to TBDR.
Now you are just making shit up.
The purpose of both is to solve bandwidth limitations by minimizing off-chip access in order to gain speed in intensive and/or related operations like Z determination, anti-aliasing, and stencil support.
What is the fundamental difference between ATI's choice and Flipper and the Graphics Synthesizer?
X360's ring of logic around the eDRAM is effective for the aforementioned operations like checking visibility with a fast device-side Z pass.
Lazy8s said:The purpose of both is to solve bandwidth limitations by minimizing off-chip access in order to gain speed in intensive and/or related operations like Z determination, anti-aliasing, and stencil support.
midnightguy said:it will be interesting to see what Nvidia can do for Playstation4 graphics, when they get an entire Playstation-length console cycle of ~6 years to develop the GPU instead of 1 to 2 years.
mr2mike said:Well, that's how it goes with these things: one has a very powerful CPU and less of a GPU, the other has a very powerful GPU and less of a CPU.
Hajaz said:the fact is that ATI just has more talented engineers than Nvidia nowadays.
Oh bullshit, the fact is they both have quite talented engineering teams.
Hajaz said:ATI bought the brilliant ArtX team, which went on to design the 9700 core, while NV bought the 3dfx team, which then designed the GeForce FX fiasco.
Hajaz said:On top of that, a lot of NV's more talented engineers left the company for either ATI or other companies when things started to go downhill.
I really think Sony would've preferred ATI to do their GPU, but by the time they realised they needed help, ATI was already tied up with REV/360/next PC card.