
360 GPU exposed, 4xMSAA etc., and the PS3 GPU was a late change.

MGS4 will be countered by Ninja Gaiden 2.

Team Ninja vs Team Kojima, FIGHT.

UT 2007 vs Gears of War

Halo 3 vs I-8.

It'll be a graphical orgy.

I'm pretty excited about the potential on both consoles.
 
Mrbob said:
MGS4 will be countered by Ninja Gaiden 2.

Team Ninja vs Team Kojima, FIGHT.

UT 2007 vs Gears of War

Halo 3 vs I-8.

It'll be a graphical orgy.

I'm pretty excited about the potential on both consoles.

Same here man. And considering the power of both consoles, I'm sure it'll be pretty close.

Being a Sony fan, I've had to stomach the weakest system for two gens now. Not anymore; this time I won't be going "I wish I could see this on the best hardware".
 
Mrbob said:
MGS4 will be countered by Ninja Gaiden 2.
I seriously cannot imagine what NG2 will be like. If this game disappoints me in any aspect I'll probably quit gaming forever.

BTW, someone needs to seriously smack the shades off of Itagaki for contemplating making DOAX2 before NG2.
 
sonycowboy said:
I'm pretty sure somebody at Sony is aware of these little tête-à-têtes being had about system power (especially about real-time vs. renders), but I would hardly say that Sony has failed to "prove" their point. We're 72 hours after the press conference and they're not really posting on message boards to win spec wars with pissants like us, as I think they're probably pretty busy @ E3.

So, when random posters on various internet sites challenge a Sony metric, is it Sony fans' job to prove it incorrect, or else it's assumed to be true? That certainly seems to be the modus operandi so far this E3. It's like Deadmeat has disciples all over the place, and once a bogus calculation is dreamt up, it's taken as gospel, even though the best hardware analysis web sites screw up calculations or assume some aspect of a chip's architecture incorrectly.

It just gets crazy sometimes. Oh well, that's what makes the internet wonderful, I guess. Unsupportable comments that are taken as fact and spread like wildfire. THURSDAYTON!!

Seriously, man, GET OFF YOUR HIGH HORSE. You're just as bad as GoBG lately.

Look, I'm not a tech guy and I don't pretend to be. I can only observe what I see, and take the word of the few sources I trust to analyze this stuff fairly. Specifically, sites like Ars and posters like Pana. I put ZERO weight in most of the stuff going on around here.

Contrast this debut to the DC/PS2 unveiling. The specs for the PS2 were light years ahead of the DC, but just as important, the demos blew away what the DC had shown. It was no contest.

This time around, we're comparing XBox 360 games running on Alpha hardware to demos that even Sony admitted were at least partially pre-rendered. Given that the specs have the PS3 as twice as powerful, I'm amazed that we don't see it in the games. It's just not there. That's my take.

But when Ars and Pana both point out that the performance gap is far less than Sony would have us believe, it just adds weight to what I can already see.

Is the PS3 more powerful? It had damn well better be. If it's not, then those involved are incompetent and the MS camp pulled off a miracle.

Nearly every site I've read today has been very impressed with the 360 and dubious of what Sony is showing - and just as importantly, not showing. This includes guys originally blown away by Sony's conference and numbers. Why? Because of what their eyes tell them, and what the smart tech guys are observing, I suspect.

Insofar as Sony cares about the chatter online, and I'm sure they don't, they must be glad to have you on their side. MS certainly has lots of damage to control, but you've been on a full-tilt Sony PR campaign since the conference. I usually look to you for a cut-above level of insight and analysis, which you usually deliver. But I read your posts and then the ones by Ghost of Bill Gates, and they sound pretty damn familiar; you're just contributing to the shouting match.


At any rate....the 360 looks amazing, IMHO. The PS3 will be a beast when it hits. I'm really looking forward to the competition and the games, and I'm more than happy to eat crow if any of my initial observations get turned on their head. But I'm pretty tired of getting toasted in here for what I think are pretty reasonable observations - the PS3 looks great on paper, but the gap is not there yet in visuals. Will it be? We'll see. It's not there now.
 
As Kojima's team sends off the masters of MGS4 for replication, Bungie will destroy it en route using their flaming ninja and some grenades. At the very same time, Team Ninja will dispatch...ninjas...to sneak into Konami's Kojima Productions offices and destroy all traces of source code and any assets pertaining to MGS4. Rare will show up, gather the MGS4 team and brainwash them with the DK64 rap song being played endlessly and at full volume levels...allowing J Allard to 'reprogram' Kojima-san and team into creating awesome 3D reimaginings of Snatcher and Policenauts on PS3 and X360.

End of story...no more damned MGS.

Otherwise, I agree with Ghaleon, Bob, and Chum. It's gonna be so viciously good this coming gen. There won't be the bullshit gripes about system power half as much as there are now and it'll just be toe-to-toe beat-down and drag-out fights between the best devs using the best games they can muster against each other on two well-matched consoles. SNES vs. Gen won't have shit on this.
 
Why is it so hard to accept that Sony has the more revolutionary CPU and MS has the more revolutionary GPU (from what we know)? Can't we all just get along?
 
Onix said:
ATI's GPU might be the most revolutionary, that doesn't mean it will be the most powerful.

If you just define power as a number, MAYBE (we don't even know that).
Other than that, you're going to get free 4x AA and a bunch of other features. So either RSX will have to lose performance due to AA or have worse IQ.
 
Otherwise, I agree with Ghaleon, Bob, and Chum. It's gonna be so viciously good this coming gen. There won't be the bullshit gripes about system power half as much as there are now and it'll just be toe-to-toe beat-down and drag-out fights between the best devs using the best games they can muster against each other on two well-matched consoles. SNES vs. Gen won't have shit on this.

Amen.
 
Ghost of Bill Gates said:
A great philosopher once said: "I believe nothing, I know nothing, but I AM everything".

Which one said that? Just curious. (Seriously.)

It's looking more and more like a consensus on the GPU side isn't coming any time soon, if ever.

Which means....it's gonna be the games. Shock and awe!
 
Lol! That 10 teraflops power statement was written by someone who didn't listen to the press conference! *lol*

What they said in the press conference was that the PS3 had 2 teraflops of power, compared to the 10 teraflops of power Sony had in their movie-rendering facility (which didn't consist of any PS2s or PS3s).
They never said the PS3 had 10 teraflops of power *sigh*.

And in the press conference, the Nvidia guy said that they had been working on this project for 4 years, so I guess last summer seems very odd :)
 
But they also acknowledge that the PS3 part is a custom version of their next gen GPU, too. My guess is that it is what it is...a custom console form of the next wave of their desktop video parts. It's only logical and it sounds perfectly in line with their statements.
 
Nostromo said:
I already wrote something about shading operations here and on B3D, but I'm going to repeat it another time ;)
Shader ops are a MEANINGLESS unit of measure, because every hw vendor has different definitions of shader ops, even between different GPU generations from the same vendor!
If we want to try to compare different GPU shading power we should count floating point operations instead of shader operations.
Regarding R500: each ALU can do a vec4 operation and a scalar operation per clock cycle.
ATI says those are 2 shader operations (even if those 2 ops are COMPLETELY different things from a computational standpoint!), so 48 ALUs * 2 shader ops = 96 shader ops per cycle.
But we're smarter than them so we're going to count floating point operations per clock cycle.
R500's ALU does 10 floating point operations per cycle (8 ops from a vec4 multiply-add and 2 ops from a scalar multiply-add), so it's rated at 10 * 48 * 500 MHz = 240 Gigaflop/s (this is a lot!)
What about RSX? Well..we don't know much about it. Nvidia released a couple of numbers:
1) 136 shader ops per cycle
2) 51 Giga dot products per second.

The first number is useless because we don't know RSX's ALUs, and we don't know how Nvidia counts shader ops (remember: each vendor has its own shader ops definitions).
The second number is somewhat interesting: it tells us RSX does 51×10^9 / 550×10^6 ≈ 92 dot products per clock cycle.
R500 ALUs should be able to do one dot product per clock cycle, so RSX is almost 2x faster than R500 in this (frequently used) mathematical operation.
A dot4 takes 7 floating point ops, so we can tell RSX is rated at least at 350 Gigaflop/s,
but we can expect each RSX ALU to be able to do a dot product or an fmadd instruction (this is a very common thing in modern GPUs), so RSX's rating goes up to 92 * 8 * 550 MHz = 409 Gigaflop/s... wooow! :)

Disclaimer: I'm not saying those numbers are correct, because I extrapolated a lot of things and made assumptions here and there, but please... just stop using shader ops as an indicator of how powerful a GPU is :)

To be fair, I do not think all of those Dot Products come from the GPU, they were quoted in the section "system performance".

Still, it would only mean there are other tricks up RSX's sleeves: no PPP, no Hardware Sound&Video Encoding/Decoding engine in the GPU, etc... they used those 300+ MTransistors for something, don't you think ;) ?
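
For anyone who'd rather follow the arithmetic than the prose, here's a quick sketch of the numbers above. To be clear, nothing in it is a confirmed spec: the clock speeds, ALU counts and per-cycle issue rates are all Nostromo's reading of the vendor figures, and the small differences from his 350/409 numbers are just rounding of the dots-per-clock value.

```python
# Back-of-the-envelope flop counting, using the assumptions in the post above.
# Nothing here is a confirmed spec: clocks, ALU counts and per-cycle issue
# rates are Nostromo's reading of the vendor numbers, not official figures.

R500_ALUS      = 48       # unified shader ALUs claimed for the Xbox 360 GPU
R500_CLOCK_HZ  = 500e6    # 500 MHz
R500_FLOPS_ALU = 10       # vec4 fmadd (8 flops) + scalar fmadd (2 flops) per cycle

RSX_CLOCK_HZ   = 550e6    # 550 MHz
RSX_DOTS_PER_S = 51e9     # "51 giga dot products per second" (a system-wide figure, per Panajev)
FLOPS_PER_DOT4 = 7        # 4 multiplies + 3 adds
FLOPS_PER_VEC4_FMADD = 8  # if each unit can instead issue a full vec4 fused multiply-add

# R500: 48 ALUs * 10 flops/cycle * 500 MHz
r500_gflops = R500_ALUS * R500_FLOPS_ALU * R500_CLOCK_HZ / 1e9
print(f"R500 estimate: {r500_gflops:.0f} GFLOPS")            # -> 240

# RSX: dot products per clock, then the low and high flop ratings
rsx_dots_per_clock = RSX_DOTS_PER_S / RSX_CLOCK_HZ           # ~92.7 (truncated to 92 above)
rsx_gflops_low  = rsx_dots_per_clock * FLOPS_PER_DOT4       * RSX_CLOCK_HZ / 1e9
rsx_gflops_high = rsx_dots_per_clock * FLOPS_PER_VEC4_FMADD * RSX_CLOCK_HZ / 1e9
print(f"RSX estimate: {rsx_gflops_low:.0f} to {rsx_gflops_high:.0f} GFLOPS")  # -> ~357 to ~408
```

The spread between the two RSX numbers is purely about whether you count a dot4 as 7 flops or assume each unit could issue a full vec4 fmadd (8 flops) instead.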
 
Panajev2001a said:
To be fair, I do not think all of those Dot Products come from the GPU, they were quoted in the section "system performance".

Still, it would only mean there are other tricks up RSX's sleeves: no PPP, no Hardware Sound&Video Encoding/Decoding engine in the GPU, etc... they used those 300+ MTransistors for something, don't you think ;) ?

GS ON THE DIE!!!! YOU HEARD IT HERE FIRST!!!!!! :lol
 
Panajev2001a said:
To be fair, I do not think all of those Dot Products come from the GPU, they were quoted in the section "system performance".

Still, it would only mean there are other tricks up RSX's sleeves: no PPP, no Hardware Sound&Video Encoding/Decoding engine in the GPU, etc... they used those 300+ MTransistors for something, don't you think ;) ?

I never got to read your overall impression of the PS3. What do you think of the specs so far? What do you suspect the video card will have up its sleeve? How does the PS3 stack up against the 360, in your opinion?
 
Nostromo said:
No, it's wrong, just re-read my post :)
A vector op is eight floating point ops, and a scalar op is two floating point ops.
ALUs can do vector and/or scalar FMADD in one cycle.
I restate the obvious: shader ops are a meaningless metric.

I'm still confused by where the four ALUs..per ALU..figure is coming from. Or 4 flops per cycle.

Or do those four ALUs make up the one vector ALU - one ALU for each component? 2 flops per component from each ALU?

3rdman said:
I don't get it... You're saying in one sentence that it can do 96bn shader ops per second (500MHz x 48 x 4 = 96 billion shader ops) and in the next sentence you say that it's 48... is this a case of marketing numbers?

http://techreport.com/etc/2005q2/xbox360-gpu/index.x?pg=1

Sorry to be dense, but I want to understand the discrepancy. Why is that considered a "different metric"?

You're thinking a floating point op is equal to a shader op. Or that 8 (or, as I wrongly figured, 4) floating point ops is greater than one shader op. It's like saying 1000 meters is greater than 1 kilometer. A floating point ALU is an ALU, but it's not the same as the shader ALU we were discussing before.

MightyHedgehog said:
Depends on the game, IMO. What they're doing with the PS3 CPU will determine whether or not you'll be using that extra horsepower to compensate for things you don't do on the GPU, the way you would on the X360.

I don't think there'll be much or anything you "can't do" on RSX vs Xenos. But yes, you can leverage Cell to help out with certain things..

I've seen little that changes things versus before these articles. RSX still looks more powerful based on the paper claims they've made. ATi are just talking in terms that let them use bigger numbers, but the same could be done with RSX. Of course, power is differently used between both chips.
 
gofreak said:
I don't think there'll be much or anything you "can't do" on RSX vs Xenos. But yes, you can leverage Cell to help out with certain things..

I've seen little that changes things versus before these articles. RSX still looks more powerful based on the paper claims they've made. ATi are just talking in terms that let them use bigger numbers, but the same could be done with RSX. Of course, power is differently used between both chips.

That's true, I'm sure. I guess I'm talking about my general assessment, as limited by my lack of HW understanding as it is, that Cell is a monster for the types of computations that would suit several kinds of graphics hardware functions and that any lack of specialized hardware on the nVidia side would be covered by the CPU, assuming it has as much slack as it appears to, while running game code concurrently. It would be the best approach for Sony if they wanted to maximize the use of their hardware, looking at cost efficiency.

And here's DeanoC's info on some things that he has long hinted at about the GPU in X360 from B3D forums...

DeanoC said:
DaveBaumann said:
tEd said:
Is it true that they only have 4 texture units? I was a little surprised, to say the least.

No, it's 4 groups of 4. They are grouped in fours as these are the most common sampling requirements.

Xenon has 32 memory fetch units, 16 have filtering and address logic (textures) and 16 just do a straight lookup from memory (unfiltered and no addressing modes AKA vertex fetch).

Unification means that any shader can use either type (filtered or unfiltered) as it sees fit (no concept of dependent reads or otherwise). This means that the XeGPU has an almost CPU-like view of memory.
 
KingV said:
I remember reading somewhere about the idea that you'd be able to get game invites and emails while watching TV over Xbox Live on the 360. I figure they probably need a TV encoder to do that. Not sure why they would NOT include some TIVO functionality if that is indeed the case.

People. The 360 is not a tivo, and it never will be. It will play shows you recorded on your Windows Media Center Edition PC. It's a Windows Media Center Extender. That's it.
 
Razoric said:
I never got to read your overall impression of the PS3. What do you think of the specs so far? What do you suspect the video card will have up its sleeve? How does the PS3 stack up against the 360, in your opinion?

I have not posted my over-all impression yet :).

Obviously though, I think PlayStation 3 compares very well next to Xbox 360: it will be a nice generation to watch unfold :).
 
Panajev2001a said:
I have not posted my over-all impression yet :).

Obviously though, I think PlayStation 3 compares very well next to Xbox 360: it will be a nice generation to watch unfold :).

DUDER, BC via GS on RSX die!! What do you think?!? huh?huh? huh? huh? :D
 
sonycowboy said:
My god. You're like the #1 Xbot aren't you. You simply aren't going to let it go that the PS3 is the more powerful system are you? Even after being owned over and over and over again?

Honestly, at this point, we don't truly know enough about either system to say, other than by specs, the PS3 is 2x Xbox360 and for months, various print and media outlets have been saying the PS3 is more powerful based on anonymous comments from developers (EGM several times actually).

The PS3 is coming out AFTER the Xbox360. By the standard rules of Moore's law, the PS3 is going to be the more powerful system, but clearly you will grasp at any and all straws, desperately trying to convince yourself that it simply isn't so. Even when Microsoft themselves defer the power advantage to the PS3.

I'll admit, I don't know crap about hardware internals beyond what we see posted here, even though I, in fact, have been following it pretty closely. And if, by some chance, when the dust settles, the Xbox360 ends up being more powerful, either because Sony makes a miscalculation in what they were aiming for or Microsoft hits a serious home run, I'll be the first to congratulate them.

But, you, in spite of overwhelming evidence (yes, it's all paper numbers and anonymous quotes at this point), you simply cannot allow for the possibility that the Xbox360 will be a weaker system.

I bow to your indomitable spirit. Never surrender.



So by your logic (Moore's law), Revolution should be even more powerful than the PS3?
 
Panajev2001a said:
To be fair, I do not think all of those Dot Products come from the GPU, they were quoted in the section "system performance"
Yes, you're right, maybe they added CELL numbers too.
A 7-SPE CELL running at 3.2 GHz can do 25 GDot/s; RSX almost triples NV40's dot-products-per-second figure.

Still, it would only mean there are other tricks up RSX's sleeves: no PPP, no Hardware Sound&Video Encoding/Decoding engine in the GPU, etc... they used those 300+ MTransistors for something, don't you think ;) ?
We know RSX is derived from G70 and we know G70 is derived from NV40, so we don't expect RSX to have a PPP (and CELL certainly is a wonderful PPP ;) ), but we still don't know if they had the time to remove the video processor and other unneeded stuff.
Vertex shaders are still there nonetheless..
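
To put rough numbers on that CELL-vs-GPU split mentioned above (a sketch only, resting on unconfirmed assumptions: one dot4 issued per SPE per cycle, 7 usable SPEs at 3.2 GHz, and the GPU credited with whatever remains of the 51 GDot/s system figure):

```python
# Rough split of the 51 GDot/s "system performance" figure between CELL and the GPU.
# Assumptions (not confirmed specs): one dot4 per SPE per cycle, 7 usable SPEs at
# 3.2 GHz, and the GPU credited with whatever remains of the system figure.

SPE_COUNT    = 7
CELL_CLOCK   = 3.2e9    # Hz
SYSTEM_DOTS  = 51e9     # dot products per second, quoted as a system-wide number
RSX_CLOCK    = 550e6    # Hz

cell_dots = SPE_COUNT * CELL_CLOCK      # ~22.4 GDot/s, same ballpark as the 25 GDot/s above
gpu_dots  = SYSTEM_DOTS - cell_dots     # what would be left for the GPU on its own
print(f"CELL share: {cell_dots / 1e9:.1f} GDot/s")
print(f"GPU share:  {gpu_dots / 1e9:.1f} GDot/s (~{gpu_dots / RSX_CLOCK:.0f} dot4 per clock)")
```

Read that way, the GPU-only dots-per-clock figure comes out at roughly half of the ~92 from the earlier calculation, which is exactly why the "system performance" caveat matters.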
 
GhaleonEB said:
Seriously, man, GET OFF YOUR HIGH HORSE. You're just as bad as GoBG lately.
:lol Yeah, actually, GAF has been pretty bad lately. This place is probably going to be unreadable until late 2006. I'm gonna have to read news and exit even more often than I do now...or amass an ignore list to rival TToB's.
 
Nostromo said:
Yes, you're right, maybe they added CELL numbers too.
A 7-SPE CELL running at 3.2 GHz can do 25 GDot/s; RSX almost triples NV40's dot-products-per-second figure.


We know RSX is derived from G70 and we know G70 is derived from NV40, so we don't expect RSX to have a PPP (and CELL certainly is a wonderful PPP ;) ), but we still don't know if they had the time to remove the video processor and other unneeded stuff.
Vertex shaders are still there nonetheless..

They need the Vertex Shaders there (CELL still has LOTS of stuff to do: I am very happy to see Vertex Shader ALUs in RSX, very happy indeed :D)... and they took the time to rework quite a bit of things according to their latest interviews (touching unit latencies, modifying buffers all over the chip, upgrading the command buffer and related logic, etc...), so I seriously do not expect the Video Processor to be there.
 
gofreak said:
I'm still confused by where the four ALUs..per ALU..figure is coming from. Or 4 flops per cycle.

Or do those four ALUs make up the one vector ALU - one ALU for each component? 2 flops per component from each ALU?
An R500 ALU has got 2 units: the first one works on 4D vectors, the second one works on a 1D value (a scalar).
Both of these units are capable of ONE floating point multiply-add (fmadd) operation per clock cycle.
This means every clock cycle a single ALU can do one vec4 fmadd and one scalar fmadd.
A vec4 fmadd is composed of 4 multiplications and 4 adds, a scalar fmadd is composed of 1 multiplication and 1 add -> 4 + 4 + 1 + 1 = 10 floating point operations per clock cycle.
 
Nostromo said:
An R500 ALU has got 2 units: the first one works on 4D vectors, the second one works on a 1D value (a scalar).
Both of these units are capable of ONE floating point multiply-add (fmadd) operation per clock cycle.
This means every clock cycle a single ALU can do one vec4 fmadd and one scalar fmadd.
A vec4 fmadd is composed of 4 multiplications and 4 adds, a scalar fmadd is composed of 1 multiplication and 1 add -> 4 + 4 + 1 + 1 = 10 floating point operations per clock cycle.

I get that, but I'm still confused by the mention of "4 ALUs". Is that four ALUs for each component within the vec4 ALU?
 
Nostromo said:
Pana: what interview are you talking about?

There will definitely be some differences between the RSX GPU and future PC GPUs, for a couple of reasons:

1) NVIDIA stated that they had never had as powerful a CPU as Cell, and thus the RSX GPU has to be able to swallow a much larger command stream than any of the PC GPUs as current generation CPUs are pretty bad at keeping the GPU fed.

2) The RSX GPU has a 35GB/s link to the CPU, much greater than any desktop GPU, and thus the turbo cache architecture needs to be reworked quite a bit for the console GPU to take better advantage of the plethora of bandwidth. Functional unit latencies must be adjusted, buffer sizes have to be changed, etc...

http://www.anandtech.com/tradeshows/showdoc.aspx?i=2423&p=3

(middle of the page)
 
gofreak said:
I get that, but I'm still confused by the mention of "4 ALUs". Is that four ALUs for each component within the vec4 ALU?
I never mentioned '4 ALUs'. You could also say an R500 ALU contains some simpler ALUs, as 'ALU' is a soft definition.
I'd prefer to keep things simple anyway ;)
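
If it helps untangle the "4 ALUs" confusion, here's the per-ALU picture from Nostromo's description as a tiny sketch: one 4-wide vector unit (four lanes, which is where a "4 ALUs" reading could plausibly come from) plus one scalar unit, each issuing a fused multiply-add per cycle. This mirrors his description, not anything out of ATI documentation.

```python
# One R500-style shader ALU as described above: a 4-wide vector unit plus a scalar
# unit, each issuing one fused multiply-add (fmadd) per clock.  The "4 ALUs" plausibly
# refers to the four component lanes inside the vector unit.  Not from ATI docs.

def alu_flops_per_cycle(vector_lanes: int = 4, scalar_units: int = 1) -> int:
    # Each fmadd on a single lane is 1 multiply + 1 add = 2 flops.
    return vector_lanes * 2 + scalar_units * 2   # 8 + 2 = 10 flops per ALU per cycle

print(alu_flops_per_cycle())                     # 10
print(48 * alu_flops_per_cycle() * 500e6 / 1e9)  # 240.0 -> the 240 GFLOPS figure again
```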
 
AB 101:
I thought nVidia has been working for a good 2 years with Sony.
That's only when they started talking and opening up negotiations. nVidia had already been developing the graphics processor as part of their next generation GeForce line, so the chip's design wasn't done hand-in-hand with Sony for the PS3 for too much of that time nor too much of its total development.

Pimpwerx:
For one, no one knows what the internal makeup of the RSX is.
nVidia has already detailed some of the major changes they made that account for differences from past GeForces while revealing similarities that show it's scaled from past PC chips in significant ways.
RSX is a Sony chip with NVidia IPs in it.
Not at all. The graphics processor was largely designed by nVidia and based on their architecture, and Sony is contributing mostly on the implementation and integration sides.
I'm still hoping for some Chaperone stuff, or something like it, b/c that's essentially what ATI put in Xenos, some self-shadowing and AA hw that frees up bandwidth and the GPU.
The purpose of those approaches is to move deferred rendering for visible surface determination and scene division for data size manageability closer to the device and further from the game software, getting some of the benefits PowerVR enjoys like fast stencil for shadows, fast AA, and fast Z check.
That eDRAM on Xenos is easily the coolest thing about the architecture IMO.
Embedded RAM set-ups like that have some philosophical similarities to TBDLR.
From that Sony presser, it would seem that RSX was built from the ground up to do HDR.
It includes it, but such a conventional architecture was definitely not built from the ground up for it. The memory requirements are better suited to a processor with low bandwidth requirements.

sonycowboy:
we don't truly know enough about either system to say, other than by specs, the PS3 is 2x Xbox360
2x is a completely arbitrary measure without specific conditions being compared. It's like a FLOPS number without qualifying how that performance can be applied.

GhaleonEB:
I'm shocked at how well designed the 360 is given Sony/Toshiba's experience.
The cost/capability of ATi's designs has usually been ahead of Sony/Toshiba's and nVidia's graphics chips.
Contrast this debut to the DC/PS2 unveiling. The specs for the PS2 were light years ahead of the DC
Depends on which specs. Anyway, this example is the perfect reason that no part of this X360/PS3 situation should come as a surprise since it took three times the amount of silicon cost, a newer fabrication process, and a lot of time for the PS2 to beat the DC's design as it did. When the launch conditions are brought much closer as with X360 and PS3, the more cost effective design really begins to close the gap.
 
Embedded RAM set-ups like that have some philosophical similarities to TBDLR.

Now you are just making shit up ;).

What is the fundamental difference between ATI's choice and Flipper and the Graphics Synthesizer ? You have the ROP's embedded with DRAM there too :P.
 
Panajev2001a:
Now you are just making shit up ;).
The purpose of both is to solve bandwidth limitations by minimizing off-chip access in order to gain speed in intensive and/or related operations like Z determination, anti-aliasing, and stencil support.
What is the fundamental difference between ATI's choice and Flipper and the Graphics Synthesizer ?
X360's ring of logic around the eDRAM is effective for the aforementioned operations like checking visibility with a fast device-side Z pass.
 
Lazy8s said:
The purpose of both is to solve bandwidth limitations by minimizing off-chip access in order to gain speed in intensive and/or related operations like Z determination, anti-aliasing, and stencil support.

So, then, by this statement the integration of things like, say, early Z rejection or even the Z-buffer itself shares "philosophical similarities to TBDLR."

Your argument is about as abstract as possible, since anything this side of raycasting is going to share "philosophical similarities" in that the methodologies seek to reduce the computational and bandwidth requirements to render a scene, regardless of the fact that their methods of doing this are vastly different....

... You know, it's not every day someone implies that raytracing has "philosophical similarities to TBDLR" :) So, I'm going to have to side with my Italian friend and say you're full of shit; their "philosophical similarities" aren't microarchitectural, they practically end at the fabrication step in which both have embedded memories.

Inductive Fallacy... False Analogy? I do believe so.
 
Isn't it obvious which is more powerful? I mean come on, the answer is quite obvious:

We'll know in 5 years



Well, that's how it goes with these things: one has a very powerful CPU and less of a GPU, the other has a very powerful GPU and less of a CPU. They're probably both going to trade the graphics crown until the last wave of titles, when top-level devs on each side can push the machines for all they're worth and the shortcomings become obvious.

Almost like today with PS2 and Xbox: for a while the PS2 was keeping up rather well with the shit-processored Xbox, until the last wave of normal-mapped and effect-laden titles hit. Next gen it's probably going to be the same, but with a narrower margin still.
 
It will be interesting to see what Nvidia can do for PlayStation 4 graphics, when they get an entire PlayStation-length console cycle of ~6 years to develop the GPU instead of 1 to 2 years.
 
midnightguy said:
It will be interesting to see what Nvidia can do for PlayStation 4 graphics, when they get an entire PlayStation-length console cycle of ~6 years to develop the GPU instead of 1 to 2 years.

I believe ATI also had just about 2 years to work on the Xbox 360 GPU.
 
mr2mike said:
Well, that's how it goes with these things: one has a very powerful CPU and less of a GPU, the other has a very powerful GPU and less of a CPU.

I don't think this is true. There's more to point to RSX being more powerful than Xenos than not.

A lot of very uninformed comment is being made about X360's GPU around the web currently as far as I can see. Some people are thinking it's twice the power, or more, than it actually is.
 
The fact is that ATI just has more talented engineers than Nvidia nowadays.
ATI bought the brilliant ArtX team, which went on to design the 9700 core, while NV bought the 3dfx team, which then designed the GeForce FX fiasco.
On top of that, a lot of NV's more talented engineers left the company for either ATI or other companies when things started to go downhill.
I really think Sony would've preferred ATI to do their GPU, but by the time they realised they needed help ATI was already tied up with Rev/360/next PC card.

That said, when it comes to PC cards, I do think NV might come out on top with the next series of cards, simply because ATI had to put so many resources into the console GPUs it had in development. Financially I would imagine these GPUs to be more lucrative than a single gen of PC cards, though.

I'm also wondering what ATI will have done with the Rev GPU; they've been working on it since GC was done, and they've had the ArtX team on it, so it should be pretty good.
 
Hajaz said:
The fact is that ATI just has more talented engineers than Nvidia nowadays.
Oh bullshit, the fact is they both have quite talented engineering teams.

Hajaz said:
ATI bought the brilliant ArtX team, which went on to design the 9700 core, while NV bought the 3dfx team, which then designed the GeForce FX fiasco.

The ArtX myth yet again arises... The NV30 "fiasco" had little to do with 3dfx, as it was already too far along to be influenced by the 3dfx buy-out; it had much more to do with the 130nm stepping, a too-ambitious design, and a lack of rigid oversight on the project. Not to mention ATI yielded a good part.

And JFYI, the NV40's texture and shading architecture was designed by a team led by Emmett Kilgariff, who happened to be the lead architect for 3dfx on Rampage and its "Texture Computer." And we all know how inferior the NV40 (especially its shading abilities) is... /sarcasm

Hajaz said:
On top of that, a lot of NV's more talented engineers left the company for either ATI or other companies when things started to go downhill.
I really think Sony would've preferred ATI to do their GPU, but by the time they realised they needed help ATI was already tied up with Rev/360/next PC card.

The first is disingenuous, as turnover happens throughout the industry, and I've heard nothing of nVidia having problems. The second is your opinion, and I happen to highly disagree.

EDIT: And no, I don't trust the "Rage3D archives" -- I am sorry.
 
Oh, the Nvidia employees defecting thing was well covered in the GPU press at the time, if you were keeping up with such things, with interviews and everything...
I cba to dig through 2 years of old news just to link it to you though. Search the Rage3D archives if you missed it.

Surely you don't think it's pure luck with yields that made ATI's market share in PC cards grow while NV's has shrunk dramatically?
 
It's really hard to talk about PS3 HW vs Xbox 360 HW when you have so many biased people coming off the ATI vs Nvidia wars. So much misinformation. :\
 