XBOX NEXT GPU specs?!?

Phoenix said:
I'll toss in my 'no fucking way' and depart. IBM can't reliably fab 3.0GHz parts at sufficient yield for their own uses, but they are going to be shipping tri-core (which I have yet to ever see) 3.5GHz parts.

My magic 8-ball says:

All signs point to bullshit.

They are about 7-8 months from manufacturing, I'd imagine, so it's quite possible for this to happen by then.
 
IJoel said:
They are about 7-8 months from manufacturing, I'd imagine, so it's quite possible for this to happen by then.

Besides, if they can't get the yields up, all they have to do is just downgrade the specs a bit. 3GHz instead of 3.5GHz wouldn't bother the devs too much.
 
Phoenix said:
Well... if they can pull tri-core 3.0GHz+ parts from their ass in 7 months, AMD and Intel are doomed :D

I have my doubts about the tri-core business as well. Dual core I can see, no problem.
 
Shogmaster said:
I have my doubts about the tri-core business as well. Dual core I can see, no problem.

If they manage to do it I swear to you this - I will be one of the first software architects trying to figure out how to put Linux or OSX onto that bitch to turn it into a server. For 300 bucks they are going to give me three 3.0GHz+ cores of computational power? Shit, I'll buy them in quantities of 100...
 
These are basically the same specs that have been shown 20 times over the past year. What exactly is new here? Heck, I remember that spec sheet with a flowchart diagram and all that jazz that had all of the specs listed here almost verbatim, not to mention a bunch more stuff to boot. In other words... OLD. What's everyone getting all worked up about?
 
shpankey said:
These are basically the same specs that have been shown 20 times over the past year. What exactly is new here? Heck, I remember that spec sheet with a flowchart diagram and all that jazz that had all of the specs listed here almost verbatim, not to mention a bunch more stuff to boot. In other words... OLD. What's everyone getting all worked up about?

Because Xbox 2 is SCARY
 
GaimeGuy said:
Yes the Xenon's GPU, with the system set to release in the next year, is going to be several times more powerful than the best video card on the market right now. Riiiight.
Ohh, how quickly we forget the past.
Every console is ahead of the best video cards when it's released.
 
gohepcat said:
Ohh, how quickly we forget the past.
Every console is ahead of the best video cards when it's released.

That was true... until they started using pretty much off-the-shelf video card chipsets in the consoles themselves.
 
Phoenix said:
That was true... until they started using pretty much off-the-shelf video card chipsets in the consoles themselves.

The Xbox GPU was superior to PC GPUs at release.
 
JoshuaJSlone said:
From one generation to the next, Sony's hardware will have improved 6 years' worth compared to MS's 4.


Oh, you're making a logical error there. It doesn't matter who develops longer. It's the date that it's available. By your logic, if you started a console in 1990 and finished it today, it would be the best console ever. You're also assuming that Xbox 2 development was started when the Xbox was launched. Incorrect.

Tech moves at a pretty even rate. Microsoft's Xbox was ahead of the PS2 because it launched later. The PS2 would have been ahead of the Xbox if they had launched it a year after it.

There will be very little difference next gen. They are all launching together.
 
Duckhuntdog said:
Well gosh, Nintendo and Sony shouldn't even bother.

Besides, wasn't it the ArtX team that finally cranked the ball out of the park for ATI? Yes, for the most part.

ATI's greatest achievement, the 9700, was not developed by them IIRC. They, unfortunately, have been sticking with that tech a little too long now.
 
Phoenix said:
Superior in what sense?

The highest PC GPU at the time of the Xbox release was a GeForce 3. The Xbox was a GeForce 3 in a sense, but sported 2 vertex shaders, which meant double the geometry and double the pixel shading routines (which almost makes it a GeForce 4 Ti4200, except it lacks a few features which made it into the final GF4 design).
 
Phoenix said:
Superior in what sense?
The GPU in the Xbox is a 'spiffed up' GeForce 3, so yes, it was more powerful than the GeForce 3. I am, however, in the dark about when the GeForce 4 launched.

The NV chip in the Xbox had higher-level shaders and doubled fillrate, or something like that.
 
DopeyFish said:
The highest PC GPU at the time of the Xbox release was a GeForce 3. The Xbox was a GeForce 3 in a sense, but sported 2 vertex shaders, which meant double the geometry and double the pixel shading routines (which almost makes it a GeForce 4 Ti4200, except it lacks a few features which made it into the final GF4 design).

The NV2A and the NV20 were from the same family actually, similar to what we would call a Pro and XT comparison today. While one had the ability to push 'more' (NV2A), it did not contain any features of a generational difference. It would be like calling a 2004 Camry superior to a 2003 Camry. While the 2004 Camry has a few more horses under the hood, it's not really superior to it. Making that the definition would mean that memory with a 1ns latency improvement is superior to other memory of the same type that's just a little slower.
 
Vagabond said:
The GPU in the Xbox is a 'spiffed up' GeForce 3, so yes, it was more powerful than the GeForce 3. I am, however, in the dark about when the GeForce 4 launched.

The NV chip in the Xbox had higher-level shaders and doubled fillrate, or something like that.
Like 2 months after Xbox.
 
Phoenix said:
The NV2A and the NV20 were from the same family actually, similar to what we would call a Pro and XT comparison today. While one had the ability to push 'more' (NV2A), it did not contain any features of a generational difference. It would be like calling a 2004 Camry superior to a 2003 Camry. While the 2004 Camry has a few more horses under the hood, it's not really superior to it. Making that the definition would mean that memory with a 1ns latency improvement is superior to other memory of the same type that's just a little slower.
Nope, that's not true. The NV2A had double the vertex units (basically the exact same difference between the NV20 and NV25). The GeForce 4 was a GeForce 3 with 2 vertex units instead of one. It had a couple of other tweaks also, so...
 
Phoenix said:
The NV2A and the NV20 were from the same family actually, similar to what we would call a Pro and XT comparison today. While one had the ability to push 'more' (NV2A), it did not contain any features of a generational difference. It would be like calling a 2004 Camry superior to a 2003 Camry. While the 2004 Camry has a few more horses under the hood, it's not really superior to it. Making that the definition would mean that memory with a 1ns latency improvement is superior to other memory of the same type that's just a little slower.

Uh, GeForce 4 and GeForce 3 were from the same family too, dude. It's the same processor, just all spiffed up (i.e. dual shaders instead of just a single one).

NV2A was 3/4 the way to a GeForce4 Ti4200 (same clock speed, same polygonal output).

It was DOUBLE a standard GeForce 3, so how that is not a generational difference... I don't know. NV2A/GF4 are also DirectX 8.1 chips; the GeForce 3 was a DirectX 8 chip.

So I have no idea wtf you are getting at.

gohepcat said:
Like 2 months after Xbox.

3-4. They unveiled it in Feb '02, so it probably started reaching shops in late February, early March.
 
DopeyFish said:
Uh, GeForce 4 and GeForce 3 were from the same family too, dude. It's the same processor, just all spiffed up (i.e. dual shaders instead of just a single one).

NV2A was 3/4 the way to a GeForce4 Ti4200 (same clock speed, same polygonal output).

It was DOUBLE a standard GeForce 3, so how that is not a generational difference... I don't know. NV2A/GF4 are also DirectX 8.1 chips; the GeForce 3 was a DirectX 8 chip.

So I have no idea wtf you are getting at.

Simple. Since you know DirectX (apparently), then you know that the differences between the DX8 and DX8.1 feature sets were insignificant compared to, say, the difference between the fixed-function pipeline in DX8 and the programmable pipeline in DX9, OpenGL 1.5, etc. The GeForce4 and the GeForce3 are actually not from the same family. The NV2A was engineered around the nForce microcontroller design which became the nForce reference motherboard design. Dunno if you sat through any of nVidia's seminars at GDC or any of their programmers' summits, but if you did then you'll recall that there is a distinct difference in the bus structures of the GeForce3 and the GeForce4 and the mechanisms through which high-latency system memory is accessed when going across the bus.

A generational difference is something that the previous generation would be incapable of doing within the same or even a modified rendering path. Shader Model 2-3 DX9 and OGL 2 parts are generationally different from their DX7/OGL 1.2 counterparts in fairly significant ways, requiring you to write a different rendering path. So I have to ask, do you write code in D3D or OpenGL, or are you just giving me numbers off a spec sheet as the foundation for your hypothesis?
 
gohepcat said:
Nope, that's not true. The NV2A had double the vertex units (basically the exact same difference between the NV20 and NV25). The GeForce 4 was a GeForce 3 with 2 vertex units instead of one. It had a couple of other tweaks also, so...

FYI, the difference between a stock video card and the Pro or XT variant is that it has double the pixel/vertex/texel pipelines of the stock card.

Geek on...
 
m0dus said:
I guess I'm saying, in terms of computer technology, a more protracted dev cycle that begins 1-2 years earlier isn't necessarily better, IMO.

Well, a few things you can count on it being are cheaper, better tested, and less risky.
 
Phoenix:
Even though your comments were not directed at me, I consider myself 'skooled' on GeForce3, NV2A and GeForce4. I didn't know some of the things you mentioned, because I too thought that the GF3 and GF4 were from the same family. Well, almost. Like Voodoo1 and Voodoo2.

1.) I know NV2A has some stuff that even the GF4 does not have.

2.) The GF4 has some stuff that the NV2A does not have.


NV2A and GF4 both have a large increase in geometry / vertex performance over the plain GF3, GF3 Ti200 and even GF3 Ti500, because both NV2A and GF4 have a 2nd vertex shader and a bump in clockspeed (moreso with the Ti 4400 and Ti 4600), which results in 2-3x the geometry performance over the plain GF3.
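As a rough back-of-envelope check of that 2-3x figure (just a sketch; it assumes vertex throughput scales with vertex shader count times core clock, and the clocks other than NV2A's 233 MHz are from memory rather than a spec sheet):

```python
# Back-of-envelope: vertex throughput ~ vertex shader count * core clock (MHz).
# Clock figures other than NV2A's 233 MHz are approximate / from memory.
chips = {
    "GF3 (plain)": (1, 200),
    "NV2A (Xbox)": (2, 233),
    "GF4 Ti4200":  (2, 250),
    "GF4 Ti4600":  (2, 300),
}

baseline_units, baseline_clock = chips["GF3 (plain)"]
baseline = baseline_units * baseline_clock

for name, (units, clock) in chips.items():
    print(f"{name:12s} ~{units * clock / baseline:.1f}x the plain GF3")
```

That lands right in the 2-3x range quoted above.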


Maybe you can skool me even more :) Plus, I'm sure I still don't have some things correct in what I said above.


Dopey:
NV2A was 3/4 the way to a GeForce4 Ti4200 (same clock speed, same polygonal output).

IIRC, NV2A is clocked at 233 MHz and the GeForce4 Ti4200 is clocked at 250 MHz. I'm sure the Ti4200 gets slightly higher polygon performance than NV2A, at least in theory, not taking into account PC bottlenecks that the Xbox does not have (then again, the Xbox might have bottlenecks that the PC doesn't have, I don't know).

The XGPU was originally to be clocked at 300 MHz, then it was knocked down to 250 MHz, and 233 MHz was the final shipping clockspeed.
 
The same chip that will power the Xbox 2 will be on the market for the PC at roughly the same time. The Xbox GPU will have some extras on it.
 
The same chip that will power the Xbox 2 will be on the market for the PC at roughly the same time. The Xbox GPU will have some extras on it.

Maybe, maybe not.

R520 for the PC is probably going to be quite a bit different from the Xbox 2 VPU (R500 or R5xx).

I don't think R520 will have eDRAM / embedded DRAM.

Then it might be a long wait for R600 (summer or fall 2006?), which should have more in common with the Xbox 2 VPU (unified shaders).
 
I was... somehow wrong about triangle rates / clockspeed.

But I'm not sure how the math works out.

GeForce 4 @ 250 MHz does 112 mvps
NV2A @ 233 MHz does 117.5 mvps

I've seen some calcs that people use to get the accurate number, but wouldn't this be a little screwed up? (nVidia claims both numbers)

And no, I don't wander far into the tech field, Phoenix... my knowledge is limited :P
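One way to see why those two figures look odd side by side is to divide each quoted rate by its clock. This is only a sketch built from the numbers quoted above; it doesn't say which figure is right.

```python
# Per-clock vertex rates implied by the two quoted figures above.
gf4_rate,  gf4_clock  = 112.0e6, 250e6   # GeForce4 @ 250 MHz, 112 Mverts/s (as quoted)
nv2a_rate, nv2a_clock = 117.5e6, 233e6   # NV2A @ 233 MHz, 117.5 Mverts/s (as quoted)

print(f"GF4:  {gf4_rate / gf4_clock:.3f} vertices per clock")
print(f"NV2A: {nv2a_rate / nv2a_clock:.3f} vertices per clock")
# 0.448 vs 0.504 -- the NV2A figure implies more work per clock than the GF4,
# which is why quoting both numbers side by side looks screwy.
```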
 
xexex said:
Phoenix:
Even though your comments were not directed at me, I consider myself 'skooled' on GeForce3, NV2A and GeForce4. I didn't know some of the things you mentioned, because I too thought that the GF3 and GF4 were from the same family.

You could consider them 2nd generation cousins at best. They aren't as radically different as the FX series (which was a core redesign), but they are very different fish in the nVidia pool. The NV2A is where the GF4 was heading when it was 'frozen' for the Xbox and then somewhat mangled to sit as an embedded part in the nForce design. It was very fun talking to the guys. I remember how they used to go on at length about the nForce audio controller. In the reference design there are some significant differences in the way the GeForce3 and the GeForce4 operate on their own internal busses and bridges. Think of it this way - the GF3 development started heading toward project X. Somewhere along the way someone said 'hey, I like where you are going with this'. This path now branches, with one road that eventually becomes the GeForce4 and a (slightly) shorter road that ends in the NV2A.

NV2A and GF4 both have a large increase in geometry / vertex performance over the plain GF3, GF3 Ti200 and even GF3 Ti500, because both NV2A and GF4 have a 2nd vertex shader and a bump in clockspeed (moreso with the Ti 4400 and Ti 4600), which results in 2-3x the geometry performance over the plain GF3.

This is of course very true. But tossing on more and more shaders does not a generational leap make (and actually even that becomes impossible beyond a certain point, even with a die shrink). That just makes them faster versions of each other. Once you start looking at the underlying algorithms on the chips you start seeing the changes: the Z/W buffer, stencil buffer, anti-aliasing and other buffer changes are where you start seeing some key differences.

It's similar to how the (unfortunately) crappy FX5200 line is pretty slow and nasty compared to the GeForce4 line, but there are things that the FX5200 can do that the GF4 just can't do in hardware.
 
Phoenix said:
FYI, the difference between a stock video card and the Pro or XT variant is that it has double the pixel/vertex/texel pipelines of the stock card.

Geek on...
Nope. I'm not trying to be a jerk, but that's wrong. Back then XT and Pro models were always just speed bumps, never a change in architecture. (I think some current ATI boards have more rendering paths)
 
We don't have the information to compare, even slightly, the power and programmability of PS3 vs Xbox 2 yet, so why even bother?

On the one hand, you could say that ATI have a better track record of performance, but then Sony hasn't released a graphics chip in nearly 5 years. So there is *no* track record to go on. Therefore it isn't really fair to compare.

Some are saying that ATI has the benefit of shorter dev cycles, and therefore it could be better than stuff started way back in 2002. Good point - it's when you start and what your baseline is that counts, not so much when you will release.

Having said that, ATI's PC heritage could be a hindrance. Taking 'the next chip' in their PC roadmap might not be a suitable match for the needs of a console. Lower resolutions, lower memory, different architectures mean different needs. IMO, Sony are likely to be speccing their solution better for a home environment. Whether they've aimed high enough, or are hindered by being in development for too long, I don't know.


PSP is a nice improvement over PS2 (graphics quality and rendering features-wise, not power-wise) but Sony has not proved that they are in the same league as Nvidia or ATI.

You know it's a handheld, right?
 
This part seems way off. The X800XT is rated @ 700 million vertices per second with over 8 Gigapixels per second at peak. The R500-based Xenon GPU is supposedly much faster than the X800XT. I would have thought it would be more like at least 4 billion vertices per second peak and over 20 Gigapixels per second peak fillrate.
The limiting factor for triangle rate is probably the triangle setup/scan conversion. 500m tris/sec gives you 8.333m tris/frame at 60fps, which is 9.04 tris/pixel at 720p, which should be more than enough (if I'm doing the math right). I'd imagine that most vertices will have a heck of a lot more done to them than the straightforward minimal transform that the PS2/Xbox peak vertex numbers are based on...
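Spelling that arithmetic out, as a quick sanity check of the numbers above (720p at 60fps assumed):

```python
# Sanity check: triangles per frame and per pixel at 500M tris/s, 60 fps, 720p.
tris_per_sec = 500e6
fps = 60
width, height = 1280, 720

tris_per_frame = tris_per_sec / fps                   # ~8.33 million
tris_per_pixel = tris_per_frame / (width * height)    # ~9.04

print(f"{tris_per_frame / 1e6:.3f}M tris/frame, {tris_per_pixel:.2f} tris per pixel")
```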
 
Yeah, but that 500m tris/sec is based on 500m verts/sec. The old 'one vertex = one poly' is being rolled out again. You need some serious tools to get that level of efficiency - you need your entire gameworld to be made of one big long strip of tris. Not going to happen.
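To put numbers on the 'one vertex = one poly' point, here's a small sketch (not tied to any particular engine): an N-triangle strip needs N+2 vertices, while independent triangles cost 3 each, so verts-per-poly only approaches 1 for very long unbroken strips.

```python
# Vertices needed per triangle for different submission styles.
def strip_verts_per_tri(n_tris):
    # A single unbroken triangle strip of n triangles uses n + 2 vertices.
    return (n_tris + 2) / n_tris

print("independent triangles: 3.00 verts/tri")
print(f"strip of 10 tris:      {strip_verts_per_tri(10):.2f} verts/tri")
print(f"strip of 1000 tris:    {strip_verts_per_tri(1000):.2f} verts/tri")
# Only an impractically long, unbroken strip gets close to 1 vertex per poly,
# which is what the '500M tris/sec = 500M verts/sec' equivalence assumes.
```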

I was also sceptical, since 'peak tris' are usually irrelevant for real-world usage once you add even basic lighting and texturing. But that list says 'for non-trivial polygons', so that number should be achievable with at least 1 texture and 1 light.
 
Can someone explain how they can give an exact poly output figure when the ALUs can be divvied up in whichever way? Are they giving the figure based on a reasonable in-game split (say 24 pixel / 24 vertex), or is it just the limit of the setup engine?

Also, how many ALUs (if they are comparable) do NV40 or R420 have?

Apparently there are R500 GPUs going out to devs; these would surely be underclocked, as they are the actual chips that will be in the console.
 
mrklaw said:
I was also sceptical, since 'peak tris' are usually irrelevant for real-world usage once you add even basic lighting and texturing. But that list says 'for non-trivial polygons', so that number should be achievable with at least 1 texture and 1 light.
No, that number is achievable for a LOT more.

It's pretty simple math too:
The text rates the GPU at 96 shader ops per cycle (48 vector + 48 scalar).
For a 500MHz chip, that gives you exactly "96" shader ops per vertex @ 500MVert/sec.

For the X800XT, you get a "whopping" "8" (4+4, same dual issue as above) shader ops per vertex @ 700MVert/sec.
Sounds like around 10x lower to me.

Something similar can be shown for pixel processing.

Obviously - the text spins around the fact that resources are shared, so you never use all of it for just one or the other, but nonetheless it's quite a high-performance part.
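Writing that per-vertex budget out as laid out above (a sketch using only the numbers in this thread; the X800XT line follows the post's own 4+4 simplification rather than ATI's official figures):

```python
# Per-vertex shader op budget, using the numbers from the post above.
xenon_ops_per_clock = 48 + 48      # 48 vector + 48 scalar ops per cycle
xenon_clock         = 500e6        # 500 MHz
xenon_vert_rate     = 500e6        # 500 Mverts/sec

xenon_ops_per_vertex = xenon_ops_per_clock * xenon_clock / xenon_vert_rate   # = 96

x800_ops_per_vertex = 4 + 4        # the post's "8 (4+4, same dual issue)" figure

print(f"Xenon GPU: {xenon_ops_per_vertex:.0f} shader ops per vertex at peak vertex rate")
print(f"X800XT:    {x800_ops_per_vertex} shader ops per vertex (per the post)")
print(f"ratio:     ~{xenon_ops_per_vertex / x800_ops_per_vertex:.0f}x")       # ~12x, i.e. 'around 10x'
```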
 
Fafalada said:
No, that number is achievable for a LOT more.

It's pretty simple math too:
The text rates the GPU at 96 shader ops per cycle (48 vector + 48 scalar).
For a 500MHz chip, that gives you exactly "96" shader ops per vertex @ 500MVert/sec.

For the X800XT, you get a "whopping" "8" (4+4, same dual issue as above) shader ops per vertex @ 700MVert/sec.
Sounds like around 10x lower to me.

Something similar can be shown for pixel processing.

Obviously - the text spins around the fact that resources are shared, so you never use all of it for just one or the other, but nonetheless it's quite a high-performance part.

Are you saying that, based on the listed rumour specs, this thing should not only be able to push 500m verts/sec, but also apply a full 96 ops for each and every one of them?

What about fillrate & pixel shaders? Although at 9 million polys on screen per frame, I guess vertex/pixel is about the same thing.
 
I seriously considered getting an Xbox recently, and might just get this thing next year. Sounds like they've got some cool stuff coded into this chip. But I'm confused. I thought Sony was aiming for numbers well north of 1B verts/sec for the PS3. What exactly is MS doing shooting so low with not just the verts, but the fillrate too? Is 4GP/s gonna suffice in the age of HDTV? What RAM/bandwidth solutions have been pursued? I've been out of the gaming tech scene for a little while now, and know little about the MS system. I like that 10MB of flexible eDRAM, and the GPU->L2 cache link sounds very interesting, but what's the bandwidth gonna be? That 1MB of L2 not only seems small to feed 3 cores IMO, but it better have the bandwidth to feed them and the GPU, or all that performance goes out the window. I've gotta look around and read up on this stuff. One article ain't gonna cut it. :?
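On the 'is 4GP/s enough for HDTV' question, a quick back-of-envelope sketch, assuming 60fps and taking the rumoured peak fillrate at face value (real budgets depend on blending, Z-only passes, AA and so on):

```python
# How many times per frame could a 4 Gpixel/s fillrate touch every pixel on screen?
peak_fill = 4e9    # pixels per second (rumoured peak figure)
fps = 60

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080)}.items():
    pixels_per_sec = w * h * fps
    print(f"{name}: ~{peak_fill / pixels_per_sec:.0f}x overdraw budget per frame")
```

That works out to roughly 72x at 720p and 32x at 1080p, as a peak figure.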

That said, the next Xbox has a really good chance to pick up marketshare. But if they're launching next Fall, where's the product release? I thought Sony did a good job with the PlayStations by starting the PR campaigns over a year in advance. Xbox Next hype should be starting soon, I hope. PEACE.
 
What I find funny is that folks actually think the PS3 is going to hit those theoretical numbers ;). If they do, then the guy wanting to build a server farm with Xbox 2s should wait.
 
MS did not want to kill this holiday season by hyping Xbox Next too soon. Brace yourself for January 5.
As far as I can tell, this will be the calendar:
- January 5: Bill Gates unveils the final design, with some tech demos
- GDC: Full specs, more tech demos, maybe some game announcements
- E3: Many games unveiled and playable
Nothing official of course, but I'm pretty sure this is more or less the plan.
MS has 1st/2nd party games up to March or April, so we won't get a lot of true Xbox 2 games media until then.
 
Blimblim with gofreak edits said:
As far as I can tell, this will be the calendar:
- January 5: Bill Gates unveils the "vision", some specs, some tech demos
- GDC: Full specs, more tech demos, maybe some game announcements
- E3: Final hardware design, many games unveiled and playable
Nothing official of course, but I'm pretty sure this is more or less the plan.

Fixed.

I wouldn't expect the final consumer design to be unveiled so soon.
 
gofreak said:
Fixed.

I wouldn't expect the final consumer design to be unveiled so soon.
Maybe, but MS unveiled the Xbox console design at CES 2001, so I'm expecting them to do the same.
Of course I could be wrong, but I think only showing tech demos would be too little for Bill Gates.
 
CrimsonSkies said:
What I find funny is that folks actually think the PS3 is going to hit those theoretical numbers ;). If they do, then the guy wanting to build a server farm with Xbox 2s should wait.
In-game? No. On paper? Why not? Then again, I'm basing this on the scuttlebutt from late last year. I don't know if the number of cores has changed, or the bandwidth figures. The thing is that the PS2 could put up gaudy numbers, but lacked bandwidth in some key areas (image quality aside). The whole concept of CELL and the BE was supposed to take care of that, right? I mean, it's supposed to have huge pools of eDRAM, a large external RAM dump and fat pipes running to both chips and the RAM, hopefully at the system clock. I hope I'm not off the mark there. This was what I gathered from the patents. I thought it was accepted that the target numbers should be over 1B verts; the concern was always whether the libraries would be up to snuff for devs soon enough, or if it'll be a mess like the PS2's early days.

I figure this next gen should be a battle of libraries. Who's gonna have the most feature-rich libraries? But Kutaragi promised an increase of a couple orders of magnitude. If those Xbox Next figures are for in-game, and not just raw, then that's awesome, and I'm pleased. But from what I gather in reading that article, those are peak figures. Cell is supposed to be so bandwidth-driven that I'd be surprised if it put up huge numbers. :? PEACE.
 
gohepcat said:
Nope. I'm not trying to be a jerk, but that's wrong. Back then XT and Pro models were always just speed bumps, never a change in architecture. (I think some current ATI boards have more rendering paths)

I'm actually referring to today. There weren't any Pro or XT GeForce 3 cards back then.
 
Doesn't AMD have a processor coming out that's like... well, it's incredibly faster than Intel's current line?

Something like AMD's 2.5GHz chip can go up to 3.5GHz... while Intel's fastest chip goes only at like 2.7GHz.

I don't know what the hell I'm talking about, but somebody make sense of it for me.
 