Next-gen architecture comparison

mrklaw

I know there has been some talk of PS3 tech-wise here recently, but I thought it'd be good to have a general comparison thread.

I was wondering what Xenon's architecture is looking like, and how they are sharing the graphics workload. Xbox was simple: CPU for housekeeping, running the game etc., and a big-ass GPU doing the polys.

But Xenon apparently has 3x2 3+GHz processors in it. That's way too much for simple game logic, so are they using a similar model to PS2 - having the CPU do the transformations, with the GPU just being shaders/rasteriser? If so, does this mean a big leap from 'traditional' PC parts, and therefore any past performance is no indication of future performance?

What are the advantages/disadvantages? Removing transform leaves lots of on-die space for eDRAM/more shader pipelines, but then you have external bus bandwidth issues.
 
I doubt the CPUs in Xenon are responsible for T&L... Perhaps they can help, like in the GC case, but I doubt the entire T&L is built around them. I suppose the GPU will take the main responsibility for it.
 
mrklaw said:
then you don't need 6 3GHz processor cores, surely? Something doesn't match up.

It's not 6, it's 3 (that's rumoured). They can each handle 2 threads, but that's not the equivalent of 6 processors.

You can never have enough power for physics, AI etc. (imo).
 
The Xenon GPU is rumoured to have a unified shading model, with each shading pipeline able to function as either a pixel or a vertex shader depending on circumstances; vertex shaders can also run on the CPU. So if a developer wanted a relatively low poly load with a ton of highly complex per-pixel shaders, they would probably run all the vertex processing on the CPU and dedicate the GPU to processing pixels, whereas if you had a ton of polys with relatively simple pixel shaders, you'd probably have the GPU processing vertices as well.
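To make that trade-off concrete, here's a purely illustrative toy sketch (my own, not how any real hardware schedules work) of how a unified shader array might split its ALUs between vertex and pixel work based on relative demand:

```python
# Hypothetical sketch: split a pool of unified shader ALUs between
# vertex and pixel work in proportion to the per-frame workload.
# The ALU count and op counts below are made-up illustrative numbers.

def split_unified_alus(total_alus, vertex_ops, pixel_ops):
    """Assign ALUs proportionally to the vertex/pixel workload."""
    total_ops = vertex_ops + pixel_ops
    vertex_alus = round(total_alus * vertex_ops / total_ops)
    # Keep at least one ALU on each stage so neither pipeline starves.
    vertex_alus = max(1, min(total_alus - 1, vertex_alus))
    return vertex_alus, total_alus - vertex_alus

# Pixel-heavy scene: nearly the whole array goes to pixel shading,
# which is exactly the case where you'd move vertex work to the CPU.
print(split_unified_alus(48, vertex_ops=1_000_000, pixel_ops=47_000_000))
# -> (1, 47)

# Balanced scene: the array splits evenly.
print(split_unified_alus(48, vertex_ops=24_000_000, pixel_ops=24_000_000))
# -> (24, 24)
```

The point is just that a unified array can follow the workload each frame, instead of the fixed vertex/pixel split of a traditional GPU.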
 
arhra said:
The Xenon GPU is rumoured to have a unified shading model, with each shading pipeline able to function as either a pixel or a vertex shader depending on circumstances; vertex shaders can also run on the CPU. So if a developer wanted a relatively low poly load with a ton of highly complex per-pixel shaders, they would probably run all the vertex processing on the CPU and dedicate the GPU to processing pixels, whereas if you had a ton of polys with relatively simple pixel shaders, you'd probably have the GPU processing vertices as well.

That's interesting... I knew about the unified shading pipeline, but wasn't sure that vertex processing could be done on the CPU. That would answer why you might need more power there, physics/AI aside.
 
Is the CPU-GPU bandwidth on the Xenon enough to sustain the necessary amount of vertices? I thought the whole reason GPUs came about was to do away with bandwidth problems since your T&L will be done on the same die as the rasterizer and thus you could go with wider and faster busses. I think the CPU still needs to do dynamic geometry (memory is bad), but most geometry in a scene should be static I think. Whoa, I'm guessing too much. I'll just wait for some more concrete info. PEACE.
 
Pimpwerx said:
Is the CPU-GPU bandwidth on the Xenon enough to sustain the necessary amount of vertices? I thought the whole reason GPUs came about was to do away with bandwidth problems since your T&L will be done on the same die as the rasterizer and thus you could go with wider and faster busses. I think the CPU still needs to do dynamic geometry (memory is bad), but most geometry in a scene should be static I think. Whoa, I'm guessing too much. I'll just wait for some more concrete info. PEACE.

Current PCI-Express 16x technology has enough headroom for that. As it is, even AGP 8x is not fully utilized. It's just not done on PCs because a single 3GHz CPU is barely enough for AI/physics etc. in complex games, I believe.
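Some back-of-the-envelope numbers on that headroom claim. The vertex size and throughput figures here are my own illustrative assumptions, not from any spec or leaked doc:

```python
# Rough bus-traffic estimate if the CPU streams transformed vertices
# to the GPU. All figures below are assumptions for illustration.

AGP_8X_GBPS   = 2.1   # ~2.1 GB/s peak for AGP 8x
PCIE_16X_GBPS = 4.0   # ~4 GB/s per direction for PCIe 1.x x16

def required_gbps(vertices_per_sec, bytes_per_vertex=32):
    """Bandwidth needed to stream vertices over the bus.
    32 bytes ~ position + normal + one texture coordinate."""
    return vertices_per_sec * bytes_per_vertex / 1e9

# 100 million vertices/s would be an aggressive figure for the era.
demand = required_gbps(100e6)
print(demand, demand < PCIE_16X_GBPS)  # 3.2 GB/s - fits in PCIe x16
```

Under these assumptions even a heavy CPU-side vertex load fits in PCIe x16 (though it would saturate AGP 8x), which is consistent with the CPU, not the bus, being the practical limit on PCs.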
 
Pimpwerx said:
Is the CPU-GPU bandwidth on the Xenon enough to sustain the necessary amount of vertices? I thought the whole reason GPUs came about was to do away with bandwidth problems since your T&L will be done on the same die as the rasterizer and thus you could go with wider and faster busses. I think the CPU still needs to do dynamic geometry (memory is bad), but most geometry in a scene should be static I think. Whoa, I'm guessing too much. I'll just wait for some more concrete info. PEACE.

The CPU and GPU share the same memory, so it becomes a matter of memory bandwidth.

Also, according to the leaked docs, the GPU will be able to directly read one of the CPU caches (the L2 cache, I believe?).
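Here's a toy model (entirely my own construction, nothing to do with the actual leaked docs) of why a GPU-readable CPU cache helps: CPU-generated geometry can flow straight to the GPU without a round trip through main memory, as long as the GPU keeps pace:

```python
# Illustrative sketch only: a FIFO standing in for a cache region the
# GPU can read directly. If the consumer keeps up, procedurally
# generated vertices never touch main memory in this toy model.

from collections import deque

class L2StreamBuffer:
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.main_memory_writes = 0  # spills to RAM when the FIFO is full

    def cpu_write(self, vertex):
        if len(self.buf) >= self.capacity:
            self.main_memory_writes += 1  # GPU fell behind: spill to RAM
        else:
            self.buf.append(vertex)

    def gpu_read(self):
        return self.buf.popleft() if self.buf else None

stream = L2StreamBuffer(capacity=4)
for v in range(6):
    stream.cpu_write(v)
    stream.gpu_read()  # GPU keeps pace each step, so nothing spills
print(stream.main_memory_writes)  # -> 0
```

If the GPU stalls, the buffer fills and traffic falls back to main memory, so the cache path saves bandwidth exactly when producer and consumer stay in step.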
 
gofreak said:
The CPU and GPU share the same memory, so it becomes a matter of memory bandwidth.

Also, according to the leaked docs, the GPU will be able to directly read one of the CPU caches (the L2 cache, I believe?).
Yuppers, just saw that. Thanks. Sounds good then. CES and ISSCC can't come soon enough. PEACE.
 
gofreak said:
Also, according to the leaked docs, the GPU will be able to directly read one of the CPU caches (the L2 cache, I believe?).

Sounds like PS2 - separate chips doing the equivalent job of a combined GPU in PCs. Although it sounds limiting, it gives you potential benefits, like more space per chip to get performance and features in. The key, as Pimpwerx says, is to make sure your bus is OK.
 