Rumor: Xbox 3 = 6-core CPU, 2GB of DDR3 Main RAM, 2 AMD GPUs w/ Unknown VRAM, At CES

Is the PS3 a 6-core CPU?

I know it's like this multi-core thing that all developers were pissed off at, until ND showed them the way?

The PS3 technically has 7 cores, though the cores are asymmetrical, so it's quite different from a true seven-core CPU.

Only the master core, the PPE, is a fully functional core. The other 6, the SPEs, have comparatively limited (though still incredibly useful) functionality.
 
Assuming Microsoft are using GDDR5, then 2GB is all they can promise developers at this point. The densities required to offer 4GB in a console just aren't there yet. You can't have developers targeting a 4GB box if you're not 100% sure you can deliver it. Far better to give them a 2GB target and leave 4GB as a nice bonus later on, if it can happen.
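
To put rough numbers on that (my assumptions here: 2 Gbit chips as the densest GDDR5 parts realistically available right now, and a 256-bit bus):

Code:
8 chips x 32 bits wide          = 256-bit bus
8 chips x 2 Gbit (256 MB each)  = 2 GB total
4 GB needs 4 Gbit parts, or 16 chips run in clamshell mode

So 2GB falls out naturally from the bus width and current chip density; 4GB depends on parts that may or may not arrive in time.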
 
The PS3 technically has 7 cores, though the cores are asymmetrical, so it's quite different from a true seven-core CPU.

Only the master core, the PPE, is a fully functional core. The other 6, the SPEs, have comparatively limited (though still incredibly useful) functionality.


So were the developers pissed because the PS3 was asymmetrical, or because it had multiple cores? I remember a few (I can't remember names) lambasting the "Cell".
 
So were the developers pissed because the PS3 was asymmetrical, or because it had multiple cores? I remember a few (I can't remember names) lambasting the "Cell".

Both. Or more precisely:

1 - it invited devs into a world of relatively high parallelism when many would still have preferred just better single-threaded performance, even if the latter wouldn't have the same legs. Most devs at the start of this gen didn't really know how to exploit this much parallelism. There was a fair bit of debate early in the generation about whether Sony (or MS for that matter) would have been better off with just a typical dual-core Intel, and whether the potential performance gains once people wrapped their heads around Cell were worth the sweat, etc.

2 - the SPUs threw out lots of programmer 'comforts' (something had to give to pack 8 of them on there), leaving some performance pitfalls for the unwary programmer - e.g. no hardware branch prediction, a memory model that requires careful attention, etc. They are quite fully functional - they can manage themselves independently, and you can throw C code on there and it'll run (even if not necessarily very efficiently) - but getting the best out of them requires care. That makes it difficult to just throw anyone onto SPU coding (e.g. a junior programmer), reducing engineer mobility, etc.

3 - there was asymmetry, two types of processor, which reduces code mobility to some degree etc.

Developers have, I think, been getting more used to the idea of parallelism, so hopefully the higher parallelism of next-gen systems won't be as difficult a curve. I don't think devs 'hate' Cell anymore; I think some of the more ambitious ones may in fact love it now that they can see what kind of performance is possible with SPUs (judging by tech presentations... third-party ones too). But I can also understand why some devs, particularly 5 years ago, were sort of appalled by it.
 
Both. Or more precisely:

1 - it invited devs into a world of relatively high parallelism when many would still have preferred just better single-threaded performance, even if the latter wouldn't have the same legs. Most devs at the start of this gen didn't really know how to exploit this much parallelism. There was a fair bit of debate early in the generation about whether Sony (or MS for that matter) would have been better off with just a typical dual-core Intel, and whether the potential performance gains once people wrapped their heads around Cell were worth the sweat, etc.

2 - the SPUs threw out lots of programmer 'comforts' (something had to give to pack 8 of them on there), leaving some performance pitfalls for the unwary programmer - e.g. no hardware branch prediction, a memory model that requires careful attention, etc. They are quite fully functional - they can manage themselves independently, and you can throw C code on there and it'll run (even if not necessarily very efficiently) - but getting the best out of them requires care. That makes it difficult to just throw anyone onto SPU coding (e.g. a junior programmer), reducing engineer mobility, etc.

3 - there was asymmetry, two types of processor, which reduces code mobility to some degree etc.

Developers have, I think, been getting more used to the idea of parallelism, so hopefully the higher parallelism of next-gen systems won't be as difficult a curve. I don't think devs 'hate' Cell anymore; I think some of the more ambitious ones may in fact love it now that they can see what kind of performance is possible with SPUs (judging by tech presentations... third-party ones too). But I can also understand why some devs, particularly 5 years ago, were sort of appalled by it.
And it's why engines designed for an earlier generation of parallelism aren't exactly well suited to it unless they're properly optimized. Cell is closer to conventional than the PS2-era Emotion Engine, but esoteric enough to cause compatibility issues.
 
Both. Or more precisely:

1 - it invited devs into a world of relatively high parallelism when many would still have preferred just better single-threaded performance, even if the latter wouldn't have the same legs. Most devs at the start of this gen didn't really know how to exploit this much parallelism. There was a fair bit of debate early in the generation about whether Sony (or MS for that matter) would have been better off with just a typical dual-core Intel, and whether the potential performance gains once people wrapped their heads around Cell were worth the sweat, etc.

2 - the SPUs threw out lots of programmer 'comforts' (something had to give to pack 8 of them on there), leaving some performance pitfalls for the unwary programmer - e.g. no hardware branch prediction, a memory model that requires careful attention, etc. They are quite fully functional - they can manage themselves independently, and you can throw C code on there and it'll run (even if not necessarily very efficiently) - but getting the best out of them requires care. That makes it difficult to just throw anyone onto SPU coding (e.g. a junior programmer), reducing engineer mobility, etc.

3 - there was asymmetry, two types of processor, which reduces code mobility to some degree etc.

Developers have, I think, been getting more used to the idea of parallelism, so hopefully the higher parallelism of next-gen systems won't be as difficult a curve. I don't think devs 'hate' Cell anymore; I think some of the more ambitious ones may in fact love it now that they can see what kind of performance is possible with SPUs (judging by tech presentations... third-party ones too). But I can also understand why some devs, particularly 5 years ago, were sort of appalled by it.

Also, the difficulty of development (having to compile SPE code and PPE code separately, then link them and see how they interact, for instance) and the quality of Sony's tools (which have improved over time, but there are limits because of fundamental problems like the one I just mentioned). Development on other platforms is just more pleasant and thus quicker; it's easier to rapidly prototype, iterate and improve your code on the fly.

It's not just about the Cell, other components are problematic as well (RSX is relatively weak so certain tasks are better moved onto the CPU which is not something you would normally have to - or initially even know how to - do; the amount of available memory and its structure is also a disadvantage when compared to the more comfortable situation on Xbox 360).
 
Also, the difficulty of development (having to compile SPE code and PPE code separately, then link them and see how they interact, for instance) and the quality of Sony's tools (which have improved over time, but there are limits because of fundamental problems like the one I just mentioned). Development on other platforms is just more pleasant and thus quicker; it's easier to rapidly prototype, iterate and improve your code on the fly.

It's not just about the Cell, other components are problematic as well (RSX is relatively weak so certain tasks are better moved onto the CPU which is not something you would normally have to - or initially even know how to - do; the amount of available memory and its structure is also a disadvantage when compared to the more comfortable situation on Xbox 360).

Sure, but the question I think was just about whether the CPU specifically was considered difficult because of its parallelism (and thus whether some of these challenges would apply to equally or more parallel next-gen processors), or because of characteristics that will be specific to Cell. I don't think it was a general question about the PS3. Your first bit about asymmetry is relevant - I think I mentioned that in point 3. I think the answer is 'both', but with regard to the part that will be relevant to next-gen CPUs - that high parallelism can be challenging to utilise - devs are in a better position now than 5 years ago; there's been a lot of learning about scalability. Some of the models that fell out of the learning process this gen will probably still be popular and effective next-gen (e.g. the task queue model). Devs would take to, say, a 6-core CPU today much better than they could in 2005 or 2006, even aside from an assumed easier/friendlier core type.
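
As a rough illustration of that task queue model, here's a minimal sketch in plain C with pthreads - no console SDK involved, and names like task_t and queue_push are just placeholders, not any engine's actual API:

Code:
#include <pthread.h>
#include <stdlib.h>

typedef struct task {
    void (*run)(void *arg);   /* the job to execute */
    void *arg;
    struct task *next;
} task_t;

typedef struct {
    task_t *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
    int shutting_down;
} queue_t;

/* Each worker loops forever, pulling whatever job is next. Any core can
   run any job, which is what lets this scale with the core count. */
static void *worker(void *q_)
{
    queue_t *q = q_;
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (!q->head && !q->shutting_down)
            pthread_cond_wait(&q->nonempty, &q->lock);
        if (!q->head) {                 /* shutting down, queue drained */
            pthread_mutex_unlock(&q->lock);
            return NULL;
        }
        task_t *t = q->head;
        q->head = t->next;
        if (!q->head)
            q->tail = NULL;
        pthread_mutex_unlock(&q->lock);

        t->run(t->arg);                 /* do the work outside the lock */
        free(t);
    }
}

/* Producers (game code) push jobs from any thread. */
void queue_push(queue_t *q, void (*run)(void *), void *arg)
{
    task_t *t = malloc(sizeof *t);
    t->run = run;
    t->arg = arg;
    t->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = t;
    else
        q->head = t;
    q->tail = t;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

Spawn one worker per hardware thread with pthread_create and push jobs from anywhere; since any core can run any job, the same code runs unchanged whether the machine has 2 cores or 6.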
 
[Nintex] said:
A monkey and a frog predicting the next generation, let's sell the movie rights to Hollywood and get rich.

Only if I can join in. I'm an overcooked hamsteak. IT'S PERFECT!

Type, size and price are irrelevant. If it's dedicated to the GPU, it's VRAM. Stop trying to create distinctions where they don't exist.

You might as well count CPU cache as dedicated CPU RAM in that case.
 
It was specifically the Cell I was asking about: was it a 6-core CPU? And if these rumoured specs are something similar to Cell, how would developers react?
 
So were the developers pissed because the PS3 was asymmetrical, or because it had multiple cores? I remember a few (I can't remember names) lambasting the "Cell".
They were pissed because the SPEs have an unusual memory model. They can't just read/write from/to any memory address in the system. They only have full random access to their own local store. To work with main memory, they have to pull and purge whole blocks of memory with DMA accesses, which necessitates a software design where data that will be needed "soon" can be grouped together in memory, and transfers can be initiated well ahead of time to cover the DMA latency.

There's actually a seventh active SPE, but it's generally ok to ignore it because it's reserved for the operating system.

The extra core, the PPE, has the traditional memory model (coherent cache covers up random memory access automatically), but it alone isn't particularly beefy. If developers rely on that one core exclusively, their code won't run very fast.

Though it bears pointing out that the PPE is not naturally a "master" core. If all legacy code were refactored to work with the SPE model, you could actually let the PPE idle and run everything you want on SPEs. They are completely functional general-purpose cores; the only thing that makes it hard is the aforementioned memory model. Using them as job consumers and letting the PPE do orchestration work is just the easiest, most straightforward way to split things up in an inherited code base.
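
To make that DMA pattern concrete, here's a minimal sketch of the classic double-buffered SPE loop, written against the spu_mfcio.h intrinsics from IBM's Cell SDK. CHUNK_SIZE, consume and process_chunk are placeholder assumptions; this is illustrative, not production code:

Code:
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK_SIZE 16384   /* bytes per transfer - a placeholder choice */

/* Two local-store buffers: work on one while the other is filling. */
static char buf[2][CHUNK_SIZE] __attribute__((aligned(128)));

static void process_chunk(char *data, unsigned size)
{
    (void)data; (void)size;   /* placeholder for the real work */
}

/* Stream 'nchunks' chunks from main memory at effective address 'ea'. */
void consume(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;

    /* Kick off the first transfer well before the data is needed. */
    mfc_get(buf[cur], ea, CHUNK_SIZE, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; i++) {
        unsigned next = cur ^ 1;

        /* Start fetching chunk i+1 while chunk i is still in flight. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK_SIZE,
                    CHUNK_SIZE, next, 0, 0);

        /* Block until the DMA tagged 'cur' completes... */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        /* ...then process it while the other buffer fills. */
        process_chunk(buf[cur], CHUNK_SIZE);

        cur = next;
    }
}

The point is that the transfer for chunk i+1 is in flight while chunk i is being processed, so the DMA latency is covered instead of stalling the SPU - exactly the "initiate transfers well ahead of time" design described above.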
 
Last I checked the eDRAM was dedicated to the GPU. Therefore it's VRAM. It's possible to have both unified and dedicated memory within the one system, you know.
In the case of the X360 it's dedicated to the ROPs, not the GPU as a whole.
The GPU has no access to the eDRAM beyond the ability to send data for the ROPs to write (no shader can read from eDRAM).
 
They ain't going to show the new Xbox... there's no way. They would have garnered some attention for this, or something would have leaked. There isn't even a site counting down to this conference. If they do show anything, it'll be like a "The future will be shown at E3 2012" slide at the very end.
 
GDDR5 or not, only 2GB of main RAM in the next Xbox is utterly pathetic and I hope they reconsider if that is truly what they're doing.
 
Hey guys.

The CES thread for Microsoft will be in the OT. If there are gaming related announcements, feel free to talk about them over here as well. Because there's no evidence that MS's conference will be gaming-focused, OT is the primary place to talk about it.

If this rumour does not come true, we will lock this thread. If this rumour does come true, we will lock this thread and create a new one, obviously :p
 
The Xbox Twitter account keeps mentioning Steve Ballmer's CES keynote. What to think?

That Xbox is a consumer device and that CES is a consumer electronics show...

You'll probably get a new wireless headset out of the night and an overview of what Xbox is now capable of.

CES has been a letdown for the last few years for people expecting cool gaming things.
 
The new box will not be announced or hinted at, guys. There's a better chance of a Dragonite flying through my window than of that happening.
 
Microsoft usually reveals the system first and then does an E3 blowout, so if it's planned for E3, I would think they reveal it sometime between now and March/April, with E3 as the blowout.
 
The Xbox Twitter account keeps mentioning Steve Ballmer's CES keynote. What to think?

Probably wants everyone to pay attention to the inevitable sales numbers. We also might get to hear Ballmer talk about how every company at CES is copying Kinect, proving once and for all that it's the FUTURE! (I expect Ballmer to shout "future".)
 
GDDR5 or not, only 2GB of main RAM in the next Xbox is utterly pathetic and I hope they reconsider if that is truly what they're doing.

As discussed previously in this thread, you're wrong.

It's utterly pathetic that people compare amounts of RAM in a PC to how much a console needs, how about that?


It's much easier for people who know little about the actual hardware/tech involved to focus on a number without context than try to understand what that number actually means.

This.
 
Probably wants everyone to pay attention to the inevitable sales numbers. We also might get to hear Ballmer talk about how every company at CES is copying Kinect, proving once and for all that it's the FUTURE! (I expect Ballmer to shout "future".)

Will there be sweating?
 