PSM: PS4 specs more powerful than Xbox 720

Well, I'd like for them to use at least 16 or so for soft body (particularly cloth and hair) physics. That should be enough to do a reasonably good job for a modest number of characters.

Physics, physics and more physics is the only possible answer. However, interaction with those physics is always going to require the last 'tangible' thread to finish before the cause/effect can be processed.

There is simply no need to have this many threads on the CPU. The GPU assembles the image frames; that is the only reasonable place to scale the number of threads upwards.
 
I'm so pumped about the unveiling of the new hardware for PS4 and Xdude720, one of my favourite things about new generations. I just love seeing sexy new hardware.
 
Just what threads are devs meant to be putting on these wishful 32 SPUs?

There are games today splitting tasks into enough small batches of work to keep a lot of threads going at once - if the threads were there.

Whatever type of core these systems use, it's likely there'll be more running simultaneously. I don't know about 32, but based on the rumoured Xbox 3 spec I would expect at least 12 hardware threads there.

Every thread you split image processing into adds frame lag.

It depends entirely on how much processing you're doing per thread. If it's a relatively trivial amount of work, maybe. But if the work to be done dwarfs the setup cost of more batches/threads, then having a higher number of smaller batches of pixels would result in lower overall latency.

Anyway, whether the CPU should be doing that particular work in the first place will depend on the overall setup, the GPU resources etc. It may make more sense to do it all on the GPU in a PS4 (where, you know, it'll be done using hundreds of threads :P)
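To put that batching point in rough numbers, here's a throwaway model - the setup and work costs are completely made up, it's only meant to show the shape of the curve:

Code:
// Toy latency model: every batch pays a fixed setup/dispatch cost, then does
// its slice of the pixel work. Both numbers below are invented for illustration.
#include <cstdio>

int main() {
    const double setup_us = 50.0;        // assumed per-batch setup cost
    const double total_work_us = 8000.0; // assumed total image-processing work

    for (int threads : {1, 2, 4, 8, 16, 32}) {
        // Batches run in parallel, so the latency is one batch's setup cost
        // plus that batch's share of the work.
        double latency = setup_us + total_work_us / threads;
        std::printf("%2d threads -> ~%.0f us\n", threads, latency);
    }
    return 0;
}

Once the real work dwarfs the setup cost, latency falls roughly with the thread count; if the setup dominated, the extra batches would buy you nothing, which is the "it depends" above.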
 
Just what threads are devs meant to be putting on these wishful 32 SPUs?

Well, they're all gonna be running job systems, so it's not like they'll be thinking in terms of threads. There's more to good-looking games than drawing pixels. The number, complexity, fidelity and behavior of the objects being rendered are just as important. When you increase the complexity of the models and environments, the requirements on the physics and AI simulations increase commensurately. And with that much spare, flexible power, there's no reason not to continue augmenting graphical tasks so the GPU can do what it's best at: shading pixels.
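For anyone who hasn't seen one, a job system stripped to the bone is basically a shared queue plus a handful of workers - a real engine scheduler is far more elaborate than this sketch, it's just the concept:

Code:
// Bare-bones job pool: game code submits jobs, worker threads pull them off a
// shared queue. The game never touches a thread directly.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobPool {
public:
    explicit JobPool(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }
    ~JobPool() {   // finishes whatever is still queued, then stops the workers
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty()) return;   // only empty here when shutting down
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};

int main() {
    JobPool pool(8);   // whether this is 8 or 32 is just a number to the game code
    for (int i = 0; i < 100; ++i)
        pool.submit([i] { std::printf("updating object %d\n", i); });
}

The point is that the game code only ever says "here's a job"; how many hardware threads end up chewing through the queue is the platform's business.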

Developers will always cater to the least common denominator, so whichever system is the weakest, expect games to look like what that console can produce UNLESS it's significantly underpowered. Only 1st party and exclusives will push the graphics envelope...just my opinion of course. We saw this with the PS2/Xbox/Gamecube, but the Wii prevented that from happening this gen.

Or, if it's not worth the bother, they'll just make games for the one system where their customers actually are. Cross platform development just wasn't as common in the ps2/xbox/gc era as it is today, and if the next generation has a clear-cut leader for core games, that will be true again.

yeah, but again, you're looking at a ps2 vs gamecube and xbox scenario. It's not likely to happen again.

it has to significantly outsell both the next xbox and the wiiu, both of which will have more time on the market, and have better development tools by the time the ps4 is released.

A single year isn't as significant as you think, especially if the two early platforms are competing with each other over the same casual audience. And as Sega learned with the Dreamcast, people are willing to wait for more power when promised.
 
Draconian anti-consumer DRM and NFC surveillance of inserted discs that lock to the first console they get within 1 foot of. You know, the things consumers want. And a gtkwebkit2.0 browser, it's multi-threaded don't ya know!
WebKit is multi-threaded; WebKit2 is multi-threaded and multi-process.
 
A single year isn't as significant as you think, especially if the two early platforms are competing with each other over the same casual audience. And as Sega learned with the Dreamcast, people are willing to wait for more power when promised.

Stop misrepresenting the past. I know Sega fanatics will kill me for saying this, but anyone with an eye on the financials knew Sega couldn't support the system. Sony gave developers plenty of reasons to go PS2: a bigger storage format with DVD support, for one. The DC lost because Sony still had all their partners from the PSX era on board. Considering how the PS1 was performing, history shows another year was worth the wait.
 
It always makes more sense to do it on the GPU, unless you've sunk otherwise wasted money into a failed CPU design.

It didn't always make more sense on PS3 - and before you say it, I dare say that even if Sony had put a better GPU in there for the time, it would still have been a win to involve Cell in some of that work, given the scale of the performance and quality improvements over RSX alone.

NEXT-GEN, however, might be another matter. Probably will be.

Still, though, there were PS3 games running dozens of job types, fanning out to thousands of jobs per frame, on the CPU. So even taking out image processing tasks or other graphics-y stuff, I wonder if they wouldn't have a lot of CPU-side parallelism going on. Now obviously that doesn't mean you could theoretically keep thousands of cores busy simultaneously - there would be dependencies and all that - but some of those job types probably wouldn't have a hassle keeping 'even' 32 cores busy during their portion of the frame's processing, and would yield a lower overall latency if they were there.

Now I have no idea if there is enough of that kind of work to justify such a high level of parallelism on the CPU side, but when you hear devs talking about that scale of work in current gen games, it gives me a little bit of pause. I think next-gen devs will have double-digit number of threads available to them on the CPU side (even if not 32), and they'll probably find good use for it.
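To sketch what I mean by job types keeping cores busy during 'their portion' of the frame, here's a toy model. The job types, counts and per-job costs are pulled out of thin air; the only point is that wide fan-out between sync points scales with the core count:

Code:
// Toy frame: each job type fans out into independent jobs, with a sync point
// (the dependency) before the next type starts. All numbers are invented.
#include <cstdio>

struct Phase { const char* name; int jobs; double us_per_job; };

int main() {
    const Phase frame[] = {
        {"animation",       400, 20.0},
        {"cloth/particles", 900,  8.0},
        {"visibility",      250, 15.0},
        {"audio",            64, 30.0},
    };
    for (int cores : {6, 12, 32}) {
        double total_us = 0.0;
        for (const Phase& p : frame) {
            int per_core = (p.jobs + cores - 1) / cores;  // ceil(jobs / cores)
            total_us += per_core * p.us_per_job;          // a phase lasts as long as its busiest core
        }
        std::printf("%2d cores -> ~%.1f ms of CPU work per frame\n",
                    cores, total_us / 1000.0);
    }
    return 0;
}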
 
It didn't always make more sense on PS3 - and before you say it, I dare say that even if Sony had put a better GPU in there for the time, it would still have been a win to involve Cell in some of that work, given the scale of the performance and quality improvements over RSX alone.

NEXT-GEN, however, might be another matter. Probably will be.

Still, though, there were PS3 games running dozens of job types, fanning out to thousands of jobs per frame, on the CPU. So even taking out image processing tasks or other graphics-y stuff, I wonder if they wouldn't have a lot of CPU-side parallelism going on. Now obviously that doesn't mean you could theoretically keep thousands of cores busy simultaneously - there would be dependencies and all that - but some of those job types probably wouldn't have a hassle keeping 'even' 32 cores busy during their portion of the frame's processing, and would yield a lower overall latency if they were there.

Now I have no idea if there is enough of that kind of work to justify such a high level of parallelism on the CPU side, but when you hear devs talking about that scale of work in current gen games, it gives me a little bit of pause. I think next-gen devs will have double-digit number of threads available to them on the CPU side (even if not 32), and they'll probably find good use for it.

This sounds like a sure fire way to have your system play second fiddle to a more useable target architecture again. Just like PS3.
 
Said the same thing about PS3... Yet Gears of War 3 looks as good as the best-looking PS3 games. Not to mention 90 percent of multiplatform games favor the 360... I'll take this with a grain of salt.

That's what I thought first too. Just wait and see how the games look and play, not overly bothered about having the 'best' designed/spec'd console if the games look and play like crap.
 
This sounds like a sure fire way to have your system play second fiddle to a more useable target architecture again. Just like PS3.

I'm pretty confident both the next Xbox and the next PlayStation will run a double-digit number of threads on the CPU side. If the next Xbox has six cores with SMT, that's 12 hardware threads at least. As I said, not necessarily 32, but double digits? I think so.

And in terms of target architectures and programming models, listening to what devs have been saying, the job pool model may be the common approach next gen - so scaling between different levels of parallelism with one approach becomes easier (even if performance differs between targets).
 
No Kutaragi, no Sony engineers downing copious volumes of viagra and cialis to maintain a hard on for asymmetric multiprocessor designs. Simple.
 
Some information possibly impacting a Sony schedule for the PS4

1) IBM taped out the PS3 Slim Cell and RSX, then Sony produced the Cell and RSX @ 45 nm in the Nagasaki Toshiba plant that Sony bought back last year to produce CMOS and SOI Exmor-R camera elements.

2) Sony is skipping the 32nm process for the PS3...why? Because the investment to retool to produce 32nm chips in the Nagasaki plant is expensive. I'd guess that Sony is going to skip two node jumps and will retool the Nagasaki plant for some 22-25nm die process.

3) IBM will tape out the PS4 processor and GPU, then Sony will manufacture them in the Nagasaki plant at a smaller die size with the new equipment they purchase. This new equipment can make ~22nm CMOS camera sensors and general purpose DSP chips for multiple uses. 4K TV and cameras are going to need power efficient (small die size) components. 2014

4) Expect a significant price reduction and slimmer slim PS3 after the new die process is implemented in the Nagasaki plant.

Big question is: when will there be two die-process jumps, and what will Sony target as a minimum node process that fits their roadmap?
 
This sounds like a sure fire way to have your system play second fiddle to a more useable target architecture again. Just like PS3.

Why? We're talking about a system that would have 4 PowerPC cores and 8 threads with 32 SPUs on top of that. If devs don't want to touch the SPUs, the PS4 version could still easily be the best version. But that would be silly. You're talking as if devs would be starting from zero again with SPUs on PS4. Using SPUs is basically a solved problem. They'll just bring their PS3 solutions with them to the new system, only now with 4-5 times as much SPU time to make use of. If there's no underpowered GPU to fight and the PS4 also has more memory than any competing platform, devs will have very little left to complain about.
 
No Kutaragi, no Sony engineers downing copious volumes of viagra and cialis to maintain a hard on for asymmetric multiprocessor designs. Simple.

The 'asymmetric hard on' appears to be living on in next-gen IBM designs, though, so it's not impossible one or more machines will have something that is a hybrid, depending on when IBM's next is ready.
 
I'll believe it when I see it. Remember what Sony said about the PS3? I have yet to see many games that look better than the 360's.

Bullshit aside, I love this time in the consoles timeline. All this next gen talk gets me excited for new hardware.
 
NEXT-GEN, however, might be another matter. Probably will be.
I do think so. It is unreasonable to assume that more threads are not going to help, but I think the point of diminishing returns comes at quite a small number of threads (<16).

I'm fairly certain the PS4 will have a much more traditional setup than the PS3. A standard IBM or AMD processor with a minimum of six cores or four modules, a relatively standard AMD GPU, 2+ GB of main RAM and some separate small pool for the OS like they did in the Vita. Lots of people are speculating on a copy-paste of the PS3 design, when Cell has not been an all-around positive experience for Sony (and sticking to it definitely won't be) and Nvidia won't be involved in any of the next gen consoles.

Cell will probably be included in the PS4, but it will probably be almost the exact same thing, shrunken down to 28nm. They could use it for backwards compatibility, to run several background tasks and notably for image post-processing. It would be wonderful if they did that.
 
I do think so. It is unreasonable to assume that more threads are not going to help, but I think the point of diminishing returns comes at quite a small number of threads (<16).
Depends on the size of the cache for the SPUs and PPUs. Silicon real estate is expensive, and the programmable (MARS) SPU design would work if there were more cache. Plus, Sony owns the SPU design but has to pay IBM for the CPU.

Cell will probably be included in the PS4, but it will probably be almost the exact same thing, shrunken down to 28nm. They could use it for backwards compatibility, to run several background tasks and notably for image post-processing. It would be wonderful if they did that.
More cache is needed for the SPEs, and there were a couple of issues discovered that need fixing.

All this is a guess.
 
Cell will probably be included in the PS4, but it will probably be almost the exact same thing, shrunken down to 28nm. They could use it for backwards compatibility, to run several background tasks and notably for image post-processing. It would be wonderful if they did that.

If they can integrate the necessary SPU sauce for BC into the 'main' CPU rather than on a separate chip, it would be better and more usable, I think.

Power7+ or Power8 sounds like it could be an elegant solution. IBM has said that Cell development would be rolled into their next lines. Both of these are slated to incorporate 'accelerators' - remind you of anything? :)

There's been a lot of talk about Cell being a dead end, about Sony being foolish to use it again, but I think it depends what you mean by 'Cell'. Power7+/Power8 may well be wrapping in some of the key bits of Cell - or in a custom variant, be able to - which if they did would yield something close to a next-gen cell/power hybrid.
 
Depends on the size of the cache for the SPUs and PPUs. Silicon real estate is expensive, and the programmable (MARS) SPU design would work if there were more cache. Plus, Sony owns the SPU design but has to pay IBM for the CPU.
Well, price is one thing, but that's not what I mean. There is a point at which extra cores simply aren't useful anymore, because tasks cannot be parallelized any further (check Amdahl's Law ^) or there simply aren't any more tasks. If I'm not mistaken, many PS3 games already didn't manage to use all the SPEs well enough, and if anything they'll be used even less next gen when a much better GPU can take over more tasks.
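To put rough numbers on the Amdahl's Law point - assuming, purely for illustration, that 10% of the frame's CPU work can't be parallelised:

Code:
// Amdahl's Law: speedup(N) = 1 / (serial + (1 - serial) / N).
// The 10% serial fraction below is an assumption, not a measurement.
#include <cstdio>

int main() {
    const double serial = 0.10;
    for (int n : {1, 2, 4, 8, 16, 32, 64}) {
        double speedup = 1.0 / (serial + (1.0 - serial) / n);
        std::printf("%2d threads -> %.2fx speedup\n", n, speedup);
    }
    return 0;
}

In that toy case going from 16 to 32 threads buys you only about 20% more, which is the diminishing-returns curve I mean.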

There are some people who seem to want Cell as the main chip in the next PS4, but if anything that is going to cripple it even more than Cell did the PS3, and this time it won't have its raw power advantage to compensate. Even if they invest the money to use a POWER7 core or something, the architecture of Cell is just too troublesome for gaming. That was already the case this gen, and it most definitely will be next gen.
Power7+ or Power8 sounds like it could be an elegant solution. IBM has said that Cell development would be rolled into their next lines. Both of these are slated to incorporate 'accelerators' - remind you of anything? :)

There's been a lot of talk about Cell being a dead end, about Sony being foolish to use it again, but I think it depends what you mean by 'Cell'. Power7+/Power8 may well be wrapping in some of the key bits of Cell - or in a custom variant, be able to - which if they did would yield something close to a next-gen cell/power hybrid.
IBM said that in 2008 and has failed to do anything with that promise. Instead they have focussed tremendously on GPGPUs since then, which are the Cell architecture's sworn enemy in many respects.

You state that it is possible for them to use Cell. Of course it is. It would be a tremendous investment to bring back and update an architecture that is dead in many respects, but they could do it. What I'm missing still is a compelling reason why they should do so. It's hard to argue how using the SPEs again would give them a competitive advantage that is worth the troubles they bring.
 
Well, price is one thing, but that's not what I mean. There is a point at which extra cores simply aren't useful anymore, because tasks cannot be parallelized any further (check Amdahl's Law ^) or there simply aren't any more tasks. If I'm not mistaken, many PS3 games already didn't manage to use all the SPEs well enough, and if anything they'll be used even less next gen when a much better GPU can take over more tasks.

There are some people who seem to want Cell as the main chip in the next PS4, but if anything that is going to cripple it even more than Cell did the PS3, and this time it won't have its raw power advantage to compensate. Even if they invest the money to use a POWER7 core or something, the architecture of Cell is just too troublesome for gaming. That was already the case this gen, and it most definitely will be next gen.
IBM said that in 2008 and has failed to do anything with that promise. Instead they have focussed tremendously on GPGPUs since then, which are the Cell architecture's sworn enemy in many respects.

You state that it is possible for them to use Cell. Of course it is. It would be a tremendous investment to bring back and update an architecture that is dead in many respects, but they could do it. What I'm missing still is a compelling reason why they should do so. It's hard to argue how using the SPEs again would give them a competitive advantage that is worth the troubles they bring.
SPU is still more efficient at some tasks than even a new IBM power cpu. The drawback for the SPU was having no direct access to main memory (everything goes through DMA) and the memory access time, plus something with the ring bus scheduling. If you are talking about giving game developers a general purpose CPU that is slower and gets hotter but is easier to use, then there is no argument.

Amdahl's Law is the inverse of what we have in a game that is frame bound. You need as many CPUs as it takes to process a frame within the time you have for a frame, i.e. 1/30th of a second. Beyond that number Amdahl's Law applies, but you don't need more than you need.
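To illustrate that frame-budget way of thinking: take some invented numbers (6 ms of serial work, 140 ms of perfectly splittable work, a 30 fps target) and just add cores until the frame fits:

Code:
// Frame-budget sizing: keep adding cores until the frame's work fits in
// 1/30th of a second, then stop. The workload numbers are assumptions.
#include <cstdio>

int main() {
    const double budget_ms   = 1000.0 / 30.0;  // one frame at 30 fps
    const double serial_ms   = 6.0;            // assumed work that can't be split
    const double parallel_ms = 140.0;          // assumed work that splits cleanly

    for (int cores = 1; cores <= 64; ++cores) {
        double frame_ms = serial_ms + parallel_ms / cores;
        if (frame_ms <= budget_ms) {
            std::printf("~%d cores hit the budget at %.1f ms per frame\n",
                        cores, frame_ms);
            break;
        }
    }
    return 0;
}

With those made-up numbers six cores already make the budget; more cores still speed things up Amdahl-style, but you don't need them.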

Sony mentioned more programmable DSPs and I think they are needed for Augmented Reality sensors and as configurable hardware for anything that is needed in the future. Need a set of standards for port connections though.
 
SPU is still more efficient at some tasks than even a new IBM power cpu.
Of course, of course. But as I said, are they worth the trouble? Is it worth designing around the SPEs again, and continuing an architecture that alienated itself from the other platforms last generation? Is it worth expanding on the SPEs when a respectable number of developers have already shown that many of them cannot find a really good use for all they have now, notably in a multiplatform scenario? Is it worth spending silicon on, when they could achieve power (much) more efficiently (and more compatibly with the competition) by using that silicon on the GPU? Is it, considering the previous questions, worth updating a 2005 design with 2011 parts (POWER7 / Wii U architecture) when you could use those 2011 parts more directly?

I'm inclined to answer all those questions with no.
 
IBM said that in 2008 and has failed to do anything with that promise. Instead they have focussed tremendously on GPGPUs since then, which are the Cell architecture's sworn enemy in many respects.

Actually, their comment about folding Cell into their next work rather than continuing it as a separate line came toward the end of 2010, some time after the announcement of Power7 (their current latest):

http://www.xbitlabs.com/news/cpu/di...rate_Cell_Chip_into_Future_Power_Roadmap.html

We won't know if that's true, or outwardly obvious, until they actually show some of their next-gen (7+/8) work. But they've talked about in the future being able to do with Power what you were once only able to do with Cell, and the talk of accelerators in Power7+ and Power8 suggests scope for asymmetrical design. Others have speculated that the default accelerator type in those chips might be something SPU-alike or SPU compliant. That's merely speculation, but IBM's interest in hybrid designs and designs with accelerators on board is continuing to feed into Power work.

You state that it is possible for them to use Cell.

I'm not talking about using Cell as it is in PS3, or a straight-line derivation of that, I'm talking about whether future Power chips might allow them to include some pertinent Cell 'bits', and cover them more elegantly from a BC point of view than simply including a separate PS3-Cell.
 
Actually, their comment about folding Cell into their next work rather than continuing it as a separate line came toward the end of 2010, some time after the announcement of Power7 (their current latest)
That's right. I had it mixed up with the latest release of an actual Cell-based CPU, which was in 2008 I believe.
I'm not talking about using Cell as it is in PS3, or a straight-line derivation of that, I'm talking about whether future Power chips might allow them to include some pertinent Cell 'bits', and cover them more elegantly from a BC point of view than simply including a separate PS3-Cell.
Okay. That is a possibility that I'd give a bit more credit. Some 8 SPEs added to a 'SoC' could work well. I still don't see why Sony should include more SPEs though.
 

charlequin always forgets a crucial part of the BR business. It's not only royalties that Sony makes money from with BR. It's also about orders from film companies and TV companies. They give Sony money so Sony can make BR films, TV series, documentaries, etc. They order Sony to manufacture BRs, and thus Sony makes money from that.
 
So at the end of the day, what is the point of having a new console if it is not going to be head and shoulders above what the previous generation did?

Is it because there is a cycle that needs to be repeated?

:lol
These new consoles will be a significant leap ahead of current gen consoles. It may take a year or two before we start seeing true next-gen, can't-be-done-on-PS360-hardware games, but they will come.


charlequin always forgets a crucial part of the BR business. It's not only royalties that Sony makes money from with BR. It's also about orders from film companies and TV companies. They give Sony money so Sony can make BR films, TV series, documentaries, etc. They order Sony to manufacture BRs, and thus Sony makes money from that.

Not really. Sony doesn't own BR. It's an old argument that's been rehashed for years on GAF.
 
I was only saying that the majority of BRs are manufactured by Sony, which makes Sony money.

And that money will never even begin to make up for the nearly $5b loss SCE made, never mind recover all the lost market and mind share. The BR money they make now is the equivalent of charlequin's bake sale...
 
I wonder if Sony feels they must include some casual gimmick. I mean, if they don't include move or kinect...they can make a system the same price as the competition but still be significantly more powerful...hmm
 
SPU1: AA
SPU2: AF
SPU3: V-sync
SPU4: textures
etc etc

The simple solution is to isolate each SPU to a single task. But what you want to do is have many SPUs work on the same thing in parallel. It's like arranging a bunch of blind people to push a big boulder in a certain direction. It's doable, but it takes a lot of dedication.
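For what it's worth, 'many SPUs on the same thing' conceptually just means slicing the data. Something like the sketch below, with plain CPU threads standing in for SPUs and none of the DMA / local-store juggling real SPU code has to do:

Code:
// Splitting one task across several workers: each one owns a disjoint slice
// of the same vertex array, so nobody steps on anyone else.
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    std::vector<float> verts(120000, 1.0f);  // pretend vertex data
    const unsigned workers = 6;              // pretend these are SPUs
    const size_t chunk = verts.size() / workers;

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t end = (w == workers - 1) ? verts.size() : begin + chunk;
        pool.emplace_back([&verts, begin, end] {
            for (size_t i = begin; i < end; ++i)
                verts[i] = std::sqrt(verts[i] * 2.0f);  // stand-in for skinning/physics/etc.
        });
    }
    for (auto& t : pool) t.join();
    std::printf("done, first vertex = %f\n", verts[0]);
    return 0;
}

The fiddly part on real hardware is exactly the part this skips: getting each slice in and out of 256K of local store without the SPUs sitting idle.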
 
The simple solution is to isolate each SPU to a single task. But what you want to do is have many SPUs work on the same thing in parallel. It's like arranging a bunch of blind people to push a big boulder in a certain direction. It's doable, but it takes a lot of dedication.

and I was kinda not being serious.
 
And that money will never even begin to make up for the nearly $5b loss SCE made, never mind recover all the lost market and mind share. The BR money they make now is the equivalent of charlequin's bake sale...

Correct. It's the same old argument that keeps getting rehashed.
 
I wonder if Sony feels they must include some casual gimmick. I mean, if they don't include move or kinect...they can make a system the same price as the competition but still be significantly more powerful...hmm

I hope this is the direction Sony take.
Make the packaging similar to PS2, concentrate on hardware and a joypad, nothing else. Give us the dualshock with sparkly black back.
I think the DualShock will be able to split in two, similar to a nav and Move controller / Wiimote.
Also dump Cell; it's obvious devs hated it and still hate it.
 
I hope this is the direction Sony take.
Make the packaging similar to PS2, concentrate on hardware and a joypad, nothing else. Give us the dualshock with sparkly black back.
I think the DualShock will be able to split in two, similar to a nav and Move controller / Wiimote.

Unfortunately they just have to include a major gimmick...I don't think a wii-mote rehash is the route to go. Man I can't even imagine what their board meetings must be like.
 
If Sony goes for a Kinect-like setup, which wouldn't be a first considering the previous iterations of their Eye-cameras, Nintendo is going to have the most traditional new input. That should be fun :P

I don't think MS or Sony are going to give up their controllers though
 
If Sony goes for a Kinect-like setup, which wouldn't be a first considering the previous iterations of their Eye-cameras, Nintendo is going to have the most traditional new input. That should be fun :P

I don't think MS or Sony are going to give up their controllers though

I don't know why they chose move...not only did they have eye-toy, it was very successful on PS2. Weird.
 
And that money will never even begin to make up for the nearly $5b loss SCE made, never mind recover all the lost market and mind share. The BR money they make now is the equivalent of charlequin's bake sale...

Sometimes things don't work out how you want them to, only this time it cost Sony a huge amount of money.
It was not a bad plan, to tell the truth, but too many things went wrong.

Anyway, on to PS4: I don't think they're going to use Cell again, but maybe, like gofreak said, the next CPU might have certain stuff from Cell to make BC easier.
Still, I question how much BC really means to people, as I'm someone who doesn't really care about it after the first year.

I wonder what type of RAM they're going to use. Some were hoping for XDR2, but that might cost too much and there's no need for it.

Still, from what I see with Vita, they're going to use some off-the-shelf parts, tweak them, and we should get a system that's good if they're going for around 399 without any other crap in there.
 
What other direction is there to take apart from copying Nintendo or MS?
The problem with copying MS is that they will most likely have all of the kinks ironed out for Kinect 2.0, and surely MS will have some patents in place.
If they want to copy Nintendo, they already have Vita, and they must be able to pair it to the PS4 and use it like a tablet.
 
I think Vita has proved we are going to see something similar to the nextbox, if those hardware rumours are true.
Vita shows that the Kutaragi days are gone.
 
I think Vita has proved we are going to see something similar to the nextbox, if those hardware rumours are true.
Vita shows that the Kutaragi days are gone.

Vita shows Sony has gotten smart.
Good specs, easy to develop for, able to get the price down fast, plus a few others.
There is no need for crazy; these are not the old days. PC tech has gone so far ahead that you don't need to make anything new.
Use off-the-shelf parts, add some tweaks, and you can get an awesome system, depending on how much your budget is.

If they bring out a system for 399 and take a loss of about 50, we could get a powerhouse with the right parts in a closed system.
 
Because GPUs lack the bandwidth to do GP tasks and render at the same time?
Find me general purpose tasks relevant to gaming to fill 32 SPEs with (or even 8, for that matter). Like the original poster, I was referring to vertex processing and image quality processing (which are graphics tasks).
 
Find me general purpose tasks relevant to gaming to fill 32 SPEs with (or even 8, for that matter). Like the original poster, I was referring to vertex processing and image quality processing (which are graphics tasks).

I wasn't even referring to the 32 SPE thing, but if you want me to really list gaming tasks to split up I could; it's quite easy.
 
Find me general purpose tasks relevant to gaming to fill 32 SPEs with (or even 8, for that matter). Like the original poster, I was referring to vertex processing and image quality processing (which are graphics tasks).

Gesture recognition + eye tracking plus emotion monitoring (Voice, sweat, heartrate, grip on controller)
Augmented reality + multi-person tracking
Multi-screen
Generating AR feedback other than visual (tactile with vibration modulated by velocity of hand or whatever would simulate texture)

and more that I can't think of.
 
@iamshadowlark: I'd love to know. I'd also like to know why they aren't incredibly prevalent on the PS3.
@jeff_rigby: Only your first two are sort of intense, and all besides the first sound like a very niche application (not an application to design your console around).
 