
vg247-PS4: new kits shipping now, AMD A10 used as base, final version next summer

z0m3le

Banned
I think it was already "basically confirmed" that the early Durango dev kits are using two 4-core Xeons.

I remember Proelite stating that maybe a few cores will be replaced by different hardware in the more advanced dev kits / final hardware

Yeah, this is what I've been hearing: two quad-core Xeons running at 1.6GHz, double-threaded.

The thing about Intel CPUs is that hyperthreading can be disabled; the fact that those threads are still there most likely means Microsoft is targeting a 16-thread console, and since I very much believe they are using a custom AMD CPU, this would mean 16 cores for the XB3.

Between Kinect, the OS, and possible background DVR functionality, all while gaming... they will need a hefty CPU for these tasks, and this is what the leaked PDF pointed to two years ago. It's also interesting because they planned to keep Xenon on board for backwards compatibility, which could technically be why this CPU is so custom; if they integrate it on an MCM or even on-die as AMD likes to do, I could see it being a very complicated, custom CPU. That last part is speculation based on an early rumor, but if they did do something like this, they could certainly have a monster CPU that AMD couldn't simply match for Sony just by throwing more cores at it.
 

Maximilian E.

AKA MS-Evangelist
Maybe MS will realize some of the visions/goals they had with the 360 (of having a "crazy" amount of cores). I don't remember what their initial goal for the 360 was, but wasn't it something along the lines of 16 OoO cores or something similar?
 
I am sorry that I'm just throwing a wrench into the speculation going on right now, but it's just a bit ridiculous given the real rumors that have been flying around. Where does this 16-core Sony rumor even come from? The first I heard of it was a few pages back when Jeff told me that it was possible.

You realize how silly this sounds, right?
Dev kits are dev kits, specs change over time; I doubt they'll use the same CPU, but their architectures will probably be very similar.
 

KageMaru

Member
IMO what matters is the hardware dedicated to gaming. I couldn't care less if Durango has 16 cores/threads if they can't all be used by the developers.

I also don't agree that both companies will be using the exact same CPU.

I may be recalling incorrectly, but iirc, Xenon is essentially a tri-core modified Cell PPE...

So it's not like one of them hasn't been played before in this regard.

MS originally wanted a 3.5GHz OoO CPU but got the 3.2GHz in-order PPU instead. Sounds like IBM played both of them last gen. =p

Maybe MS will realize some of the visions/goals they had with the 360 (of having a "crazy" amount of cores). I don't remember what their initial goal for the 360 was, but wasn't it something along the lines of 16 OoO cores or something similar?

There was a video floating around with an engineer (maybe?) from MS commenting on how one of the early 360 designs was an x86 CPU with a handful of ARM satellite processors. Sounded oddly similar to the Cell at a high level.
 

RoboPlato

I'd be in the dick
Perhaps this video might help.
Cool. Will watch after work.


FWIW I don't think that the CPUs will be the same between systems, but I think the gaming performance will be very similar. Fewer cores on the PS4, but since some of them will be locked on the next Xbox it won't be a huge difference. They'll likely use GPGPU to counter that. I'm expecting next gen to be very similar in terms of power, with fewer differences between multiplatform titles than this gen.
 

KageMaru

Member
Cool. Will watch after work.


FWIW I don't think that the CPUs will be the same between systems, but I think the gaming performance will be very similar. Fewer cores on the PS4, but since some of them will be locked on the next Xbox it won't be a huge difference. They'll likely use GPGPU to counter that. I'm expecting next gen to be very similar in terms of power, with fewer differences between multiplatform titles than this gen.

Here's also a PDF for Instant Radiosity:

http://www.liensberger.it/Web/Blog/wp-content/uploads/Instant_Radiosity_kl08.pdf

Also, both (or all 3) next gen systems will use a GPGPU. GPUs have had GPGPU capabilities for a while now. Even the Xenos in the 360 has done simple GPGPU functions such as collision particles in Reach. =p

I agree though that both will be close in performance. It's a good thing too IMO.
 
Could someone sum up instant radiosity for me? I know what GI is but I only have a vague idea about radiosity.

Radiosity is a global illumination technique. Instant Radiosity is a way to approach the problem, suggested at SIGGRAPH in the late '90s, in a fast/efficient way.

TBH I do not expect radiosity in real-time games next gen, one of the reasons being the unpredictability of the performance cost. But who knows... some devs might find creative solutions. Rendering is not my specialization :S
 

jaosobno

Member
Could someone sum up instant radiosity for me? I know what GI is but I only have a vague idea about radiosity.

In my understanding, the method works by having a primary light source which illuminates the scene. The light travels from that primary source, and upon contact with other reflective surfaces new light sources (reflective points) are created, and those points further illuminate the scene. Basically what you achieve is realistic light reflections all over the scene.

For example, let's say you have a desk, a small mirror lying on the desk, and a lamp. The lamp is the primary light source and it illuminates your entire desk. There is a mirror on your desk which must be treated as a reflective surface. The light travels from the lamp and hits that small mirror. At the point of impact, a new light source is created. Light travels further from that mirror, hits the wall of your room and illuminates it. If at the point of impact there was something reflective on your wall, a new light source would be created and light would continue to travel from that point on, further illuminating the scene.

EDIT: I see that you already got responses above.
 

KageMaru

Member
About time.

Well if you think about it, the same thing happened this gen too. ;p

Only the architecture of both consoles was different enough to warrant strengths in different areas. It'll be interesting to see how things turn out next gen.
 

squidyj

Member
Could someone sum up instant radiosity for me? I know what GI is but I only have a vague idea about radiosity.

It reduces the scene to a set of patches, figures out the visibility between any two patches, and then transfers perfectly diffuse energy between them based on the attenuating factors of distance and that visibility. It only handles diffuse pathways though, so you can't bounce specular light around, and it also won't generate caustics. It just produces very smooth bounce lighting.
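For the curious, here's a minimal Python sketch of the patch-based transfer just described. The three-patch "scene", the albedos and the form factors are made-up illustrative numbers, not anything from a real renderer; the point is only that each patch's radiosity is its own emission plus the diffusely reflected energy gathered from every other patch.

```python
albedo = [0.7, 0.5, 0.6]          # diffuse reflectance of each patch
emission = [10.0, 0.0, 0.0]       # patch 0 is the light source
# form_factor[i][j]: fraction of energy leaving patch i that reaches patch j
# (this already folds in visibility and distance attenuation)
form_factor = [
    [0.0, 0.3, 0.2],
    [0.3, 0.0, 0.4],
    [0.2, 0.4, 0.0],
]

# Gather bounces iteratively; 50 sweeps is plenty for this toy scene.
radiosity = emission[:]
for _ in range(50):
    radiosity = [
        emission[i] + albedo[i] * sum(form_factor[j][i] * radiosity[j]
                                      for j in range(len(radiosity)))
        for i in range(len(radiosity))
    ]
print([round(b, 2) for b in radiosity])
# Only diffuse transfer is modelled, which is why (as noted) this gives
# smooth bounce lighting but no specular reflections or caustics.
```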


In my understanding, the method works by having a primary light source which illuminates the scene. The light travels from that primary source, and upon contact with other reflective surfaces new light sources (reflective points) are created, and those points further illuminate the scene. Basically what you achieve is realistic light reflections all over the scene.

For example, let's say you have a desk, a small mirror lying on the desk, and a lamp. The lamp is the primary light source and it illuminates your entire desk. There is a mirror on your desk which must be treated as a reflective surface. The light travels from the lamp and hits that small mirror. At the point of impact, a new light source is created. Light travels further from that mirror, hits the wall of your room and illuminates it. If at the point of impact there was something reflective on your wall, a new light source would be created and light would continue to travel from that point on, further illuminating the scene.

EDIT: I see that you already got responses above.

Yeah, that's roughly it for instant radiosity: it uses virtual point lights, but the emission is still diffuse. Everything is 'reflective' unless it just absorbs light; light is transmitted basically uniformly in all directions, and it's that assumption that allows for the instant radiosity calculations. It doesn't work with specular surfaces, unfortunately.
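To make the virtual-point-light idea concrete, here's a rough, runnable Python sketch under heavily simplified assumptions: a single diffuse floor plane, a scalar flux, no emitter cosine and no clamping. Names like make_vpls and shade are purely illustrative, not any engine's API; the takeaway is that bounce light becomes just "more point lights" during shading.

```python
import math
import random

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(c / l for c in v)

# Primary light: a position and a scalar radiant flux.
primary = {"pos": (0.0, 2.0, 0.0), "flux": 100.0}
floor_albedo = 0.6          # diffuse reflectance of the floor plane y = 0

def make_vpls(n_paths):
    """Shoot n_paths random rays from the light; every hit on the diffuse
    floor deposits a virtual point light carrying its share of the flux."""
    vpls = []
    for _ in range(n_paths):
        d = normalize((random.uniform(-1.0, 1.0),
                       -random.uniform(0.1, 1.0),   # aim into the lower hemisphere
                       random.uniform(-1.0, 1.0)))
        t = -primary["pos"][1] / d[1]                # intersect the plane y = 0
        hit = tuple(p + t * c for p, c in zip(primary["pos"], d))
        vpls.append({"pos": hit,
                     "flux": primary["flux"] * floor_albedo / n_paths})
    return vpls

def shade(point, normal, vpls):
    """Direct light from the primary source plus one-bounce light from the
    VPLs: every VPL is treated exactly like another small point light."""
    total = 0.0
    for light in [primary] + vpls:
        to_light = sub(light["pos"], point)
        r = math.sqrt(dot(to_light, to_light))
        if r < 1e-4:
            continue
        cos_term = max(0.0, dot(normal, tuple(c / r for c in to_light)))
        total += light["flux"] * cos_term / (4.0 * math.pi * r * r)
    return total

vpls = make_vpls(256)
# A point on a wall facing the floor picks up bounce light via the VPLs.
print(shade((1.5, 0.5, 0.0), (-1.0, 0.0, 0.0), vpls))
```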
 
Just a little information for all those people crying for more cores/threads and arguing that console A has x cores but console B only has y:

More cores/threads allow for a certain speedup due to parallelism. Assume we have a digital filter with the input stream: ... x(n) x(n-1) x(n-2) x(n-3) ...

For that we need the following calculation for the output stream: y(n) = x(n)*h0 + x(n-1)*h1 + x(n-2)*h2 + x(n-3)*h3 -> this is just an example (real-life) calculation and can of course be different. If we furthermore assume that a multiplication takes the same time as an addition (normally a multiplication takes longer), we can show how much time the calculation needs:

The sequential solution with 1 "core" needs:

* * * * + + + this results in 7 time units

The solution with a "dual-core" needs:

* * + +
* * +
this results in 4 time units spent

Now we use a "quad-core" that needs:

* + +
* +
*
*
which results in 3 time units.

So the speedup from 1 to 2 cores is 1.75, but from 2 to 4 cores it is only 1.33 - because we can't do everything in parallel. The maximum speedup is tied to the percentage of the work that can't be parallelized. So just adding more cores, stream processors, etc. doesn't automatically mean a big performance gain, but it can lead to difficulties because the code has to be adapted.
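If you want to play with that counting, here's a small Python sketch that list-schedules the seven operations onto p "cores" under the same assumption that every * and + costs one time unit. It reproduces the 7 / 4 / 3 totals and the 1.75 and 1.33 speedups above; purely illustrative, since no real CPU is this simple.

```python
def schedule_length(num_cores):
    # Operation DAG for y(n): four independent multiplies, then a tree of adds.
    # deps[op] lists the operations that must finish before op can run.
    deps = {
        "m0": [], "m1": [], "m2": [], "m3": [],
        "a0": ["m0", "m1"],      # m0 + m1
        "a1": ["m2", "m3"],      # m2 + m3
        "a2": ["a0", "a1"],      # final sum
    }
    done, time = set(), 0
    while len(done) < len(deps):
        # Everything whose dependencies are satisfied could run this step...
        ready = [op for op in deps if op not in done
                 and all(d in done for d in deps[op])]
        # ...but only num_cores of them fit into one time unit.
        done.update(ready[:num_cores])
        time += 1
    return time

times = {p: schedule_length(p) for p in (1, 2, 4)}
print(times)                                            # -> {1: 7, 2: 4, 4: 3}
print("1 -> 2 cores:", times[1] / times[2])             # 1.75
print("2 -> 4 cores:", round(times[2] / times[4], 2))   # 1.33
```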
 

McHuj

Member
Just a little information for all those people crying for more cores/threads and arguing that console A has x cores but console B only has y:

More cores/threads allow for a certain speedup due to parallelism. Assume we have a digital filter with the input stream: ... x(n) x(n-1) x(n-2) x(n-3) ...

For that we need the following calculation for the output stream: y(n) = x(n)*h0 + x(n-1)*h1 + x(n-2)*h2 + x(n-3)*h3 -> this is just an example (real-life) calculation and can of course be different. If we furthermore assume that a multiplication takes the same time as an addition (normally a multiplication takes longer), we can show how much time the calculation needs:

The sequential solution with 1 "core" needs:

* * * * + + + this results in 7 time units

The solution with a "dual-core" needs:

* * + +
* * +
this results in 4 time units spent

Now we use a "quad-core" that needs:

* + +
* +
*
*
which results in 3 time units.

So the speedup from 1 to 2 cores is 1.75, but from 2 to 4 cores it is only 1.33 - because we can't do everything in parallel. The maximum speedup is tied to the percentage of the work that can't be parallelized. So just adding more cores, stream processors, etc. doesn't automatically mean a big performance gain, but it can lead to difficulties because the code has to be adapted.

Adapting your algorithm to the architecture is very important. In your above example, you're better off computing y(n), y(n+1), ... on separate cores; then the speedup is a lot closer to the number of cores. Your overhead comes from any synchronization required, and if you're computing a large data set that's minimal, since in this case the cores will never be modifying/writing to the same address.
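A rough sketch of that data decomposition, using Python's multiprocessing purely for illustration: each worker gets its own slice of output samples, so no two cores ever write the same y(n). (For a toy this small the interpreter overhead swamps any gain; the point is the decomposition, not the timing.)

```python
from multiprocessing import Pool

h = [0.5, 0.3, 0.15, 0.05]                   # example 4-tap filter
x = [float(i % 7) for i in range(10_000)]    # example input stream

def filter_output(n):
    """y(n) = x(n)*h0 + x(n-1)*h1 + x(n-2)*h2 + x(n-3)*h3 (zero before the start)."""
    return sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)

if __name__ == "__main__":
    # chunksize hands each worker a contiguous block of output indices,
    # so the only overhead is the fork/join around the map.
    with Pool(processes=4) as pool:
        y = pool.map(filter_output, range(len(x)), chunksize=1024)
    print(y[:5])
```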
 
But the Xperia Play is just the same as the PlayStation Certified tablets.
It's just Sony understanding that they have to move on with their mobile products, while corporate issues (the individual interests of the gaming division in this case) prevented them from taking the bold steps necessary to make successful products. Those products ended up being confusing, weak and basically sent out to die.
Sony failed with their portable music players 10 years ago because their music division was against digital music over piracy concerns, while their engineers refused to embrace the market standard (MP3) in favor of their own in-house solution (ATRAC3).
Now it's kinda the same with the PlayStation division involved: Vita should have been an Android tablet device with the full PlayStation experience running on it (PS1, PS2, PlayStation Network, next-gen portable games), a digital music store and a Google partnership.
Unfortunately the gaming division's management refused to give up full control of their platform by adopting open market standards, and ended up with a traditional proprietary product with very limited commercial appeal and hardware full of unexploited potential.
But the point is that sooner or later they have to move on, it's either adapt or die.

But how long would it be before PS Vita games were fully hacked on an Android device? I give it two weeks.

Ideally Sony would have its own tablet/smartphone OS with a robust app store (with plenty of non-gaming apps), music store, ebook/manga/comic store, robust browser, etc., but the fact that they aren't pushing for ANY of that suggests to me that they aren't ready to make the change you suggest (and that I agree with). The fact that they have a music store that's exclusive to Australia/New Zealand, an ebook store that's exclusive to their readers, and a manga store on Vita that's only in Japan suggests that they are still very segregated. They need more synergy between all those stores they have; basically one store integrating it all, kinda like how Google and Apple do it and how MS is moving toward doing it.

If they can get their OS to have access to Google's app store, I think they would do it, but they won't. I don't want to derail the thread any more though, so if you reply, send me a PM instead.
 
Adapting your algorithm to the architecture is very important. In your above example, you're better off computing y(n), y(n+1), ... on separate cores; then the speedup is a lot closer to the number of cores. Your overhead comes from any synchronization required, and if you're computing a large data set that's minimal, since in this case the cores will never be modifying/writing to the same address.

Of course you need to adapt your algorithm, but that's exactly the hard part I want to point out. In your version I would get y(n) to y(n-3) on 4 cores after the same time, but what if y(n) is needed earlier, and so on. I just wanted to point out that something that takes 6 seconds on a single core doesn't take 1 second on a hexa-core. There are a lot of problems that benefit from a multi-core architecture, but on the other hand data dependencies, communication overhead, etc. probably limit you to a speedup S(n) < n, where n is your number of cores. There are a few exceptions where you can even reach super-linear speedup thanks to the extra cache that comes with more cores, etc., but that would go too far here. I just wanted to provide some basic understanding of why a more-cores philosophy doesn't automatically mean more performance. The same goes for clock rate as well.
 
So basically your point is "Amdahl's Law exists" ;)

Haha, well, I wasn't sure if I should mention Amdahl and make it even more complicated. I think my "picture" might be more helpful for the basic discussion than S(n) = n/(1+(n-1)*f) <= 1/f ;-)

In modern computers (as opposed to consoles), a bigger problem than core count and clock speed is probably bus congestion, or having to discard your whole pipeline of instructions, which costs you more than the difference between x GHz and y GHz. I think it doesn't hurt to talk a bit about the technology behind the "spec rumours". Gustafson's law is next ;-)
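For anyone who wants to plug numbers into the formula quoted above, a tiny sketch (the 10% serial fraction is just an assumed example):

```python
# Amdahl's law as quoted: S(n) = n / (1 + (n - 1) * f),
# where f is the fraction of the work that cannot be parallelized.

def amdahl_speedup(cores, serial_fraction):
    return cores / (1 + (cores - 1) * serial_fraction)

f = 0.10   # assume 10% of the workload is inherently serial
for n in (2, 4, 8, 16):
    print(f"{n:2d} cores: speedup {amdahl_speedup(n, f):.2f} (ceiling {1 / f:.0f}x)")
# 16 cores with a 10% serial fraction only gets ~6.4x - the extra cores
# flatten out quickly, which is the point being made above.
```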
 

RoboPlato

I'd be in the dick
Thanks to everyone for the radiosity posts. They helped a lot. That's why I enjoy this thread; I've learned more about tech here and in other rumor threads than anywhere else. It makes it fun.
 

Grim1ock

Banned
Wow loads of replies. Where to start lol. Comments in italics

If Vita is any indication, SCEI is doing brilliant job making new hardware.

There's nothing fancy about Vita hardware. Bloated, slow software and the lack of PS1 games all stem from uninspired hardware specs.


Gaming works differently in that the symbiotic relationship between hardware and software is more biased toward the latter, especially because third-party developers' products account for the largest segment of software sales on almost all platforms save Nintendo's. And if they can't make games with the same ease on PlayStation as they can on its direct competitor, Xbox, then consumers are handed poor ports that cost the same.

Poor multiplatform ports on the PS3 have absolutely nothing to do with an exotic architecture that's too hard to develop for. It's all to do with market share. If MS and Xbox were a distant third and Sony was leading, developers would focus their energies and make sure proper ports were established. Plain and simple.


In this way, Sony and MS, with input from major devs and publishers, can establish the target performance parameters for their next-gen systems, which the third parties expect not only to be similar in performance but also similar when it comes to ease of programming.

No one is asking Sony for alien technology. But generic, uninspired specs are really useless when they, a primarily hardware company, should be innovating in this regard.


The biggest differentiator next gen will be OS features and exclusives. Unless the PS4 turns out to be weaker by an unexpected amount and its OS is severely bloated yet much lacking against its competitor, the first parties give the PS4 the edge.


What's the point of having talented first parties at Sony if they release a garbage console with weak specs? You can be the Einstein of game development, but if the hardware is not there, then all the software tricks on earth will make no difference.

All these OS features you keep banging on about and exclusives don't run on thin air. They need chips capable of running them.

Look what happened when Sony decided to gimp certain parts of the PS3. Instead of having 512MB of RAM for both main memory and video RAM, they halved it. Five years later they still could not incorporate party chat because of insufficient RAM. Think of how long the PS4 is going to last: another 7-10 years. Think of all the services you are going to add to the system and ask yourself if a weakly specced console is worth it in the long run.


The market has changed considerably.

What's the purpose of spending billions just to say that you've done it in house if you can buy something similar if not better from a specialized company?

Because doing it in-house means you can design and control the process flow better and have good quality control. You also don't have to pay anyone IP royalties. The CPU and the optical format are the only things Sony owns in the PS3. Everything else is licensed from third parties.



Hardware power at this point is becoming a commodity; it's just a matter of how much you want your product to cost. You could even make a $700 console bigger than the original Xbox and put a GTX 680 in it, but that's not the way to do business.
Differentiation at this point is done through games, applications, controllers, services and retail price, not by having a slightly higher or lower clock speed. The global market really doesn't care, or Nintendo would be dead at this point.

Your games, applications, controllers, services, etc. all depend on hardware. Maybe you don't understand what I am saying, but I will say it again: you can be the greatest software engineer on earth, but if your tools and hardware are gimped and weak compared to the competition, then there is nothing you can do.




So, you're confusing "design" with hardware engineering? The Super Slim is exactly the same console as Ken's "brain child", just in a different form factor.
Sony removed Linux support, SACD support and 2 USB ports long before the first Slim came along; that's pretty much the only hardware-related stripping that went on.
The Super Slim is selling just as well as the other revisions, it's cheaper to make, and they're profiting from that by keeping the same price. It's an attempt to further cut costs and introduce the PS3 to emerging markets further down the line.
They are focusing on services because that's exactly what they need to focus on, along with first-party content.


Ken's brain child had backwards compatibility, SACD, 4 USB ports and card readers. Kaz Hirai basically nerfed everything to the bare minimum. Where else on earth have you come across a product where, instead of adding more hardware features, they take them out?


In this day and age great hardware will only take you so far, especially since almost everything that's not super high end is a razor-thin market today. With services you can keep making money for years from loyal customers. Jeff Bezos said he would give Kindle Fires away if people would use and spend money on the services provided.

Furthermore, Ken's 60GB PlayStation was costing Sony $200+ per console sold, cost them billions of dollars and, outside of a few first-party games, didn't really provide a major graphical advantage over the 360, which was released a year earlier, whereas Kaz's Slim was making them money per console sold. Not sure you want another $200+ loss per console.


Great hardware is what made Sony's imaging and camera division one of the few success stories in Sony electronics at the moment. If Kaz Hirai were in charge of that department, they would have made pointless copycat cameras of Nikon and Canon, added some 'services' like YouTube and Twitter for differentiation, and called it a day.

And laughably, they would be using mediocre sensors from Samsung and Panasonic. Instead, Sony right now supplies sensors to everyone from Samsung to Apple to Olympus and to Canon themselves.

Ken's 60GB was expensive because they were using technologies that were new and not yet mature. Blu-ray and the Cell added a lot of cost, and it would have taken time to bring that down.
Ken actually wanted to add more RAM to the PS3 and make a lot of changes to the hardware specs, but that clown Stringer thought of the short term and basically said no.
Then the fools at Sony Worldwide Studios decided to dump millions of dollars into Lair and Heavenly Sword instead of into the publishers who gave them the edge last generation, and the rest, as they say, is history.


The PlayStation 4 will be the defining console for Sony in terms of where they are as a hardware company. The Vita was designed as per third-party wishes, according to Yoshida, and look where they are now. Nowhere. It's an uninspired handheld, released under Kaz Hirai, which proves my point that Kaz Hirai is oblivious to all things hardware-related.

Either they can learn the lessons Sony's imaging division learned, or they can continue down the path of cheap hardware with the services mantra and watch MS take them out of the console business for good. Either way, something will give next gen.
 

Ashes

Banned
Man, I saw a £135 PS3 front and centre at a Tesco today. Didn't realize PS3s these days were that cheap. There was a Wonderbook bundle for £199.

Felt sorry for the Wii U. I had to go look for it in the games aisle. And there was a stack of basic ones on the shelf.

If a 12GB PS3 costs only £135 to make, they must really be raking it in with the profit on the 500GB version.
 

Mindlog

Member
What are the odds that Microsoft is putting those binned 720 Frankenstein chips to use in the '361'? The early reports indicated a terrible yield (to be expected given the rumours). Might as well try to recycle something out of the refuse?
Look what happened when Sony decided to gimp certain parts of the PS3. Instead of having 512MB of RAM for both main memory and video RAM, they halved it. Five years later they still could not incorporate party chat because of insufficient RAM.
What would you remove from the system to add that additional RAM? The last thing. The very last thing that Sony needed for the PS3 was an additional billion-plus dollar outlay.
 

i-Lo

Member
What's the point of having talented first parties at Sony if they release a garbage console with weak specs? You can be the Einstein of game development, but if the hardware is not there, then all the software tricks on earth will make no difference.

All these OS features you keep banging on about and exclusives don't run on thin air. They need chips capable of running them.

Look what happened when Sony decided to gimp certain parts of the PS3. Instead of having 512MB of RAM for both main memory and video RAM, they halved it. Five years later they still could not incorporate party chat because of insufficient RAM. Think of how long the PS4 is going to last: another 7-10 years. Think of all the services you are going to add to the system and ask yourself if a weakly specced console is worth it in the long run.

I'll give 2/10 because:

  • Selective reading, and reiterating what I said about how market share may have dictated development back to me after deleting what I wrote.
  • We both agree unanimously that the PS4 should have powerful hardware to run both games and apps, both of which are fundamental to its success.

The only thing is you haven't explained what you mean by "generic hardware". Clearly you either don't know what it is you want in terms of specs and are expecting Sony to bring out the next best proprietary thing since a sharp object to slice bread, or you can't reconcile "power" and "standardization" of hardware in the same sentence.
 

leroidys

Member
Just a little information for all those people crying for more cores/threads and arguing that console A has x cores but console B only has y:

More cores/threads allow for a certain speedup due to parallelism. Assume we have a digital filter with the input stream: ... x(n) x(n-1) x(n-2) x(n-3) ...

For that we need the following calculation for the output stream: y(n) = x(n)*h0 + x(n-1)*h1 + x(n-2)*h2 + x(n-3)*h3 -> this is just an example (real-life) calculation and can of course be different. If we furthermore assume that a multiplication takes the same time as an addition (normally a multiplication takes longer), we can show how much time the calculation needs:

The sequential solution with 1 "core" needs:

* * * * + + + this results in 7 time units

The solution with a "dual-core" needs:

* * + +
* * +
this results in 4 time units spent

Now we use a "quad-core" that needs:

* + +
* +
*
*
which results in 3 time units.

So the speedup from 1 to 2 cores is 1.75, but from 2 to 4 cores it is only 1.33 - because we can't do everything in parallel. The maximum speedup is tied to the percentage of the work that can't be parallelized. So just adding more cores, stream processors, etc. doesn't automatically mean a big performance gain, but it can lead to difficulties because the code has to be adapted.

Just to elaborate on your excellent example a little bit: it's not actually going to work out quite like this, because each processor core is superscalar, and measuring the throughput of a CPU on something as small as 7 operations is pretty unrealistic in general and not very useful.

Also, (I believe) whatever register contains n will be smashed pretty hard, though I'm sure some higher levels of compiler optimization would take care of that.
 
Just to elaborate on your excellent example a little bit: it's not actually going to work out quite like this, because each processor core is superscalar, and measuring the throughput of a CPU on something as small as 7 operations is pretty unrealistic in general and not very useful.

Also, (I believe) whatever register contains n will be smashed pretty hard, though I'm sure some higher levels of compiler optimization would take care of that.

Thanks for the input. If the CPU has a superscalar pipeline there will be a certain speedup, but depending on the task you can lose a lot of time if the pipeline has to be discarded because your branch prediction didn't work as planned. In real applications you also have a bit more room to maneuver because of certain optimizations - everything a good scheduler does for you. I can't really tell how all that works out in a console, since I am more into microprocessors and real-time computing, but some approaches will probably be the same no matter whether it is an AMD APU or a DEC Alpha :)
 

mrklaw

MrArseFace
Yeah, this is what I've been hearing: two quad-core Xeons running at 1.6GHz, double-threaded.

The thing about Intel CPUs is that hyperthreading can be disabled; the fact that those threads are still there most likely means Microsoft is targeting a 16-thread console, and since I very much believe they are using a custom AMD CPU, this would mean 16 cores for the XB3.

Between Kinect, the OS, and possible background DVR functionality, all while gaming... they will need a hefty CPU for these tasks, and this is what the leaked PDF pointed to two years ago. It's also interesting because they planned to keep Xenon on board for backwards compatibility, which could technically be why this CPU is so custom; if they integrate it on an MCM or even on-die as AMD likes to do, I could see it being a very complicated, custom CPU. That last part is speculation based on an early rumor, but if they did do something like this, they could certainly have a monster CPU that AMD couldn't simply match for Sony just by throwing more cores at it.


If they have a new Kinect, it should have onboard processing to improve response times and reduce the bandwidth needed to transmit. But they may choose e.g. USB3 and put that processing on the console.

DVR and OS shouldn't need a monster CPU at all. I've said many times before that the PS3 has a DVR which works just fine using the existing OS-reserved SPU while playing games, etc. Modern DVRs are all bitstream copying, so there's no need to compress video; they're just storing files.

Even if they run a Surface-style Windows 8 based OS, that shouldn't need a huge processor for background tasks, and when it's in the foreground the game isn't running anyway, so you can use the full processor.

I just don't understand why you'd need a 16 thread CPU unless they're simply going for a CPU heavy architecture and the CPU will be used for everything other than graphics (whereas perhaps Sony are going with GPGPU for some of those processes)

We won't be able to know what that means for a power comparison until we know more about the specifics of each machine, but a 16-thread CPU doesn't automatically mean much faster than a 4- or 8-thread CPU with GPU backup.

just too many unknowns right now.
 

leroidys

Member
Thanks for the input. If the CPU has a superscalar pipeline there will be a certain speedup, but depending on the task you can lose a lot of time if the pipeline has to be discarded because your branch prediction didn't work as planned. In real applications you also have a bit more room to maneuver because of certain optimizations - everything a good scheduler does for you. I can't really tell how all that works out in a console, since I am more into microprocessors and real-time computing, but some approaches will probably be the same no matter whether it is an AMD APU or a DEC Alpha :)

Yeah I don't know anything about programming for consoles, so I'm just going off of what I was taught in general for modern multicore CPUs. I could be totally off. Thanks for all the explanations.

So what levels of performance are we realistically expecting for the PS4?

Toy story gfx
 

StevieP

Banned
Wow loads of replies. Where to start lol. Comments in italics



There's nothing fancy about Vita hardware. Bloated, slow software and the lack of PS1 games all stem from uninspired hardware specs.





What's the point of having talented first parties at Sony if they release a garbage console with weak specs? You can be the Einstein of game development, but if the hardware is not there, then all the software tricks on earth will make no difference.

All these OS features you keep banging on about and exclusives don't run on thin air. They need chips capable of running them.

Look what happened when Sony decided to gimp certain parts of the PS3. Instead of having 512MB of RAM for both main memory and video RAM, they halved it. Five years later they still could not incorporate party chat because of insufficient RAM. Think of how long the PS4 is going to last: another 7-10 years. Think of all the services you are going to add to the system and ask yourself if a weakly specced console is worth it in the long run.




Your games, applications, controllers, services, etc. all depend on hardware. Maybe you don't understand what I am saying, but I will say it again: you can be the greatest software engineer on earth, but if your tools and hardware are gimped and weak compared to the competition, then there is nothing you can do.







Ken's brain child had backwards compatibility, SACD, 4 USB ports and card readers. Kaz Hirai basically nerfed everything to the bare minimum. Where else on earth have you come across a product where, instead of adding more hardware features, they take them out?





Great hardware is what made Sony's imaging and camera division one of the few success stories in Sony electronics at the moment. If Kaz Hirai were in charge of that department, they would have made pointless copycat cameras of Nikon and Canon, added some 'services' like YouTube and Twitter for differentiation, and called it a day.

And laughably, they would be using mediocre sensors from Samsung and Panasonic. Instead, Sony right now supplies sensors to everyone from Samsung to Apple to Olympus and to Canon themselves.

Ken's 60GB was expensive because they were using technologies that were new and not yet mature. Blu-ray and the Cell added a lot of cost, and it would have taken time to bring that down.
Ken actually wanted to add more RAM to the PS3 and make a lot of changes to the hardware specs, but that clown Stringer thought of the short term and basically said no.
Then the fools at Sony Worldwide Studios decided to dump millions of dollars into Lair and Heavenly Sword instead of into the publishers who gave them the edge last generation, and the rest, as they say, is history.


The PlayStation 4 will be the defining console for Sony in terms of where they are as a hardware company. The Vita was designed as per third-party wishes, according to Yoshida, and look where they are now. Nowhere. It's an uninspired handheld, released under Kaz Hirai, which proves my point that Kaz Hirai is oblivious to all things hardware-related.

Either they can learn the lessons Sony's imaging division learned, or they can continue down the path of cheap hardware with the services mantra and watch MS take them out of the console business for good. Either way, something will give next gen.

No.

Next gen is about software, services, product differentiation (i.e. why you see Sony researching helmets and dual Moves, etc.) and ecosystem. Not internal system hardware.
 

Log4Girlz

Member
No.

Next gen is about software, services, product differentiation (i.e. why you see Sony researching helmets and dual Moves, etc.) and ecosystem. Not internal system hardware.

The most important thing is to have as close to a tablet or pc type experience as possible. The PS4 and Durango or whatever should do nearly anything these devices can do at a more reasonable price/performance ratio.
 

StevieP

Banned
The most important thing is to have as close to a tablet or pc type experience as possible. The PS4 and Durango or whatever should do nearly anything these devices can do at a more reasonable price/performance ratio.

Tablets have a *horrible* price/performance ratio (well, if we're talking about the higher-end/higher-selling ones) compared to most other traditional computing devices.

You're going to see a lot of focus on the non-gaming aspects of future consoles, though, and GAF is going to cry a lot about it.

Globox_82 said:
so is it safe to say that both next Box and Psquad will be losing money from day 1, on hardware that is?

Probably, but it's not certain. At all. At the very least, you'll probably see less loss. Not as little as the Wii U is likely losing per system, but probably nowhere near as much as the 360/PS3 were losing (something like $200-300 per console). Neither division is as eager to lose money on the boxes. Microsoft can afford it more easily, but if you read what bkilian has had to say (a confirmed Microsoft audio engineer in their console division) and others on this forum who have friends higher up in MS (Vinci), it goes something like this:

bkilian said:
No, see, when 360 launched, the XBox org was a "strategic bet" (Microsoft dumps tons of money into strategic bets - not all of them pan out). Now it's a profit center. It would be infeasible to reduce year over year profit growth. So selling hugely underpriced hardware now is going to be a tough sell.

But I wasn't referring to ancillary revenue. I was referring to direct hardware profits. The 360 launched with a roadmap to profitability using process shrinks and volume discounts. Its successor won't be so lucky. Process shrinks are getting harder to execute and energy efficiency is not linear with process size (much more leakage at smaller sizes).
Also, the customer focus has changed. People spend more time on 360 now consuming media than playing games. Sure, games are good, but what keeps that ancillary revenue coming in now is evenly split. You don't need a monster, power hungry, money losing superbox to provide streaming movies, and the games will adapt to the resources they have. A modest increase could be workable. Quadruple the memory, and even with no changes in CPU and GPU, the games would be significantly better.

If the rumors have any truth in them, both sides are aiming a lot lower this next generation than the previous one.

So I was not saying MS is not currently making money on its games business, I was just pointing out that your original statement overlooked the fact that the company may not be as willing to dump money into the ecosystem as it was last time around.

Vinci said:
Which would cause MS proper to split in half. Seriously, the Office, Windows, and Tools groups would be fucking furious if the X-Box division were to squander all of the money they're making for the company. They're already not especially fond of that group losing so much money.

The bkilian quote is more than 6 months old, so take that as you will.
 

Perkel

Banned
edit:
fuck it don't want to continue fanboy talk

edit2:

I wonder if they'll go with a multi-core GPU. There is no reason to stay with a single-core GPU.
 
GPUs are already (massively) multi-core in the sense that their design involves placing large amounts of execution hardware in parallel. Any kind of "Crossfire" or "SLI" setup just adds cost while being detrimental to performance. Given the choice, it's always better to use the silicon budget to create a single monolithic GPU than to have two that are half the size. Otherwise the term "multi-core" has no application in the expected technology. We aren't talking about embedded GPUs like SGX or Mali designs that are explicitly created to be modular and scalable, which is expressed in terms of GPU "cores". For AMD we already talk about how many CUs or "compute units" we might expect, which is more or less the same idea.
 

DBT85

Member
I'd like to see the next gen systems able to utilise the speed of an SSD properly unlike the PS3.

I'd happily take out whatever drive comes in the PS4 and put in a 250GB SSD if it got the related performance boost that it should get.

I wonder if asking if anyone else will do that is thread worthy.
 

StevieP

Banned
SSD still doesn't have the right cost/performance ratio (nor the reliability) required in consoles. And I say that as someone who owns about a dozen of them in my various PCs.
 
We have no idea.

Somewhere below or above the next XBox :)

My guess is slightly below.
What if the next-generation game console is the do-everything box that will replace the game console, PC, DVR, XTV set-top box, Blu-ray, DVD and music CD player, and serve it all to everything in the home? That "what if" is already answered by the leaked Xbox 720 PowerPoint: it's happening, and low-power modes are necessary.

Only the DVR/XTV set-top box functionality might be split off as a separate accessory or have plug-in accessory modules via cartridge, network or USB3. I suspect that both consoles will have HDMI pass-through as it adds little to the cost, but CableCARD and international differences in tuner standards make including tuners in the console impractical.

I realize the importance of a minimum spec for CPU and GPU, but I believe the big feature of next-generation game consoles is accessory support; this is also supported by both the Sony CTO and the leaked Xbox 720 PowerPoint. Edit: Beaten by SteveP above.

I believe every effort is being made to reduce the power usage in every mode by making good hardware choices. For instance: 2 hardware decoders and 1 encoder rather than using the CPU or GPU, and a DSP and specialized hardware to pre-process Kinect 2 or the depth camera. This also increases efficiency, which would allow background serving while playing a game.

There is also the fact that AMD is implementing the second-generation "Cell vision" of distributed processing with HSA Fusion and HSAIL. A fabric memory model and a next-generation, lower-latency network card (& WiFi Direct) in CE equipment and PCs would allow distributed processing and true sharing of resources on the home network. If I were Microsoft I would be including support for this in Windows 8 and the Xbox 720. If I were Sony I would be including support for this in the PS4 and Sony CE equipment (TVs, stereos, Blu-ray players, tablets) running under eLinux. This is another possibility for the Microsoft-Sony.com domain registration: both are to support this with the same standard.

We don't know what's to come, but for sure AMD is relying on game consoles to make HSA a standard. There are many legacy standards that have been carried forward that should be dropped; this might be the opportunity for a fresh start in game consoles, which DO NOT have to support legacy code.

HSA Fusion
Legacy X86 16 bit code dropped in favor of AMD64
Xwindows giving way to Wayland as the default in Linux/Unix
OpenCL instead of Direct Compute
OpenGL instead of OpenGLES or DirectX
WebGL being supported by Microsoft

From what I read, there is little chance of Microsoft dropping DirectX, but the W3C standard is OpenCL and OpenGL, so Microsoft has to support OpenCL with DirectCompute and may have to support WebGL with DirectX, which creates unnecessary overhead.
 

Log4Girlz

Member
Tablets have a *horrible* price/performance ratio (well, if we're talking about the higher-end/higher-selling ones) compared to most other traditional computing devices.

You're going to see a lot of focus on the non-gaming aspects of future consoles, though, and GAF is going to cry a lot about it.

Yeah they are, but people eat them up due to their ultra-convenient form factor. Consoles just have to maximize their value to people. Having a device that's only really good for 1 or maybe 2 functions other than gaming will be fucking death, especially since they take up precious room in an entertainment center. Future consoles need to be as good at running apps as they are at running games.
 

DBT85

Member
SSD still doesn't have the right cost/performance ratio (nor the reliability) required in consoles. And I say that as someone who owns about a dozen of them in my various PCs.

I don't mean for them to include one in the box. I mean to put one in myself.

The PS3 can use them but it gets very little performance boost from doing so. If the PS4 a) allows me to change the drive for my own and b) actually gets a noticeable boost from using a SSD then I'll swap it out in a heartbeat. I know I'm not the average consumer though.
 
GPUs are already (massively) multi-core in the sense that their design involves placing large amounts of execution hardware in parallel. Any kind of "Crossfire" or "SLI" setup just adds cost while being detrimental to performance. Given the choice, it's always better to use the silicon budget to create a single monolithic GPU than to have two that are half the size. Otherwise the term "multi-core" has no application in the expected technology. We aren't talking about embedded GPUs like SGX or Mali designs that are explicitly created to be modular and scalable, which is expressed in terms of GPU "cores". For AMD we already talk about how many CUs or "compute units" we might expect, which is more or less the same idea.
Is AMD going to allow or support power gating for parts of the GPU in the APU? We can already split the GPU into multiple compute units, and this is supported by GCN and OpenCL 1.2, but if a large GPU can only be all on or all off, it might require a small GPU in the APU and a discrete second GPU that could be turned off.

Edit: Wow, great NeoGAF page so far (including the following three at the time of edit).
 

Kenka

Member
So what levels of performance are we realistically expecting for the PS4?
My two cents:


[image: chart of possible PS4 performance levels]


The greener the more likely (to me).
 

dumbo

Member
If they have a new Kinect, it should have onboard processing to improve response times and reduce the bandwidth needed to transmit. But they may choose e.g. USB3 and put that processing on the console.

I think the Kinect "interpreter" would need to be programmable for each application, so putting it 'offboard' might end up more complex/expensive in the long term?

Even if they run a Surface-style Windows 8 based OS, that shouldn't need a huge processor for background tasks, and when it's in the foreground the game isn't running anyway, so you can use the full processor.

I think the original designs seemed to be for 2 processors - a 'big' processor for the console and a little ARM chip for low-power mode (aka 'set-top box mode').

Is it possible that this design simply evolved by "moving the little CPU" into the big CPU?

e.g. something like this:
Design A: 4-core, 3.2GHz processor + external dual-core ARM processor.
Design B: 8-core, 2.4GHz processor, with 2 cores reserved for set-top box mode.

The performance would seem "similar", but it's a much simpler design and I assume far cheaper to build.
 

Vol5

Member
I don't mean for them to include one in the box. I mean to put one in myself.

The PS3 can use them but it gets very little performance boost from doing so. If the PS4 a) allows me to change the drive for my own and b) actually gets a noticeable boost from using a SSD then I'll swap it out in a heartbeat. I know I'm not the average consumer though.

If the PS4 utilises USB3, it's fair to say it has a 6Gb/s SATA setup, meaning, hopefully, that putting an SSD into it will give a significant performance boost when reading from the drive. A deciding factor would be the SSD itself, IOPS specifically.
 

Log4Girlz

Member
What's preventing Sony from throwing a modified Android OS in this new machine and doing everything in their power to create a PC-lite in terms of functionality?
 