DF: Xbone Specs/Tech Analysis: GPU 33% less powerful than PS4

I don't think you understand the point I'm trying to make. Did people enjoy DMC any less because it didn't look as good as Ninja Gaiden? I don't believe so, at least I didn't and I'm sure the same can be said for many here at GAF.

However, now a smaller gap is suddenly a big deal to many here, and I'm sure it'll affect their opinion on Xbone games even though they could still be quality games.

You're telling me this should be considered normal?
Uh, welcome to NeoGAF, I guess? Where graphics pissing matches are frequent, and form their own metagame. Gameplay is irrelevant in such discussions.
 
I don't think you understand the point I'm trying to make. Did people enjoy DMC any less because it didn't look as good as Ninja Gaiden? I don't believe so, at least I didn't and I'm sure the same can be said for many here at GAF.
People enjoyed Madden on the PS3 less, since it looked and ran worse than the 360 version.
 
Doesn't change the fact that people enjoyed playing games on the PS2 despite the performance gap.

Simply put the games were on the PS2, it was a partial monopoly. It's not like the PS2 and Xbox came out at the same time for the same price with the same games and people chose the PS2.
 
'Sigh', so is this what we are going to get for the next 10 years from some of you? If the Xbox One happens to have a better looking/running version of a game, then it's purely down to corruption by developers, or it's down to aliens or something?

I guess so? What are you not understanding here? Other than memory setup (which is clearly superior on the PS4) these two consoles have basically identical architectures, but with the PS4 having more GPU power and higher memory bandwidth. It's simply more powerful in some ways and less powerful in no ways at all.

So no, there should be no real reason for a game to ever run worse on the PS4, unless it instead looks better. If it looks the same but still runs worse, then the devs must have purposefully gimped it. Or Sony's dev tools are absolute shit, but from what we've been hearing they are actually great.
 
I guess so? What are you not understanding here? Other than memory setup (which is clearly superior on the PS4) these two consoles have basically identical architectures, but with the PS4 having more GPU power and higher memory bandwidth. It's simply more powerful in some ways and less powerful in no ways at all.

So no, there should be no real reason for a game to ever run worse on the PS4, unless it instead looks better. If it looks the same but still runs worse, then the devs must have purposefully gimped it. Or Sony's dev tools are absolute shit, but from what we've been hearing they are actually great.

But you have to consider that easy-to-program-for cloud computing offloading; it's more than a secret sauce.
 
Yeah, but the CPUs on these systems aren't exactly beefy, and require good parallelism. If the quad-core 3.2GHz CPUs had made it, I might be more optimistic about 60fps, at least in the short term.

Longer term, as devs get used to the CPUs - and also perhaps leverage GPGPU more - CPU and general processing may become less of a hold-back on framerate.

I am making an assumption that the CPU is the thing holding things back, but it's just a hunch.


My point was that an OS like Windows is usually bloated with services that take CPU power. Just go to Services in msconfig and look at all the things that won't be needed on a console OS, and that's not even the full list of what runs in the background. It's not a big percentage of total CPU power, but those services still need power and time to work. It won't turn a Jaguar into an i5, but comparing it to a normal Jaguar on a Windows OS, especially the 4-core version, is somewhat far from the truth.
 
But you have to consider that easy-to-program-for cloud computing offloading; it's more than a secret sauce.

Dammit, forgot about that. You're right, MS is clearly gonna have a GTX Titan running in the cloud for each and every XBO sold, and there will be no latency as all the data gets sent back and forth, and the graphics will be amazing.
 
Uh, welcome to NeoGAF, I guess? Where graphics pissing matches are frequent, and form their own metagame. Gameplay is irrelevant in such discussions.

I know, but I'll still chuckle at the silliness of it all.

People enjoyed Madden on the PS3 less, since it looked and ran worse than the 360 version.

That's understandable, but there have been many, many other multi-plat games that looked and ran worse and that people still accepted on both the PS3 and 360.

Simply put the games were on the PS2, it was a partial monopoly. It's not like the PS2 and Xbox came out at the same time for the same price with the same games and people chose the PS2.

You can try to justify this change in mentality, but it means nothing when it comes to exclusives.

Basically the gap in performance and different approaches are worth discussing, but that's hardly what's happening here in this thread. Instead it's mostly corporate alliance filled hyperbole. I thought Brashnir summed it up pretty well actually.
 
Basically the gap in performance and different approaches are worth discussing, but that's hardly what's happening here in this thread. Instead it's mostly corporate alliance filled hyperbole. I thought Brashnir summed it up pretty well actually.

If in doubt, fall back on the fanboy label. One might argue you have a pretty strong alliance yourself, and it explains your defensiveness to the recent "hyperbole".
 
Xav de Matos on the new Joystiq podcast.


"the graphical capabilities are the same on the systems, except the ability to power that memory and speed up to next process is faster on the PS4".

"there have been no developers that have come out yet, to say that there are limitations vs one system's GPU to the next. It appears that the games themselves will be sort of similar in that vein, other than the DDR is faster".

The gaming media taking their notes from Microsoft PR.
 
Basically the gap in performance and different approaches are worth discussing, but that's hardly what's happening here in this thread. Instead it's mostly corporate alliance filled hyperbole. I thought Brashnir summed it up pretty well actually.
You mean "allegiance".

I can sympathize about the "cult of GDDR5", I had my own run-in with them. But it's also true that the situation between PS4 and XB1 seems rather clear-cut. Well, if you understand grade school math; otherwise you need to spend 8 pages discussing percentages.
 
If in doubt, fall back on the fanboy label. One might argue you have a pretty strong alliance yourself, and it explains your defensiveness to the recent "hyperbole".

I'm not defensive at all, just pointing out the silliness, hoping one day it'll change.

For someone with such a strong alliance, it's odd how much I've been bashing the Xbone and looking forward to the PS4, but hey maybe it's all just a part of my imagination. I just never saw the point of trolling and never will.

You mean "allegiance".

I can sympathize about the "cult of GDDR5", I had my own run-in with them. But it's also true that the situation between PS4 and XB1 seems rather clear-cut. Well, if you understand grade school math.

I thought alliance would work too, but you're correct, thanks for that. =)

Also, I don't disagree that the difference is clear cut, never have. I just wish there was less of that influence in the discussion, is all. However, I'm done, as I see now I'm just taking the thread OT, which wasn't my intent with my smart-ass comment.
 
To put this into non-hyperbolic terms: the GTX 780 is rumored to be 27% faster than the GTX 680. The PS4 is 50% faster than the Xbone.

This means the PS4 is two GPU generations more advanced than the Xbone.
 
To put this into non-hyperbolic terms: the GTX 780 is rumored to be 27% faster than the GTX 680. The PS4 is 50% faster than the Xbone.

This means the PS4 is two GPU generations more advanced than the Xbone.
I doubt you'd find many people who would agree that the 7** series of cards is really a "new generation".
 
If anyone is really curious how much difference 50% more power makes, there's a perfect example already on the console market.

Anyone care to venture a guess what it is?
 
Xav de Matos on the new Joystiq podcast.


"the graphical capabilities are the same on the systems, except the ability to power that memory and speed up to next process is faster on the PS4".

"there have been no developers that have come out yet, to say that there are limitations vs one system's GPU to the next. It appears that the games themselves will be sort of similar in that vein, other than the DDR is faster".

The gaming media taking their notes from Microsoft PR.

Basically the gap in performance and different approaches are worth discussing, but that's hardly what's happening here in this thread. Instead it's mostly corporate alliance filled hyperbole. I thought Brashnir summed it up pretty well actually.

I think MS promoters try to make the 2 systems seem identical.

And you seem to agree that they are liars, so?
 
"there have been no developers that have come out yet, to say that there are limitations vs one system's GPU to the next."
I guess Sony didn't get the memo that there are no limitations in 1.2 TF vs. 1.8 TF. Could've saved a bunch of money.
 
I understand that, but a lot of the GAF trumpeting is just fanboy drivel from people who don't understand what any of it really means, just like it was in 2005/6. In fact, people who were saying exactly what you're saying here about PS3/360 were shouted down by these morons and their cult of Cell back then, until the results proved them wrong.

People who know how this stuff actually works can see the clear and obvious advantages of the PS4's design and components over the X1's and what it really means. There are, however, a lot of misinformed people out there who are just in the cult of GDDR5.

Completely agree, we're on the same page then. A lot of people just repeat talking points and don't realize what the differences mean. I'm really excited that they're both on the same architecture this time around.


schennmu said:
To put this into non-hyperbolic terms: the GTX 780 is rumored to be 27% faster than the GTX 680. The PS4 is 50% faster than the Xbone.

This means the PS4 is two GPU generations more advanced than the Xbone.

No, it means that the PS4 GPU is roughly 50% more powerful than the Xbone GPU. They're same generation GPUs.
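To spell the arithmetic out, using the commonly cited (and still partly rumored) raw figures of roughly 1.84 TFLOPS for the PS4 GPU and roughly 1.23 TFLOPS for the Xbone GPU:

1.84 / 1.23 ≈ 1.50, i.e. the PS4 GPU is about 50% faster than the Xbone's.
1.23 / 1.84 ≈ 0.67, i.e. the Xbone GPU is about 33% slower than the PS4's, which is where the thread title's number comes from.

Same gap, just measured from the other end.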
 
Uh, welcome to NeoGAF, I guess? Where graphics pissing matches are frequent, and form their own metagame. Gameplay is irrelevant in such discussions.
Two games. Both cost $60. Both have identical gameplay. Both have good controllers equally well suited to the genre. One looks better. Which would you buy?

Further extrapolating that, if one of two systems tends to have the better looking version, would that not be a selling point for that system?

We are talking about situations where, more often than not, all else is quite likely equal other than graphics.

So unless you think GTA6 is going to have better gameplay on one system than the other, then yeah, gameplay is kind of irrelevant here.
 
Correct. The Wii is a GameCube clocked 50% higher. The difference affects both CPU and GPU, but the CPUs in X1 and PS4 are largely inconsequential.
That's not a totally fair comparison. Hardly any games pushed the Wii.

Two games. Both cost $60. Both have identical gameplay. Both have good controllers equally well suited to the genre. One looks better. Which would you buy?

Further extrapolating that, if one of two systems tends to have the better looking version, would that not be a selling point for that system?
What are the chances of MS moneyhatting exclusive DLC, though? And people actually caring?
 
I was not completely serious with this comparison as you might have noticed. Still interesting IMO.

How big was the jump from 500 series to Kepler?
Around 30% when you look at 580 to 680. But that transition was when NV started selling their mid-level chips as high-end. And 580 was to 480 as 780 is to 680. The last "real" generational upgrade (IMHO of course) at NV was 285 to 480, and that was 70%.
 
I guess Sony didn't get the memo that there are no limitations in 1.2 TF vs. 1.8 TF. Could've saved a bunch of money.

Like I said before, anyone that "forgets" to mention the GPUs in console comparisons either doesn't understand gfx tech or is on the MS payroll.
 
Correct. The Wii is a GameCube clocked 50% higher. The difference affects both CPU and GPU, but the CPUs in X1 and PS4 are largely inconsequential.

Have we ever received any figures on how many cores the OS takes up on each CPU? IIRC the Xbone was rumored to be 2 of the 8 cores, while the PS4 was 1, I think. That could make a difference as well.
 
My point was that an OS like Windows is usually bloated with services that take CPU power. Just go to Services in msconfig and look at all the things that won't be needed on a console OS, and that's not even the full list of what runs in the background. It's not a big percentage of total CPU power, but those services still need power and time to work. It won't turn a Jaguar into an i5, but comparing it to a normal Jaguar on a Windows OS, especially the 4-core version, is somewhat far from the truth.

No, Windows services use barely any CPU time at all! They do not slow Windows down anywhere near enough for an 8-core Jaguar CPU to beat a 4-core i5!

P.S. If you look at the damn benchmark results, you will see that the i5 the 4-core Jaguar is being compared with is only a low-clock-speed 2-core version!
 
Basically the gap in performance and different approaches are worth discussing, but that's hardly what's happening here in this thread. Instead it's mostly corporate alliance filled hyperbole. I thought Brashnir summed it up pretty well actually.
There isn't much to not understand though -- referring to the post you linked. The PS4 is simply more powerful across the board, so there's no reason why just about every game shouldn't either look or run better on that system.

Graphics will and have always been a big deal in places like this. Especially now when these two systems are releasing around the same time and Microsoft's focus is shifting more towards entertainment while being far behind Sony in the 1st party department.

It makes gamers wonder why they shouldn't even bother with an XBOne. So naturally there's going to be a little hyperbole.
 
What is the Cult of GDDR5? Are they the people saying the PS4 is faster than a high end GPU because it has more memory?

They're the idiots who have run around the past 2 months shouting about 8GB GDDR5 while knowing absolutely nothing about what it means, just like the idiots who went on and on about Cell in 2005/6.

They're not difficult to spot. They shout memes because it's all they know.
 
Can someone explain the meaning of "coding to the metal"?

As I understand it, that means writing code that talks directly to the processor without any middleware/engine to consider. Is this correct? Do games coded down to the metal not have an engine to speak of? What games have been coded to the metal?

It's always intrigued me, but I've never had a full understanding of the concept.
 
That's not a totally fair comparison. Hardly any games pushed the Wii.

What are the chances of MS moneyhatting exclusive DLC, though? And people actually caring?

Probably not significantly dissimilar to the chances of Sony doing the same. Which they have done on PS3, and which they've already announced will be happening for games on the PS4 like Destiny.
 
You mean "allegiance".

I can sympathize about the "cult of GDDR5", I had my own run-in with them. But it's also true that the situation between PS4 and XB1 seems rather clear-cut. Well, if you understand grade school math; otherwise you need to spend 8 pages discussing percentages.

Well, it's not entirely clear cut. We still need to find out the specific capabilities and advantages of the ESRAM, which is completely different from the EDRAM in the 360. I've been reading on B3D that it might convey specific pathing and latency advantages to the One's GPU. So while the bandwidth may not be as high as GDDR5, it can still do some things*

*Note: no clue what these may be or how significant they may be.
 
Can someone explain the meaning of "coding to the metal"?

As I understand it, that means writing code that talks directly to the processor without any middleware/engine to consider. Is this correct? Do games coded down to the metal not have an engine to speak of? What games have been coded to the metal?

It's always intrigued me, but I've never had a full understanding of the concept.

It can be better to code that deep into a chip to get extra power or better performance, but it completely breaks any future chipset support for that code if things are changed too much. Can't think of a better way to explain it as I'm in a rush.

Coding to the metal kind of means that any games on the Xbone will not be BC on their next console unless they keep the same chipsets and just raise the power.
 
Well, it's not entirely clear cut. We still need to find out the specific capabilities and advantages of the ESRAM, which is completely different from the EDRAM in the 360. I've been reading on B3D that it might convey specific pathing and latency advantages to the One's GPU. So while the bandwidth may not be as high as GDDR5, it can still do some things*

*Note: no clue what these may be or how significant they may be.
Yeah, I actually posted about this earlier in the thread. If you want to pointer-chase in an unorganized 30 MB data structure then clearly XB1 is your system. There could well be an order of magnitude performance difference.
 
Yeah, I actually posted about this earlier in the thread. If you want to pointer-chase in an unorganized 30 MB data structure then clearly XB1 is your system. There could well be an order of magnitude performance difference.
Do you know of any real world examples where 32MB of low latency storage would make an impact?
 
It can be better to code that deep into a chip to get extra power or better performance, but it completely breaks any future chipset support for that code if things are changed too much. Can't think of a better way to explain it as I'm in a rush.

Coding to the metal kind of means that any games on the Xbone will not be BC on their next console unless they keep the same chipsets and just raise the power.
So it's a way of reducing the overhead of processor cycles by cutting out the middle man (the middleman being a set of dev tools or something similar) I guess?

Are there any examples of this? I'd imagine it'd probably be the sort of thing only accomplished by quality first party devs with an in depth understanding of the architecture right?
 
Well, it's not entirely clear cut. We still need to find out the specific capabilities and advantages of the ESRAM, which is completely different from the EDRAM in the 360. I've been reading on B3D that it might convey specific pathing and latency advantages to the One's GPU. So while the bandwidth may not be as high as GDDR5, it can still do some things*

*Note: no clue what these may be or how significant they may be.

It will help some in the bandwidth department, and will allow a low-latency local store on the GPU for a lot of tasks. It's a useful thing both for its bandwidth and physical proximity to the GPU. (when you get up to modern GPU speeds, the speed of light becomes an issue when moving data)

In the end, though, it's a very very tiny advantage sitting next to a much larger disadvantage. Even with it, the X1's GPU will probably be missing cycles due to data starvation more often than the one in the PS4.

Clocks scale 100% with performance... oh this is awesomesauce

Theoretical FLOPS scale 100% with clock.
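For reference, the peak number everyone keeps quoting is just a paper formula (assuming the rumored GCN-style shader counts of 768 ALUs for the Xbone and 1152 for the PS4, both at 800 MHz):

peak FLOPS = ALU count x 2 ops per cycle (fused multiply-add) x clock

768 x 2 x 0.8 GHz ≈ 1.23 TFLOPS, and 1152 x 2 x 0.8 GHz ≈ 1.84 TFLOPS. So yes, the theoretical figure scales linearly with clock by definition; it says nothing about how real games scale.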
 
They're the idiots who have run around the past 2 months shouting about 8GB GDDR5 while knowing absolutely nothing about what it means, just like the idiots who went on and on about Cell in 2005/6.

They're not difficult to spot. They shout memes because it's all they know.
Is it more, faster RAM or is it not?

This isn't like Cell where there's this unknown hidden potential.
 
So it's a way of reducing the overhead of processor cycles by cutting out the middle man (the middleman being a set of dev tools or something similar) I guess?

Are there any examples of this? I'd imagine it'd probably be the sort of thing only accomplished by quality first party devs with an in depth understanding of the architecture right?

Pretty sure Naughty Dog codes in this way.
 
Do you know of any real world examples where 32MB of low latency storage would make an impact?
There are some. I mean that's why we have caches in CPUs. I don't believe that they are significant enough in terms of the frame time they consume in the vast majority of games to make up for all the other apparent performance disadvantages of the platform though. And of course, it's not like you get any advantage out of that memory pool "for free". You have to put in work to use it, a bit like the SPEs in Cell (though not as extreme).
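For anyone wondering what "latency-bound" looks like in practice, here's a minimal C sketch of the pointer-chasing pattern being described (purely illustrative; the struct and function names are made up):

#include <stddef.h>

/* A linked list whose nodes are scattered around memory. */
struct node {
    struct node *next;  /* each load depends on the previous one */
    int payload;
};

/* Walks the list. Every iteration stalls for the full latency of one
 * dependent load, so a small low-latency pool (or a cache hit) helps a lot,
 * while raw bandwidth barely matters. Streaming over a flat array is the
 * opposite case: bandwidth-bound, with latency mostly hidden. */
int walk(const struct node *head)
{
    int sum = 0;
    for (const struct node *p = head; p != NULL; p = p->next)
        sum += p->payload;
    return sum;
}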
 
They're the idiots who have run around the past 2 months shouting about 8GB GDDR5 while knowing absolutely nothing about what it means, just like the idiots who went on and on about Cell in 2005/6.

They're not difficult to spot. They shout memes because it's all they know.

There's nothing idiotic about it. I remember when the rumors about these machines started flying around, GAF experts were suggesting we would be lucky to get 2 GB of RAM.

The jump from 2 GB to 8 GB was amazing, and it helps devs....
 
If anything is going to be the Cell of this generation, it's going to be "Cloud Processing". There is some hyperbole with the GDDR5 too, though.
 
So it's a way of reducing the overhead of processor cycles by cutting out the middle man (the middleman being a set of dev tools or something similar) I guess?

Are there any examples of this? I'd imagine it'd probably be the sort of thing only accomplished by quality first party devs with an in depth understanding of the architecture right?

It's more like a layer.

Actual hardware on the bottom, with layers in between. Those layers help make coding/porting easier, but the more layers there are, the more performance is lost. It's why Carmack bitches all the time about PC performance being lost to the OS and driver overhead. Consoles have fewer layers in general, and since everyone has the same CPU/GPU/RAM, devs can code for a specific configuration. I would love to see what a game would look/run like if it was coded for my PC specifically, but then it wouldn't run on anyone else's machine (assuming their hardware wasn't identical to mine).

I was a bit too young at the time, but I think back when PCs (aka microcomputers) were coming around, games were coded more to their specific platforms (Tandy, Spectrum, Apple II, etc.), because they pretty much had to squeeze as much as they could from the hardware. A lot more bloat going around now that PCs just brute-force through. (MOOOAR RAM! CORES!! etc)
 
It makes gamers wonder why they shouldn't even bother with an XBOne. So naturally there's going to be a little hyperbole.

I really want to move on from this discussion, but before the 24 hour authentication bullshit surfaced, I thought the point of still bothering with the Xbone was because there will still be good exclusive games on the system, regardless of having less performance. Basically I would have used the PS4 or PC for multi-plat titles + exclusives while using the Xbone for exclusives alone.

So it's a way of reducing the overhead of processor cycles by cutting out the middle man (the middleman being a set of dev tools or something similar) I guess?

Are there any examples of this? I'd imagine it'd probably be the sort of thing only accomplished by quality first party devs with an in depth understanding of the architecture right?

Programming to the metal is mostly a myth. It's used on both current gen systems by both multi-platform and 1st party developers but not to the extent that many think. For the majority of game code, programmers will be using some form of an API depending on what platform they are coding for. From what I understand, the APIs are tailored to each system to such an extent that there would be minimal advantage to coding to the metal versus an API. It wouldn't be worth the additional time necessary at least.

Here's a thread at B3D that covers the subject:

http://forum.beyond3d.com/showthread.php?t=62049&highlight=programming+metal
 
Can someone explain the meaning of "coding to the metal"?

As I understand it, that means writing code that talks directly to the processor without any middleware/engine to consider. Is this correct? Do games coded down to the metal not have an engine to speak of? What games have been coded to the metal?

It's always intrigued me, but I've never had a full understanding of the concept.


It means they can program the hardware directly, without an API like DirectX.

An API is simply easier to work with, and it covers all kinds of hardware so you do not need to program for each different configuration. But because an API needs to cover all hardware, it also means it doesn't use every single piece of silicon to its full extent.

Consoles do not have different hardware parts in every box, so the API can be very thin and very fast because it is created for only one hardware spec. But an API is still an API: it is created to help all developers with their work, not for a single project which may want to use the hardware differently.

A good example of coding to the metal is GOW3's MLAA. They had used normal 2x MSAA on the GPU, but they created MLAA to run on the SPUs, and thanks to that they got better and faster AA. They did something that was not in the API; they programmed the hardware directly.

Another is Naughty Dog and how they used the PS1 chip in the PS2 to help render their games.
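To make the API vs. metal distinction a bit more concrete, here's a tiny hypothetical C sketch; the packet format, opcode, and function names are all invented for illustration and aren't from any real SDK or driver:

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256
static uint32_t ring[RING_SIZE];  /* stand-in for a GPU command ring buffer */
static unsigned ring_head = 0;

/* The "API" path: a generic call. The driver/runtime decides how the command
 * is actually encoded and submitted for whatever GPU is installed; portable,
 * but it's the extra layer the posts above are talking about. */
static void api_draw(uint32_t index_count)
{
    printf("api_draw(%u): driver builds and submits the packet for us\n",
           (unsigned)index_count);
}

/* The "to the metal" path: the game writes the packet words itself, in the
 * exact format this one chip expects. No layer in the way, but tied to this
 * exact hardware, which is the backward-compatibility problem mentioned above. */
static void metal_draw(uint32_t index_count)
{
    ring[ring_head++ % RING_SIZE] = 0xC0DE0001u;  /* made-up "draw" opcode */
    ring[ring_head++ % RING_SIZE] = index_count;  /* made-up operand layout */
}

int main(void)
{
    api_draw(3);
    metal_draw(3);
    return 0;
}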
 