DF: Xbone Specs/Tech Analysis: GPU 33% less powerful than PS4

I think people are smoking crack if they think games are suddenly going to take advantage of this "cloud" for processes necessary to actually run the game. Developers need to develop the game for the hardware they are guaranteed to have. Since you only have to connect to the internet once every 24 hours, the cloud is not guaranteed hardware, and therefore it cannot be counted as part of what you will have available when developing your game.

Unless it's some sort of MMO or "SimCity" style forced server connection to play your game, this won't be a thing.
 
I could see something like adaptive AI being used in that context: low bandwidth, high impact on the end-user experience.

Also ads.

There is nothing to stop Sony doing the same; the fact that they have Gaikai means they could offer the exact same cloud rendering if it becomes the fairy dust MS seems to be alluding to it becoming.
 
There was a 45-minute architecture conference with the four main engineers and Dan Greenawalt of T10.

They spoke extensively about it and I watched the whole thing.

Dan kept saying the machine will just get more powerful over time and that they will find more and more innovative ways to render things on the cloud (non-essential, low-latency), and that it is going to be exciting... he was thrilled and not just talking PR from what I could see... they are working on things now.

It was hosted by Major Nelson and given to the press, but the Twitch video has been removed from his site. I am still looking to see if it pops up again.

I think it might still be available via giant bomb's twitch account from their live coverage.
 
A good Gizmodo article also mentions the cloud (and touches on the whole console too):

http://gizmodo.com/xbox-one-all-the-nerdy-details-you-dont-know-yet-509381624

"You know how in Skyrim sometimes you can look at a specific part of a specific wall and your framerate will randomly dip down into the afterlife? That workload (which is probably a silly mistake, but still) would probably be shifted off to some Microsoft server, and never make it to your Xbox."

I don't think they know what they're talking about. If you get a framerate drop looking at nothing but a wall, chances are that wall has a fancy shader, your device doesn't like rendering every single pixel with that fancy shader, and you run into fillrate issues. Or are they talking about something else (I played Skyrim but don't remember noticing what they describe)? Because drawing your screen still happens on the Xbox One itself, and it's probably the last thing they'll offload to the cloud.

Also, I don't see how they can claim XBone/PS4 specs are nearly identical.
 
Percentages, how do they work.

A being 33% less than B is the same as B being 50% more than A.

This has been pointed out to you numerous times, but despite this basic mathematics lesson, you seem intent on remaining deliberately obtuse.

The Xbox One GPU is 250% more powerful than the one in the Wii U. Or the Wii U is 70% less powerful, if you prefer.

(numbers for WiiU unconfirmed)

Edit: oh god I called out people for not knowing math and I made a basic mistake.

What? What am I reading? How is this possible?
 
What? What am I reading? How is this possible?

In terms of some metric, imagine the Xbox One scores 100 (100 is also identical to 100% of the Xbox One's performance). The Wii U scores 30. If something was 100% more powerful than the Wii U, it would score a 60. If something was 200% more powerful than the Wii U, it would score a 90. A score of 105 would be 250% more powerful than the Wii U.

If the Xbox One scores a 100 on this imaginary benchmark and the Wii U scores a 30, the Wii U is 70% less powerful than the Xbox One.

I don't think they know what they're talking about.

Ding ding ding.
 
It's amazing Sony was able to keep things locked up tight, and also get everything out there announced way in advance compared to MS. They ended up with the better hardware. Multiplatform titles should at the very least be identical, but more likely better looking/playing on the PS4, and their first party devs are going to show a real difference.
 
Am I the weird one in thinking the lack of a huge graphical leap could be good in keeping costs low and forcing developers to come up with interesting game ideas rather than spend years on a new engine?

I dunno, maybe I'm just backwards.
If there isn't a graphical leap, then what is the incentive for getting a new console?
 
And when we know that Sony has magic ninjas at Santa Monica and at Naughty Dog... the fucking gap is going to get huge on Sony exclusive games, I think.

I saw what I needed to see with Killzone Shadow Fall. And coincidentally, that's the one game they did a real gameplay walkthrough of.

[GIF: ibkLh2II7UZlqe.gif (Killzone Shadow Fall gameplay)]


I mean, Jesus, just look at that o_O
 
1152/768 = 1.5, or 1152 is 50% more than 768.

768/1152 ≈ 0.67, or 768 is 33% less than 1152.

Math... trippin' ballz, yo.
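
To keep both directions of that comparison straight, here is a minimal Python sketch using the 1152/768 shader counts above as example inputs (the helper names are purely illustrative):

[code]
# Minimal sketch of the "X% more" vs. "Y% less" relationship,
# using the 1152/768 shader counts above as example inputs.

def percent_more(a, b):
    """How much bigger a is than b, as a percentage of b."""
    return (a / b - 1) * 100

def percent_less(a, b):
    """How much smaller a is than b, as a percentage of b."""
    return (1 - a / b) * 100

ps4_shaders, xb1_shaders = 1152, 768
print(percent_more(ps4_shaders, xb1_shaders))  # 50.0  -> PS4 is 50% more
print(percent_less(xb1_shaders, ps4_shaders))  # 33.3  -> XBone is ~33% less
[/code]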

In terms of some metric, imagine the Xbox One scores 100 (100 is also identical to 100% of the Xbox One's performance). The Wii U scores 30. If something was 100% more powerful than the Wii U, it would score a 60. If something was 200% more powerful than the Wii U, it would score a 90. A score of 105 would be 250% more powerful than the Wii U.

If the Xbox One scores a 100 on this imaginary benchmark and the Wii U scores a 30, the Wii U is 70% less powerful than the Xbox One.



Ding ding ding.

Holy shit. Mind blown!
 
In terms of some metric, imagine the Xbox One scores 100 (100 is also identical to 100% of the Xbox One's performance). The Wii U scores 30. If something was 100% more powerful than the Wii U, it would score a 60. If something was 200% more powerful than the Wii U, it would score a 90. A score of 105 would be 250% more powerful than the Wii U.

If the Xbox One scores a 100 on this imaginary benchmark and the Wii U scores a 30, the Wii U is 70% less powerful than the Xbox One.

You magnificent bastard. I have read the last few pages of the thread (not the entire thread), I'm wasted, and this is the first time I understood what the fuck was going on with the numbers. I don't do math well sober; drunk, it's a joke. :)

Thanks for explaining how the numbers are being compared / contrasted here.
 
Let's put it that way: the PS4 GPU has two more 360s in it than the Istilldon'tknowwhattocallitbox.

That isn't a huge difference? Games will be MORE similar performance-wise than ever because the architecture is similar? Which basically allows for easy improvements for the stronger platform? But that difference won't be there, or MS will force parity?

Every. Goddamn. Thread. After the 1.2 TFLOPS number showed up.

Accept that the XBone is weaker, that the PS4 is still not the jump we needed according to Epic's "Samaritan needs 10x 360 performance for 1080p", and hope that there'll always be ways to fake many effects.
 
Let's put it that way: the PS4 GPU has two more 360s in it than the Istilldon'tknowwhattocallitbox.

That isn't a huge difference? Games will be MORE similar performance-wise than ever because the architecture is similar? Which basically allows for easy improvements for the stronger platform? But that difference won't be there, or MS will force parity?

Every. Goddamn. Thread. After the 1.2 TFLOPS number showed up.

Accept that the XBone is weaker, that the PS4 is still not the jump we needed according to Epic's "Samaritan needs 10x 360 performance for 1080p", and hope that there'll always be ways to fake many effects.

Samaritan was produced yonks ago, and we have already seen other tech demos running on PS4 that look better to me, e.g. Agni and Deep Down.
 
Can someone explain the "8GB DDR3 + 32MB eSRAM" thing to me, please?

Anand has a pretty good description here:

http://anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4/3

Game developers can look forward to the same amount of storage per disc, and relatively similar amounts of storage in main memory. That’s the good news.

The bad news is the two wildly different approaches to memory subsystems. Sony’s approach with the PS4 SoC was to use a 256-bit wide GDDR5 memory interface running somewhere around a 5.5GHz datarate, delivering peak memory bandwidth of 176GB/s. That’s roughly the amount of memory bandwidth we’ve come to expect from a $300 GPU, and great news for the console.

Die size dictates memory interface width, so the 256-bit interface remains but Microsoft chose to go for DDR3 memory instead. A look at Wired’s excellent high-res teardown photo of the motherboard reveals Micron DDR3-2133 DRAM on board (16 x 16-bit DDR3 devices to be exact). A little math gives us 68.3GB/s of bandwidth to system memory.

To make up for the gap, Microsoft added embedded SRAM on die (not eDRAM, less area efficient but lower latency and doesn't need refreshing). All information points to 32MB of 6T-SRAM, or roughly 1.6 billion transistors for this memory. It’s not immediately clear whether or not this is a true cache or software managed memory. I’d hope for the former but it’s quite possible that it isn’t. At 32MB the ESRAM is more than enough for frame buffer storage, indicating that Microsoft expects developers to use it to offload requests from the system memory bus. Game console makers (Microsoft included) have often used large high speed memories to get around memory bandwidth limitations, so this is no different. Although 32MB doesn’t sound like much, if it is indeed used as a cache (with the frame buffer kept in main memory) it’s actually enough to have a substantial hit rate in current workloads (although there’s not much room for growth).


Vgleaks has a wealth of info, likely supplied from game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total bandwidth). The combination of the two plus the CPU-GPU connection at 30GB/s is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on GPU memory bandwidth requests which will give the GPU much more bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn’t managed as a cache however, this all gets much more complicated.

There are merits to both approaches. Sony has the most present-day-GPU-centric approach to its memory subsystem: give the GPU a wide and fast GDDR5 interface and call it a day. It’s well understood and simple to manage. The downsides? High speed GDDR5 isn’t the most power efficient, and Sony is now married to a more costly memory technology for the life of the PlayStation 4.

Microsoft’s approach leaves some questions about implementation, and is potentially more complex to deal with depending on that implementation. Microsoft specifically called out its 8GB of memory as being “power friendly”, a nod to the lower power operation of DDR3-2133 compared to 5.5GHz GDDR5 used in the PS4. There are also cost benefits. DDR3 is presently cheaper than GDDR5 and that gap should remain over time (although 2133MHz DDR3 is by no means the cheapest available). The 32MB of embedded SRAM is costly, but SRAM scales well with smaller processes. Microsoft probably figures it can significantly cut down the die area of the eSRAM at 20nm and by 14/16nm it shouldn’t be a problem at all.

Even if Microsoft can’t deliver the same effective memory bandwidth as Sony, it also has fewer GPU execution resources - it’s entirely possible that the Xbox One’s memory bandwidth demands will be inherently lower to begin with.
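
For anyone who wants to sanity-check those bandwidth figures, here is a rough back-of-the-envelope sketch assuming the bus widths and data rates given in the quote (the helper function is purely illustrative, and the ~200 GB/s line is just the marketing sum the article describes):

[code]
# Theoretical peak bandwidth = bus width in bytes * data rate in transfers/s.

def peak_bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    """Peak bandwidth in GB/s from bus width (bits) and data rate (MT/s)."""
    return (bus_width_bits / 8) * data_rate_mt_s * 1e6 / 1e9

ps4_gddr5 = peak_bandwidth_gb_s(256, 5500)  # ~176 GB/s
xb1_ddr3  = peak_bandwidth_gb_s(256, 2133)  # ~68.3 GB/s

# Microsoft's ~200 GB/s figure simply adds the separate paths together:
esram_total = 102  # ~50 GB/s each way to the embedded SRAM, per the quote
cpu_gpu     = 30   # CPU-GPU coherent link
print(ps4_gddr5, xb1_ddr3, xb1_ddr3 + esram_total + cpu_gpu)  # 176.0, ~68.3, ~200
[/code]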
 
Heh, too many people are getting hung up on the 33% less power / 50% more power because they're not taking into account the number of compute units in both the PS4 and the XB1. :P
 
Heh, too many people are getting hung up on the 33% less power / 50% more power because they're not taking into account the number of compute units in both the PS4 and the XB1. :P

Yeah, that percentage applies strictly to the raw power of the GPUs. The CPUs are identical, but all the other add-ons make a difference, and we won't know until we know.
 
I saw what I needed to see with Killzone Shadow Fall. And coincidentally, that's the one game they did a real gameplay walkthrough of.

[GIF: ibkLh2II7UZlqe.gif (Killzone Shadow Fall gameplay)]


I mean, Jesus, just look at that o_O
Still blows me away every time I see it. That was the next-gen 'wow' moment right there. The moment that definitely wasn't at the Xbox One reveal.
 
http://i.minus.com/ibmjWK4FJujp5K.gif

Lots of gifs here.

http://www.neogaf.com/forum/showthread.php?t=514479&highlight=killzone+shadow+fall

Nice.

And when a website compares Xbox One/PS4 specs like Gizmodo did a few pages back and forgets the GPU in order to say that they are basically the same, then for me they are either on MS's payroll or have no idea about tech. No other explanation.

Even Nintendo got a billion questions about the Wii U's GPU, but some choose to "forget" that the Xbox One has a GPU...
 
Am I the weird one in thinking the lack of a huge graphical leap could be good in keeping costs low and forcing developers to come up with interesting game ideas rather than spend years on a new engine?

I dunno, maybe I'm just backwards.


Nope. You've just been brainwashed by Nintendo's rhetoric.

No one's forcing them to produce such graphics. There were plenty of innovative games on PSN, for instance (Flower, that 2D boy-in-the-shadows game). Did we see anything like that from the Wii?
 
Nice.

And when a website compares Xbox One/PS4 specs like Gizmodo did a few pages back and forgets the GPU in order to say that they are basically the same, then for me they are either on MS's payroll or have no idea about tech. No other explanation.

Even Nintendo got a billion questions about the Wii U's GPU, but some choose to "forget" that the Xbox One has a GPU...

CPU-to-GPU integration is faster on the PS4. The ONE's CPU-to-GPU link is fast, but not as fast as the PS4's.

The PS4 CPU is pushing to a pool of 6 or 7GB of memory. The ONE is pushing to 5.

It's relatively the same. Until a game shows otherwise.
 
The GPU advantage the PS4 enjoys is nice. But where the PS4 absolutely murders the Xbone is on sheer bandwidth to main memory. 176 GB/s versus 68 GB/s is more or less night and day for developers. We're talking about situations where the same game using the same assets might be forced to use FXAA on the Xbone but can use MSAA on the PS4 because of the sheer bandwidth advantage.
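
As a rough illustration of why bandwidth drives that MSAA-vs-FXAA trade-off, here is a ballpark sketch of per-frame colour and depth storage at 1080p with and without 4x MSAA. The formats and byte counts are illustrative assumptions, and real renderers use compression and tiling, so treat the numbers as order-of-magnitude only:

[code]
# Ballpark per-frame colour + depth footprint at 1080p, with and without 4x MSAA.
# Assumes an uncompressed RGBA8 colour buffer and a 32-bit depth/stencil buffer.

WIDTH, HEIGHT = 1920, 1080
BYTES_COLOR = 4   # RGBA8
BYTES_DEPTH = 4   # 32-bit depth/stencil

def framebuffer_mb(samples_per_pixel):
    """Approximate render-target size in MB for a given MSAA sample count."""
    samples = WIDTH * HEIGHT * samples_per_pixel
    return samples * (BYTES_COLOR + BYTES_DEPTH) / 1e6

print(framebuffer_mb(1))  # ~16.6 MB: no MSAA, the buffer FXAA post-processes
print(framebuffer_mb(4))  # ~66.4 MB: 4x MSAA multiplies sample storage and traffic
[/code]

A 4x MSAA target of that size also wouldn't fit in the XBone's 32MB of eSRAM without tiling it, which is part of why the main-memory bandwidth gap matters.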
 
The GPU advantage the PS4 enjoys is nice. But where the PS4 absolutely murders the Xbone is on sheer bandwidth to main memory. 176 GB/s versus 68 GB/s is more or less night and day for developers. We're talking about situations where the same game using the same assets might be forced to use FXAA on the Xbone but can use MSAA on the PS4 because of the sheer bandwidth advantage.

And we could also be talking about a $50-$65 difference in console price.

How much will the AA matter if the input is HDMI? Serious question. Are there certain advantages to being on HDMI regardless?
 
You magnificent bastard. I have read the last few pages of the thread (not the entire thread), I'm wasted, and this is the first time I understood what the fuck was going on with the numbers. I don't do math well sober; drunk, it's a joke. :)

Thanks for explaining how the numbers are being compared / contrasted here.

You're welcome. :)
 
No one is talking about the fact that Cerny customised the PS4 GPU further with 8 ACEs vs. the standard 2 on GCN, which is likely what MS has. This is a further advantage in compute for the PS4.
 
No one is talking about the fact that Cerny customised the PS4 GPU further with 8 ACEs vs. the standard 2 on GCN, which is likely what MS has. This is a further advantage in compute for the PS4.

Yep. There looks to be very little customization outside of the "move engines" for the XBone, according to vgleaks. I'm guessing the XBone has the standard 2? They wouldn't need as many because they only have 12 CUs, correct? But 4-5 would seem appropriate.
 
After six or so pages of percentages, we're right back to it. Thread title needs a warning, learn how percentages work before clicking!
 
What are ACEs and what do they do?

Asynchronous compute engines. They essentially act as command processors for GPGPU jobs on the CUs. The PS4 having more means it can achieve better utilisation through greater granularity when running compute tasks along with normal graphics tasks across the CUs. It makes PS4 more GPGPU friendly.
 
Asynchronous compute engines. They essentially act as command processors for GPGPU jobs on the CUs. The PS4 having more means it can achieve better utilisation through greater granularity when running compute tasks along with normal graphics tasks across the CUs. It makes PS4 more GPGPU friendly.

Interesting. Thank you. :)
 
Re. the thread title:

PS4's GPU is 50% more powerful than Xbox's.

Xbox's GPU is 33% less powerful than PS4's.

In this thread, the topic starter doesn't know that 50% more for one is not the same as 50% less for the other...

Yes.

Assigning arbitrary numbers:
XBOne = 100
PS4 = 150

150 is 150% of 100.
100 is 66.6% of 150.
What the hell is going on?

What are the actual figures?

I understand the maths you guys are explaining, but no one seems to have clarified what the actual figures are if it isn't 50% or whatever.
 
Samaritan was produced yonks ago, and we have already seen other tech demos running on PS4 that look better to me, e.g. Agni and Deep Down.

I don't disagree that both looked great, especially DD, but how real was that? The thing is that Epic expected/wished for more power when they made their Samaritan slide (was it 2 or 2.5+ TFLOPS for 1080p?), Unreal Engine will be in more games than Panty Raid or SE's engine, and those non-clipping cloth simulations in Samaritan were hot as fuck.

Sure, games don't need all of their effects to be really simulated instead of faked, and the PS4 will have great-looking games, as will the XBone, but the MS defense force shitting up threads with their claims of parity despite a huge gap in GPU power makes me want to bash my head against a wall. E.g.: "Just like medium PC settings vs. high settings?" Guess what, guys, there's a reason why people buy better GPUs to crank up those sliders. I wish that both consoles were stronger, to get a better baseline spec for devs to target. That'd be better for everyone.

Sure, argue that the difference won't matter to the end consumer. Argue that features will play a big role. But don't deny the difference because of cloud sauce and magic Move Engines.
 
What the hell is going on?

What are the actual figures?

I understand the maths you guys are explaining, but no one seems to have clarified what the actual figures are if it isn't 50% or whatever.

Perhaps you should read the Digital Foundry article then?
 