DF: Xbone Specs/Tech Analysis: GPU 33% less powerful than PS4

I wonder what percentage of GAF thinks "50% more powerful" means twice as powerful. I'd wager quite a few.
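If anyone wants to sanity-check it, here's a quick sketch in C. The numbers are placeholders picked only to show the ratio; the point is that "+50%" and "-33%" describe the same gap seen from either end, and neither of them means "twice as powerful".

#include <stdio.h>

int main(void) {
    /* Placeholder figures chosen only to show the ratio; swap in whatever
       spec numbers you like, the percentages stay the same. */
    double xbone = 12.0;
    double ps4   = xbone * 1.5;   /* 50% more than the smaller number */

    printf("PS4 relative to Xbone: +%.0f%%\n", (ps4 / xbone - 1.0) * 100.0);  /* +50% */
    printf("Xbone relative to PS4: -%.0f%%\n", (1.0 - xbone / ps4) * 100.0);  /* -33% */
    /* Twice as powerful would be +100%, i.e. 2.0x, not 1.5x. */
    return 0;
}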

Ever heard of exponential functions in maths? A 50% increase in power, in the hands of the right developers, would mean an exponential increase of 2 or 3 times in graphics, physics or AI.
 
But why do you need high bandwidth if all the information you need is inside the RAM already?

You need to get the data from the RAM to the CPU/Shader/Whatever you're going to process it with. A processor can't do anything with data that's sitting in RAM, it has to get it to its registers.
 
You need to get the data from the RAM to the CPU/Shader/Whatever you're going to process it with. A processor can't do anything with data that's sitting in RAM, it has to get it to its registers.

CPU, GPU, and shaders all share the same RAM from the same place; you don't need to get the data from one to another.
 
I'm right at home, aren't I?

Yes.
 
I know I suck at math, so I simply keep my mouth shut when the subject arises. Some people in this thread should really do the same.

Of course even I fucking understand the percentages in this thread, good lord...
 
CPU, GPU, and shaders all share the same RAM from the same place; you don't need to get the data from one to another.

[CPU] ----bus---- [RAM]----bus----[GPU]

If the data is sitting in RAM, it's not in the CPU's registers. That data has to move to the processor to be processed; it can't be processed while it's sitting in the RAM pool. Same thing with the GPU: the GPU needs to get the data to itself before it can do anything with it.

That has to happen for every tiny chunk of data that gets processed, billions of times per second. That's why RAM bandwidth is important.
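If it helps to see it in code, here's a minimal, completely console-agnostic sketch: every element has to come out of RAM and into registers before the add can happen, and the result has to go back out again, so a simple pass like this ends up limited by how fast the bus can feed it rather than by the arithmetic.

#include <stdio.h>
#include <stdlib.h>

/* Every element below has to travel RAM -> cache -> registers before the
   add happens, then the result goes back out again. A simple streaming
   pass like this spends most of its time waiting on that traffic, which
   is exactly why the width/speed of the bus matters. */
int main(void) {
    size_t n = 1u << 24;                  /* ~16M floats, 64 MB per array */
    float *a = malloc(n * sizeof *a);
    float *b = malloc(n * sizeof *b);
    if (!a || !b) return 1;

    for (size_t i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    for (size_t i = 0; i < n; i++)        /* load a[i], load b[i], store a[i] */
        a[i] += b[i];

    printf("minimum bytes moved by the second loop: %zu\n", 3 * n * sizeof(float));
    free(a);
    free(b);
    return 0;
}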
 
The RSX graphics chip in the PS3 had around 80% more raw power than the Xbox 360's... look how that turned out. The Xbox (Original) had A LOT more power than the PS2... look how that turned out. Sony will push the PS4... but nobody else will.

To people saying Xbox One = Wii U... NO. This is a modern architecture with raw performance at 1.3 TFLOPS + cloud off-load vs. the Wii U's archaic architecture running 300-400 TFLOPS. They're not even close.

Also... apparently nobody is taking the CPU into account.

Also, almost nothing is known about real-world performance - i.e. DirectX 11 (exclusive to the Xbox One) will allow many shortcuts for developers... it's apparently very powerful.

This post.

Amazing.

 
That poor jr poster is probably crying in embarrassment after seeing how many people quoted and belittled him. :(

Well he isn't alone, as apparently this just doesn't make intuitive sense for a lot of folks.

But there is the diagram. So now it does for them, hopefully. If it doesn't, even with the pic, yikes... sorry about your learning disability I guess.
 
The Xbox One works differently from PCs. The Xbox One has unified RAM for everything.

Not the eSRAM or whatever it's called, which isn't in the unified pool and is what's providing the boost in bandwidth.

Now I don't know much about this shit, but doesn't that interfere with the basic principle of hUMA?
 
But why do you need high bandwidth if all the information you need is inside the RAM already?

Maybe because it's an incredibly small amount of memory that needs to move information in and out as fast as possible?

We are talking about the eSRAM, right?
 
Ever heard of exponential functions in maths? A 50% increase in power, in the hands of the right developers, would mean an exponential increase of 2 or 3 times in graphics, physics or AI.

Yes, but Xbone has the power of the cloud. How do you account for that in your exponential, recursive functions()?
 
Not gonna make fun of the guy. I've had brain farts like that too once in a while.

It especially happens when talking about quarters of a year. My brain sometimes goes: quarter = 1/4, so there are 4 months in a quarter of a year. LOL.

My brother, on a university stats exam, managed to get 101% thanks to some bonus questions, but lost the chance at 110% by working out 3+3=9 in one of the answers.
 
[CPU] ----bus---- [RAM]----bus----[GPU]

If the data is sitting in RAM, it's not in the CPU's registers. That data has to move to the processor to be processed; it can't be processed while it's sitting in the RAM pool. Same thing with the GPU: the GPU needs to get the data to itself before it can do anything with it.

That has to happen for every tiny chunk of data that gets processed, billions of times per second. That's why RAM bandwidth is important.

They work very differently from each other (PS4 RAM vs. XBone RAM + ESRAM); with unified RAM the need for high-bandwidth memory falls a little. And DDR3 has better latency than DDR5. The difference between them will be minimal. What really matters is the GPU speed difference, that 33 percent. That will make a difference in graphics, not the RAM speed.
 
The RSX graphics chip in the PS3 had around 80% more raw power than the Xbox 360's... look how that turned out. The Xbox (Original) had A LOT more power than the PS2... look how that turned out. Sony will push the PS4... but nobody else will.

To people saying Xbox One = Wii U... NO. This is a modern architecture with raw performance at 1.3 TFLOPS + cloud off-load vs. the Wii U's archaic architecture running 300-400 TFLOPS. They're not even close.

Also... apparently nobody is taking the CPU into account.

Also, almost nothing is known about real-world performance - i.e. DirectX 11 (exclusive to the Xbox One) will allow many shortcuts for developers... it's apparently very powerful.

Lol... you don't believe anyone could post such crap... and then someone does.


Xbox! ........... Go Home!!
 
They work very differently from each other (PS4 RAM vs. XBone RAM + ESRAM); with unified RAM the need for high-bandwidth memory falls a little. And DDR3 has better latency than DDR5. The difference between them will be minimal. What really matters is the GPU speed difference, that 33 percent. That will make a difference in graphics, not the RAM speed.

They don't really work all that differently. The X1's GPU has the ESRAM, which is a high-speed local store. Data still has to be moved from the ESRAM to the Shaders to be processed, there's just more bandwidth there, and less physical distance to move (the speed of light actually comes into play with modern processors), so more data can be moved in the same amount of time. Though this local store still appears to have less bandwidth than the bus to main RAM in PS4.

The reason bandwidth matters is that a processing unit can only process data once it receives it. If the processor can process data faster than the bus can supply it, the processor sits idle until the next piece of data arrives.

Bandwidth is important. If it wasn't, we'd still be using slow RAM from the 1980s.

edit - When you hear of developers optimizing code, a big part of that optimization is manipulating data so that the processing units are working more on data in their local caches and less on data from main RAM, thus saving that bandwidth for other tasks which require more frequent dipping into the main RAM pool. This keeps all the processing units actually processing on every cycle. If you have a huge wide data bus to main RAM, far less optimization of this sort is required, since you have a much easier time going to main RAM for data.
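A rough sketch of what that kind of optimization looks like (generic C; the block size is just an assumed stand-in for whatever actually fits a given chip's cache):

#include <stdio.h>
#include <stdlib.h>

#define BLOCK 4096   /* elements per block; assumed to fit in the local cache */

/* Run every pass over one cache-sized block before moving on, instead of
   streaming the whole array from main RAM once per pass. Same arithmetic,
   far fewer trips to main memory. */
static void process_blocked(float *data, size_t n, int passes) {
    for (size_t start = 0; start < n; start += BLOCK) {
        size_t end = (start + BLOCK < n) ? start + BLOCK : n;
        for (int p = 0; p < passes; p++)
            for (size_t i = start; i < end; i++)
                data[i] = data[i] * 0.5f + 1.0f;   /* stand-in per-element work */
    }
}

int main(void) {
    size_t n = 1u << 22;
    float *data = calloc(n, sizeof *data);
    if (!data) return 1;
    process_blocked(data, n, 8);
    printf("data[0] after 8 passes: %f\n", data[0]);
    free(data);
    return 0;
}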
 
They don't really work all that differently. The X1's GPU has the ESRAM, which is a high-speed local store. Data still has to be moved from the ESRAM to the Shaders to be processed, there's just more bandwidth there, and less physical distance to move (the speed of light actually comes into play with modern processors), so more data can be moved in the same amount of time. Though this local store still appears to have less bandwidth than the bus to main RAM in PS4.

The reason bandwidth matters is that a processing unit can only process data once it receives it. If the processor can process data faster than the bus can supply it, the processor sits idle until the next piece of data arrives.

Bandwidth is important. If it wasn't, we'd still be using slow RAM from the 1980s.

edit - When you hear of developers optimizing code, a big part of that optimization is manipulating data so that the processing units are working more on data in their local caches and less on data from main RAM, thus saving that bandwidth for other tasks which require more frequent dipping into the main RAM pool. This keeps all the processing units actually processing on every cycle. If you have a huge wide data bus to main RAM, far less optimization of this sort is required, since you have a much easier time going to main RAM for data.

You are right, but with time and ability, developers will optimize their code to use the local caches, so the difference between them will be smaller. I don't think it's gonna be a big difference. Not a visible one. But we will have to see each one at work to know for sure.

I might be wrong on that, but the Xbox 360 works like that too (I don't know). If so, it will be easier for developers to optimize their future code.
 
They work very differently from each other (PS4 RAM vs. XBone RAM + ESRAM); with unified RAM the need for high-bandwidth memory falls a little. And DDR3 has better latency than DDR5. The difference between them will be minimal. What really matters is the GPU speed difference, that 33 percent. That will make a difference in graphics, not the RAM speed.

Except the ESRAM is 32 MB, not 8 GB.

Let me give you a hint: I'm also not a tech guy, so I just don't post BS about things I don't know, because it can make us look silly.

We have come to the point where RAM bandwidth is not important. I never thought we would see this day.
 
What version of DX are you talking about? The APIs for the X1 and PS4 will have lower overhead than their PC counterparts since they are targeting one hardware configuration. And most of the functions that devs use when working with APIs are things they would have to develop themselves if they were coding to the metal. They would have to come up with their own library and educate the programmers in how to use it. It would be impractical, take a lot of time, and require you to spend a lot of money on staff that can develop it.


I've played around with DirectX and OGL and I really don't see any difference between them in regards to graphics. They have different coordinate systems, but you can choose one and write a wrapper to convert to the other.

The post I have been talking about is about DX11 on the XBone; he implied that DX11 on the XBone is some kind of advantage, ignoring the fact that, if the PS3 is anything to go by, the PS4 will have both an API and low-level access (which is said to be used by most games).

If the XBone uses an API that is different, then it is no longer DX11.

And why are you bringing up OpenGL?

You are right, but with time and ability, developers will optimize their code to use the local caches, so the difference between them will be smaller. I don't think it's gonna be a big difference. Not a visible one. But we will have to see each one at work to know for sure.

You clearly are not grasping what is being said.

Each frame, the GPU has to load all the data it is going to use in that frame from graphics memory. On the XBone that has to come over the main memory bus; the on-chip ESRAM does not have anywhere near enough space to hold everything that needs to be loaded each frame.
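To put some rough numbers on that (purely illustrative sizes, real games vary wildly):

#include <stdio.h>

int main(void) {
    /* Purely illustrative per-frame working set; real games vary wildly. */
    double render_targets_mb = 3 * 7.9;   /* a few 1080p 32-bit targets        */
    double textures_mb       = 200.0;     /* textures sampled during the frame */
    double geometry_mb       = 50.0;      /* vertex and index data             */
    double esram_mb          = 32.0;

    double frame_mb = render_targets_mb + textures_mb + geometry_mb;
    printf("Rough per-frame working set: ~%.0f MB\n", frame_mb);
    printf("ESRAM capacity:               %.0f MB\n", esram_mb);
    /* Everything that doesn't fit in those 32 MB streams over the main DDR3 bus. */
    return 0;
}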
 
The post I have been talking about is about DX11 on the XBone; he implied that DX11 on the XBone is some kind of advantage, ignoring the fact that, if the PS3 is anything to go by, the PS4 will have both an API and low-level access (which is said to be used by most games).

If the XBone uses an API that is different, then it is no longer DX11.

And why are you bringing up OpenGL?

The X1 will have its own version of DX, just like the original Xbox and the 360 did. There will be little to no overhead because they aren't targeting multiple video cards. It's most likely gonna be a customized version of DX11.
 
The RSX graphics chip in the PS3 had around 80% more raw power than the Xbox 360's... look how that turned out. The Xbox (Original) had A LOT more power than the PS2... look how that turned out. Sony will push the PS4... but nobody else will.

To people saying Xbox One = Wii U... NO. This is a modern architecture with raw performance at 1.3 TFLOPS + cloud off-load vs. the Wii U's archaic architecture running 300-400 TFLOPS. They're not even close.

Also... apparently nobody is taking the CPU into account.

Also, almost nothing is known about real-world performance - i.e. DirectX 11 (exclusive to the Xbox One) will allow many shortcuts for developers... it's apparently very powerful.

Welp.. that's it for the internet tonight.. I've seen enough.
 
The X1 will have its own version of DX, just like the original Xbox and the 360 did. There will be little to no overhead because they aren't targeting multiple video cards. It's most likely gonna be a customized version of DX11.

But it will not be what we know as DX11, and the point still stands that it will not give any advantage over what the PS4 will have (as the other poster implied)!
 
Except the ESRAM is 32 MB, not 8 GB.

Let me give you a hint: I'm also not a tech guy, so I just don't post BS about things I don't know, because it can make us look silly.

We have come to the point where RAM bandwidth is not important. I never thought we would see this day.

Correction: RAM bandwidth is important. That is, if you like nice graphics and a high framerate. The nicer you want the graphics to be at, say, 30 or 60 fps, the more memory bandwidth is needed. It's all about throughput, which in this case determines how much graphics can be created every second. Throughput works like this: the memory bandwidth available divided by the desired framerate equals the maximum amount of detail that can be put into each image. Even though both the PS4 and the Xbox One will most likely run their games at 1080p, that 2 MP picture is a composite of many graphical effects and textures, and the nicer the picture, the more detail is put into each frame. Therefore memory bandwidth is very important for games.
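Plugging the commonly quoted bandwidth figures into that formula (treat them as assumptions, they're the launch specs floating around, not measurements):

#include <stdio.h>

/* bandwidth / framerate = data that can be touched per frame */
static void per_frame_budget(const char *pool, double gb_per_sec, double fps) {
    printf("%-18s ~%5.0f MB per frame at %.0f fps\n", pool, gb_per_sec * 1000.0 / fps, fps);
}

int main(void) {
    /* Assumed figures: the commonly quoted launch specs, not measurements. */
    per_frame_budget("XBone DDR3 pool",  68.0, 60.0);
    per_frame_budget("XBone ESRAM",     102.0, 60.0);
    per_frame_budget("PS4 GDDR5 pool",  176.0, 60.0);
    return 0;
}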
 
They work very differently from each other (PS4 RAM vs. XBone RAM + ESRAM); with unified RAM the need for high-bandwidth memory falls a little. And DDR3 has better latency than GDDR5. The difference between them will be minimal. What really matters is the GPU speed difference, that 33 percent. That will make a difference in graphics, not the RAM speed.

Fixed. And what?

The ESRAM in the XBone is only 32 MB, and that needs to hold a frame of data.

I wonder if that is enough for 1080p, 60 fps + AA/post-processing.
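Back-of-the-envelope, assuming plain 32-bit colour and depth targets (real renderers use different formats, compression, and tiling):

#include <stdio.h>

int main(void) {
    /* Back-of-the-envelope only; actual render target layouts vary. */
    double mib    = 1024.0 * 1024.0;
    double colour = 1920.0 * 1080.0 * 4.0 / mib;   /* 32-bit colour target */
    double depth  = 1920.0 * 1080.0 * 4.0 / mib;   /* 32-bit depth/stencil */
    double esram  = 32.0;

    printf("One 1080p colour target : %4.1f MB\n", colour);
    printf("Colour + depth          : %4.1f MB\n", colour + depth);
    printf("Colour + depth, 4x MSAA : %4.1f MB\n", (colour + depth) * 4.0);
    printf("ESRAM                   : %4.1f MB\n", esram);
    /* One plain target pair fits; add MSAA or a few G-buffer targets and
       32 MB gets tight very quickly. */
    return 0;
}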
 
But why do you need high bandwidth if all the information you need is inside the RAM already?
Go home and be a family man.

Seriously, you don't need the vast majority of your bandwidth to get information into your memory -- you need it to access and manipulate that information. That's the basics of the basics.
 