Damn, that's like more than half of GAF.
I wonder what percentage of GAF thinks "50% more powerful" means twice as powerful. I'd wager quite a few.
Ever heard of exponential functions in maths? A 50% increase in power, in the hands of the right developers, would mean an exponential increase of 2 or 3 times in graphics, physics, or AI.
It does have lower latency for sure, but less bandwidth.
But why do you need high bandwidth if all the information you need is inside the RAM already?
please be trolling, please be trolling, please be trolling...
please be trolling
You need to get the data from the RAM to the CPU/shader/whatever you're going to process it with. A processor can't do anything with data that's sitting in RAM; it has to get it into its registers.
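To make that concrete, here's a minimal C sketch (the names are made up for illustration): even a trivial sum can't happen "in RAM" - every element has to ride the bus into a register before the add can execute.

```c
#include <stddef.h>

/* Even this trivial loop can't operate on RAM directly: every
 * data[i] has to be loaded over the memory bus into a CPU register
 * before the add happens. (Illustrative sketch; names are made up.) */
long sum(const int *data, size_t n)
{
    long total = 0;            /* lives in a register */
    for (size_t i = 0; i < n; i++)
        total += data[i];      /* RAM -> register load, then add */
    return total;
}
```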
For the person earlier
You should tell this to AMD and nVidia so that they can stop wasting their money on putting GDDR5 chips on their cards.
The CPU, GPU, and shaders all share the same RAM in the same place; you don't need to move the data from one to another.
The RSX graphics chip in the PS3 had around 80% more raw power than the Xbox 360's...look how that turned out. The Xbox (original) had a LOT more power than the PS2...look how that turned out. Sony will push the PS4...but nobody else will.
To people saying Xbox One = Wii U....NO. This is a modern architecture with raw performance at 1.3 TFLOPS + cloud off-load vs. the Wii U's archaic architecture running 300-400 GFLOPS. They're not even close.
Also...apparently nobody is taking the CPU into account.
Also, almost nothing is known about real-world performance - i.e. DirectX 11 (exclusive to the Xbox One) will allow many shortcuts for developers...it's apparently very powerful.
That poor jr. poster is probably crying in embarrassment after seeing so many people quote and belittle him.
How did the data/info get inside the Goat, I mean RAM?
The Xbox One works differently from PCs. The Xbox One has unified RAM for everything.
That's it guys, I'm out. I'm too tired for this right now.
What in the holy hell is going on in this thread? Read the first page and then read the last page. This can't be reality, right?
Damn straight I am delirious and can't understand what's going on here. I am glad I am not the only one.
[CPU] ----bus---- [RAM] ----bus---- [GPU]
If the data is sitting in RAM, it's not in the CPU's registers. That data has to move to the processor to be processed. It can't be processed while it's sitting in the RAM pool. Same thing with the GPU. The GPU needs to get the data to itself before it can do anything with it.
That has to happen for every tiny chunk of data that gets processed, billions of times per second. That's why RAM bandwidth is important.
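Some rough back-of-the-envelope numbers as a C sketch (the 1.3 TFLOPS and ~68 GB/s figures are the commonly quoted ballpark specs, so treat this as illustrative only):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed, commonly quoted ballpark figures: an ~Xbox One class
     * GPU around 1.3 TFLOPS, DDR3 main RAM around 68 GB/s. */
    double flops = 1.3e12;
    double naive_bytes_per_sec = flops * 4.0;  /* one fresh 32-bit operand per op */

    printf("Data needed if every op hit RAM: %.1f TB/s\n",
           naive_bytes_per_sec / 1e12);        /* ~5.2 TB/s */
    printf("Actual DDR3 bus: ~0.068 TB/s\n");
    return 0;
}
```

That gap of nearly two orders of magnitude is exactly why registers, caches, and local stores exist: each byte fetched from RAM has to be reused many times before the processor goes back for more.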
They work very differently from each other (PS4 RAM vs. XBone RAM + ESRAM); with unified RAM, the need for high-bandwidth memory falls a little. And DDR3 has better latency than GDDR5. The difference between them will be minimal. What really matters is the GPU speed difference, that 33 percent. This will make a difference in graphics, not the RAM speed.
They don't really work all that differently. The X1's GPU has the ESRAM, which is a high-speed local store. Data still has to be moved from the ESRAM to the Shaders to be processed, there's just more bandwidth there, and less physical distance to move (the speed of light actually comes into play with modern processors), so more data can be moved in the same amount of time. Though this local store still appears to have less bandwidth than the bus to main RAM in PS4.
The reason bandwidth matters is that a processing unit can only process data once it receives it. If the processor can process data faster than the bus can supply it, the processor sits idle until the next piece of data arrives.
Bandwidth is important. If it wasn't, we'd still be using slow RAM from the 1980s.
edit - When you hear of developers optimizing code, a big part of that optimization is manipulating data so that the processing units are working more on data in their local caches and less on data from main RAM, thus saving that bandwidth for other tasks which require more frequent dips into the main pool. This keeps all the processing units actually processing on every cycle. If you have a huge, wide data bus to main RAM, far less optimization of this sort is required, since you have a much easier time going to main RAM for data.
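A classic example of that kind of optimization, sketched in C (the array size is arbitrary): both functions do the same math on the same data, but one loop order thrashes main RAM while the other mostly stays in cache.

```c
#define N 1024
static float a[N][N];

/* Cache-hostile: jumps N floats (4 KB) between consecutive accesses,
 * so nearly every read misses cache and goes out to main RAM. */
float sum_by_columns(void)
{
    float s = 0.0f;
    for (int col = 0; col < N; col++)
        for (int row = 0; row < N; row++)
            s += a[row][col];
    return s;
}

/* Cache-friendly: walks memory sequentially, so most reads are served
 * from the cache line that was already fetched. */
float sum_by_rows(void)
{
    float s = 0.0f;
    for (int row = 0; row < N; row++)
        for (int col = 0; col < N; col++)
            s += a[row][col];
    return s;
}
```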
Yes, but Xbone has the power of the cloud. How do you account for that in your exponential, recursive functions()?
Power of the cloud. God... I have a feeling this is going to be the misnomer of the next generation.
What version of DX are you talking about? The APIs for the X1 and PS4 will have lower overhead than their PC counterparts since they are targeting one hardware configuration. And most of the functions that devs use when working with APIs are things they would have to develop themselves if they were coding to the metal. They would have to come up with their own library and educate the programmers in how to use it. It would be impractical, take a lot of time, and require you to spend a lot of money on staff to develop it.
I've played around with DirectX and OpenGL and I really don't see any difference between them in regards to graphics. They have different coordinate systems, but you can choose one and write a wrapper to convert for the other.
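For what it's worth, that wrapper idea is simpler than it sounds. A minimal C sketch (the type and function names are my own, and a real engine would also have to patch projection matrices and triangle winding order):

```c
/* Converting a position from a right-handed (OpenGL-style) coordinate
 * system to a left-handed (classic Direct3D-style) one by mirroring
 * the Z axis. Hypothetical names, purely illustrative. */
typedef struct { float x, y, z; } vec3;

static vec3 rh_to_lh(vec3 v)
{
    v.z = -v.z;   /* mirror the Z axis to flip handedness */
    return v;
}
```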
You are right, but with time and skill, developers will optimize their code to use the local caches, so the difference between them will be smaller. I don't think it's going to be a big difference. Not a visible one. But we'd have to work with each one to know for sure.
The post I have been talking about is about DX11 on the XBone; he implied that DX11 on the XBone is some kind of advantage, ignoring the fact that, if the PS3 is anything to go by, the PS4 will have both an API and low-level access (which is said to be used by most games).
If the XBone uses an API that is different, then it is no longer DX11.
And why are you bringing up OpenGL?
The RSX graphics chip in the PS3 had around 80% more raw power than the Xbox 360's...look how that turned out. The Xbox (Original) had ALOT more power than PS2...look how that turned out. Sony will push the PS4...but nobody else will.
To people saying Xbox One = WiiU....NO. This is a modern architecture with raw performance at 1.3TFLOPS + cloud off-load Vs WiiU's archaic architecture running 300-400 TFLOPS. They're not even close.
Also...apparently nobody is taking into account CPU.
Also, almost nothing is known about real world performance - ie. Direct X 11 (exclusive to the Xbox One) will allow many shortcuts to developers...it's apparently very powerful.
The X1 API will have its own version of DX, just like the original Xbox and the 360 did. There will be little to no overhead because they aren't targeting multiple video cards. It's most likely going to be a customized version of DX11.
Except the ESRAM is 32 MB, not 8 GB.
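Some quick napkin math on that, as a C sketch (assuming a plain 4-bytes-per-pixel buffer; real formats, tiling, and compression vary):

```c
#include <stdio.h>

/* How much of a 32 MB scratchpad do 1080p render targets eat,
 * assuming a simple 4-bytes-per-pixel format? Illustrative only. */
int main(void)
{
    const int w = 1920, h = 1080;
    double one_target_mb = (double)w * h * 4 / (1024.0 * 1024.0);

    printf("One 1080p 32-bit target: %.1f MB\n", one_target_mb); /* ~7.9 MB */
    printf("Color + depth + two extra targets: %.1f MB of 32 MB\n",
           one_target_mb * 4);                                   /* ~31.6 MB */
    return 0;
}
```

A handful of 1080p targets already brushes up against the 32 MB limit, which is why fitting things into the ESRAM takes planning.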
Let me give you a hint: I'm also not a tech guy, so I just don't post BS about things I don't know, because it can make us look silly.
We have come to the point where RAM bandwidth is not important. I never thought we would see this day.
Go home and be a family man.