Kotaku Rumor: Microsoft 6 months behind in game production for X720 [Pastebin = Ban]

Oh god, show me real evidence, not just someone saying something.


http://en.wikipedia.org/wiki/SDRAM_latency

The wiki article seems to disagree with you.

This guy explains it the best:
http://www.techspot.com/community/t...-between-ddr3-memory-and-gddr5-memory.186408/



The principal differences are:
•DDR3 runs at a higher voltage than GDDR5 (typically 1.25-1.65V versus ~1V)
•DDR3 uses a 64-bit memory controller per channel (so a 128-bit bus for dual channel, 256-bit for quad channel), whereas GDDR5 is paired with controllers of a nominal 32 bits (16 bits each for input and output). But whereas the CPU's memory controller is 64-bit per channel, a GPU can utilise any number of 32-bit I/Os (at the cost of die size) depending upon application (2 for a 64-bit bus, 4 for 128-bit, 6 for 192-bit, 8 for 256-bit, 12 for 384-bit, etc.). The GDDR5 setup also allows for doubling or asymmetric memory configurations. Normally (using this generation of cards as an example) GDDR5 memory uses one 2 Gbit memory chip for each 32-bit I/O (i.e. for a 256-bit bus / 2 GB card: 8 x 32-bit I/Os, each connected by a circuit to a 2 Gbit IC = 8 x 2 Gbit = 16 Gbit = 2 GB), but GDDR5 can also operate in what is known as clamshell mode, where the 32-bit I/O, instead of being connected to one IC, is split between two (one on each side of the PCB), allowing for a doubling of memory capacity. Mixing the arrangement of 32-bit memory controllers, memory IC density, and memory circuit splitting allows for asymmetric configurations (a 192-bit bus with 2 GB of VRAM, for example); a small worked sketch of this arithmetic follows the list.
•Physically, a GDDR5 controller/IC doubles the I/O of DDR3: with DDR3, the I/O handles an input (a write to memory) or an output (a read from memory), but not both on the same cycle; GDDR5 handles input and output on the same cycle.
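To make the bus-width arithmetic above concrete, here is a quick Python sketch of my own (not from the quoted post); the helper name and the configurations fed into it are just illustrations, not the specs of any particular card:

```python
# Capacity implied by a GDDR5 bus built from 32-bit channels, as described above.
# Illustrative sketch only; the helper name and example configs are made up.

def gddr5_capacity_gb(bus_width_bits, ic_density_gbit, clamshell=False):
    channels = bus_width_bits // 32          # one 32-bit I/O per channel
    ics_per_channel = 2 if clamshell else 1  # clamshell splits each I/O across two ICs
    total_gbit = channels * ics_per_channel * ic_density_gbit
    return total_gbit / 8                    # 8 Gbit = 1 GB

print(gddr5_capacity_gb(256, 2))                  # the 256-bit / 2 Gbit example -> 2.0 GB
print(gddr5_capacity_gb(256, 2, clamshell=True))  # clamshell doubles it         -> 4.0 GB
print(gddr5_capacity_gb(192, 2))                  # uniform 192-bit config       -> 1.5 GB
# Getting 2 GB on a 192-bit bus needs a mixed (asymmetric) arrangement, as the post notes.
```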

The memory is also fundamentally set up specifically for the application it uses:
System memory (DDR3) benefits from low latency (tight timings) at the expense of bandwidth; GDDR5's case is the opposite. Timings for GDDR5 would seem unbelievably slow in relation to DDR3, but the speed of VRAM is blazing fast in comparison with desktop RAM. This has resulted from the relative workloads that a CPU and GPU undertake. Latency isn't much of an issue with GPUs, since their parallel nature allows them to move on to other calculations when latency cycles cause a stall in the current workload/thread. The performance of a graphics card, for instance, is greatly affected (as a percentage) by altering the internal bandwidth, yet altering the external bandwidth (the PCI-Express bus, say lowering from x16 to x8 or x4 lanes) has a minimal effect. This is because there is a great deal of I/O (textures, for example) that gets swapped in and out of VRAM continuously; the nature of a GPU is many parallel computations, whereas a CPU computes in a basically linear way.
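To put a rough number on that latency-hiding point, here is a back-of-envelope sketch using Little's Law (outstanding requests = bandwidth × latency ÷ request size); the bandwidth and latency figures below are placeholders I picked for illustration, not vendor specs:

```python
# How many 64-byte memory requests must be in flight to keep the bus saturated,
# i.e. to hide latency behind bandwidth (Little's Law). Figures are illustrative only.

def outstanding_requests(bandwidth_gb_s, latency_ns, request_bytes=64):
    bytes_per_ns = bandwidth_gb_s            # 1 GB/s is one byte per nanosecond
    return bytes_per_ns * latency_ns / request_bytes

print(outstanding_requests(25, 50))    # ~20  for a ~25 GB/s DDR3 system at ~50 ns
print(outstanding_requests(176, 50))   # ~138 for a ~176 GB/s GDDR5 setup at a similar latency

# A single CPU core can only keep a handful of misses in flight, so it lives or dies by
# latency; a GPU juggling thousands of threads easily supplies hundreds of outstanding
# requests, which is why raw bandwidth is the figure that matters to it.
```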

It goes back to the nature of graphics rendering and how the polygons are drawn. Sorry if I'm teaching my grandmother to suck eggs, but it might be a little easier if I outline the graphics pipeline (probably better as a flow chart, but never mind).
On the software side you have your game (or app) ↔ API (DirectX/OpenGL) ↔ User Mode Driver / ICD ↔ Kernel Mode Driver (KMD) + CPU command buffer → loading textures to vRAM → GPU Front End (Input Assembler).
Up until this point you're basically dealing with the CPU and RAM: executing and monitoring game code, creating resources, compiling shaders, issuing draw calls and allocating access to the graphics hardware (since you likely have more than just the game needing resources). From here, the workload becomes hugely more parallel and moves to the graphics card. The video memory now holds the textures and the shader compilations that the game + API + drivers have loaded. These are fed into the first few stages of the pipeline as and where needed, to each of the following shaders, as the code is transformed from points (co-ordinates) and lines into polygons and their lighting:

Input Assembler (vRAM input) → Vertex Shader (vRAM input) → Hull Shader / Tessellation Control Shader (vRAM input, if tessellation is used) → Domain Shader (vRAM input) → Geometry Shader (vRAM input)

At this point, the stream output can move all or part of the render back into memory to be re-worked. Depending on what is called for, the output can be fed back to any part of the previous shader pipeline (basically a loop) or held in memory buffers. Once the computations are completed, they then move to rasterization (turning the 3D image into pixels):
Rasterizer → Pixel Shader* (vRAM input and output) → Output Merger (tasked with producing the final screen image, and requiring vRAM input and output)

* The compute shaders (if they exist on the card) are tasked with post-processing (ambient occlusion, film grain, global illumination, motion blur, depth of field, etc.), A.I. routines, physics, and a lot of custom algorithms depending on the app; they also run via the pixel shader stage and can use that shader's access to vRAM for input and output.
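For anyone skimming, here is the same pipeline written out as plain data so the vRAM touch points are easy to scan at a glance; this just restates the post above, it isn't an API:

```python
# The stages described above and where they touch video memory (restated as data).
PIPELINE = [
    ("Input Assembler",  "vRAM read"),
    ("Vertex Shader",    "vRAM read"),
    ("Hull Shader",      "vRAM read"),          # tessellation control, when tessellation is used
    ("Domain Shader",    "vRAM read"),
    ("Geometry Shader",  "vRAM read"),          # stream output can loop results back via buffers
    ("Rasterizer",       "on-chip"),
    ("Pixel Shader",     "vRAM read + write"),  # compute/post-processing work rides along here
    ("Output Merger",    "vRAM read + write"),  # assembles the final frame
]

for stage, traffic in PIPELINE:
    print(f"{stage:16} {traffic}")
```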

So basically, the parallel nature of graphics calls for input and output from vRAM at many points, covering many concurrent streams of data. Some of that vRAM is also subdivided into memory buffers and caches to save data that would otherwise have to be recomputed for following frames. All this swapping of data calls for high bandwidth, but latency can be lax (saving power demand), as any stall in one thread is generally lost in the sheer number of threads queued at any given time.
As I noted previously, GDDR5 allows a write and a read to/from memory every clock cycle, whereas DDR3 is limited to a read or a write per cycle, which reduces bandwidth. Graphics DDR also allows for multiple memory controllers to cope with the I/O functions.
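And to show how those per-cycle transfers turn into headline numbers, the standard peak-bandwidth arithmetic (my sketch; the clocks below are typical-looking examples rather than claims about either console):

```python
# Peak theoretical bandwidth = effective transfer rate x bus width (in bytes).
# GDDR5 moves four data words per memory-clock cycle (double data rate on a doubled
# write clock); DDR3 moves two.

def peak_bandwidth_gb_s(memory_clock_mhz, transfers_per_clock, bus_width_bits):
    transfers_per_s = memory_clock_mhz * 1e6 * transfers_per_clock
    return transfers_per_s * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gb_s(1375, 4, 256))  # GDDR5 at 5.5 GT/s on a 256-bit bus -> ~176 GB/s
print(peak_bandwidth_gb_s(1066, 2, 128))  # DDR3-2133, dual channel (128-bit)  -> ~34 GB/s
```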
 
No. You have, my friend.
You're confusing CAS timings with latency and believing that settles it, and it doesn't. Durante (who is the OP of the GPU and CPU thread) has been saying this. I even quoted him and you slapped it down.

What you quoted doesn't back up your initial statement that GDDR5 would bottleneck an OoO CPU.
 
I'm just explaining the reason for the programmability; obviously you would get the lowest-latency RAM you could afford :)

Of course ;) But I think merely putting programmability into the argument does not mean that latency is a non-issue with the PS4's GDDR5 RAM :)
 
This guy explains it the best:
http://www.techspot.com/community/t...-between-ddr3-memory-and-gddr5-memory.186408/




Still no actual latency numbers; if you look at the information from a GDDR5 supplier you'll see the latency is comparable to, if not the same as, DDR3.

Oh btw, he does put it well later.

http://www.techspot.com/community/t...-memory-and-gddr5-memory.186408/#post-1295335

Nothing of any importance.
A console processor requires very limited functionality compared with a PC processor. Console processors aren't required to be optimized for a vast range of software, drivers, hardware changes, OS bloat, or concurrent processes. Even a PC CPU can make do with a very small amount of RAM if the workload is streamlined, as it is in a console.
Vienn_22 said:
You also said above that the APU is "sensitive to memory bandwidth", but that is only for the GPU side. Since the APU has a CPU side too, what will happen now that the PS4 is using GDDR5 for both? Since the two memory types are opposites, wouldn't it affect the CPU performance?
Nope, not in the slightest. What applications would a console be running that are CPU intensive and require minimal latency? CPUs require minimal latency because of multiple applications fighting for resources from available compute threads/cores - and multiple concurrent applications aren't likely to come into play with a console.
Most, if not all, applications running on a console APU would be hardware (GPU) accelerated. At this point I'm not even sure if PhysX wouldn't be HW accelerated on an AMD APU.
 
I'd like to thank @NeoGAFShitPosts for making me aware of Quinton McLeod's mental breakdown. It's been a fun read.
 
This 'if you have an alternative viewpoint you are insane' attitude is stupid. As someone who respects Cerny and thinks Sony have been the first company not to take next gen for granted, and have done everything they could to make the most perfect system possible... I have no idea why people are getting quite so agitated.

I mean, I am getting attacked for even disagreeing with the guy. A lot of ugliness coming out in this thread :(

Maybe I'm way off, IDK; but it's certainly time everyone moved on.
 
From exactly the same source you just quoted:




Ouch. :P

Just read those posts, damn. Read the whole post, too.

Nope, not in the slightest. What applications would a console be running that are CPU intensive and require minimal latency? CPUs require minimal latency because of multiple applications fighting for resources from available compute threads/cores - and multiple concurrent applications aren't likely to come into play with a console.
Most, if not all, applications running on a console APU would be hardware (GPU) accelerated. At this point I'm not even sure if PhysX wouldn't be HW accelerated on an AMD APU.
 
The XBLA stuff is mostly on PC too.
Were they out on PC when they launched on the 360? I mean, by definition, at that time they were exclusives, right? And I'm glad they eventually make their way to PC/Windows. 'Microsoft exclusive', is that a better term for you?
 
Still no actual latency numbers; if you look at the information from a GDDR5 supplier you'll see the latency is comparable to, if not the same as, DDR3.

Oh btw, he does put it well later.

http://www.techspot.com/community/t...-memory-and-gddr5-memory.186408/#post-1295335

Nope, not in the slightest. What applications would a console be running that are CPU intensive and require minimal latency? CPUs require minimal latency because of multiple applications fighting for resources from available compute threads/cores - and multiple concurrent applications aren't likely to come into play with a console.
Most, if not all, applications running on a console APU would be hardware (GPU) accelerated. At this point I'm not even sure if PhysX wouldn't be HW accelerated on an AMD APU.

I don't want to really jump in on this latency quarrel, but I am not sure I agree with that assessment in bold, at least not in regards to the new Xbox. I am sure MS will allow 3rd parties at least some amount of background capabilities with non-gaming applications, and they also have the OS stuff that will be running in the background as well, such as Kinect, etc. So CPU certainly will have a couple things competing for time.

But it doesn't have GDDR5 ram :P

Like I said, I don't really care about the latency stuff, just wanted to say I disagreed with that particular assessment.
 
I don't want to really jump in on this latency quarrel, but I am not sure I agree with that assessment in bold, at least not in regards to the new Xbox. I am sure MS will allow 3rd parties at least some amount of background capabilities and they also have the OS stuff that will be running in the background as well, such as Kinect, etc. So CPU certainly will have a couple things competing for time.

But it doesn't have GDDR5 ram :P
 
A software company that has already had reliability issues with hardware in the past continues to have them in the present?
 
I don't want to really jump in on this latency quarrel, but I am not sure I agree with that assessment in bold, at least not in regards to the new Xbox. I am sure MS will allow 3rd parties at least some amount of background capabilities with non-gaming applications, and they also have the OS stuff that will be running in the background as well, such as Kinect, etc. So CPU certainly will have a couple things competing for time.

I think the guy means more like hundreds of apps, like you have on your desktop, but it's not a sound argument IMO.
 
And further to the point, most of the algorithms that will be used will be very cache-friendly; you're probably not even hitting main memory most of the time.

Let's not be silly. With all of those features Sony was bragging the PS4 would do in the background, it's obvious there will be applications competing for resources; and that's not even the end of it.
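As an aside on the cache-friendliness claim quoted above, the effect it describes is easy to see for yourself; a quick sketch (my own, results vary by machine, and CPython's interpreter overhead blunts the gap compared with native code):

```python
# Same elements, same total: sequential order is cache-friendly, shuffled order forces
# far more trips to main memory. Timings are machine-dependent.
import array, random, time

N = 1 << 22                                 # ~4M 8-byte ints, larger than typical caches
data = array.array("q", range(N))

def timed_sum(indices):
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - start

ordered = list(range(N))
shuffled = ordered[:]
random.shuffle(shuffled)                    # identical work, cache-hostile access order

print("sequential:", timed_sum(ordered))
print("shuffled:  ", timed_sum(shuffled))
```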
 
I don't want to really jump in on this latency quarrel, but I am not sure I agree with that assessment in bold, at least not in regards to the new Xbox. I am sure MS will allow 3rd parties at least some amount of background capabilities and they also have the OS stuff that will be running in the background as well, such as Kinect, etc. So CPU certainly will have a couple things competing for time.

The big difference between PCs and consoles, in regards to what is using the CPU and when, is the ability to reserve cores for specific processes. On a PC, everything that needs access to the CPU is just in one big free-for-all.
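For a rough feel of what 'reserving cores' means, here is a minimal Linux-only sketch using processor affinity from standard Python; it's an analogy for what a console OS does at a much lower level, not how the actual SDKs expose it (and it assumes a multi-core machine):

```python
# Pin "system" work to core 0 and "game" work to the remaining cores (Linux-only analogy).
import os
import multiprocessing as mp

def system_worker():
    os.sched_setaffinity(0, {0})                            # the reserved core
    print("system worker on cores:", sorted(os.sched_getaffinity(0)))

def game_worker():
    os.sched_setaffinity(0, set(range(1, os.cpu_count())))  # everything except core 0
    print("game worker on cores:  ", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    procs = [mp.Process(target=system_worker), mp.Process(target=game_worker)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```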
 
I don't want to really jump in on this latency quarrel, but I am not sure I agree with that assessment in bold, at least not in regards to the new Xbox. I am sure MS will allow 3rd parties at least some amount of background capabilities with non-gaming applications, and they also have the OS stuff that will be running in the background as well, such as Kinect, etc. So CPU certainly will have a couple things competing for time.



Like I said, I don't really care about the latency stuff, just wanted to say I disagreed with that particular assessment.

Even with some multitasking, a console is not going to come within an order of magnitude of the number of concurrent tasks/threads that a modern PC OS and its applications demand.
 
Let's not be silly. With all of those features Sony was bragging the PS4 would do in the background, it's obvious there will be applications competing for resources; and that's not even the end of it.

If the OS takes a core, then it takes a core and nothing more.

You still have seven others to run your game on, and they run nothing else.
 
I guess of course you are excluding XBLA right? Seems convenient. It amazes me that people on forums love to exclude things that don't fit their agenda.

Why bother? It's not worth it... he clearly does not know what he's talking about.

In fact, given that statement, I highly doubt he even has a 360 in the first place; and even if he owns one, he doesn't know shit about the 360's line-up of the last 4 years.
 
The article says nothing about hardware reliability. Also, you troll the Xbox way too much.

Thuway is reliable and if he says they're running into problems with hardware then I believe it. Thuway and crazy buttocks are two of the most reliable posters on this board when it comes to legit insider knowledge.
 
Even with some multitasking, a console is not going to come within an order of magnitude of the number of concurrent tasks/threads that a modern PC OS and its applications demand.

Certainly.

The big difference between PCs and consoles, in regards to what is using the CPU and when, is the ability to reserve cores for specific processes. On a PC, everything that needs access to the CPU is just in one big free-for-all.

True as well. But cores in the same module share some of the same cache, and they all have to hit the same main memory. Which is what the discussion was about: CPU-intensive applications that might be running in concert with a game. Kinect image processing would be one of those intensive CPU processes, and one that would need a decent amount of data from main memory.
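To put a rough figure on a Kinect-style workload like the one mentioned above, a back-of-envelope sketch; the resolution, frame rate, bytes per pixel and number of processing passes are assumptions for illustration, not actual sensor specs:

```python
# Rough memory traffic for continuously processing a camera feed (illustrative figures only).

def stream_mb_per_s(width, height, bytes_per_pixel, fps, passes=1):
    """passes = how many times each frame is read or written while being processed."""
    return width * height * bytes_per_pixel * fps * passes / 1e6

print(stream_mb_per_s(1920, 1080, 1.5, 30))            # a 1080p YUV-style colour stream: ~93 MB/s
print(stream_mb_per_s(512, 424, 2, 30))                # a small 16-bit depth map:        ~13 MB/s
print(stream_mb_per_s(1920, 1080, 1.5, 30, passes=5))  # a few processing passes:        ~470 MB/s
# Noticeable, but a small slice of a 100+ GB/s memory system; the bigger cost is sharing
# cache and memory controllers with the game, as noted above.
```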
 
Certainly.



True as well. But cores in the same module share some of the same cache, and they all have to hit the same main memory. Which is what the discussion was about: CPU-intensive applications that might be running in concert with a game. Kinect image processing would be one of those intensive CPU processes.

Durango also reserves cores for Kinect and the OS, plus it uses DDR3 RAM.
 
A software company that has already had reliability issues with hardware in the past continues to have them in the present?

Someone didn't own the original Xbox.

The launch PS2s were horribly unreliable. As are the fat PS3s.

Neither is on the RROD level, but still Sony are 2/3. Microsoft are 1/2. I'd be worried about launch PS4s if we're not going to choose to ignore history.
 
Thuway is reliable and if he says they're running into problems with hardware then I believe it. Thuway and crazy buttocks are two of the most reliable posters on this board when it comes to legit insider knowledge.

What has Thuway said about Microsoft that makes him reliable? When has he shown that he has inside information?
 
If the OS takes a core, then it takes a core and nothing more.

You still have seven others to run your game on, and they run nothing else.

*sigh*
Anyway, we're getting off topic.

The point is, Microsoft and Sony were forced to show their hand after the release of the Wii U. If Nintendo hadn't moved, then Microsoft and Sony would still be content with the Xbox 360 and the PS3. There's evidence of this everywhere. One big notable thing is the PS4 being rushed as it is.
 
I think MS expected a 2014 launch. Maybe Sony forced their hand. Bad if true. RROD if true.

The last thing I'd expect from Microsoft is a hardware problem like the RROD. After the huge financial cost, plus slight PR damage, I'm sure they've taken enough steps to ensure hardware reliability. Obviously there will be some problems, in line with the average for consumer electronics, but nothing on the scale of the original 360's.
Keep in mind the 360's parts were probably the height of large, hot components and were required to swap lead solder out for lead-free at the last moment. Couple this with a 'we must be first to market' mentality and it's pretty unsurprising it went so badly.

Edit: as an aside, I can't wait for the 21st to put all this rumour-mongering behind us. Obviously then will come the 'LOL @ M$/SONY' in-fighting, but still, I think that may be an improvement over where we're currently at.
 
*sigh*
Anyway, we're getting off topic.

The point is, Microsoft and Sony were forced to show their hand after the release of the Wii U. If Nintendo hadn't moved, then Microsoft and Sony would still be content with the Xbox 360 and the PS3. There's evidence of this everywhere. One big notable thing is the PS4 being rushed as it is.

Yes, there's contention for resources, but it's nothing like what there is on a normal desktop machine; consoles are a closed platform where you have absolute control over a large amount of the hardware.

How is there evidence of the PS4 being rushed? And as for the Wii U, given that it is not in any way, shape or form a competitor to either the next Xbox or the PS4, I don't really see how it is making Sony and Microsoft show their hand.
 
What has Thuway said about Microsoft that makes him reliable? When has he shown that he has inside information?

Well, he said that the games shown at E3 for Durango will blow people away, but that they will be downgraded by the time they come out, or something like that.

Let's wait and see how that turns out.
 
Someone didn't own the original Xbox.

The launch PS2s were horribly unreliable. As are the fat PS3s.

Neither is on the RROD level, but still Sony are 2/3. Microsoft are 1/2. I'd be worried about launch PS4s if we're not going to choose to ignore history.

Or a Surface, if we are talking about recent hardware.

What has Thuway said about Microsoft that makes him reliable? When has he shown that he has inside information?

Yep, someone should post what inside information he has got right.
 
Yes, there's contention for resources, but it's nothing like what there is on a normal desktop machine; consoles are a closed platform where you have absolute control over a large amount of the hardware.

How is there evidence of the PS4 being rushed? And as for the Wii U, given that it is not in any way, shape or form a competitor to either the next Xbox or the PS4, I don't really see how it is making Sony and Microsoft show their hand.

1) Uh huh

2) I told you already. Sony and Microsoft are releasing their next gen consoles this year. That is their hand.
 
*sigh*
Anyway, we're getting off topic.

The point is, Microsoft and Sony were forced to show their hand after the release of the Wii U. If Nintendo hadn't moved, then Microsoft and Sony would still be content with the Xbox 360 and the PS3. There's evidence of this everywhere. One big notable thing is the PS4 being rushed as it is.

No, they weren't. Please stop saying this. If anything, Sony and MS wanted to reveal earlier, but due to hardware changes they shifted back a bit.
 
What has Thuway said about Microsoft that makes him reliable? When has he shown that he has inside information?

Listen, even mods are confirming that certain posters are on the mark in this thread. Start from the beginning and read through. There's even a play-by-play of the turn of events over several pages.
 
Or a Surface, if we are talking about recent hardware.



Yep, someone should post what inside information he has got right.

If the Surface team is designing the new Xbox, it will be an amazing device, both aesthetically and in build quality. Fingers crossed.

Surface is rock solid and stunning.
 
 