I've been reading more about GDDR5 vs DDR3, and one thing that many other sites are talking about, but hasn't come up here, is latency. GDDR5 actually has higher latency than DDR3; that's the price it pays for its increased bandwidth.
In fact, GDDR5 is actually based on DDR3 memory!
Remember that the "G" in GDDR5 stands for graphics. The reason why this type of memory is found in GPU's is because a GPU is typically performing lots of calculations in parallel, making the latency almost a non-issue. However, when you use that same memory when dealing with a CPU, it actually starts performing worse than DDR3, since CPU's act in a linear fashion (execute instruction X, then Y, then Z, etc...). Of course, you can split up your compute jobs between multiple cores, but each core still executes in a linear fashion. This means that when the CPU needs additional instructions to execute, GDDR5 will actually be slower to respond than DDR3. Bandwidth doesn't matter as much as latency in this scenario, because you aren't shifting a lot of data, but rather you want the next instruction to come as soon as possible.
This issue of latency is actually one of the reasons why they don't sell GDDR5 as system RAM. It's not that they can't, it's just that DDR3 is a better fit. GDDR5 is mostly useful on graphics cards, where almost all of the work can be done in parallel, while DDR3 is more useful for CPUs, where instruction streams are largely serial and latency matters more.
Perhaps that's why Sony put extra compute units on the GPU: they want to offload as much as they can from the CPU because of GDDR5's latency issues?
After looking at all of this, I'm actually not 100% sure that GDDR5 is always better than DDR3. It seems like an apples-and-oranges comparison. You need to take the rest of the system into account, not just focus on an individual piece (in this case, the type of RAM).
Thoughts?