http://archive.arstechnica.com/paedia/b/bandwidth-latency/bandwidth-latency-1.html
Bandwidth is not the only important thing!

That seems a bit dated, lol.
Suddenly everyone is an expert on RAM.
It's tough to tell from this angle but I think Sony has the high ground:
This is going to be an unpopular answer, but despite there being a real, tangible difference in performance, the types of games/gameplay and what will be possible on either platform won't really change to a significant degree.
I love how people bring in latency as a defence. Makes me wonder: if it were such a detrimental factor, would it ever have been mass-produced and used for one of the more computationally intense operations, i.e. graphics?
Is that because MS will force their version to look equal to the competition? Meaning we'll only see a difference on PC? Wasn't this a rumour/fact going around?
CPUs and GPUs have different RAM requirements.
CPUs want RAM with low latency, so they can very quickly access and move small chunks of data around.
GPUs want RAM with high bandwidth, so they can move large chunks of data.
DDR3 is suited for CPUs. It is low latency, but also low bandwidth. It is the de facto RAM found in PCs and servers. Spend $10,000 on a server, and it will use DDR3.
GDDR5 is suited for GPUs. It is high latency, but also high bandwidth. Graphics cards above entry level will use GDDR5 for VRAM.
The Xbox 360 was the pioneer in using GDDR (in its case, GDDR3) for both system RAM and VRAM. The PS4 is following suit. While this might be fine for dedicated gaming machines, for general-purpose computing and CPU-intensive work you want low-latency RAM, which is currently DDR3.
There is a reason the next Xbox has gone for the DDR3 + EDRAM approach. MS have designed the console for more than games. The non-gaming apps want DDR3. The EDRAM is there to mitigate the low-bandwidth main RAM to a certain degree. Sony seem to have designed the PS4 as a purebred gaming console. Different priorities resulted in different RAM architectures.
TL;DR you don't want GDDR5 as system RAM in a PC. When DDR5 finally comes to market, it might have the best of both worlds: low latency for CPUs and high bandwidth for GPUs. Only then would you want it as system RAM.
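To put rough numbers on that tradeoff, here's a quick back-of-the-envelope sketch. The bandwidth figures match the rumored console specs; the latency numbers are purely illustrative assumptions:

```python
# Back-of-the-envelope: time to fetch one chunk ~= latency + size / bandwidth.
# Bandwidths match the rumored console specs; the latencies are assumptions.

def fetch_ns(size_bytes, latency_ns, bandwidth_gb_s):
    transfer_ns = size_bytes / bandwidth_gb_s  # 1 GB/s == 1 byte per ns
    return latency_ns + transfer_ns

ddr3  = dict(latency_ns=50,  bandwidth_gb_s=68)    # low latency, low bandwidth
gddr5 = dict(latency_ns=100, bandwidth_gb_s=176)   # high latency, high bandwidth

for label, size in [("64 B cache line (CPU-style)", 64),
                    ("4 MB texture block (GPU-style)", 4 * 1024 * 1024)]:
    print(f"{label}: DDR3 {fetch_ns(size, **ddr3):,.0f} ns"
          f" vs GDDR5 {fetch_ns(size, **gddr5):,.0f} ns")

# Small fetches are dominated by the fixed latency (DDR3 wins); big streaming
# fetches are dominated by transfer time (GDDR5 wins by a wide margin).
```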
Great explanation. However, I believe you meant to say "DDR4" instead of "DDR5" in your tl;dr. DDR4 just recently got wrapped up as a spec. Work hasn't begun on DDR5.
Some other things that I think are important to note:
1) The 3 and the 5 are the version numbers, but for separate things. DDR5 is not a thing yet (they're still working on DDR4 which should start releasing this year or next). It's very important that you have the "G" on there (which stands for graphics). It pains me when people see GDDR5 and DDR3 and think one is the obviously superior version. They are two separate products (imagine if the X360 was named Xbox 2. This is similar to someone saying PS3 > XB2, even though they're two separate product lines).
2) GDDR5 is actually based on DDR3 (as was GDDR4). They're basically two sides of the same coin: DDR3 is focused on low latency, with the tradeoff of lower bandwidth, and GDDR5 has higher bandwidth at the cost of higher latency.
http://archive.arstechnica.com/paedia/b/bandwidth-latency/bandwidth-latency-1.html
Bandwidth is not the only important thing!
GPGPU is better with low latency memory.
Would the 32MB of super-fast RAM on Durango make their 8GB of DDR3 equivalent to the PS4's same amount of GDDR5?
Significantly more.

As much as this gen, or more, you think?
Wow, how old is that?
There is a reason the next Xbox has gone for the DDR3 + EDRAM approach. MS have designed the console for more than games. The non-gaming apps want DDR3.
GPGPU is better with low latency memory.
Correct me if I'm wrong, but in an efficient architecture shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.

Read these, OP:
Guys, you keep forgetting about the ray tracing chip in Durango!
Correct me if I'm wrong, but in an efficient architecture shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.
If the developers are satisfied, I really fail to see the issue here. Would you like to go on a platform and tell the devs how wrong they are in supporting GDDR5?
Before it was a matter of half the RAM. Now it's a matter of latency. FFS...
Does anyone have hard latency figures for DDR3 (whatever Durango is using, 2133 I believe) and GDDR5?
Metro apps don't need 8GB.
Oh, sorry. With cache, I meant the L1/L2 cache pools that are attached to the CPU, not the ESRAM, which will mostly benefit Durango's GPU.

I really think they were planning this when they chose DDR3 (to keep costs down) and to be as efficient as possible.
I think people are underestimating the overall design and how they are trying to reach *near* gaming parity at a much lower cost while adding other features for long-term growth.
Metro apps don't need 8GB.

Of course they don't =p
Correct me if I'm wrong, but in an efficient architecture shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.
This is going to be an unpopular answer, but despite there being a real, tangible difference in performance, the types of games/gameplay and what will be possible on either platform won't really change to a significant degree.

Honestly, this is probably the answer closest to what will actually happen. Even though 8 gigs of GDDR5 is incredible from a technical perspective, I think developers will be able to utilize the 8 gigs of Durango's DDR3 to identical effect. I don't really think anyone outside of a few first parties will utilize the GDDR5 to the point where it makes a noticeable difference in third-party games on both systems.
http://www.tomshardware.com/reviews/quad-channel-ddr3-memory-review,3100.html
This has the timings of high-end quad-channel DDR3.
http://www.sisoftware.net/?d=qa&f=gpu_mem_latency
This has some latency benchmarks for an AMD APU with DDR3 and a standalone GPU with GDDR5.
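For the DDR3 side, you can also convert the CL timings in the first link into nanoseconds yourself. Quick sketch; the timings below are common retail DDR3 examples, not confirmed Durango figures:

```python
# Convert DDR3 CAS latency (CL, in cycles) to nanoseconds.
# DDR transfers twice per clock, so the actual clock is half the data rate.
# The CL values below are common retail timings, not confirmed console specs.

def cas_ns(cl_cycles, data_rate_mt_s):
    clock_mhz = data_rate_mt_s / 2          # e.g. DDR3-2133 -> 1066.5 MHz clock
    return cl_cycles / clock_mhz * 1000     # cycles / (cycles per us) -> ns

print(f"DDR3-1600 CL9:  {cas_ns(9, 1600):.1f} ns")   # ~11.3 ns
print(f"DDR3-2133 CL11: {cas_ns(11, 2133):.1f} ns")  # ~10.3 ns

# Note how the absolute latency barely moves across speed grades; faster
# data rates mostly buy you bandwidth, not lower latency.
```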
Honestly, I think quantity was the main issue, not bandwidth.
Developers didn't enjoy having to go back and butcher their assets and massage their memory management so their game builds wouldn't crap out from running out of memory.
If the timing had lined up, Durango could have gone with DDR4, but DDR4 is only being released in late 2013 and won't be feasible from a cost point of view until 2014+.
Yes. I fail to see how latency is a critical factor here. It will be very dependent on your algorithm, but if you have to rely on the latency of your system RAM, that means your CPU is starving, and that's not efficient at all. DDR3 over GDDR5 won't save you here; it's a matter of sucking bad versus sucking worse.

Judging from these threads so far, it seems "latency latency latency" is going to be the mantra for GDDR5 detractors.
Do we know anything about the caches and/or local store built into the SoC architecture of this thing though?
I think without knowing about that, it's waaaaaaaaay premature to be talking about RAM latency affecting CPU operations.
After all, this is Sony, who helped design the PS3's Cell SPUs to each have their own little storage onboard to stream data.
Also, we don't know anything about the memory controller, and how it may be optimized to help the CPU.
Correct me if I'm wrong, but in an efficient architecture shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.
Right, it does add 32MB to the Xbox 720!
You don't really get the concept of embedded memory, do you?

Sure I do. Doesn't change the amount of memory the console has access to in a frame.
Yes, but how do the instructions get to the cache?

There are a lot of branch prediction algorithms for that. Of course it depends on your algorithm, but basically, code is mostly sequential, so once an instruction is accessed and goes into cache, the instructions that follow go into cache too, because chances are they are the next ones to execute. As a result, the first access may have been performed in system RAM, but the following ones will be performed in cache, and overall your CPU feeds from cache, not from system RAM. It's just a game of anticipation. Sometimes you'll lose the game and get a cache miss, but overall you win: your CPU is much more efficient and does not starve.
Think of it like this.
You're a CPU, and you are drinking beer (aka, executing instructions). You reach into the six-pack that's right next to you (L1 cache), but you're all out (aka, cache miss). Then, you get up and head into your fridge (L2 cache) to see if you have another six pack you can bring to your couch. If you don't, then you hop in your car and head to the supermarket (RAM). When you're at the supermarket, you're going to bring all of the beer that can fit in your fridge (fill your L2 cache). Once you get home and put all the beer in your fridge, you then take a six pack and head back to your couch (fills your L1 cache). Then you start drinking the beer again, and the cycle goes on.
Since L2 cache is typically around 1-2MB, you can see why higher bandwidth doesn't really matter in this scenario. What's most important is the latency involved in getting things from the RAM. In my example, it doesn't matter if you use a minivan or a semi-truck to get your beers from the store. You're limited by how much you can put in your fridge (L2 cache).
Now, a GPU on the other hand has a different architecture and a different set of problems. Since they're dealing with larger data sets (textures can be quite large in memory!), you want to maximize the amount of data you can push. Instead of the CPU using a mini-van to move the beer around (which is most efficient for its uses), the GPU would prefer to use a semi-truck, even though it would take a bit longer to get to its destination.
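If you want the beer run as an actual formula, the textbook metric is average memory access time (AMAT). A tiny sketch with assumed hit rates and latencies (none of these are confirmed console numbers):

```python
# The beer run as a formula: average memory access time (AMAT).
# Hit rates and latencies are assumed, illustrative values only.

L1_NS, L2_NS = 1, 10                  # six-pack on the couch, fridge
RAM_NS = {"DDR3": 60, "GDDR5": 120}   # supermarket trip (assumed latencies)

def amat_ns(l1_hit, l2_hit, ram_ns):
    # Each level's penalty is only paid on a miss at the level above.
    return L1_NS + (1 - l1_hit) * (L2_NS + (1 - l2_hit) * ram_ns)

for name, ram in RAM_NS.items():
    print(f"{name}: {amat_ns(0.95, 0.90, ram):.2f} ns average per access")

# With ~95% L1 and ~90% L2 hit rates, only ~0.5% of accesses pay the full
# trip to RAM, so the caches soak up most of the latency gap; how much the
# leftover misses hurt depends entirely on the workload.
```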
Can someone with more knowledge than myself talk about how the different RAM might affect porting between the 2 systems? Will it be difficult at all based on having to use esram in Durango vs a single pool in PS4? Thanks.
1. Better textures
2. More textures
3. More animations
4. More NPCs
5. More randomness
6. Larger worlds
7. More techniques (better types of lighting, DOF, AA)
In short, games will be made to accommodate less constraints. They will look fucking beautiful.
No. Only a speed benefit to the extent of 32MB. The 8GB of DDR3 would, however, still be stuck at the same lower bandwidth.
All these threads about Durango vs. PS4 now... and many people thinking the PS4 is much more powerful than Durango.
We don't even know what Durango's specs really are... if it ends up being more powerful than the PS4, it would do massive damage to many people.
What I mean is that some people should chill and wait before saying out loud how much more powerful or better either console is.
While nothing has been confirmed, this is what has been heavily rumored:
Memory: 8GB DDR3 (68GB/s)
GPU: 12 CUs @ 800MHz (1.23TFLOPS), 32MB ESRAM (102GB/s), 4 DMEs
CPU: 8 Jaguar cores @ 1.6GHz
We'll see if Durango can Dodge the Ram issue.

This... This post will NEVER be topped!
There are a lot of branch prediction algorithms for that. Basically, code is mostly sequential, so once an instruction is accessed and goes into cache, the instructions that follow go in too, because chances are they are the next ones to execute. The first access may have been performed in system RAM, but the following ones will be performed in cache, and overall your CPU feeds from cache, not from system RAM. Sometimes you'll get a cache miss, but overall your CPU is much more efficient and does not starve.
But this is trivial stuff; there are much more advanced algorithms that ensure your CPU will mostly rely on its cache.
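If you want to watch that anticipation game happen on your own machine, here's a rough sketch. Pure Python adds a lot of interpreter noise, but the sequential-vs-random gap usually still shows:

```python
# Rough demo of the "anticipation game": sequential access rides the cache
# lines and prefetcher, random access mostly misses. Pure Python adds a lot
# of interpreter overhead, but the gap usually still shows on a big array.
import random
import time

N = 2_000_000
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]        # same indices, shuffled order
random.shuffle(rand_order)

def walk(order):
    total = 0
    for i in order:
        total += data[i]
    return total

for label, order in [("sequential", seq_order), ("random", rand_order)]:
    start = time.perf_counter()
    walk(order)
    print(f"{label}: {time.perf_counter() - start:.3f} s")

# Same arithmetic, same number of accesses -- the difference is how often
# the CPU finds the next element already sitting in cache.
```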
All these threads about Durango vs. PS4 now... and many people thinking the PS4 is much more powerful than Durango.
We don't even know what Durango's specs really are... if it ends up being more powerful than the PS4, it would do massive damage to many people.
What I mean is that some people should chill and wait before saying out loud how much more powerful or better either console is.
RAM amount is most important. PS4 has the edge since it has less devoted to the OS.
Update your CPU specs.
Render targets don't take large amounts of memory, though. But they can require massive amounts of bandwidth. The whole idea of embedded memory on GPUs is to keep the most bandwidth-hungry and latency-dependent aspect of games rendering on chip, away from main memory. You then leave main RAM free to deal with the less bandwidth-hungry and latency-dependent aspects such as game code and textures.
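Some rough numbers on that, with assumed overdraw and read/write factors (illustrative only, not real Durango figures):

```python
# Why a small render target can eat a huge slice of bandwidth.
# All the workload factors below are illustrative assumptions.

width, height = 1920, 1080
bytes_per_pixel = 4     # RGBA8 colour target
fps = 60
overdraw = 4            # assumed: each pixel touched ~4x per frame
rw_factor = 2           # assumed: read + write per touch (blending)

size_mb = width * height * bytes_per_pixel / 1024**2
traffic_gb_s = width * height * bytes_per_pixel * overdraw * rw_factor * fps / 1e9

print(f"1080p RGBA8 target: {size_mb:.1f} MB")                  # ~7.9 MB, fits in 32MB
print(f"Traffic for that one target: {traffic_gb_s:.1f} GB/s")  # ~4 GB/s

# Add a depth buffer, several G-buffer targets and MSAA, and this multiplies
# fast -- tens of GB/s generated by well under 32MB of actual data. That's
# the traffic you want on embedded memory, leaving main RAM for code and
# textures.
```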