PS4 Ram vs. Durango Ram: How big of a difference does it make?

It's tough to tell from this angle but I think Sony has the high ground:

7446577620_f2b9075741.jpg

That picture is hilarious because the lower Ram is like "bitch I don't even care, I don't even wanna fuck that female over there."
 
This is going to be an unpopular answer, but despite there being a real, tangible difference in performance, the types of games/gameplay and what will be possible on either platform won't really change to a significant degree.

I agree completely
 
I love how people bring in latency as a defence. It makes me wonder: if latency were such a detrimental factor, it would never have been mass produced and used for one of the more computationally intense operations, i.e. graphics.

GPGPU is better with low latency memory.
 
Is that because MS will force their version to look equal to the competition? Meaning we'll only see a difference on PC? Wasn't this a rumour/fact going around?

Unless MS can force every dev to make their games run at a constant 30fps 100% of the time, which is impossible, the PS4 version will have a better framerate by default. High bandwidth is much more important for graphical tasks.
 
Read these OP:

CPUs and GPUs have different RAM requirements.

CPUs want RAM with low latency, so they can very quickly access and move small chunks of data around.
GPUs want RAM with high bandwidth, so they can move large chunks of data.

DDR3 is suited to CPUs. It is low latency, but also low bandwidth. It is the de facto RAM found in PCs and servers; spend $10,000 on a server and it will use DDR3.

GDDR5 is suited to GPUs. It is high latency, but also high bandwidth. Graphics cards above entry level will use GDDR5 for VRAM.

The Xbox 360 was the pioneer in using GDDR (in its case, GDDR3) for both system RAM and VRAM. The PS4 is following suit. While this might be fine for dedicated gaming machines, for general purpose computing and CPU-intensive work you want low latency RAM, which is currently DDR3.

There is a reason the next Xbox has gone for the DDR3 + ESRAM approach. MS have designed the console for more than games, and the non-gaming apps want DDR3. The ESRAM is there to mitigate the low-bandwidth main RAM to a certain degree. Sony seem to have designed the PS4 as a purebred gaming console. Different priorities resulted in different RAM architectures.

TL;DR you don't want GDDR5 as system RAM in a PC. When DDR5 finally comes to market, it might have the best of both worlds: low latency for CPUs and high bandwidth for GPUs. Only then would you want it as system RAM.
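
To make the latency/bandwidth split concrete, here's a rough C sketch (purely illustrative, nothing console-specific; the function names are made up): the first loop is the kind of dependent, pointer-chasing access that latency-sensitive CPU code looks like, the second is the kind of sequential streaming a bandwidth-hungry GPU-style workload looks like.

#include <stddef.h>

/* Latency-bound: each load depends on the previous one, so the core
   stalls for the full memory latency on every step. This is the access
   pattern that DDR3's lower latency helps with. */
size_t chase(const size_t *next, size_t start, size_t steps)
{
    size_t i = start;
    while (steps--)
        i = next[i];            /* serial dependency chain */
    return i;
}

/* Bandwidth-bound: independent sequential loads that the hardware can
   prefetch and pipeline; throughput is limited by how many bytes per
   second the memory can deliver, which is where GDDR5 shines. */
double stream_sum(const double *data, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum += data[i];         /* prefetcher hides most of the latency */
    return sum;
}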

Great explanation. However, I believe you meant to say "DDR4" instead of "DDR5" in your tl;dr. DDR4 just recently got wrapped up as a spec. Work hasn't begun on DDR5.

Some other things that I think are important to note:

1) The 3 and the 5 are version numbers, but for separate things. DDR5 is not a thing yet (they're still working on DDR4, which should start releasing this year or next). It's very important that you keep the "G" on there (it stands for graphics). It pains me when people see GDDR5 and DDR3 and think one is the obviously superior version. They are two separate products (imagine if the X360 had been named Xbox 2; this is like someone saying PS3 > XB2, even though they're two separate product lines).

2) GDDR5 is actually based on DDR3 (as was GDDR4). They're basically two sides of the same coin: DDR3 is focused on low latency, with the tradeoff of lower bandwidth, while GDDR5 has higher bandwidth at the cost of higher latency.
 

That is great from a PC CPU/DRAM 101 standpoint, but it is a little dated and not directly relevant to AMD APUs.

http://www.tomshardware.com/reviews/memory-bandwidth-scaling-trinity,3419.html

http://www.sisoftware.net/?d=qa&f=gpu_mem_latency


GPGPU is better with low latency memory.

I think you have that backwards. All GPGPU cards have GDDR5 because the data is streamed and the calculations are done in parallel; they are not fetching a little bit of data here and there. Unless you want to argue that the high end Tesla cards should be using cheap DDR3?

http://en.wikipedia.org/wiki/Nvidia_Tesla
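
To illustrate the "streamed and done in parallel" point, here's a minimal sketch in plain C with OpenMP standing in for what would be thousands of GPU threads (illustrative only; SAXPY is just the textbook streaming kernel, not anything from a real console SDK).

#include <stddef.h>

/* SAXPY: y = a*x + y. Every element is independent, so the work is
   trivially parallel and the chip spends its time streaming two big
   arrays in and one out. Throughput scales with memory bandwidth,
   not with how quickly any single load completes. */
void saxpy(float a, const float *x, float *y, size_t n)
{
    #pragma omp parallel for    /* on a GPU this would be one thread per element */
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}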
 
Would the 32MB of super-fast RAM on Durango make their 8GB of DDR3 equivalent to the PS4's same amount of GDDR5 ?

The 32MB of "super-fast" RAM that MS is using is significantly slower than GDDR5. The embedded SRAM they've gone with is supposed to be at ~100GB/s of bandwidth, while the GDDR5 in the PS4 is confirmed to be at 176GB/s.
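
Back-of-the-envelope, using the 176, 68 and 102 GB/s numbers floating around and assuming a 60fps target (rough peak figures, not sustained):

#include <stdio.h>

int main(void)
{
    const double fps = 60.0;
    /* rumored peak bandwidth figures, GB/s */
    const double ps4_gddr5     = 176.0;
    const double durango_ddr3  = 68.0;
    const double durango_esram = 102.0;   /* only covers a 32MB working set */

    printf("PS4 GDDR5:     %.2f GB per frame\n", ps4_gddr5 / fps);      /* ~2.93 */
    printf("Durango DDR3:  %.2f GB per frame\n", durango_ddr3 / fps);   /* ~1.13 */
    printf("Durango ESRAM: %.2f GB per frame\n", durango_esram / fps);  /* ~1.70, but only to/from 32MB */
    return 0;
}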

as much as this gen or more you think?
Significantly more.

The gap between the PS3 and Xbox 360 in raw power was marginal: the PS3 had a significantly better CPU, a slightly weaker GPU, and less available memory than the Xbox 360. All combined, the Cell (the PS3's CPU) could pick up the slack for the GPU and then some, but it couldn't make up for the missing RAM or the divided memory architecture. Add a convoluted hardware design and you get what we got last generation. PS3 exclusives from Sony were the most graphically impressive games on the market by a noticeable amount because their first party teams had no choice but to fight the pain-in-the-ass hardware, while third parties stuck with the X360 as their target because it was 1. easier and 2. out sooner, so it had more of an install base, resulting in the 360 generally getting the best multiplats.

Now we're looking at a PS4 that was designed from the ground up by a highly successful development consultant, in an industry that normally never hires consultants, with the sole focus of giving developers everything they need. At the same time, most current rumors point to that system, the PS4, being significantly stronger on the GPU side, having much better memory bandwidth (with no hoops to jump through like the caching Durango will require), and a comparable CPU.

So Sony's biggest issue last generation, the pain-in-the-ass design, is gone. The hardware gap is no longer CPU-heavy for Sony and GPU-heavy for MS; it's now comparable on the CPU side and tangibly pro-Sony on the GPU side. On memory, we already had multiple sources saying that 4GB of GDDR5 was better than the 8GB DDR3 + 32MB ESRAM, and that 4GB just got doubled.

Ultimately it is all about the games, but Sony has thrown one hell of an opening pitch to every developer and publisher as to why the PS4 should be their default target platform. It's all about how much they buy in. The show on the 20th, and the reaction since then, sure sound like a lot of them are at least VERY curious.
 
Wow how old is that?

Those are fundamental concepts in computer chip engineering based on simple physics laws, not iPhone revisions.

He's right, and bandwidth is most certainly not the only thing that matters; latency is important too. But for video game workloads I'd always sacrifice the latter for the former, so Sony's design choice is superior, assuming Durango is DDR3.

Will there be corner cases where that disparity plays in favor of Durango? Maaaaaybe, but I doubt it will happen for game content. Microsoft's OS is probably gonna be much better and snappier than the PS4's, but that would be a given anyway.
 
Stupid comment, but going over the old system diagrams, it looked like the Durango CPU doesn't have access to the eSRAM, just the GPU does. Everything else had to go over the northbridge. Is the possible bandwidth limit on the CPU a minor setback? Or was this amended in later material?

Or, since the eSRAM is only accessible from the GPU, wouldn't it make things a bit tough to juggle certain aspects from the GPU to the CPU and back again?

Apologies if I'm completely off the mark here.
 
GPGPU is better with low latency memory.

If the developers are satisfied, I really fail to see the issue here. Would you like to go on a platform and tell the devs how wrong they are in supporting GDDR5?

Before it was a matter of half the RAM. Now it's a matter of latency. FFS...

Does anyone have hard latency figures for DDR3 (whatever Durango is using, 2133 I believe) and GDDR5?
 
Read these OP:
Correct me if I'm wrong, but in an efficient architecture, shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.
 
Correct me if I'm wrong, but in an efficient architecture, shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.

I really think they were planning for this when they chose DDR3 (to keep costs down) and to be as efficient as possible.

I think people are underestimating the overall design and how they are trying to reach *near* gaming parity at a much lower cost while adding other features for long term growth.
 
If the developers are satisfied, I really fail to see the issue here. Would you like to go on a platform and tell the devs how wrong they are in supporting GDDR5?

Before it was a matter of half the RAM. Now it's a matter of latency. FFS...

Does anyone have hard latency figures for DDR3 (whatever Durango is using, 2133 I believe) and GDDR5?

http://www.tomshardware.com/reviews/quad-channel-ddr3-memory-review,3100.html

This has the timings of the high end quad channel DDR3.

http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

This has some latency benchmarks for an AMD APU with DDR3 and stand alone GPU with GDDR5.
 
honestly i think quantity was the main issue, not bandwidth.

developers didn't enjoy having to go back and butcher their assets and massage their memory management so their game builds wouldn't crap out from running out of memory.

if timing had lined up, durango could have gone with DDR4, but DDR4 is only being released late 2013 and won't be feasible from a cost point of view until 2014+
 
*kicks the doors open at Microsoft's hardware R&D office, while holding a boombox*

"All right, I'm going to need a lot of extra 360s and some duct tape. Let's get to work."

*plays "Push It to the Limit" from Scarface on the boombox with a montage of me writing a bunch of equations on a white board, crumpling up multiple schematics of the design, and arguing with guys in lab coats over where pieces of duct tape should be placed*
 
I really think they were planning for this when they chose DDR3 (to keep costs down) and to be as efficient as possible.

I think people are underestimating the overall design and how they are trying to reach *near* gaming parity at a much lower cost while adding other features for long term growth.
Oh, sorry. With "cache" I meant the L1 and L2 cache pools attached to the CPU, not the eSRAM, which will mostly benefit Durango's GPU.
 
It will only be noticeable if you compare Sony's 1st party games with multiplats and 1st party Microsoft titles. Only Sony's first party developers will seek to make the most of the GDDR5 memory; with multiplatform developers you'll see very little difference, or none at all.

Sony's higher speed memory isn't that much of an advantage.
 
Judging from these threads so far, it seems "latency latency latency" is going to be the mantra for GDDR5 detractors.

Do we know anything about the caches and/or local store built into the SoC architecture of this thing though?

I think without knowing about that, it's waaaaaaaaay premature to be talking about RAM latency affecting CPU operations.

After all, this is Sony, who helped design the PS3's Cell SPUs to each have their own little storage onboard to stream data.

Also, we don't know anything about the memory controller, and how it may be optimized to help the CPU.
 
Metro apps don't need 8GB.
of course they don't =p
Correct me if I'm wrong, but in an efficient architecture, shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.

I can't correct what I'm unsure of myself. It's just that I read these recently and they sounded right.
 
This is going to be an unpopular answer but despite there being a real tangible difference in performance, the types of games/gameplay and what will be possible on either platform wont really change to a significant degree.
Honestly, this is probably the answer closest to what will actually happen. Even though 8 gigs of GDDR5 for everything is incredible from a technical perspective, I think developers will be able to utilize the 8 gigs of Durango's DDR3 to identical effect. I don't really think anyone outside of a few first parties will utilize the GDDR5 to the point where it makes a noticeable difference in third party games on both systems.

The only system where even the uninformed public will see a clear, discernible difference in games is obviously the Wii U, simply because ALL of its hardware is gimped compared to the PS4/Durango.
 
I'd say it's close to the PS3 vs. 360 situation during the first year, when the PS3's OS footprint was huge and all multiplats showed a decided RAM advantage for the 360. The PS4 will have a similar multiplat advantage over Durango, IMO.
 
honestly i think quantity was the main issue, not bandwidth.

developers didn't enjoy having to go back and butcher their assets and massage their memory management so their game builds wouldn't crap out from running out of memory.

if timing had lined up, durango could have gone with DDR4, but DDR4 is only being released late 2013 and won't be feasible from a cost point of view until 2014+

I think Sony was doing something strange to get all 8 gigs of GDDR5 into the PS4. It was something to the effect that the modules required to make 8 gigs aren't mass produced yet, so Sony had to step up their game and make sure they would be. Maybe Microsoft is doing something similar.
 
Judging from these threads so far, it seems "latency latency latency" is going to be the mantra for GDDR5 detractors.

Do we know anything about the caches and/or local store built into the SoC architecture of this thing though?

I think without knowing about that, it's waaaaaaaaay premature to be talking about RAM latency affecting CPU operations.

After all, this is Sony, who helped design the PS3's Cell SPUs to each have their own little storage onboard to stream data.

Also, we don't know anything about the memory controller, and how it may be optimized to help the CPU.
Yes. I fail to see how latency is a critical factor here. It will be very dependent on your algorithm, but if you have to rely on the latency of your system RAM, that means your CPU is starving, and that's not efficient at all. DDR3 over GDDR5 won't save you there; it's a matter of sucking bad versus sucking worse.

But I'm not too adamant here, I'm at the far edges of my computer skills...
 
Correct me if I'm wrong, but in an efficient architecture, shouldn't a CPU primarily get its information from cache, not from system RAM? In that regard, the difference in latency wouldn't be that important if most accesses are made from cache and not from system RAM.

Yes, but how do the instructions get to the cache?

Think of it like this.

You're a CPU, and you are drinking beer (aka, executing instructions). You reach into the six-pack that's right next to you (L1 cache), but you're all out (aka, cache miss). Then, you get up and head into your fridge (L2 cache) to see if you have another six pack you can bring to your couch. If you don't, then you hop in your car and head to the supermarket (RAM). When you're at the supermarket, you're going to bring all of the beer that can fit in your fridge (fill your L2 cache). Once you get home and put all the beer in your fridge, you then take a six pack and head back to your couch (fills your L1 cache). Then you start drinking the beer again, and the cycle goes on.

Since L2 cache is typically around 1-2MB, you can see why higher bandwidth doesn't really matter in this scenario. What's most important is the latency involved in getting things from RAM. In my example, it doesn't matter if you use a mini van or a semi truck to get your beers from the store. You're limited by how much you can put in your fridge (L2 cache).

Now, a GPU on the other hand has a different architecture and a different set of problems. Since they're dealing with larger data sets (textures can be quite large in memory!), you want to maximize the amount of data you can push. Instead of the CPU using a mini-van to move the beer around (which is most efficient for its uses), the GPU would prefer to use a semi-truck, even though it would take a bit longer to get to its destination.
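
A tiny C sketch of the "fill the fridge" idea (simplified, like the analogy): one trip out to RAM brings back a whole 64-byte cache line, so the expensive miss is paid once and the next several dozen accesses hit cache.

#include <stdint.h>
#include <stddef.h>

/* Summing a byte array: the first access to each 64-byte cache line is
   a miss that goes out to RAM (the trip to the store); the next 63
   accesses hit the line already sitting in L1 (the six-pack on the
   couch). Only about 1 access in 64 ever sees main-memory latency. */
uint64_t sum_bytes(const uint8_t *buf, size_t n)
{
    uint64_t total = 0;
    for (size_t i = 0; i < n; ++i)
        total += buf[i];
    return total;
}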
 
The differences you see between ports will be akin to Call of Duty Black Ops on 360 versus PS3. There is nothing that lowering resolution, effects, and framerates can't fix. This is amplified by the diminishing returns going in next generation. On a technical level, Orbis will have the superior ports; however, how that will manifest itself on screen is a whole other story.
 
Yes, but how do the instructions get to the cache?

Think of it like this.

You're a CPU, and you are drinking beer (aka, executing instructions). You reach into the six-pack that's right next to you (L1 cache), but you're all out (aka, cache miss). Then, you get up and head into your fridge (L2 cache) to see if you have another six pack you can bring to your couch. If you don't, then you hop in your car and head to the supermarket (RAM). When you're at the supermarket, you're going to bring all of the beer that can fit in your fridge (fill your L2 cache). Once you get home and put all the beer in your fridge, you then take a six pack and head back to your couch (fills your L1 cache). Then you start drinking the beer again, and the cycle goes on.

Since L2 cache is typically around 1-2MB, you can see why higher bandwidth doesn't really matter in this scenario. What's most important is the latency involved in getting things from RAM. In my example, it doesn't matter if you use a mini van or a semi truck to get your beers from the store. You're limited by how much you can put in your fridge (L2 cache).

Now, a GPU on the other hand has a different architecture and a different set of problems. Since they're dealing with larger data sets (textures can be quite large in memory!), you want to maximize the amount of data you can push. Instead of the CPU using a mini-van to move the beer around (which is most efficient for its uses), the GPU would prefer to use a semi-truck, even though it would take a bit longer to get to its destination.
There are a lot of branch prediction algorithms for that. Of course it depends on your algorithm, but basically the code is mostly sequential, so once an instruction is accessed and goes into cache, the instructions that follow go into cache too, because chances are they are the next ones to execute. As a result, the first access may have been made from system RAM, but the following ones will be made from cache, and overall your CPU feeds on cache, not on system RAM. It's just a game of anticipation. Sometimes you'll lose the game and get a cache miss, but overall you win: your CPU is much more efficient and does not starve.

But this is trivial stuff, there are much more advanced algorithms that ensure your CPU will mostly rely on its cache.
 
Can someone with more knowledge than myself talk about how the different RAM might affect porting between the 2 systems? Will it be difficult at all based on having to use esram in Durango vs a single pool in PS4? Thanks.

1. Better textures
2. More textures
3. More animations
4. More NPC's
5. More randomness
6. Larger worlds
7. More techniques (better types of lighting, DOF, AA)


In short, games will be made with fewer constraints. They will look fucking beautiful.
 
1. Better textures
2. More textures
3. More animations
4. More NPC's
5. More randomness
6. Larger worlds
7. More techniques (better types of lighting, DOF, AA)


In short, games will be made with fewer constraints. They will look fucking beautiful.

This is all true, but it will only be really apparent in titles exclusive to the PS4. In those cases the developer will exploit every unique aspect of the console's environment without concern for porting issues (since they're exclusive).

In multiplats? We'll get slightly better looking titles, if that.
 
All these threads about Durango vs. PS4 now... and many people thinking the PS4 is much more powerful than Durango.

We don't even know what Durango's specs really are... if it ends up being more powerful than the PS4, it would do massive damage to a lot of people here.

What I mean is that some people should chill and wait before declaring out loud how much more powerful or better either console is.
 
No. Only a speed benefit to the extent of 32MB. The 8GB of DDR3 would, however, still be stuck at the same, lower bandwidth.

Render targets don't take large amounts of memory, but they can require massive amounts of bandwidth. The whole idea of embedded memory on GPUs is to keep the most bandwidth-hungry and latency-dependent aspect of games rendering on chip, away from main memory. You then leave main RAM free to deal with the less bandwidth-hungry and latency-dependent aspects such as game code and textures.
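
Rough numbers on that, assuming 1080p and 32-bit render targets (both assumptions on my part):

#include <stdio.h>

int main(void)
{
    const double width = 1920.0, height = 1080.0;
    const double bytes_per_pixel = 4.0;   /* e.g. an RGBA8 or a depth target */
    const double mb = width * height * bytes_per_pixel / (1024.0 * 1024.0);

    printf("One 1080p 32-bit render target: %.1f MB\n", mb);               /* ~7.9 MB */
    printf("Four such targets (a typical G-buffer): %.1f MB\n", 4.0 * mb); /* ~31.6 MB */
    return 0;
}

Which lines up rather neatly with a 32MB pool of embedded memory.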
 
All these threads about Durango vs. PS4 now... and many people thinking the PS4 is much more powerful than Durango.

We don't even know what Durango's specs really are... if it ends up being more powerful than the PS4, it would do massive damage to a lot of people here.

What I mean is that some people should chill and wait before declaring out loud how much more powerful or better either console is.
While nothing has been confirmed, this is what has been heavily rumored:

Memory: 8GB DDR3 (68GB/s)

GPU: 12 CUs @ 800MHz (1.23 TFLOPS), 32MB ESRAM (102GB/s), 4 DMEs

CPU: 8 Jaguar cores @ 1.6GHz
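
For what it's worth, those rumored figures are at least internally consistent, assuming a 256-bit DDR3-2133 bus and standard GCN compute units (both assumptions; rough arithmetic only):

#include <stdio.h>

int main(void)
{
    /* DDR3-2133 on an assumed 256-bit bus: 2133 MT/s * 32 bytes per transfer */
    const double ddr3_gbps = 2133e6 * 32.0 / 1e9;           /* ~68.3 GB/s */

    /* 12 GCN CUs, 64 ALUs each, 2 FLOPs per ALU per clock, at 800MHz */
    const double tflops = 12.0 * 64.0 * 2.0 * 800e6 / 1e12; /* ~1.23 TFLOPS */

    printf("DDR3 bandwidth: %.1f GB/s\n", ddr3_gbps);
    printf("GPU throughput: %.2f TFLOPS\n", tflops);
    return 0;
}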
 
There are a lot of branch prediction algorithms for that. Basically, the code is mostly sequential, so once an instruction is accessed and goes into cache, the instructions that follow go too, because chances are they are the next ones to execute. The first access may have been made from system RAM, the following ones will be made from cache, and overall your CPU feeds on cache, not system RAM. Sometimes you'll get a cache miss, but overall your CPU is much more efficient and does not starve.

But this is trivial stuff, there are much more advanced algorithms that ensure your CPU will mostly rely on its cache.

A CPU never executes from RAM. Ever. It always grabs its instructions from the closest available cache (hopefully L1). Closest could mean that it has to go to RAM to get it, but it will bring back a full cache line, plus whatever the prefetcher pulls in behind it (more than it immediately needs), so for the next execution it won't have to look that far. All CPUs perform this way.

Sorry if my analogy didn't make that clear (it is a bit simplified after all... it's about an alcoholic CPU). When it hits a cache miss, it has to get the instruction from the next level of cache up, until it finds it. Eventually it may go out to RAM, but it still pulls in more data than it immediately needs, because it's trying to predict what it will need next. If it has to go out to RAM, it may as well be as efficient as it can, right? In my analogy, the guy would never go to the store and come back with a single beer; he's going to fill the fridge. Either way, the data filters back through the various caches, down to L1, before the next instruction executes.

Yes, a lot of code is sequential, which is why the CPU always looks for the next instruction in the L1 cache first. If it's there, awesome, that's the best case scenario. If not, then it starts going further and further out.

Algorithms like branch prediction help by taking a guess at where the CPU will read next and making the most likely path available in its cache. For example, if I'm doing a null pointer check at the beginning of a function, most of the time you won't enter the if statement if your code is behaving correctly (if you do, you likely have an error and it will head back up the chain). In this case, it's more efficient for the CPU to grab the instructions that happen after the if statement, for the case where it returns false, instead of caching the ones that happen in the case it's true. However, these algorithms aren't always accurate. Plus, you will always have cache misses in your code (unless all of your code can sit inside L1... which is never going to happen).
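
That null-check case, sketched in C (the __builtin_expect hint is GCC/Clang-specific and the function is made up, purely to illustrate the idea):

#include <stddef.h>

#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

int sum_or_fail(const int *data, size_t n)
{
    /* The error path is almost never taken, so we hint the compiler to
       lay out the fall-through path as the hot one: the instructions
       after the check are what end up fetched into the cache. */
    if (UNLIKELY(data == NULL))
        return -1;              /* rare: head back up the call chain */

    int sum = 0;
    for (size_t i = 0; i < n; ++i)
        sum += data[i];         /* hot path, stays hot in the cache */
    return sum;
}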
 
Sure I do. It doesn't change the amount of memory the console has access to per frame.

It doesn't change the theoretical number, no, but in practice it absolutely changes things. After all, the PS4 isn't going to be able to use that amount of memory per frame when a very large portion of its total bandwidth is spent addressing relatively small render targets (similar in size to Xbox 3's embedded memory). I understand that you were only trying to compare a simple number, but it would have been better to explain that; just replying that it'll add 32MB is misleading, and too many people on this forum already don't seem to realise how significant a difference that embedded memory can make.
 
All these threads about Durango vs. PS4 now... and many people thinking the PS4 is much more powerful than Durango.

We don't even know what Durango's specs really are... if it ends up being more powerful than the PS4, it would do massive damage to a lot of people here.

What I mean is that some people should chill and wait before declaring out loud how much more powerful or better either console is.

Nah. I hope MS will push the specs [like Sony did]. GET ON THEM, GAF!

People forget the Xbox 360 went from 256MB to 512MB. Worth it for the 360; upgrades = everyone wins.
 
RAM amount is most important. The PS4 has the edge since it has less devoted to the OS.

I put the developer priorities as (relative to the leaked specs of the 2 systems)

1. bandwidth
2. unification
3. size
4. latency

That's for graphics whore applications. For someone writing an application that, say, manipulates large random graphs, they would be concerned first with size, and then with latency.
 
Render targets don't take large amounts of memory, but they can require massive amounts of bandwidth. The whole idea of embedded memory on GPUs is to keep the most bandwidth-hungry and latency-dependent aspect of games rendering on chip, away from main memory. You then leave main RAM free to deal with the less bandwidth-hungry and latency-dependent aspects such as game code and textures.

This. The main consumer of bandwidth is render targets. They eat bandwidth, and with the move to next gen we will see increased use of deferred rendering with multiple render targets. Durango seems to be designed so that the majority of render ops will be done in the eSRAM, but unlike the 360, you are not restricted to rendering to the eSRAM. So for games with a lot of render targets, you can render some of them to the eSRAM and some to main memory, and then combine them before display.

This is why the Durango memory architecture makes sense: loading textures and geometry is not what that bandwidth is really going to be used for, as they don't require nearly as much bandwidth as pixel/render target ops.
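
Very rough math on that point, assuming a 1080p deferred renderer with a four-target, 32-bit-per-target G-buffer at 60fps (all assumed numbers, real engines vary a lot):

#include <stdio.h>

int main(void)
{
    const double pixels = 1920.0 * 1080.0;
    const double bpp    = 4.0;                              /* 32 bits per target */
    const double mb     = 1024.0 * 1024.0;

    const double gbuffer_write = 4.0 * pixels * bpp / mb;   /* ~31.6 MB written */
    const double lighting_read = 4.0 * pixels * bpp / mb;   /* re-read by the lighting pass */
    const double accum_rw      = 2.0 * pixels * bpp / mb;   /* light accumulation read+write */

    const double per_frame = gbuffer_write + lighting_read + accum_rw;  /* ~79 MB */
    printf("Render-target traffic: ~%.0f MB per frame, ~%.1f GB/s at 60fps\n",
           per_frame, per_frame * 60.0 / 1024.0);
    /* ...and that's before overdraw, blending, depth, shadow maps and post,
       which multiply it several times over. Nearly all of it hammers the
       same ~32MB of targets, which is exactly the traffic embedded memory
       is meant to absorb; texture data is far larger but each byte is read
       comparatively rarely. */
    return 0;
}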
 