Funny because PS4 has the same RAM split as Xbox One between Games and OS.
No, thuway means the eSRAM + DDR3 is more complicated than just a pool of GDDR5.
> Funny because PS4 has the same RAM split as Xbox One between Games and OS.
Then where did you hear that the OS allocation is less complicated on Xbone than PS4? Because that makes no sense.
> No, thuway means the eSRAM + DDR3 is more complicated than just a pool of GDDR5.
No carnival of stupid in here please.
Interesting. Reading now.
Sounds like the reduced game RAM is so that the whole multitasking app thing can work properly alongside all the games.
Hmm...
Well that isn't true.
I guess if we're talking physical amounts of RAM, yes, but for the purposes of this thread I was referring to the split between addressable/non-addressable game RAM and the RAM reserved for the OS.
My problem with all this secondary app stuff is that I already have all of this on my tablet, even while sitting in front of a TV. I just don't care about yet another device giving me access to boring, ubiquitous stuff like social networks, communication, and media.
Naturally, this has provoked a certain amount of upset among those who'd rather each and every byte of memory was set aside for the sole, exclusive purpose of (e.g.) rendering every fold in Batman's cape.
> Those still disgruntled by the withholding of RAM that might be devoted exclusively to sharper gun renders
The Xbox One does have a unified RAM pool (as did the 360, and as the PS4 does now).
What a non-unified RAM pool means is that it has separation between VRAM and RAM. For example, on the PS3, which had 512MB RAM, 256MB of that could be accessed by the GPU and 256MB could be accessed by the CPU. Developers couldn't freely allocate resources between the two depending on their needs, as the split was determined by the system. With a unified RAM system, going with the PS3 example, developers would have 512MB to allocate between CPU/GPU resources as they see fit for their title.
With the X1, developers simply have to think of it as being a 5GB unified RAM. They can allocate those resources between CPU/GPU as they see fit, just as they could on the 360. The system reserves 3GB for the OS and apps for multi-tasking, but that's invisible to the developer (taken care of by the hypervisor OS). It doesn't make development any harder.
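To make the unified-versus-split distinction above concrete, here's a toy Python sketch. The pool sizes follow the PS3 example from the post; the function names and the GPU-heavy workload are illustrative assumptions, not any real SDK.

```python
# Toy model of split vs unified memory pools, using the PS3 numbers above.
# Purely illustrative - not a real console memory API.

def fits_fixed_split(cpu_need_mb, gpu_need_mb, cpu_pool_mb=256, gpu_pool_mb=256):
    """Split pools: each side must fit inside its own fixed region."""
    return cpu_need_mb <= cpu_pool_mb and gpu_need_mb <= gpu_pool_mb

def fits_unified(cpu_need_mb, gpu_need_mb, pool_mb=512):
    """Unified pool: only the combined total matters."""
    return cpu_need_mb + gpu_need_mb <= pool_mb

# A hypothetical GPU-heavy title needing 100 MB for the CPU / 400 MB for the GPU:
print(fits_fixed_split(100, 400))  # False - 400 MB exceeds the 256 MB GPU half
print(fits_unified(100, 400))      # True  - 500 MB fits in the 512 MB pool
```

The same total footprint fails on the split layout but succeeds on the unified one, which is exactly the flexibility developers were asking for.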
> Well that isn't true.
Look how they try to ridicule those of us who want the resources of the machine we buy to be usable for.....you know......actual games.
Cerny: "The #1 request of developers was a unified RAM pool"
Xbox mag: "Why splitting the RAM is good for developers"
I DON'T KNOW WHO TO BELIEVE ANYMORE.
> Cerny: "The #1 request of developers was a unified RAM pool"
> Xbox mag: "Why splitting the RAM is good for developers"
> I DON'T KNOW WHO TO BELIEVE ANYMORE.
Can you feel that? They're coming.
It's not very difficult to understand.
- Addressable game RAM = developers can use it as they want
- Non-addressable game RAM = the OS handles it on behalf of the game; developers can request it but don't need to worry about it, it's just virtual memory handled by the OS
- OS RAM = developers can't touch that at all
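The three tiers above can be sketched as a tiny lookup. The tier names and return values are my illustrative reading of the post, not a real console API.

```python
# Toy model of the three memory tiers described above.
# Names and behaviour are illustrative assumptions, not any real SDK.

def game_access(tier):
    """How a game interacts with each tier of RAM."""
    if tier == "addressable":      # developers allocate it however they want
        return "direct"
    if tier == "non-addressable":  # OS-managed virtual memory; the game can
        return "via OS request"    # request it but never manages it itself
    if tier == "os":               # OS/app reservation, off limits to games
        return "none"
    raise ValueError(f"unknown tier: {tier}")

print(game_access("addressable"))      # direct
print(game_access("non-addressable"))  # via OS request
print(game_access("os"))               # none
```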
You're probably right. I had forgotten about the ESRAM on the XB1.
No more complicated than what developers have been learning to develop for over the last 8 years. Isn't that how the 360 was set up?
The article tells me that some of the memory is used to run windows and some to run system wide Kinect features. Neither of those are features that appeal to me.
The carnival of stupid is what Microsoft is doing.
And how annoying that you see the need to post what we are to do or not do in this thread even before you could be bothered to read anything.
> My problem with all this secondary app stuff is that I already have all of this on my tablet, even while sitting in front of a TV. I just don't care about yet another device giving me access to boring, ubiquitous stuff like social networks, communication, and media.
Interesting read.
Dinosaur coming confirmed.
> Look how they try to ridicule those of us who want the resources of the machine we buy to be usable for.....you know......actual games.
Right, which is why I explained what I was referring to. It seems that others were having trouble understanding.
I'm telling you it's not complicated. Developers have a unified pool of RAM they can address; in addition to that, they can request further assist RAM from the OS, which is handled by the OS. They don't need to do anything, which takes away the complication of addressing two virtually split pools of RAM. An oversimplification, of course.

I don't know. From what we've heard the RAM split on the Xbox One seems much less complicated than the RAM split on the PS4.
"There has been an explosion of devices," Multerer went on. "There are phones, there are tablets, the whole way that people interact and that they live with devices has fundamentally changed. I walk around with a phone all time, everybody I know walks around with phones. The expectation of the next gen gamer is that these things are just there. It's a rapidly changing ecosystem of applications that sit on a rapidly changing ecosystem of devices - fundamentally different to the consoles of the past."
I am not saying that the eSRAM setup is hard for developers, but that it's more complicated than the GDDR5 setup the PS4 has.
In bf4?
Having such a large focus on OS features sheds light on why Microsoft went with DDR3 RAM rather than a more bandwidth-heavy RAM like GDDR5.
The fact is, PCs today still use DDR3 because of one thing: it is easier on the CPU, and thus easier on an OS. DDR3 + eSRAM allows Microsoft to easily incorporate OS-specific features while simultaneously allowing developers access to the higher-bandwidth RAM they want, albeit in small bits. 32MB is plenty for storing a large framebuffer, though.
In my experience, the RAM discrepancy between Xbox One and PS4 is being way overplayed. Xbox One is more difficult to develop for, yeah, but since the 360 had nearly the same setup, there is no steep learning curve to get adjusted to using eSRAM or eDRAM.
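A quick back-of-the-envelope check of the 32MB figure mentioned above (my arithmetic, not from the post): an uncompressed 1080p colour target at 4 bytes per pixel comes out well under 32MB.

```python
# Rough arithmetic behind the "32MB is plenty for a framebuffer" claim above.
# Assumes an uncompressed 32-bit (4 bytes/pixel) target; real engines juggle
# several targets and formats, so this is only a sanity check.

def target_size_mb(width, height, bytes_per_pixel=4):
    """Size of one render target in MiB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

size_1080p = target_size_mb(1920, 1080)
print(round(size_1080p, 2))  # 7.91 - so roughly four such targets fit in 32MB
```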
This is correct; people are making it seem harder than it actually is, but PC is not very difficult since DirectX handles memory management.

In terms of an absolute scale of "difficulty" I'd wager that they're both extremely easy to develop for compared to last gen.
Here's an artistic rendition:
Difficult........................................................................................................................................Less Difficult
|-----------------PS3-------------360---------------------------PC----------------------------------XB1-PS4--|
Yet they spend a good chunk of the article explaining that, nowadays, most people buying new consoles want more than just games to run well on that machine for that price. A lot of words are spent saying the obvious: the traditional console is in the past, and most people want and demand more from their boxes. Old-style consoles that focused everything on games alone are done, and have been for some time, which is why every current or new console that is coming is more concerned with what the box can do outside of games. It's not ridicule, it's an observation of the market reality that in order to succeed, your new box needs more than a pure games focus. People who aren't satisfied with that can still build gaming PCs if they like, but consoles are and have always been a compromise of price and ability, even before the multitasking app-using gamers of 2013.
Probably none (also true for Sony). The split is the result of budgeting memory resources for a game and an unknown number of apps running simultaneously. It's a different modus operandi than with the current generation of consoles.

An interesting question is how much of the 3GB of memory reserved for apps can still be assigned to games later in the console's life cycle.
checking Twitter while sitting through a Call of Duty cutscene, for instance.
"They're sitting watching a movie and they're texting all the time. "
> My problem with all this secondary app stuff is that I already have all of this on my tablet, even while sitting in front of a TV. I just don't care about yet another device giving me access to boring, ubiquitous stuff like social networks, communication, and media.
> The fact is, PCs today still use DDR3 because of one thing: it is easier on the CPU, and thus easier on an OS. DDR3 + eSRAM allows Microsoft to easily incorporate OS-specific features while simultaneously allowing developers access to the higher-bandwidth RAM they want, albeit in small bits. 32MB is plenty for storing a large framebuffer, though.
It's the great latency defence.

Are you saying that GDDR5 is harder on the CPU to use than DDR3? Pretty sure the only reason CPUs use DDR3 and not GDDR5 is because DDR3 is way cheaper and GDDR5 is overkill as far as what a CPU needs. GDDR5 has no negative impact on CPUs or OSes.
The problem with this demand for ever prettier art assets is that it's increasingly hard to reconcile with the aforesaid player lust for volatility - both for new software and for the perpetual updating and expansion of that software. Games that are optimised for a certain memory setup may be threatened by the introduction of a new app, as Multerer proceeded to illustrate.
"We find ourselves in a position where if we want to change the [Application Programming Interfaces] and make a bigger buffer or talk to a service that has slightly different requirements, and we need 10 more bytes of RAM - 10 bytes - some game is going to start crashing. We have to be extremely careful and offer up a very predictable environment to the game developers to get the best games on your console."
Hence, Microsoft's decision to run apps and games in separate partitions, so that the two sets of requirements can coexist. "In the application world, what the next-gen gamer wants is lots of change. 'I want lots of apps, I want lots of services, I want to talk to services that may decide to change their APIs every couple of months or so, turn off old ones and expect all new ones' - but games don't work that way.
I don't see anyone (or any vast number of people) asking for this stuff. We like TV how it is. We like videogames how they are. Tablets and phones and shit can add to the experience, and we have them already if that's what we want to do. But to spend energy on this stuff on your system seems so foolish. It was like when Microsoft had a huge E3 hubbub about "We have Twitter and Facebook!" and the general reaction was "So what? I can already access those on 1000 other devices." They never gained traction as apps. It was a complete waste of time.