eSRAM question

eSRAM is a band-aid for the slow DDR3 RAM.

Cloud means scalable and cheaper dedicated servers.

The two aren't related.


That's exactly what happened... Sony chose 4GB at the start, but near the PS4 reveal the industry finally delivered the higher-density 4Gbit (512MB) GDDR5 modules... so they could increase to 8GB because they got lucky.

Saying they just got lucky is simplifying it a bit. It was a calculated risk to some extent, and I believe they actually worked closely with manufacturers to try to make 8GB happen.
 
I don't like linking to other forums, but here:

http://forum.beyond3d.com/showpost.php?p=1838682&postcount=9369

http://forum.beyond3d.com/showpost.php?p=1838379&postcount=9324

If you look at that forum you'll find many other posts regarding the subject, and the answer is always the same: ESRAM was the starting point of the Xbone design, and the other choices came afterwards.

"The ESRAM allowed them to consider RAM choices that wouldn't have been feasible otherwise, which allowed them to increase the amount of RAM the box offered when it was determined that the previous reservation was going to be tighter than they'd hoped."

Sounds like we're discussing which came first, the chicken or the egg? Because if you read that quote, it's obvious they decided to go with ESRAM because they needed a larger amount of RAM, which is exactly what I said.
 
Saying they just got lucky is simplifying it a bit. It was a calculated risk to some extent, and I believe they actually worked closely with manufacturers to try to make 8GB happen.
Yeap... you worded that better... it was a bet on the future... if the 4Gbit (512MB) GDDR5 modules hadn't been ready on schedule, the PS4 could have launched with 4GB.
 
The reason they went for ESRAM was the 8GB and OS solution. Nothing else makes sense. They wanted 8GB, couldn't risk GDDR5 coming through, so they had to come up with a workaround.
This is the only thing that makes sense.
It's still a very pleasant almost shocking surprise that the modules came through in time for PS4 to bump up to 8GB. No doubt there was a lot of hard work involved, but it's still a rare occurrence.
 
The mainstream press was doing this years before we had the Internet. I think the bigger issue is the people who believe this stuff and share it around. Folks need to think more critically and be more open-minded.

Yes, that is true. It's just something that has blown up massively with the internet and is exacerbated even more by social media. At least printed-news bullshit is regularly kept in check by broadcast news outlets, here in the UK at any rate, since broadcast news is properly regulated. The internet, on the other hand, is a breeding ground for fan sites and wannabe journalists spouting a personal agenda without sources or facts.



Edit: Unless we are talking about Sky Sports News, which is an abortion of a channel. But this is all off topic so I'll shut up
 
This is the only thing that makes sense.
It's still a very pleasant almost shocking surprise that the modules came through in time for PS4 to bump up to 8GB. No doubt there was a lot of hard work involved, but it's still a rare occurrence.
You can guess Sony worked with manufacturers to get the modules ready in time... MS didn't want to take this risk because they didn't want to give up the 8GB... Sony was fine with 4GB, so they could take the risk.
 
Nothing to do with the cloud. What makes cloud a possibility for xbone is availability.


No, it's not.

ESRAM was chosen before any other memory choice. It was a deliberate choice, made to be an evolution of the 360 design. Every other memory decision, including the capacity or type of the main RAM, came after ESRAM was chosen.
You've shot-gunned the kool aid.
 
I don't like linking to other forums, but here:

http://forum.beyond3d.com/showpost.php?p=1838682&postcount=9369

http://forum.beyond3d.com/showpost.php?p=1838379&postcount=9324

If you look at that forum you'll find many other posts regarding the subject, and the answer is always the same: ESRAM was the starting point of the Xbone design, and the other choices came afterwards.

It's a very weird step from the X360 solution, which was eDRAM. Had they gone with eDRAM again, then the result would have been quite different (nearly an order of magnitude more bandwidth to the small block, for example).

Instead, they went with eSRAM, which may be considered on par with GDDR5 in terms of bandwidth (depending on whose numbers you go by), and very unusual as far as modern GPUs go.
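For a sense of the scale being talked about: the Xbox 360's own numbers already showed roughly a 10x gap between its small eDRAM block and main memory. A quick back-of-envelope, where the figures are the commonly cited ones quoted from memory, so treat them as assumptions:

```python
# Commonly cited Xbox 360 memory figures (assumptions, not spec-sheet values),
# used only to illustrate the kind of gap an eDRAM block opens up over main memory.
X360_MAIN_GDDR3_GBS     = 22.4   # unified 512MB GDDR3
X360_EDRAM_INTERNAL_GBS = 256.0  # eDRAM <-> ROP logic, internal to the daughter die
X360_EDRAM_LINK_GBS     = 32.0   # GPU <-> daughter die link

ratio = X360_EDRAM_INTERNAL_GBS / X360_MAIN_GDDR3_GBS
print(f"360 eDRAM internal bandwidth is ~{ratio:.0f}x its main memory bandwidth")
# ~11x, which is why a hypothetical next-gen eDRAM block is often pegged around 1TB/s,
# i.e. several times the ~200GB/s usually quoted for the XB1's 32MB of eSRAM.
```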
 
Why didn't they go with eDRAM and would that have also gimped the GPU by taking up space on the die like the eSRAM did?
 
And for what it's worth I have no reason to distrust BK's account.
Considering some of the questions raised, the choice to go with ESRAM makes a little more sense. Decision makers were just off doing their own thing before the 180 of reality set in.
 
Why didn't they go with eDRAM and would that have also gimped the GPU by taking up space on the die like the eSRAM did?

I think there were issues with process size? Only Intel is fabbing CPUs with large amounts of eDRAM, and the foundries making the APUs for MS couldn't do that, so they had to go with larger (and slower) eSRAM.

The alternative would have been a daughter die like the 360, so the eDRAM and CPU/GPU could be on different process sizes, but MS seemed keen on a single chip - presumably for cost reduction benefits longer term.
 
You can guess Sony worked with manufacturers to get the modules ready in time... MS didn't want to take this risk because they didn't want to give up the 8GB... Sony was fine with 4GB, so they could take the risk.
There is absolutely no evidence of this whatsoever, it's fanboy spin of the highest order.
 
They went with it to get 8 gigs of ram in the system at a console price.

There is absolutely no evidence of this whatsoever, it's fanboy spin of the highest order.
The Killzone demo that they showed at the PS4 conference was developed for 4 gigs of RAM. Seems like Sony would have gone with 4 gigs at one point.
 
There is absolutely no evidence of this whatsoever, it's fanboy spin of the highest order.

Sounds like a reasoned argument, certainly not fanboy spin of the highest order.

Developers seemed surprised by Sony announcing 8GB in February, suggesting they were expecting less until the last minute. If we rule out 'Sony got lucky', then that would suggest Sony's supply chain guys worked hard to secure the high-density GDDR5 necessary to deliver 8GB.
 
Why didn't they go with eDRAM and would that have also gimped the GPU by taking up space on the die like the eSRAM did?

The eDRAM, like mrklaw said, couldn't be done at the foundries they were using. The alternative Sony moved away from was 128MB of eDRAM with 1TB/s bandwidth plus 4GB of GDDR5... that would have been a faster solution than what we have today. They moved away from it because of cost and to keep it developer friendly.
 
Here's his other article from May 14, 2014, "Opinion: Xbox One is losing its 'Next Gen' vision":

http://www.developer-tech.com/news/2014/may/14/opinion-xbox-one-losing-its-next-gen-vision/

Before I bought a “next-gen” console I wanted to be sold on how I can be given different experiences from the previous generation. The PS4 looked like a PS3 with enhanced graphics – nothing impressed me in the slightest. It remains a dumb PC in my eyes.

I’m both a PlayStation and an Xbox fan before I start a flame war – I’ve always bought every generation of each console.

Xbox One caught my attention right-off-the-bat for its talk of Cloud processing, the ability to digitally share games, and for cool Kinect functionality like auto sign-in via facial recognition.

He's an opinion writer who loves Xbox.

If you follow the crowd and say “but games aren’t in 1080p” then first I’ll show you Forza 5, which is, then I'll show you Ryse and you can see for yourself how ridiculous you sound. Crytek’s title is widely-regarded as the most graphically advanced thus far, it's 900p, and it’s only available on Xbox One. Although if this is still really going to cause you an issue; remember by the time PS4 starts getting its real killer titles such as God of War and Uncharted in 2015 – DirectX 12 will be out which is expected to boost performance twice what it can achieve today through enhanced multi-core processing.
Bravo. You had me until the DX12 part.

edit: could've used a quote on that.
 
It's a very weird step from the X360 solution, which was eDRAM. Had they gone with eDRAM again, then the result would have been quite different (nearly an order of magnitude more bandwidth to the small block, for example).

Instead, they went with eSRAM, which may be considered on par with GDDR5 in terms of bandwidth (depending on who's numbers you go by), and very usual as far as modern GPUs go.
Modern GPUs usually have 32MB of SRAM? Or did I misunderstand? ... Ah, you're referring to bandwidth.

I think it's clear MS could not secure 32MB of on-die eDRAM, otherwise they'd have gone with that.
 
Thanks for the replies.

So they could have done something similar to the 360 by separating the GPU and eSRAM, but the cost savings of a single die were too good for Xbox One.
 
Nothing about the choice was for "The Cloud" or any cunning future-proofing as implied in your post, OP - sorry to be a Debbie Downer, but there isn't some magic sauce in the decision as you seem to be hoping.

The choice was very much based in the here and now: providing faster memory to back up the slower main memory for better gaming performance, while still supporting the OS/media/TV and Kinect requirements that were central to the XB1 design.

It's not going to "come into its own" later and prove to be some hidden Trojan horse of extra performance.

People need to accept that the XB1 design is fine for what the device was designed for; however, it does leave it somewhat less well designed, power-wise, from a pure gaming point of view.

Sony decided to revert to a more pure gaming console angle post-PS3 and embrace the idea that, like most devices, it would double up as yet another place you could access Netflix, etc., so the design is very much about fast memory and ease of game development. MS chose a much more hybrid design, with the HDMI-in and an OS with a lot more features around media consumption, and eSRAM is simply part of that differing design drive. That's all there is to it.

Cue GIF with the water funnels.
 
They went with it to get 8 gigs of ram in the system at a console price.


The Killzone demo that they showed at the PS4 conference was developed for 4 gigs of RAM. Seems like Sony would have gone with 4 gigs at one point.

Sounds like a reasoned argument, certainly not fanboy spin of the highest order.

Developers seemed surprised by Sony announcing 8GB in February, suggesting they were expecting less until the last minute. If we rule out 'Sony got lucky', then that would suggest Sony's supply chain guys worked hard to secure the high-density GDDR5 necessary to deliver 8GB.
GDDR5 density isn't anything to do with supply chain logistics, though; it's to do with the maturation of fabrication technologies. And we have absolutely no reason to rule out 'Sony got lucky'. In fact it fits the facts more closely. The system was originally going to have 4GB of GDDR5, and the upgrade was last-minute and highly dependent on circumstances outside of Sony's control, hence developers didn't know.

I call it fanboy spin because it seems to suggest that luck isn't a good enough reason for the PS4 having 8GB of RAM, and that Sony must have worked really hard for it. In reality there's no reason to think luck isn't the chief factor. Certainly I know of no source that suggests otherwise. If anyone has one, please feel free to share it.
 
Although I believe you, I can't understand the choice of ESRAM. It's like building a racecar and the first thing the designer says is, "Well, I think we start with the chassis of a VW Beetle and build the racecar around it, everyone cool with that?" (RacingMattrick: "Yeaaaah, nice, let's do this!")

They wanted a follow-up to the 360 design, but with fewer limitations. The eDRAM on the 360 was only useful for render targets, and render targets could only be rendered into the eDRAM. For the next gen they wanted this small/fast RAM to be available for the developer to use as they saw fit, so the GPU could access it transparently as if it were main RAM.

There are also other advantages to having more memory on chip. People around here say GPUs are already super efficient at hiding latencies, but I'm not so convinced of that, especially with more complex shaders...
 
I don't like linking to other forums, but here:

http://forum.beyond3d.com/showpost.php?p=1838682&postcount=9369

http://forum.beyond3d.com/showpost.php?p=1838379&postcount=9324

If you look at that forum you'll find many other posts regarding the subject, and the answer is always the same: ESRAM was the starting point of the Xbone design, and the other choices came afterwards.

The second link says that DDR3 was chosen even before they decided to go with 8GB of RAM:
Nope. For one, it was DDR3 before it was 8GB

So eSRAM was probably chosen because of DDR3's low bandwidth.

------------

There are also other advantages to having more memory on chip. People around here say GPUs are already super efficient at hiding latencies, but I'm not so convinced of that, especially with more complex shaders...

Are there some studies to prove that?
 
"The ESRAM allowed them to consider RAM choices that wouldn't have been feasible otherwise, which allowed them to increase the amount of RAM the box offered when it was determined that the previous reservation was going to be tighter than they'd hoped."

Sounds like we're discussing which came first, the chicken or the egg? Because if you read that quote, it's obvious they decided to go with ESRAM because they needed a larger amount of RAM, which is exactly what I said.

Again, cause and consequence. The ESRAM allowed them to choose cheaper RAM, and when the need for 8GB came (and the ESRAM choice was already made by then), it was a no-brainer because of the RAM choice.

The difference in what came first is that one is a well-planned, thought-out design; it might not be the best, but it was a deliberate choice. The other sounds like a hack to comply with requirements forced on them by executives.
 
I'd be really curious to see the bizarro universe where the PS4 had the camera still included, had 4GB of RAM, and was $499. In that world, all of MS decisions with hardware and pricing probably made a ton of sense.

Unfortunately...
 
Nothing to do with the cloud. What makes cloud a possibility for xbone is availability.


No, it's not.

ESRAM was chosen before any other memory choice. It was a deliberate choice, made to be an evolution of the 360 design. Every other memory decision, including the capacity or type of the main RAM, came after ESRAM was chosen.

You might be right, but the eSRAM taking up more space on the APU than the GPU cores do is the sticking point.

Not sure what your first comment means.
 
If the eSRAM weren't there in the box, you wouldn't even get 792p.
It is there as an evolution of the 360 design, but due to die size and transistor count it had to be left at 32MB.
Would have been great had they somehow managed 64MB.
 
Are there some studies to prove that?

Using compute for non-graphical tasks, yeah. Using compute shaders to replace pixel shaders? I've never seen anything direct, only papers like this:

http://developer.amd.com/wordpress/media/2012/10/Efficient Compute Shader Programming.pps

They talk about how the greater flexibility of compute shaders allows for more optimization, and how, knowing the hardware, they are able to maximize thread throughput and cache utilization.

The results are very interesting, with performance gains of up to 3 times.

It's not directly stated, but if just optimizing thread usage and cache access patterns let them improve performance by this much, then trading a little ALU for more on-chip memory doesn't sound like such a bad design choice... if only to allow a performance increase without the developer having to know so much about the underlying hardware...
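On the latency-hiding point, a quick back-of-envelope helps show why on-chip memory matters; the latency numbers below are invented round figures purely to illustrate the arithmetic, not measurements from either console:

```python
# Bandwidth-delay product: to keep a memory interface busy you need roughly
# (bandwidth x latency) bytes of requests in flight, which a GPU supplies by
# keeping many wavefronts resident. Latencies here are invented round numbers
# for illustration only.

def bytes_in_flight(bw_gb_per_s, latency_ns):
    # GB/s * ns works out directly to bytes (the 1e9 and 1e-9 cancel).
    return bw_gb_per_s * latency_ns

print(f"Main DRAM   @ 68 GB/s, ~300 ns: ~{bytes_in_flight(68, 300) / 1024:.0f} KB in flight")
print(f"On-chip RAM @ 200 GB/s, ~30 ns: ~{bytes_in_flight(200, 30) / 1024:.1f} KB in flight")
# Lower latency means far fewer outstanding requests (and resident threads) are
# needed to hide it, which is one argument for spending die area on on-chip memory.
```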
 
You might be right, but the eSRAM taking up more space on the APU than the GPU cores do is the sticking point.

Not sure what your first comment means.

The one about the cloud? That the only thing special about cloud computing on the Xbone is that it is (or will be) seemingly available to any developer that wants to use it.
 
Modern GPUs usually have 32MB of SRAM? Or did I misunderstand? ... Ah, you're referring to bandwidth.

I think it's clear MS could not secure 32MB of on-die eDRAM, otherwise they'd have gone with that.

Whoops, I meant unusual.

My mistake!
 
The first link says nothing except that maybe it was <8GB earlier in development. It doesn't say anything about when the type of RAM was chosen vs the timing of the eSRAM - both were already there based on that quoted post.


The second link is the same - it says DDR3 was chosen before 8GB, which would have then meant eSRAM to mitigate the bandwidth. That they then used eSRAM, allowing them to increase the amount of DDR3, is a side issue.

Look for other posts by Bkilian. He has addressed the issue before and explicitly said that. He does say in these quotes that ESRAM was deliberately chosen by them, and that it was not a second thought made after the other memory choices, though.
 
Again, cause and consequence. The ESRAM allowed them to choose cheaper RAM, and when the need for 8GB came (and the ESRAM choice was already made by then), it was a no-brainer because of the RAM choice.

The difference in what came first is that one is a well-planned, thought-out design; it might not be the best, but it was a deliberate choice. The other sounds like a hack to comply with requirements forced on them by executives.

You are confusing the result with the reason. Bkilian only ever denied that MS originally targeted 8GB, but what they obviously DID target was "lots of inexpensive RAM". They knew that would never provide enough bandwidth on its own, so a fast on-chip memory would be required. In deciding between ESRAM and EDRAM they chose the former because it gave them far more flexibility in where the chip could be fabricated and how quickly it could be shrunk to new processes. Had they gone with EDRAM it could have been 4-8 times as large and many times faster. GDDR5 was never a serious consideration for Microsoft. Remember, the initial version of the PS4 only had 2GB of RAM, which was upgraded twice over the course of development. Microsoft assumed they would have an advantage there.
 
Look for other posts by Bkilian. He has addressed the issue before and explicitly said that. He does say in these quotes that ESRAM was deliberately chosen by them, and that it was not a second thought made after the other memory choices, though.

I can't square that circle; it just makes no sense. Deciding on eSRAM after DDR3, or even at the same time (i.e. if we have eSRAM, does that mean we can use cheaper DDR3?), both make sense. But the eSRAM decision being first and the other decisions coming afterwards? I don't buy it. Bkilian says he wasn't there at the start, so that could be down to interpretation of his posts or lack of context.



Edit: 64-128MB of eDRAM with 1TB/s bandwidth and 8GB DDR3 vs a PS4 with 4 or 8GB GDDR5 would have been a very interesting thing to see. If that meant the eDRAM was on a daughter die, you could have had similar raw GPU performance on both too...
 
I can't square that circle; it just makes no sense. Deciding on eSRAM after DDR3, or even at the same time (i.e. if we have eSRAM, does that mean we can use cheaper DDR3?), both make sense. But the eSRAM decision being first and the other decisions coming afterwards? I don't buy it. Bkilian says he wasn't there at the start, so that could be down to interpretation of his posts or lack of context.

Perhaps it was made at the same time. What I meant to say is that ESRAM + cheap memory was not a decision forced on the engineers; they chose it.
 
The one about the cloud? That the only thing special about cloud computing on the Xbone is that it is (or will be) seemingly available to any developer that wants to use it.

Well, anyone can use "cloud" computing for any game on any console, but if cost is a concern then by the time MS gets things going, Sony might also have an option on the table.
 
I think people are within their rights to believe in the cloud, especially after the framerate demo for cloud compute at the last MS developer conference.
 
Perhaps it was made at the same time. What I meant to say is that ESRAM + cheap memory was not a decision forced on the engineers; they chose it.

Seems like idle speculation based on posts from someone on Beyond3D, nothing based on hard evidence. The end product seems like a misstep on their part; hard to believe it wasn't pushed by trying to stay under a certain dollar amount per unit, tbh.
 
Perhaps it was made at the same time. What I meant to say is that ESRAM + cheap memory was not a decision forced on the engineers; they chose it.

Yeah, they chose it based on the design directives they were working under, which included TVTVTV, multitasking, Kinect, and controlling costs. No one is saying they acted irrationally, but there is no question the strategic goals resulted in a weaker device.
 
Edit: 64-128MB of eDRAM with 1TB/s bandwidth and 8GB DDR3 vs a PS4 with 4 or 8GB GDDR5 would have been a very interesting thing to see. If that meant the eDRAM was on a daughter die, you could have had similar raw GPU performance on both too...

The problem with a daughter die is that you end up with the same limitations you had with the 360.

You can't have the GPU access the other die with the same bandwidth, so that limits its usefulness. You'd probably have to put fixed-function hardware on the daughter die again, most likely the ROPs. So, for instance, the shader cores wouldn't access it, and having all that memory tied to the ROPs is already not a compelling design this generation, because compute shaders allow the code to bypass them.

Then, probably for the sake of die space (and since bandwidth is "cheap" for logic on the daughter die), they would probably go again with simpler ROPs that lack compression, for instance, since bandwidth wouldn't be a concern... which means all that bandwidth is actually going to be wasted most of the time...

By putting the ram on the same chip as everything else they make the design much more flexible. Not sure if it beats having a large pool of very fast ram, but I wouldn't say it was a bad design either...
 
OK, what I remember reading around the announcement time - and I do think the logic makes sense here - was that they had decided the cost of 8GB of GDDR5 was prohibitive given the memory target they had. So they decided to build some eSRAM into the die for three reasons.

1) The speed and location of the memory would make up for a lot of the bandwidth loss when trading data between the CPU and GPU cores. The PS4's CPU and GPU have to access their data over a bus capable of supporting the GDDR5 (more chips = more money), while the on-die eSRAM doesn't need separate control hardware, because it's accessed directly.

2) Because they were building it on die, the cost of the memory would go down over time due to advances in production (think going from 45nm to 22nm: less heat, less power, more chips per wafer = lower cost per chip).

3) It was more or less a proven technology at the time. Microsoft, not wanting to repeat the mistakes of the last generation, was more careful in its hardware design, leading to longer hardware iteration cycles, meaning that once they had selected an architecture they would be unlikely to change it.

In general, integrated components - while initially more expensive to produce - get cheaper over time. The "gamble" they made was that GDDR5 was not going to come down in price in time for them to make their launch window. Sony was "lucky" in that even though they had designed the system to use (for the sake of argument) 4x 1GB GDDR5 chips, 2GB chips came out at an acceptable price point and were a pin-for-pin replacement (take the old ones out, pop the new ones in, BAM, double the memory).
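For what it's worth, the pin-compatible density jump is easy to put rough numbers on; the 16-chip figure below is the commonly cited layout for retail PS4 boards, so treat it as an assumption rather than a confirmed spec:

```python
# Same board, same 16 chip positions; only the per-chip GDDR5 density changes.
# The 16-chip layout is the commonly cited one for retail PS4 boards (assumption).
CHIPS = 16

for density_gbit in (2, 4):          # 2Gbit (256MB) vs 4Gbit (512MB) parts
    total_gb = CHIPS * density_gbit / 8
    print(f"{CHIPS} x {density_gbit}Gbit = {total_gb:.0f} GB total")
# 16 x 2Gbit = 4 GB, 16 x 4Gbit = 8 GB: capacity doubles with no board redesign.
```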

EDIT:
...
By putting the ram on the same chip as everything else they make the design much more flexible. Not sure if it beats having a large pool of very fast ram, but I wouldn't say it was a bad design either...

Yep, and most of the reports at the time were that, even with the XONE's lower memory bandwidth, the display targets both consoles are trying to hit could be satisfied with even less bandwidth.

EDIT: Woops!

Found that Eurogamer article that points out the bandwidth: eSRAM 206GB/s, GDDR5 176GB/s, DDR3 68GB/s.
Think of the XB1 as a souped-up box truck working with a train vs a fleet of 18-wheelers. Both will get your data there; the PS4 is "faster" when you consider the average rate of data transfer over time, while the XB1 is faster but only with a few MB of data - otherwise it relies on its slower DDR3.
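Those figures are easy to sanity-check, since peak bandwidth is just bus width times transfer rate; the bus widths and data rates below are the commonly reported ones, so take them as assumptions rather than official specs:

```python
# Peak bandwidth = (bus width in bytes) x (transfer rate). Bus widths and data
# rates below are the commonly reported figures, not official spec sheets.

def peak_gb_per_s(bus_bits, megatransfers_per_s):
    return (bus_bits / 8) * megatransfers_per_s / 1000.0

print(f"XB1 DDR3-2133, 256-bit bus : {peak_gb_per_s(256, 2133):.1f} GB/s")  # ~68 GB/s
print(f"PS4 GDDR5 5500MT/s, 256-bit: {peak_gb_per_s(256, 5500):.1f} GB/s")  # 176 GB/s
# The ~200GB/s eSRAM figure is the combined read+write peak of the on-die 32MB block,
# which is how it can beat both external buses despite its tiny capacity.
```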
 
I don't know a lot about this stuff, so bear with me. I know there is a lot of controversy about MS going with eSRAM. My question is: does eSRAM work better for future "cloud gaming"? Is that why they went with it? It sounds like Microsoft may be showing off the potential of the cloud at E3. I guess we will see. Thanks for reading.

I can't see how it would help cloud compute at all. What it does is precache things the GPU may need so that it has to access the slower main memory less. It's like what your RAM does for your hard drive, but one level up, for the RAM itself.
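To make that analogy slightly more concrete, here's a toy model of what happens when a slice of the GPU's traffic can be served from the small fast pool while the rest streams from DDR3 at the same time; all the numbers and fractions are illustrative assumptions, nothing more:

```python
# Toy model: if `f` of the bytes live in the small fast pool (eSRAM) and the rest
# in DDR3, and both can stream concurrently, the transfer finishes when the more
# heavily loaded pool does. All numbers are illustrative assumptions.
ESRAM_BW = 200.0  # GB/s, rough combined peak for the 32MB block
DDR3_BW  = 68.0   # GB/s, main memory

def effective_bw(f):
    t_esram = f / ESRAM_BW          # relative time to serve the eSRAM portion
    t_ddr3  = (1.0 - f) / DDR3_BW   # relative time to serve the DDR3 portion
    return 1.0 / max(t_esram, t_ddr3)

for f in (0.0, 0.5, 0.75):
    print(f"{f:.0%} of traffic in eSRAM -> ~{effective_bw(f):.0f} GB/s effective")
# 0% -> 68, 50% -> 136, 75% -> ~267: the small block only helps if the hot data
# (typically render targets) actually fits in 32MB and is placed there.
```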
 
I'm here to clear up the misconceptions people have about esram. When someone asks, "so which console has the faster memory?" you have to immediately say Xbox One with its esram, and it's not even up for debate. That's the single fastest piece of tech in either console. It's faster than anything in Amazon's console as well. The truth is people like to downplay it because it's exclusive to Xbox One. The true developers like Carmack know that when you need to do something fast in a game, like shaders and bump maps, esram is the bee's knees. This is why Crytek chose to only bring Ryse to Xbox One and why Call of Duty was able to sustain a silky smooth framerate. Why jump through unnecessary hoops?

The Cloud is a way to further future-proof the console, even though esram should be more than enough once Microsoft grants developers access to code to the metal. It's the same reason people get insurance on their cars, you know? In case it's needed one day, not because the car won't work without it. We got a taste of how great cloud gaming can be on the PC with the launches of SimCity and Diablo 3. That's what Microsoft wants to bring to the comfy couches around the world, and people hate them for it, I don't know why. So imagine in five years when everyone wants the newest consoles and then Microsoft says bam, you don't need one, because every city now has Google Fiber and you pair it with Xbox One (hint hint) and you get a whole new console called Xbox One Cloud that will last you another five or so years. Nintendo just realized this, so their next console will be a copy of Xbox cloud tech. It's too late for Sony to implement it haha.
 