Kotaku Rumor: Microsoft 6 months behind in game production for X720 [Pastebin = Ban]

A certain well known developer has echoed to me that 9 GB of RAM is where the returns start diminishing.

Is this in reference to Durango specifically, or??


Because guess what: 12 GB - 3 GB for the OS = 9 GB. So an increase to 12 GB would certainly be reasonable if you were referring to Durango.
 
"Both AMD and Nvidia sell GPGPU accelerator boards that make use of GDDR5 memory that offers considerably higher bandwidth than the DDR3 memory that is used as main memory on most systems. However AMD director of software Margaret Lewis said that the latency of memory copy operations - the act of taking data from system memory and putting it on memory that is addressable by the GPU - was a far bigger problem than the relative bandwidth difference of using DDR memory to feed the GPU."

http://www.theinquirer.net/inquirer...aign=Twitterfeed&utm_term=INQ&utm_content=INQ

Nothing in your post has anything to do with anything in Brad's post.

Everything about hUMA has to do with a unified pool of memory courtesy of an APU. By definition, the ESRAM is its own pool. You certainly don't want it to get lost into the unified pool either; the whole point is that it's a different kind of low-latency high-bandwidth RAM.
 
GAF needs to calm down.

1. Microsoft is not switching over to GDDR5.
2. ES RAM + DDR3 was a decision made years ago to balance profitability and developer requests.
3. Increasing RAM count has little benefit, but can be done.
4. There will be no drastic changes in RAM type. You will not see DDR3 suddenly change into GDDR5 or stacked memory. Unless you want Microsoft to spend another $500 million and to launch in 2014.
5. Microsoft's machine is better designed in terms of profitability.

A certain well known developer has echoed to me that 9 GB of RAM is where the returns start diminishing.

The speed of RAM (bandwidth) and the amount of RAM have to scale together when it comes to performance. You can't just throw 12 GB of DDR3 at 60 GB/s at the problem and expect to be able to utilize it all at once. The "speed" of RAM is just as important as pool size. The faster your RAM is, the more of it you can utilize at once.
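
A rough back-of-envelope illustration of that point, using the 60 GB/s figure above (purely a sketch, not a real profile):

```python
# Back-of-envelope: how much unique data a given main-memory bandwidth lets
# you touch per frame. Figures are the ones quoted above, purely illustrative.
BANDWIDTH_GB_S = 60   # quoted DDR3 bandwidth
POOL_GB = 12          # hypothetical 12 GB pool

for fps in (30, 60):
    gb_per_frame = BANDWIDTH_GB_S / fps
    share = gb_per_frame / POOL_GB
    print(f"{fps} fps: at most ~{gb_per_frame:.1f} GB touched per frame "
          f"({share:.0%} of a {POOL_GB} GB pool)")
```

Even at 30 fps that bus only lets you stream a couple of gigabytes per frame, which is the sense in which piling on capacity without more bandwidth stops paying off.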

The battle for next-gen has little to do with specs. People read things about CPU/GPU and unified memory and think they will see some sort of appreciable performance boost. These things have little to do with performance increases and more to do with developer ease. Both machines are, thankfully, designed around making developers' lives easier.

Thuway, I'm not totally sure but I feel MS is keeping one part of Durango a secret.

I don't know exactly what it is. But it has nothing to do with RAM.
 
no it doesn't

In theory, if it only ever accesses the ESRAM, the Durango GPU will see decent bandwidth. For some activities that will be fine (framebuffer stuff etc.), but assuming your level data is larger than 32 MB, you'll need to access external RAM and your bandwidth will drop off a cliff. It doesn't matter how much the data movers try to mitigate that; eventually things will slow down.
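
A crude way to picture that drop-off (bandwidth figures are the rumored ones, the hit rates are invented, and it assumes ESRAM and DDR3 traffic aren't overlapped, so treat it as a sketch only):

```python
# Crude model of effective bandwidth when only part of the working set fits
# in the 32 MB ESRAM. Rumored figures, invented hit rates; assumes the two
# pools aren't read concurrently, so it only shows the shape of the drop-off.
ESRAM_BW = 102.0  # GB/s (rumored)
DDR3_BW = 68.0    # GB/s (rumored)

def effective_bw(esram_fraction):
    # Average the time per byte, not the bandwidths themselves.
    time_per_byte = esram_fraction / ESRAM_BW + (1 - esram_fraction) / DDR3_BW
    return 1.0 / time_per_byte

for frac in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f"{frac:.0%} of reads from ESRAM -> ~{effective_bw(frac):.0f} GB/s effective")
```

The more of your working set spills past 32 MB, the closer you sit to plain DDR3 speed, no matter how fast the ESRAM itself is.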


The only consensus I got from reading through 100+ pages of the Beyond 3D durango thread was:

"The system is balanced for what we have seen on paper - just don;t expect it to set the world on fire or topple a high end PC".


Not sure why anyone would want them to attempt that again anyway - all it gave us this gen was faulty hardware and a system that nearly bankrupted its parent and confounded developers.
 
Nothing in your post has anything to do with anything in Brad's post.

Everything about hUMA has to do with a unified pool of memory courtesy of an APU. By definition, the ESRAM is its own pool. You certainly don't want it to get lost into the unified pool either; the whole point is that it's a different kind of low-latency high-bandwidth RAM.


could it work just by virtualizing the memory?
 
could it work just by virtualizing the memory?

According to the VGL Durango GPU leak, the ESRAM will be virtually mapped for access.

However, while this makes it slightly easier to access in terms of not having to deal with funky APIs, the hard part is still managing what's inside it at a given time (which was what Brad was talking about).
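
A toy illustration of what "virtually mapped" buys you here (page size, names, and addresses are all invented; this is a sketch of the concept, not the real Durango memory system):

```python
# Toy model of GPU virtual addressing over two pools. Each virtual page maps
# to DDR3, to ESRAM, or to nothing (a non-resident tile). The GPU only ever
# sees virtual pages; the translation table decides where the bytes live.
page_table = {
    0: ("DDR3", 0x0040),   # ordinary texture page in main RAM
    1: ("ESRAM", 0x0002),  # render-target page in the 32 MB ESRAM
    # virtual page 2 deliberately unmapped: partially resident ("tiled") data
}

def gpu_access(virtual_page):
    entry = page_table.get(virtual_page)
    if entry is None:
        # Per the leak, unmapped accesses return a well-defined result or
        # error code instead of stalling the GPU.
        return "well-defined default + not-resident flag"
    pool, physical_page = entry
    return f"read from {pool}, physical page {physical_page:#06x}"

for vp in (0, 1, 2):
    print(f"virtual page {vp}: {gpu_access(vp)}")
```

Managing "what's inside it at a given time" is then the job of whoever updates that table and schedules the copies, which is the part Brad was calling hard.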
 
Thuway, I'm not totally sure but I feel MS is keeping one part of Durango a secret.

I don't know exactly what it is. But it has nothing to do with RAM.

Technically all the info that we have is a secret. You can't control the info that gets out with leaks.
 
GAF needs to calm down.

1. Microsoft is not switching over to GDDR5.
2. ES RAM + DDR3 was a decision made years ago to balance profitability and developer requests.
3. Increasing RAM count has little benefit, but can be done.
4. There will be no drastic changes in RAM type. You will not see DDR3 suddenly change into GDDR5 or stacked memory. Unless you want Microsoft to spend another $500 million and to launch in 2014.
5. Microsoft's machine is better designed in terms of profitability.

A certain well known developer has echoed to me that 9 GB of RAM is where the returns start diminishing.

The speed of RAM (bandwidth) and the amount of RAM have to scale together when it comes to performance. You can't just throw 12 GB of DDR3 at 60 GB/s at the problem and expect to be able to utilize it all at once. The "speed" of RAM is just as important as pool size. The faster your RAM is, the more of it you can utilize at once.

The battle for next-gen has little to do with specs. People read things about CPU/GPU and unified memory and think they will see some sort of appreciable performance boost. These things have little to do with performance increases and more to do with developer ease. Both machines are, thankfully, designed around making developers' lives easier.

Amen for common sense.

People talking up any sort of dramatic spec change are just talking themselves up for a fall.

The clock speed could be altered, but other than that, the fact is the lead times for building something like this are huge and significant change isn't as simple as saying 'we'll swap the 8-core chip for the 12-core chip' and it just happens by magic.

The best thing to do is assume MS have made their choices for a reason and see how it plays out.

being able to be aggressive on price means being able to be aggressive with publisher deals for content.

I think Durango is going to do just fine once people see a few games.
 
According to the VGL Durango GPU leak, the ESRAM will be virtually mapped for access.

However, while this makes it slightly easier to access in terms of not having to deal with funky APIs, the hard part is still managing what's inside it at a given time (which was what Brad was talking about).

"All access to the GPU in Durango memory using virtual addresses, and therefore pass through a translation table before settled in the form of physical address. This layer of indirection solves the problem of fragmentation of memory hardware resources, a single resource can occupy several non-contiguous pages of physical memory without penalty.

Virtual addresses can take aim pages in the main RAM, in the ESRAM, or can not be mapped. The Shader read and writes the pages not mapped in well defined results, including optional error codes, rather than block the GPU. This ability is important for the support of resources in "tiles", which are partially resident in physical memory."


Yeah.
Can the CPU access the ESRAM in Durango?
 
"All access to the GPU in Durango memory using virtual addresses, and therefore pass through a translation table before settled in the form of physical address. This layer of indirection solves the problem of fragmentation of memory hardware resources, a single resource can occupy several non-contiguous pages of physical memory without penalty.

Virtual addresses can take aim pages in the main RAM, in the ESRAM, or can not be mapped. The Shader read and writes the pages not mapped in well defined results, including optional error codes, rather than block the GPU. This ability is important for the support of resources in "tiles", which are partially resident in physical memory."


Yeah.
Can the CPU access the ESRAM in Durango?

You can pass pointers between GPU and CPU. I don't know if the CPU can directly access the ESRAM but I would think so.
 
You can pass pointers between GPU and CPU. I don't know if the CPU can directly access the ESRAM but I would think so.


I don't know, maybe I'm missing something, but reading about hUMA and thinking again about the whole picture (DDR3 + ESRAM + move engines),

it sounds like an evolution of hUMA: not only do the CPU and GPU see a unified pool of RAM, but it also takes full advantage of the low latency of the ESRAM, and when memory does need to be copied, a move engine can do it on its own without tying up other resources, even when the CPU is busy.

http://www.microsofttranslator.com/...013/02/08/durango-nos-hace-coger-el-delorean/

(I know it's an old explanation but I think it's still the best one.)
 
"All access to the GPU in Durango memory using virtual addresses, and therefore pass through a translation table before settled in the form of physical address. This layer of indirection solves the problem of fragmentation of memory hardware resources, a single resource can occupy several non-contiguous pages of physical memory without penalty.

Virtual addresses can take aim pages in the main RAM, in the ESRAM, or can not be mapped. The Shader read and writes the pages not mapped in well defined results, including optional error codes, rather than block the GPU. This ability is important for the support of resources in "tiles", which are partially resident in physical memory."


Yeah.
Can the CPU access the ESRAM in Durango?

You can pass pointers between GPU and CPU. I don't know if the CPU can directly access the ESRAM but I would think so.

Actually, further down in the article:

The advantages of ESRAM are lower latency and lack of contention from other memory clients—for instance the CPU, I/O, and display output.

So I guess it cannot. Will be harder to manage if you have to stream things in through the command buffer.
 
Actually, further down in the article:



So I guess it cannot. Will be harder to manage if you have to stream things in through the command buffer.

But then why allow pointers to be shared? If one of them points to the ESRAM - what then?
Contention is removed because of the front buffer sitting in the DRAM.

That being said, I think most devs will use the ESRAM only to render.
 
GAF needs to calm down.

1. Microsoft is not switching over to GDDR5.
2. ES RAM + DDR3 was a decision made years ago to balance profitability and developer requests.
3. Increasing RAM count has little benefit, but can be done.
4. There will be no drastic changes in RAM type. You will not see DDR3 suddenly change into GDDR5 or stacked memory. Unless you want Microsoft to spend another $500 million and to launch in 2014.
5. Microsoft's machine is better designed in terms of profitability.

A certain well known developer has echoed to me that 9 GB of RAM is where the returns start diminishing.

The speed of RAM (bandwidth) and the amount of RAM have to scale together when it comes to performance. You can't just throw 12 GB of DDR3 at 60 GB/s at the problem and expect to be able to utilize it all at once. The "speed" of RAM is just as important as pool size. The faster your RAM is, the more of it you can utilize at once.

The battle for next-gen has little to do with specs. People read things about CPU/GPU and unified memory and think they will see some sort of appreciable performance boost. These things have little to do with performance increases and more to do with developer ease. Both machines are, thankfully, designed around making developers' lives easier.

Well, according to Brad Grenz, Durango is messier/harder to work with due to different memory-setups..
 
But then why allow pointers to be shared? If one of them points to the ESRAM - what then?
Contention is removed because of the front buffer sitting in the DRAM.

That being said, I think most devs will use the ESRAM only to render.


Maybe the move engines are there for exactly this reason, to move data between the ESRAM and the main pool for the CPU!?

And still have hUMA-type tech?!
 
But then why allow pointers to be shared? If one of them points to the ESRAM - what then?
Contention is removed because of the front buffer sitting in the DRAM.

That being said, I think most devs will use the ESRAM only to render.

Just because the virtual address space is shared doesn't mean everything has to be mapped with the same levels of access for both the GPU and the CPU. Dereferencing a pointer mapped to ESRAM on the CPU side would probably just be an access violation.
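
Roughly the distinction being made, as a toy model (the page flags and addresses are invented; this is a sketch of the concept, not how Durango actually encodes its page tables):

```python
# Sketch: one shared virtual address space, but each page carries per-client
# access rights. A GPU-only ESRAM page is still a valid address for the CPU
# to hold in a pointer, yet dereferencing it from the CPU side faults.
PAGE_MASK = ~0xFFF  # assume 4 KB pages for illustration

pages = {
    0x1000: {"backing": "DDR3",  "cpu": True,  "gpu": True},
    0x2000: {"backing": "ESRAM", "cpu": False, "gpu": True},
}

def access(client, virtual_addr):
    page = pages.get(virtual_addr & PAGE_MASK)
    if page is None or not page[client]:
        raise MemoryError(f"{client.upper()} access violation at {hex(virtual_addr)}")
    return f"{client.upper()} read OK from {page['backing']}"

for client, addr in (("gpu", 0x2000), ("cpu", 0x1000), ("cpu", 0x2000)):
    try:
        print(access(client, addr))
    except MemoryError as err:
        print(err)
```

The pointer itself can be passed around freely; it's the mapping behind it that decides who is allowed to follow it.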
 
Just because the virtual address space is shared doesn't mean everything has to be mapped with the same levels of access for both the GPU and the CPU. Dereferencing a pointer mapped to ESRAM on the CPU side would probably just be an access violation.

So you'll have to keep track of your data and where it resides. IMHO that's painful to maintain.
 
So you'll have to keep track of your data and where it resides. IMHO that's painful to maintain.

Developers would have to do this regardless of whether or not the CPU can directly access the ESRAM. It just means they'll have some wrapper functions to copy things in and out rather than standard memcpy.
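
Something like the following is presumably what "wrapper functions" would look like in practice (every name here is invented; it's a sketch of the idea, not a real Durango or DirectX call):

```python
# Sketch: instead of a plain memcpy into ESRAM, the CPU hands the transfer to
# a move engine and keeps track of where the data currently lives. The class
# and function names are made up for illustration.
ESRAM_SIZE = 32 * 1024 * 1024  # 32 MB

class FakeMoveEngine:
    """Stand-in for a DMA/move engine: here it just performs the copy."""
    def __init__(self):
        self.esram = bytearray(ESRAM_SIZE)

    def copy_to_esram(self, dst_offset, src):
        self.esram[dst_offset:dst_offset + len(src)] = src

def upload_to_esram(engine, dst_offset, src):
    # The wrapper being described: bounds-check the 32 MB scratch pad, then
    # let the move engine do the transfer instead of a CPU-side memcpy.
    if dst_offset + len(src) > ESRAM_SIZE:
        raise ValueError("allocation does not fit in the 32 MB ESRAM")
    engine.copy_to_esram(dst_offset, src)

engine = FakeMoveEngine()
upload_to_esram(engine, 0, b"tile of g-buffer data")
print(bytes(engine.esram[:21]))
```

A real version would also have to queue the copy and synchronize with the GPU, which is exactly the bookkeeping being pointed at.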
 
Developers would have to do this regardless of whether or not the CPU can directly access the ESRAM. It just means they'll have some wrapper functions to copy things in and out rather than standard memcpy.

Couldn't Microsoft deliver such a function, like a DX11 context with a memcpy function that reads and writes to the ESRAM?

But I think you need to use the ESRAM as a scratch pad.
So pump data into the ESRAM with the move engines (tiles, perhaps), let the GPU work on that data (probably the tiled G-buffer data used in tiled deferred rendering), and once it's processed, write it to the display planes (aren't those just glorified framebuffers?). Or am I wrong to think of it this way?

The more I think about it, the more it seems like the CPU will mostly be used to keep track of data and not really for calculations?
 
I mean something the leaks were overlooking.

None of this pastebin bullshit though.

That's just my guess. We'll find out soon enough.
Personally I do think that the GPU will get a bit of a bump. Not huge, ~1.5 TFLOPS or so, but it'll help close the gap with the PS4 somewhat. I'm also actually expecting Kinect 2 to be pretty cool if it winds up being able to do a lot of the stuff from the Natal tech demos, and that'll likely be a big sell for the system.
 
That's what I figured until I realized it's exactly like 360's setup

It's not. GPU can read from it.

It could be treated exactly like 360's eDRAM in cases where a game's read needs are maybe 40GB/s or so. You could put render targets in eSRAM and treat it like 360's eDRAM. This would be more or less as simple as 360's setup.

If you have higher needs than that, it's a different setup to 360. Maybe more like PS2's memory setup I guess.
 
Personally I do think that the GPU will get a bit of a bump. Not huge, ~1.5 TFLOPS or so, but it'll help close the gap with the PS4 somewhat. I'm also actually expecting Kinect 2 to be pretty cool if it winds up being able to do a lot of the stuff from the Natal tech demos, and that'll likely be a big sell for the system.

I agree, one may like or dislike Kinect, but it will be a great point of differentiation from the competition.
Even if, going by the rumors, the PS4 will be more powerful, that will hardly add anything to the gameplay; Kinect, on the other hand, if pushed in third-party games, could add new ways to interact, offering something the PS4 can't.

We just have to see whether Microsoft will ship Kinect in every box. Maybe Sony will bundle the new PSEye too?
 
I agree, one may like or dislike Kinect, but it will be a great point of differentiation from the competition.

Yeah, and not to people on this board, but to everybody else. The Wii audience and families. And it could be quite a powerful sales force, frankly. If you're standing in front of two core consoles and they're reasonably similar, at a similar price, but one has Kinect packed in, a lot of people are going to go for that one. Especially when Moms or Dads are involved in the purchasing decision around the holidays.

Sony could throw a bit of a spoiler into this with the dual-eye stuff if they pack that in, whatever it turns out to be, which likely won't be near the caliber of Kinect, but maybe they can confuse some of those Moms into seeing them as equivalent: "Oh well, this comes with Kinect but this comes with Sony's version of Kinect." But of course pack-ins aren't cheap, and I think the PS4 will already be very expensive.

What if PS4 has 50% more GPU flops, but Durango outsells it easily each month regardless?

The news that standalone Kinect apparently sold 140k in a recent NPD was kind of eye-opening; Kinect clearly still has some pull.
 
It's not. GPU can read from it.

It could be treated exactly like 360's eDRAM in cases where a game's read needs are maybe 40GB/s or so. You could put render targets in eSRAM and treat it like 360's eDRAM. This would be more or less as simple as 360's setup.

If you have higher needs than that, it's a different setup to 360. Maybe more like PS2's memory setup I guess.

The more you know :)

GAF needs to calm down.

1. Microsoft is not switching over to GDDR5.
2. ES RAM + DDR3 was a decision made years ago to balance profitability and developer requests.
3. Increasing RAM count has little benefit, but can be done.
4. There will be no drastic changes in RAM type. You will not see DDR3 suddenly change into GDDR5 or stacked memory. Unless you want Microsoft to spend another $500 million and to launch in 2014.
5. Microsoft's machine is better designed in terms of profitability.

A certain well known developer has echoed to me that 9 GB of RAM is where the returns start diminishing.

The speed of RAM (bandwidth) and the amount of RAM have to scale together when it comes to performance. You can't just throw 12 GB of DDR3 at 60 GB/s at the problem and expect to be able to utilize it all at once. The "speed" of RAM is just as important as pool size. The faster your RAM is, the more of it you can utilize at once.

The battle for next-gen has little to do with specs. People read things about CPU/GPU and unified memory and think they will see some sort of appreciable performance boost. These things have little to do with performance increases and more to do with developer ease. Both machines are, thankfully, designed around making developers' lives easier.

Voice of reason. I absolutely agree. And I highly doubt MS has a magic trick up their sleeve. Wish they did, but if they did it would have come out by now...

MS just has to deal with it. But I am sure they are quite pissed to know 8 GB of GDDR5 is available (achievable); it's a far more elegant solution, despite the advantages of their own approach. Also don't forget that the 32 MB of ESRAM isn't cheap either.
 
Well, according to Brad Grenz, Durango is messier/harder to work with due to different memory-setups..

Compared to the PS4, it is. And if you want to claim the ESRAM provides low-latency advantages, exploiting those characteristics makes it more so. But I also said it's nowhere near as bad as Cell or the PS2. It just suffers in comparison to the clean, simple design of the PS4.
 
Compared to the PS4, it is. And if you want to claim the ESRAM provides low-latency advantages, exploiting those characteristics makes it more so. But I also said it's nowhere near as bad as Cell or the PS2. It just suffers in comparison to the clean, simple design of the PS4.

But then the question becomes "how much more difficult?", and what if MS's superior tools and knowledge about development make that difference irrelevant?
 
DDR3 is really inexpensive. Sony is easily spending 4 times as much to include the same amount of GDDR5.

So why would they go down the GDDR5 route then? Especially after the PS3, surely they'd want to keep costs as low as possible.

I'm talking purely from a business POV here, not technical advantages. Especially if GDDR5 is 4x more expensive as you claim.
 
So why would they go down the GDDR5 route then? Especially after the PS3, surely they'd want to keep costs as low as possible.

I'm talking purely from a business POV here, not technical advantages. Especially if GDDR5 is 4x more expensive as you claim.


because they don't have to worry about including a kinect in the box and therefore have a few more dollars to play with and still hit the same price range?
 
But everybody is saying that the ESRAM is expensive/complicated and it is not as cheap as people think.

So what gives?

I don't get it either. I thought some rumours were that MS were on the edge of manufacturability with the size of the SoC, which doesn't match with profitability

Maybe the cheaper ram gives it the edge over PS4, but then kinect will be expensive again.

confusing
 
But everybody is saying that the ESRAM is expensive/complicated and it is not as cheap as people think.

So what gives?

it will be a lot cheaper than GDDR5 no matter what they say...

that's why they did it that way.

Two methods:

8GB DDR3+ESRAM (not as high performing or simple to program)
8GB GDDR5 (More expensive)

It goes without saying we may or may not see these savings as consumers. I rather expect MS to price near the PS4 initially, leaning on Kinect 2 to justify it, only dropping if sales are not competitive and they are forced to.
 
So why would they go down the GDDR5 route then? Especially after the PS3, surely they'd want to keep costs as low as possible.

I'm talking purely from a business POV here, not technical advantages. Especially if GDDR5 is 4x more expensive as you claim.

Because it's faster, lends itself to a simpler design and lets you spend more of your silicon budget on a more powerful GPU instead of embedded memory and functional blocks intended to keep the system from being bandwidth starved. Sony's also saving money elsewhere compared to the Durango. No HDMI passthrough, their camera should be cheaper to make than Kinect 2.0, and their APU may actually be cheaper to make since it doesn't have so much space devoted to ESRAM.

Keep in mind, Sony was originally targeting 4GB of GDDR5. That may arguably still have been a better trade-off performance-wise than a segmented design like Durango, but if they found themselves in a position where they could give developers the 8GB of RAM they really wanted and still come in around the price point they targeted, that's a good decision. Like, if they wanted to sell these at $399 and with 4GB of GDDR5 they'd cost $330 each to build, but 8GB only puts that up to $390, then the goodwill from developers and the inability of MS to market 8GB as an advantage is more valuable to the platform than pocketing the difference.
 
How much cheaper would DDR3 be then?
Because if it's so much cheaper, then why didn't Sony go with that, if they are also concerned about costs?

I don't think Sony adds that much memory, and more expensive memory at that, just to give "gamers and devs" what they want. If that were the case, they could have chosen a more exotic/better CPU/GPU solution (if they really wanted to go all out, so to say).

My point here is that something is very strange. People seem to be commenting out of their ass but act like they know their stuff. (I'm referring to the cost of the ESRAM and the complications/expenses that come with it.) Some say it is a cheaper solution for MS; others say it makes development more complicated and more expensive. You would think that MS would want game development to be as easy as possible, so why not take that route?

so yeah, something is fishy here...
 
But everybody is saying that the ESRAM is expensive/complicated and it is not as cheap as people think.

So what gives?

It's a big investment at the front end, but it cost-reduces very fast as the chip is shrunk to new nodes. So even if the ESRAM makes the APU bigger and marginally (or even significantly) more expensive to produce than the PS4 chip, that disparity shrinks to a negligible difference as soon as they move to 22nm. Meanwhile the RAM cost does not come down rapidly like that. Sony will be paying much more for RAM in the PS4 than MS will for Durango for the entire life of these systems.
 
It's a big investment at the front end, but it cost-reduces very fast as the chip is shrunk to new nodes. So even if the ESRAM makes the APU bigger and marginally (or even significantly) more expensive to produce than the PS4 chip, that disparity shrinks to a negligible difference as soon as they move to 22nm. Meanwhile the RAM cost does not come down rapidly like that. Sony will be paying much more for RAM in the PS4 than MS will for Durango for the entire life of these systems.

Do we know or have some estimates/guestimates about cost difference?
 
Random question. Is there any onboard processing in Kinect 2.0? Is this used 100% for Kinect functionality or is some utilised for system processing too? I imagine it won't because of the USB bottleneck, but I don't think Sony's camera bar has onboard processing.
 
Random question. Is there any onboard processing in Kinect 2.0? Is this used 100% for Kinect functionality or is some utilised for system processing too? I imagine it won't because of the USB bottleneck, but I don't think Sony's camera bar has onboard processing.

Durango will not have the USB bottleneck to Kinect 2 that Kinect 1 had.
 
How much cheaper would DDR3 be then?
Because if it's so much cheaper, then why didn't Sony go with that, if they are also concerned about costs?

I don't think Sony adds that much memory, and more expensive memory at that, just to give "gamers and devs" what they want. If that were the case, they could have chosen a more exotic/better CPU/GPU solution (if they really wanted to go all out, so to say).

My point here is that something is very strange. People seem to be commenting out of their ass but act like they know their stuff. (I'm referring to the cost of the ESRAM and the complications/expenses that come with it.) Some say it is a cheaper solution for MS; others say it makes development more complicated and more expensive. You would think that MS would want game development to be as easy as possible, so why not take that route?

so yeah, something is fishy here...

You're not making an argument here. It's "fishy" but you don't understand any of the technical, economic or market reasons for any of the decisions? You are seeing contradictions where none exist. Each company did what they did because it gave what they hoped would be their best chance for success. For Sony that meant pleasing devs (and hardcore gamers) with the friendliest and most powerful design they could afford. For MS that meant ensuring costs were under control in the long term, while making sure they had the resources to accomplish their multimedia aspirations. Both had budgets, vague ideas of what tech would be available when, and targets for when they needed to ship a system. There's nothing fishy about that.
 
Because it's faster, lends itself to a simpler design and lets you spend more of your silicon budget on a more powerful GPU instead of embedded memory and functional blocks intended to keep the system from being bandwidth starved. Sony's also saving money elsewhere compared to the Durango. No HDMI passthrough, their camera should be cheaper to make than Kinect 2.0, and their APU may actually be cheaper to make since it doesn't have so much space devoted to ESRAM.

Keep in mind, Sony was originally targeting 4GB of GDDR5. That may arguably still have been a better trade-off performance-wise than a segmented design like Durango, but if they found themselves in a position where they could give developers the 8GB of RAM they really wanted and still come in around the price point they targeted, that's a good decision. Like, if they wanted to sell these at $399 and with 4GB of GDDR5 they'd cost $330 each to build, but 8GB only puts that up to $390, then the goodwill from developers and the inability of MS to market 8GB as an advantage is more valuable to the platform than pocketing the difference.

Well, a cost difference of $60 would cost them $600,000,000 for the first 10 million PS4 console sales alone, and a good few billion dollars over the lifecycle. We'll just have to wait and see whether that advantage was really that commercially valuable, in comparison to Durango's USPs (Kinect, DVR features etc). These cost differences don't seem like much when we're talking about an individual unit, but when you extrapolate them to these volumes, you can see why every penny counts.
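
For what it's worth, the multiplication behind that claim (the $60 delta is the poster's assumption, not a confirmed figure):

```python
# Scaling a per-unit memory-cost delta across console volumes. The $60 figure
# is the post's assumption; the volumes are round numbers for illustration.
delta_per_unit = 60  # dollars of extra memory cost per console (assumed)

for units in (10_000_000, 80_000_000):
    print(f"{units:>10,} units -> ${delta_per_unit * units:,} extra")
```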
 
can someone tell me if this is true?

"But this can be applied in PS Orbis? Basically to implement virtual memory management it is not necessary that there are two levels of memory, taking the top sufficient storage capacity for the image (color, depth, and stencil) buffers and textures needed for the scene and there is a lower level. For PS Orbis the caches of the GPU do not have enough storage capacity for this and the GDDR5 is a single level of memory for all of the GPU. Obviously the ESRAM and all the mechanism implementation costs in the space that is a sacrifice in terms of computation capability. But the biggest advantage comes from the fact that this allows access to large amounts of memory per frame without having to rely on huge band widths from expensive high-wattage as the GDDR5 memory. The reason why Xbox 8/Durango dosnt uses GDDR5 is not by the fact that then the thing would be completely redundant, the GDDR5 exists on the GPUs of face to avoid the Texture Trashing by the use of a higher bandwidth, the use of virtual memory on the GPU and Virtual Texturing are another solution to the same problem that both come into conflict within a system."

because this would explain why they went with DDR3 + ESRAM + move engines
 