Kotaku Rumor: Microsoft 6 months behind in game production for X720 [Pastebin = Ban]

Durango is fine with its DDR3 because the GPU also sees >100 GB/s bandwidth to ESRAM. The CPU sees what it needs and the GPU sees comparable BW to the GPU in PS4, even if it's trickier for developers. The rumoured system seems balanced.

The "trickier for developers" is my greatest concern. I really hope 3rd parties make use of the ESRAM.
 
Durango is fine with its DDR3 because the GPU also sees >100 GB/s bandwidth to ESRAM. The CPU sees what it needs and the GPU sees comparable BW to the GPU in PS4, even if it's trickier for developers. The rumoured system seems balanced.
I completely agree.
 
The "trickier for developers" is my greatest concern. I really hope 3rd parties make use of the ESRAM.

My understanding is that Microsoft is likely to have specific D3D 11.D functions that make it easier to make use of the Durango hardware. If they are locking developers into the API like the rumors suggest, how would it be hard to program for?
 
My understanding is that Microsoft is likely to have specific D3D 11.D functions that make it easier to make use of the Durango hardware. If they are locking developers into the API like the rumors suggest, how would it be hard to program for?

Because you still need to manage two separate pools of memory.
 
It's not, but it's still more annoying than having one single pool.

There was basically only one thing you could do with the 360's EDRAM since the ROPs were built into it. Devs can choose how to use the ESRAM in Durango. If you want to take advantage of it being low latency (which people like to continually bring up as an advantage of the design), you have to make a choice to use it for that instead of something else, and make sure your data is in the right place at the right time, and also make sure you're not boning yourself in terms of bandwidth budgets elsewhere. It's not a Cell level headache, but it's messier than anything you have to do on PS4.
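To make the bookkeeping concrete, here is a minimal sketch of what "managing two pools" means in practice. The pool names, sizes and the little bump allocator are assumptions based on the rumoured specs, not any real Durango API:

    /* Hypothetical sketch of hand-managing two physical pools. Pool names,
       sizes and the allocator are assumptions, not a real SDK API.
       Assumes a 64-bit target for the DDR3 pool size. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct { const char *name; size_t capacity; size_t used; } Pool;

    static Pool esram = { "ESRAM", 32u * 1024 * 1024, 0 };                  /* small, fast   */
    static Pool ddr3  = { "DDR3",  (size_t)5 * 1024 * 1024 * 1024, 0 };     /* large, slower */

    /* Put bandwidth-critical resources in ESRAM if they fit; spill to DDR3 otherwise. */
    static Pool *place(size_t bytes, int bandwidth_critical) {
        Pool *preferred = bandwidth_critical ? &esram : &ddr3;
        if (preferred->used + bytes <= preferred->capacity) {
            preferred->used += bytes;
            return preferred;
        }
        ddr3.used += bytes;      /* the spill is where the performance surprises live */
        return &ddr3;
    }

    int main(void) {
        printf("1080p fat colour target -> %s\n", place(1920u * 1080 * 16, 1)->name);
        printf("4k shadow map           -> %s\n", place(4096u * 4096 * 4, 1)->name);
        printf("streamed textures       -> %s\n", place((size_t)512 * 1024 * 1024, 0)->name);
        return 0;
    }

The interesting part is the spill path: whatever doesn't fit in the 32MB ends up competing for DDR3 bandwidth with everything else.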
 
There was basically only one thing you could do with the 360's EDRAM since the ROPs were built into it. Devs can choose how to use the ESRAM in Durango. If you want to take advantage of it being low latency (which people like to continually bring up as an advantage of the design), you have to make a choice to use it for that instead of something else, and make sure your data is in the right place at the right time, and also make sure you're not boning yourself in terms of bandwidth budgets elsewhere. It's not a Cell level headache, but it's messier than anything you have to do on PS4.

What if...

[image: captura-de-pantalla-2013-05-07-a-las-11-23-26.png]
 
That doesn't fix the problem since the fact that the memory pools are physically different and have different performance characteristics is the thing you're trying to exploit. Everything being in the same virtual memory space doesn't help you get the data you need into the low latency pool when you need it.
 
Everything being in the same virtual memory space doesn't help you get the data you need into the low latency pool when you need it.

Well, I think virtual memory can help with things like partially resident textures, and with things related to HSA:

[image: CPU-GPU-640x347.jpg]


Maybe I'm wrong, but I doubt Microsoft and AMD haven't implemented some HSA support in Durango.
 
My understanding is that Microsoft is likely to have specific D3D 11.D functions that make it easier to make use of the Durango hardware. If they are locking developers into the API like the rumors suggest, how would it be hard to program for?

It's harder to program for than the PS4, but not necessarily harder in the sense of being the second coming of Cell.
 
Well, I think virtual memory can help with things like partially resident textures, and with things related to HSA:

[image: CPU-GPU-640x347.jpg]


Maybe I'm wrong, but I doubt Microsoft and AMD haven't implemented some HSA support in Durango.


No, you're conflating terms and technologies. Only reading the portion of a texture you're actually rendering doesn't require a virtualized memory space.

And the HSA slide you've posted is about reading data without moving it, whereas moving data into the ESRAM is what actually makes it useful.
 
There was basically only one thing you could do with the 360's EDRAM since the ROPs were built into it. Devs can choose how to use the ESRAM in Durango. If you want to take advantage of it being low latency (which people like to continually bring up as an advantage of the design), you have to make a choice to use it for that instead of something else, and make sure your data is in the right place at the right time, and also make sure you're not boning yourself in terms of bandwidth budgets elsewhere. It's not a Cell level headache, but it's messier than anything you have to do on PS4.

Oh yes. I believe ERP referred to it as "just enough rope for developers to hang themselves with" or something like that :)

It could lead to a PS2-like situation where a few intrepid developers are able to squeeze a lot more than thought possible out of the system, while the vast majority would achieve lower, paint-by-numbers performance.

PS3 vs 360: the 360 was still considered the simpler one, as it was unified where the PS3 was not, not to mention the Cell headache. This time around both are unified, but the PS4 doesn't have the ESRAM complication. PS3 is pretty much less complicated than any console in history, ever. Easily. Geez, I can't imagine how much devs are going to love the x86 Jaguar cores next gen alone...
 
No, you're conflating terms and technologies. Only reading the portion of a texture you're actually rendering doesn't require a virtualized memory space.

Carmack doesn't think the same.

With page tables, address fragmentation isn't an issue, and with the graphics rasterizer only causing a page load when something from that exact 4k block is needed, the mip level problems and hidden texture problems just go away. Nothing sneaky has to be done by the application or driver, you just manage page indexes.

And AMD:

For instance, the method can include partitioning a texture and associated mipmaps into memory tiles, where the memory tiles are associated with a virtual memory system. The method can also include mapping a first subset of the memory tiles to respective address spaces in a physical memory system. Further, the method can include accessing the physical memory system during a rendering process of a graphics scene associated with the first subset of memory tiles

But I guess I'm wrong.
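For what it's worth, the "you just manage page indexes" part really is just arithmetic. A minimal sketch, assuming a fixed 128x128-texel page (the tile size and layout are illustrative, not Carmack's or AMD's actual scheme):

    /* Hypothetical sketch of virtual-texture page indexing: given a texel
       coordinate and a mip level, find which fixed-size page it lives in.
       The 128x128-texel page is an assumed tile size, not a real spec. */
    #include <stdio.h>

    #define PAGE_TEXELS 128

    typedef struct { int mip; int page_x; int page_y; } PageId;

    static PageId page_for_texel(int x, int y, int mip) {
        PageId p;
        p.mip = mip;
        p.page_x = (x >> mip) / PAGE_TEXELS;   /* texel coordinates halve per mip level */
        p.page_y = (y >> mip) / PAGE_TEXELS;
        return p;
    }

    int main(void) {
        /* The rasterizer would only request this page when a visible fragment samples it. */
        PageId p = page_for_texel(70000, 12345, 2);
        printf("mip %d -> page (%d, %d)\n", p.mip, p.page_x, p.page_y);
        return 0;
    }

The hard part isn't the indexing, it's deciding which pages to make resident and when, which is exactly the placement problem being argued about above.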

PS3 is pretty much less complicated than any console in history, ever. Easily. Geez, I can't imagine how much devs are going to love the x86 Jaguar cores next gen alone...

What? PS3 less complicated? xD

I'm sure the complications are not hardware related; it's more a documentation issue.
 
There was basically only one thing you could do with the 360's EDRAM since the ROPs were built into it. Devs can choose how to use the ESRAM in Durango. If you want to take advantage of it being low latency (which people like to continually bring up as an advantage of the design), you have to make a choice to use it for that instead of something else, and make sure your data is in the right place at the right time, and also make sure you're not boning yourself in terms of bandwidth budgets elsewhere. It's not a Cell level headache, but it's messier than anything you have to do on PS4.

Forgive my layman's understanding... aren't Durango's move engines there for that very purpose?
 
Carmack doesn't think the same.

And AMD:

But I guess I'm wrong.

What? PS3 less complicated? xD

I'm sure the complications are not hardware related; it's more a documentation issue.

It doesn't require it. It might require you to format the data in a special way, but other than that, no.
 
It doesn't require it. It might require you to format the data in a special way, but other than that, no.

Surprise:

AMD’s next-gen APU unifies CPU/GPU memory, should appear in Kaveri, Xbox 720, PS4

AMD even spoke, at one point, about the idea of using an embedded eDRAM chip as a cache for GPU memory — essentially speaking to the Xbox Durango’s expected memory structure. The following quote comes from AMD’s HSA briefing/seminar:

“Game developers and other 3D rendering programs have wanted to use extremely large textures for a number of years and they’ve had to go through a lot of tricks to pack pieces of textures into smaller textures, or split the textures into smaller textures, because of problems with the legacy memory model… Today, a whole texture has to be locked down in physical memory before the GPU is allowed to touch any part of it. If the GPU is only going to touch a small part of it, you’d like to only bring those pages into physical memory and therefore be able to accommodate other large textures.

With a hUMA approach to 3D rendering, applications will be able to code much more naturally with large textures and yet not run out of physical memory, because only the real working set will be brought into physical memory.”

This is broadly analogous to hardware support for the MegaTexturing technology that John Carmack debuted in Rage.

http://www.extremetech.com/gaming/1...u-memory-should-appear-in-kaveri-xbox-720-ps4
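To put rough numbers on that quote (purely illustrative figures, not from the article): a single huge texture can dwarf physical memory, while the pages actually sampled in a frame are a small fraction of it.

    /* Back-of-the-envelope numbers for the 'only the real working set' point.
       Texture dimensions, page size and resident page count are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        double mib = 1024.0 * 1024.0;
        double full_texture = 131072.0 * 131072.0 * 4.0 / mib;   /* 128K x 128K texels, 4 B each */
        double page = 128.0 * 128.0 * 4.0;                       /* one 128x128 RGBA8 page */
        double resident_pages = 4096.0;                          /* pages touched by one view (assumed) */
        printf("whole texture:   %.0f MiB (~%.0f GiB)\n", full_texture, full_texture / 1024.0);
        printf("resident subset: %.0f MiB\n", resident_pages * page / mib);
        return 0;
    }

That's roughly 64 GiB of virtual texture against a few hundred MiB actually resident, which is the point the AMD quote is making.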
 
Makes it easier and better, I guess.

And about the ESRAM... how do you know Durango is hard to develop for, or harder than the PS4?

Because managing multiple pools of RAM, or even just having to worry about multiple pools of RAM, makes a design more complex, and when a design is more complex that generally means it's harder to code for.

On the PS4 you don't really need to worry about your ESRAM filling up and suddenly having next to no bandwidth for your remaining textures.

EDIT: I'm realistic. No belief in special sauce or power gremlins here.
 
Because managing multiple pools of RAM, or even just having to worry about multiple pools of RAM, makes a design more complex, and when a design is more complex that generally means it's harder to code for.

On the PS4 you don't really need to worry about your ESRAM filling up and suddenly having next to no bandwidth for your remaining textures.

EDIT: I'm realistic. No belief in special sauce or power gremlins here.

Being realistic doesn't mean you must say something negative in every post. I am realistic; I know Durango will not be the most powerful console, but I'm not saying only negative things.

A more complex design doesn't necessarily mean harder to code for; if you have good documentation and good dev tools, developers will not have issues.

Haven't been following the thread closely... What does "Pastebin = ban" mean, and is the rumor reliable or bull?

If you post something from pastebin you will be banned.
 
Because managing multiple pools of RAM, or even just having to worry about multiple pools of RAM, makes a design more complex, and when a design is more complex that generally means it's harder to code for.

On the PS4 you don't really need to worry about your ESRAM filling up and suddenly having next to no bandwidth for your remaining textures.

EDIT: I'm realistic. No belief in special sauce or power gremlins here.

I think MS is keeping a minor spec surprise close to the chest.

The special sauce is what we don't know.
 
What if...

[image: captura-de-pantalla-2013-05-07-a-las-11-23-26.png]

You'll notice in all the hUMA slides there is one unified pool of memory for the GPU and the CPU. ESRAM is not part of the unified pool, and thus will probably be accessed through a special API. It looks like it's where you'll keep your main render targets. However, 32MB isn't very big, especially if you want 32 bits a channel for HDR (1920 x 1080 x 16 bytes per pixel = ~31.6MB!).
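For reference, the arithmetic behind that figure, plus what a fatter multi-target setup would cost (the G-buffer layout below is an illustrative assumption, not a known Durango configuration):

    /* Checking the 1080p render-target maths. The multi-target layout is an
       illustrative assumption, not a known Durango configuration. */
    #include <stdio.h>

    int main(void) {
        double mib = 1024.0 * 1024.0;
        double pixels = 1920.0 * 1080.0;
        printf("RGBA, 32 bits/channel (16 B/px): %.1f MiB\n", pixels * 16.0 / mib);            /* ~31.6 */
        printf("4 x RGBA16F targets + D32 depth: %.1f MiB\n", pixels * (4 * 8.0 + 4.0) / mib); /* ~71.2 */
        return 0;
    }

So one fat HDR target almost fills the 32MB on its own, and a multi-target setup at that precision simply doesn't fit without splitting or compromising formats.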

The ESRAM looks great for little compute shenanigans though.
 
I can assure you that at least in early 2012 MS was going with DDR3 RAM... unless Durango is MS's Argo.

So Vustadumas works at Blizzard and he is saying that Durango has 8GB of GDDR5 and Sony matched their specs?...

Isn't this a megaton?!

Also, I always thought that they were going with DDR3 and ESRAM for latency and coherent memory...
 
Because managing multiple pools of RAM, or even just having to worry about multiple pools of RAM, makes a design more complex, and when a design is more complex that generally means it's harder to code for.

On the PS4 you don't really need to worry about your ESRAM filling up and suddenly having next to no bandwidth for your remaining textures.

EDIT: I'm realistic. No belief in special sauce or power gremlins here.

Aren't the move engines meant to assist with this, though?
 
Durango is fine with its DDR3 because the GPU also sees >100 GB/s bandwidth to ESRAM. The CPU sees what it needs and the GPU sees comparable BW to the GPU in PS4, even if it's trickier for developers. The rumoured system seems balanced.

No, it doesn't.

In theory, if it only ever accesses the ESRAM, then the Durango GPU will see decent bandwidth. For some activities that will be fine (framebuffer stuff etc.), but assuming your level data is larger than 32MB, you'll need to access external RAM and your bandwidth will drop off a cliff. It doesn't matter how much the data movers try to mitigate that; eventually things will slow down.
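A crude way to picture the falloff, assuming the ~100 GB/s ESRAM figure from the quoted post, the rumoured ~68 GB/s DDR3 figure, and no overlap between the two paths (all assumptions on my part):

    /* Toy model of mixed ESRAM/DDR3 traffic. The ~100 and ~68 GB/s figures are
       the rumoured ones; treating the two paths as non-overlapping is my own
       (pessimistic) simplification. */
    #include <stdio.h>

    int main(void) {
        const double esram_bw = 100.0, ddr3_bw = 68.0;   /* GB/s, rumoured */
        for (double f = 1.0; f >= 0.0; f -= 0.25) {      /* f = share of bytes served from ESRAM */
            double effective = 1.0 / (f / esram_bw + (1.0 - f) / ddr3_bw);
            printf("ESRAM share %3.0f%% -> ~%.0f GB/s effective\n", f * 100.0, effective);
        }
        return 0;
    }

And the DDR3 side is also what the CPU lives off, so the GPU's real share of that 68 GB/s would be lower still.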
 
So Vustadumas works at Blizzard and he is saying that Durango has 8GB of GDDR5 and Sony matched their specs?...

Isn't this a megaton?!

Also, I always thought that they were going with DDR3 and ESRAM for latency and coherent memory...

As I said, in early 2012 they were going with DDR3 RAM + ESRAM.
 
This is because you may be the type of gamer who doesn't game over Live frequently or doesn't have many games at all. The fact that you are only talking about exclusives clearly shows you missed the point of BC. This isn't about some silly argument over whether a game in your collection is or isn't on the PS3. This is about every game you buy for your console.

360 hardcore gamers play games, especially multiplayer, long after the game hits the stores. Last gen wasn't even close to now, and the fact that you could play Halo 2 on the 360 at launch with your friends was huge for us launch 360 owners.

Fast forward a gen and people will still be playing the last 3 CODs, Halo, Gears, Minecraft, etc. If, like last gen, you can still pop your game in and play with your friends on the Nextbox, it will be huge. Case in point: 360 gamers have been playing GTA IV online since it launched, and it is still top 20 on Live.

No one is talking about the gamer who doesn't get a Nextbox at launch. But he, like millions, buys GTA 4, COD: Ghosts, Destiny, etc. These are gamers who are still catching crazy deals over Live.

Will they have incentive to buy a new console if they lose every bit of that when they go next gen?

Does an entire collection of games bought through XBLA become vaporware unless you keep your old console sitting under the TV forever?

This may be our reality, as BC may not be possible. If it is possible, then it will be a megaton for the people who know the reasons, as Gaffers who frequent this site do. But the average gamer has no clue that this is going to happen.

Just look up any forum where it was mentioned that BC might not be there for one console or the other, prior to people realizing that it likely wouldn't be possible on either system. The reactions are very different from now.

Well written post my friend.
 
Honestly, I hope they went with HSA and low latency... rather than GDDR5.

Why are you acting like GDDR5 is 10x slower than DDR3? It isn't; it's marginally slower.

So Vustadumas works at Blizzard and he is saying that Durango has 8GB of GDDR5 and Sony matched their specs?...

Isn't this a megaton?!

Also, I always thought that they were going with DDR3 and ESRAM for latency and coherent memory...

He works as an artist; he's not even allowed to see the hardware, which has some of the world's strictest NDAs attached to it.
 
So Vustadumas works at Blizzard and he is saying that Durango has 8GB of GDDR5 and Sony matched their specs?...

Isn't this a megaton?!

Also, I always thought that they were going with DDR3 and ESRAM for latency and coherent memory...

Wouldn't the actual megaton then be that Blizzard has been working on Durango?

I'd be shocked if it isn't DDR3 + ESRAM at this stage, but then that's because we've had all those rumours hammered into us for so long now.
 
We know Blizzard is working on more than one console. They hinted at that when they came out and said they're not really exclusive. The 8 gig rumor sounds like he got his info confused. He probably meant that Sony copied the amount of memory Microsoft was rumored to be going with, not the type of memory. 2 weeks.
 
Why are you acting like GDDR5 is 10x slower than DDR3? It isn't; it's marginally slower.



He works as an artist; he's not even allowed to see the hardware, which has some of the world's strictest NDAs attached to it.

Isn't that, in %, pretty much the difference between the DDR3 and GDDR5 bandwidth? 40-50%?
 
We know Blizzard is working on more than one console. They hinted at that when they came out and said they're not really exclusive. The 8 gig rumor sounds like he got his info confused. He probably meant that Sony copied the amount of memory Microsoft was rumored to be going with, not the type of memory. 2 weeks.

Well, I think Diablo 3 will come to Durango too... so.
 
The special sauce will be Usher xD



"ps3 is pretty much less complicated than any console in history"

He is comparing with any console, even Xbox 360 or PC.

Well... To be fair, besides the OG Xbox and Xbox 360, he's right. The PS3 wasn't any harder to program for than, say, the GameCube. The GameCube was just more familiar at that point.
 

Your thesis was that low latency ESRAM would provide performance benefits seamlessly through hUMA. This link just says that thanks to hUMA the GPU can try to address data that isn't loaded into memory at all. hUMA doesn't solve the problem of making sure a particular piece of data is physically in the low latency pool when you want it. It just means both CPU and GPU can address any part of memory, physical or virtual, whenever they want. Making sure data is where it should be, when it should be, is a problem that will have to be solved by programmers themselves, or by some automating technology MS may provide, with whatever drawbacks that may entail.
 
GAF needs to calm down.

1. Microsoft is not switching over to GDDR5.
2. ESRAM + DDR3 was a decision made years ago to balance profitability and developer requests.
3. Increasing RAM count has little benefit, but can be done.
4. There will be no drastic changes in RAM type. You will not see DDR3 suddenly change into GDDR5 or stacked memory. Unless you want Microsoft to spend another $500 million and to launch in 2014.
5. Microsoft's machine is better designed in terms of profitability.

A certain well known developer has echoed to me that 9 GB of RAM is where the returns start diminishing.

The speed of RAM (bandwidth) and the amount of RAM have to scale together when it comes to performance. You can't just throw 12 GB of DDR3 at 60 GB/s at the problem and expect to be able to utilize it all at once. The "speed" of RAM is just as important as pool size. The faster your RAM is, the more of it you can utilize at once.
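Simple arithmetic makes the point, using the 60 GB/s figure from that example: at a fixed frame rate, bandwidth caps how many bytes you can touch per frame, no matter how much RAM is sitting there.

    /* Per-frame byte budget = bandwidth / frame rate, using the 60 GB/s from the example above. */
    #include <stdio.h>

    int main(void) {
        const double bandwidth = 60.0;              /* GB/s */
        const double fps[] = { 30.0, 60.0 };
        for (int i = 0; i < 2; i++)
            printf("%2.0f fps -> at most %.1f GB touched per frame\n", fps[i], bandwidth / fps[i]);
        return 0;
    }

So out of a hypothetical 12 GB pool you could stream through at most a GB or two per frame; the rest just sits there between frames.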

The battle for next-gen has little to do with specs. People read things about CPU/GPU and unified memory and think they will see some sort of appreciable performance boost. These things have little to do with performance increases, and more to do with developer ease. These machines are both, thankfully, designed around making developers' lives easier.
 
Your thesis was that low latency ESRAM would provide performance benefits seamlessly through hUMA. This link just says that thanks to hUMA the GPU can try to address data that isn't loaded into memory at all. hUMA doesn't solve the problem of making sure a particular piece of data is physically in the low latency pool when you want it. It just means both CPU and GPU can address any part of memory, physical or virtual, whenever they want. Making sure data is where it should be, when it should be, is a problem that will have to be solved by programmers themselves, or by some automating technology MS may provide, with whatever drawbacks that may entail.


"Both AMD and Nvidia sell GPGPU accelerator boards that make use of GDDR5 memory that offers considerably higher bandwidth than the DDR3 memory that is used as main memory on most systems. However AMD director of software Margaret Lewis said that the latency of memory copy operations - the act of taking data from system memory and putting it on memory that is addressable by the GPU - was a far bigger problem than the relative bandwidth difference of using DDR memory to feed the GPU."

http://www.theinquirer.net/inquirer...aign=Twitterfeed&utm_term=INQ&utm_content=INQ
 