A good technical analysis of the Durango leaked documents (from DMEs to the GPU) ... It's translated from Spanish, but if you're technical you may be able to follow along!
http://www.microsofttranslator.com/...013/02/08/durango-nos-hace-coger-el-delorean/
IMO I don't believe those rumors. I think someone wanted that information to sound negative, but in reality it's the same thing all the consoles are doing and have done in the past.
That article is a really good read. Good explanation, and it makes sense. The reason MS went with DDR3 is starting to make sense: if you are using tile-based rendering to an extent, you wouldn't need as much bandwidth as a traditional renderer.
He's saying that since the Orbis has a single large pool of memory with high bandwidth it doesn't need DMEs or see any benefits from virtual texturing.
What about those things with Mark Grossman and Carmack?
(PRTs)?
Virtual texturing?
Connection to 3DLabs?
Are those things positive in any way when it comes to graphics and performance?
Thanks for your response!
Partial Resident Textures are a standard feature of the GCN architecture so Orbis will support that. Virtual textures are only useful if you have textures too large to fit in the memory you want to texture from. For Durango the 32MB embedded memory pool could be too small, and the 8GB memory pool too slow, so by using virtual textures you can copy just the piece you need at a given time to the small, fast memory and use it. On Orbis your large memory pool is also your fast memory pool so it isn't a problem to just read the part of the texture (using PRT) whenever you want.
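To make the idea concrete, here's a rough CPU-side sketch of the "copy just the piece you need into the small, fast pool" pattern described above. All of the sizes, array names and the memcpy are made up for illustration; on real hardware a DMA/move engine would presumably do the copy rather than the CPU.

```c
/* Minimal sketch of the "copy just the piece you need" idea from the post above.
 * Purely illustrative CPU code; tile size, pool names and layout are made up,
 * not Durango's actual API. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define TEX_W 2048              /* full texture lives in the big, slow pool  */
#define TEX_H 2048
#define TILE  128               /* only this much is resident in fast memory */

static uint32_t slow_pool[TEX_W * TEX_H];   /* stands in for the big DDR3 pool  */
static uint32_t fast_pool[TILE * TILE];     /* stands in for the 32MB fast pool */

/* Copy one TILE x TILE block of the big texture into the small fast pool. */
static void stream_tile(int tile_x, int tile_y)
{
    for (int row = 0; row < TILE; ++row) {
        const uint32_t *src =
            &slow_pool[(tile_y * TILE + row) * TEX_W + tile_x * TILE];
        memcpy(&fast_pool[row * TILE], src, TILE * sizeof(uint32_t));
    }
}

/* After streaming, all sampling hits the small fast pool only. */
static uint32_t sample_fast(int x, int y)
{
    return fast_pool[(y % TILE) * TILE + (x % TILE)];
}

int main(void)
{
    slow_pool[(300 * TEX_W) + 200] = 0xDEADBEEF;  /* some texel in tile (1,2) */
    stream_tile(200 / TILE, 300 / TILE);          /* bring that tile in       */
    printf("%08X\n", (unsigned)sample_fast(200, 300)); /* reads the fast pool */
    return 0;
}
```

The point is just that sampling only ever touches the small fast pool, while the big slow pool is read one tile at a time.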
That's all true to an extent, but being able to somewhat conveniently juggle data between two separate pools still requires more developer effort than simply having one single pool (which is as fast as those two pools combined) in the first place. Memory pools that let a program read and write to different areas of memory at the same time without stalls: IMO that's what game programmers always want to achieve but can't, because they run into system stalls.
These are improvements over an architecture devs are already familiar with, with many of the new additions being automatic or hidden from the user. It's still x86 and a GCN graphics chip.
Don't confuse megatextures with virtual textures.
The whole discussion started with oldergamer saying "There's no evidence that the new xbox is harder to work with." vis-a-vis PS4. I just pointed out that there is such evidence, even though it need not be a significant difference. Of course, PS3 had 8 user-managed memory pools, and somehow some ports still turned out well.
It's not going to require any significant effort from developers. There has been a system with embedded memory of one form or another in at least each of the past three generations. Developers are used to it and it has its benefits, and it definitely isn't comparable to the split memory of the PS3. This is in no way discounting the PS4's single fast pool, just pointing out that it isn't necessary to always draw a parallel with the PS4. To me it is just unnecessary and it's tiring.
I'm not. I am talking about virtual textures, not megatextures. Orbis might have enough bandwidth to access the entire RAM per frame, but that doesn't mean you can't save some of it by accessing only what's really needed and using the bandwidth somewhere else, like really expensive alpha blending across a large portion of the screen.
Unless I'm indeed confused, that's what virtual texturing is. I know AMD calls it Partially Resident Textures, but I thought the broader name was virtual texturing... Anyway, that's what the Spanish guy is talking about, right? And that would definitely benefit Orbis too...
That doesn't require virtual textures.
Do you not remember how much bitching there was from devs about the PS2's 4MB of embedded memory? Or noticed how few devs went to the effort of tiling to get a full 720p framebuffer with MSAA on 360? And that was in a system where the embedded memory was restricted to basically one use.
Tiling was automatic for XNA-created games; I'm sure MS provided similar tools to devs of real games. I believe the bitching was due to the performance hit.
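For anyone curious why tiling was needed at all: 10MB of EDRAM simply isn't enough for a 720p framebuffer once MSAA multiplies the sample count. A quick back-of-the-envelope, assuming 4 bytes per sample for color and 4 for depth/stencil (typical formats of the era, my assumption rather than anything from the leak):

```c
/* Back-of-the-envelope for why 720p + MSAA needed tiling on 360's 10MB EDRAM.
 * Assumes 4 bytes/sample for color and 4 for depth/stencil; illustrative only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double edram_mb = 10.0;          /* Xenos embedded framebuffer memory   */
    const int w = 1280, h = 720;
    const int bytes_per_sample = 4 + 4;    /* assumed: 4B color + 4B depth/stencil */

    for (int msaa = 1; msaa <= 4; msaa *= 2) {
        double mb    = (double)w * h * msaa * bytes_per_sample / (1024.0 * 1024.0);
        int    tiles = (int)ceil(mb / edram_mb);
        printf("720p %dxMSAA: %.1f MB of render targets -> %d tile(s)\n",
               msaa, mb, tiles);
    }
    return 0;
}
```

Every extra tile means re-submitting the geometry that touches it, which is the performance hit mentioned above.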
MS has actually forced devs to only use approved dev libraries for any kind of X360 development. On the other hand, on PS3 Sony has been encouraging devs to use LibGCM as much as possible (as close to down-to-the-metal as it gets on today's machines) and circumvent the regular OpenGL path. From what I know it's the exact same deal on Vita, so it would be pretty consistent if it stays like that with PS4.
Ask yourself why MS would "force" anything on devs when they are known for bending over backwards to make the dev community happy. Seriously....
That's not true. It has been debunked several times by developers on B3D.
Automatic is code for non-optimal. When devs have full control of the data pipeline they get the best results, though they may not care enough.
I've read it at some point on Digital Foundry, and it was not debunked there, so I thought they knew what they were talking about. What's the actual story then?
To ensure that everything can be easily emulated on future hardware is the line of thinking.
Absolutely.
If Microsoft have solved bandwidth issues (and all this points to a yes) while simultaneously providing 8GB of RAM, then it'll be interesting to see how this plays out in a few years when devs get used to coding for Durango and start utilizing that extra RAM. I'm starting to feel more confident about this system and we still don't know anything about the three display planes.
Thanks for that. There's a very good explanation there from ERP and Dominik. There's also a link there to the very article I read a few years back that I mentioned above.
Suspicions were first aroused by a tweet by EA Vancouver's Jim Hejl who revealed that addressing the Xenos GPU on 360 involves using the DirectX APIs, which in turn incurs a cost on CPU resources. Hejl later wrote in a further message that he'd written his own API for manual control of the GPU ring, incurring little or no hit to the main CPU.
"Cert would hate it tho," he added mysteriously.
http://www.eurogamer.net/articles/digitalfoundry-directx-360-performance-blog-entry
So it could be that they don't allow you to override what they already have in those slim libraries, but those libraries are already pretty close to the metal as it is.
Approved libraries get support and regular updates. Unapproved libraries, well, you run into trouble and you could be shit out of luck. Yes, MS has always forced developers on their consoles to use DirectX, but keep in mind it's a tailored version of DirectX for the hardware in the console and it supports some functions that you wouldn't find on the PC. Yeah, MS gives you less freedom of choice, but it's still just as custom or low level as whatever Sony themselves are applying. Again, not any different than the last two generations IMO. I'm still not certain why this is being discussed at all.
It's a non-issue, which is why I said it's being put forth like a negative, but in reality it's not.
I remember this myth being spread as something that would give a huge edge in performance for PS3 as the generation went on... which strangely never happened.
I’m not completely sure yet which direction we’re going to go, but the plan of record is that it’s going to be more the Microsoft model right now where we’ve got the game and the renderer running as two primary threads and then we’ve got targets of opportunity for render surface optimization and physics work going on the spare processor, or the spare threads, which will be amenable to moving to the CELL, but it’s not clear yet how much the hand feeding of the graphics processor on the renderer, how well we’re going to be able to move that to a CELL processor, and that’s probably going to be a little bit more of an issue because the graphics interface on the PS3 is a little bit more heavyweight. You’re closer to the metal on the Microsoft platform and we do expect to have a little bit lower driver overhead.
3 display planes? where did you read that?
Interesting I must have missed that post. I'm assuming you could render things to two different display planes and the hardware composites them before pushing to the display.
Hmm, that would mean it's possible to render each plane at a different resolution. The more I hear, the more this sounds similar to Talisman, the proposed graphics hardware that MS researched many moons ago.
It, too, could render various screen elements at different resolutions; however, this was before 3D accelerators became prominent.
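Pure speculation on my part, but if the planes really are composited in hardware, the operation itself is simple enough to sketch in software: scale each plane up to the output resolution and alpha-blend them in order. Everything below (sizes, formats, the nearest-neighbour scale) is made up for illustration:

```c
/* Toy model of compositing two display planes rendered at different resolutions:
 * a half-res "game" plane scaled up, with a full-res "HUD" plane blended on top.
 * Speculative illustration only; sizes and formats are arbitrary. */
#include <stdint.h>
#include <stdio.h>

#define OUT_W 8
#define OUT_H 4
#define LOW_W (OUT_W / 2)
#define LOW_H (OUT_H / 2)

typedef struct { uint8_t r, g, b, a; } Pixel;

static Pixel game_plane[LOW_W * LOW_H];   /* rendered at half resolution */
static Pixel hud_plane [OUT_W * OUT_H];   /* rendered at full resolution */
static Pixel output    [OUT_W * OUT_H];   /* what goes to the display    */

static Pixel blend(Pixel dst, Pixel src)  /* straight alpha blend */
{
    Pixel out;
    out.r = (uint8_t)((src.r * src.a + dst.r * (255 - src.a)) / 255);
    out.g = (uint8_t)((src.g * src.a + dst.g * (255 - src.a)) / 255);
    out.b = (uint8_t)((src.b * src.a + dst.b * (255 - src.a)) / 255);
    out.a = 255;
    return out;
}

int main(void)
{
    /* fill the planes with something recognizable */
    for (int i = 0; i < LOW_W * LOW_H; ++i)
        game_plane[i] = (Pixel){ 0, 0, 200, 255 };
    hud_plane[1] = (Pixel){ 255, 255, 255, 128 };  /* one semi-transparent HUD pixel */

    for (int y = 0; y < OUT_H; ++y)
        for (int x = 0; x < OUT_W; ++x) {
            /* nearest-neighbour upscale of the low-res plane... */
            Pixel base = game_plane[(y / 2) * LOW_W + (x / 2)];
            /* ...then blend the full-res plane over it */
            output[y * OUT_W + x] = blend(base, hud_plane[y * OUT_W + x]);
        }

    printf("pixel (1,0): r=%d g=%d b=%d\n", output[1].r, output[1].g, output[1].b);
    return 0;
}
```

Real hardware would presumably do this with proper filtering during scanout, but the shape of the operation would be the same.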
A little nostalgia: Talisman was my first job at Microsoft, back in 1996. It was a hardware initiative where objects were rendered to offscreen surfaces and depth buffers, similar to what we call Z-sprites nowadays. The hardware would then composite these layers prior to, or even during, the video signal out. With literally all the objects in a scene rendered into their own 2D layer, software would decide whether a layer could be reused (moved and transformed in 2D) or needed to be re-rendered. This framerate-independent rendering aspect of Talisman was a cool idea but seems to have been lost with time. Talisman never made it to market, but now you know why DirectX went from DX3 to DX5: DirectX 4 was for Talisman, and without Talisman, it was skipped. However, one lovely feature that remained was SetRenderTarget(), a totally new concept designed for Talisman…something that was challenging for hardware at the time, as render target memory and texture memory were separate in the hardware of the era. A tangent, yes, but now you know.
Mixed-resolution rendering seems to be in vogue. Specifically for translucency effects, like a smoke or fireball effect that fills the screen and has tons of overdraw.
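The appeal is easy to show with numbers: the cost of that kind of effect scales with resolution times overdraw, so rendering it at half resolution in each axis cuts the blended-pixel count to a quarter. A throwaway calculation with arbitrary numbers:

```c
/* Rough illustration of why overdraw-heavy translucency gets rendered at
 * reduced resolution: blended pixels scale with resolution times overdraw,
 * so half resolution in each axis is a quarter of the cost. Numbers are arbitrary. */
#include <stdio.h>

int main(void)
{
    const long long w = 1920, h = 1080;
    const int overdraw = 20;               /* layers of smoke covering the screen */

    long long full = w * h * overdraw;
    long long half = (w / 2) * (h / 2) * overdraw;

    printf("full res: %lld blended pixels per frame\n", full);
    printf("half res: %lld blended pixels per frame (%.0f%% of full)\n",
           half, 100.0 * half / full);
    return 0;
}
```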
As I understand it, the move engines are not really to save bandwidth, but to save GPU cycles. No matter how you slice it, they are still moving data around at a peak 102GB/s.
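Taking that 102GB/s peak figure at face value, the cost of a move is easy to put in frame-time terms; for example, copying a 32MB buffer (the size of the embedded memory) would take roughly a third of a millisecond:

```c
/* Taking the 102GB/s peak figure quoted above at face value: how long does it
 * take to move a 32MB buffer, and what fraction of a 60fps frame is that?
 * Illustrative arithmetic only. */
#include <stdio.h>

int main(void)
{
    const double bytes    = 32.0 * 1024 * 1024;   /* one ESRAM-sized buffer */
    const double peak_bps = 102.0e9;              /* quoted peak, bytes/s   */
    const double frame_ms = 1000.0 / 60.0;        /* 60fps frame budget     */

    double copy_ms = bytes / peak_bps * 1000.0;
    printf("copy time: %.3f ms (%.1f%% of a 60fps frame)\n",
           copy_ms, 100.0 * copy_ms / frame_ms);
    return 0;
}
```

So the bandwidth is consumed either way; the win, as the post says, is that the GPU doesn't have to spend its own cycles doing the copy.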