
PS3 games list & SPE usages

MikeB

Banned
commariodore64 said:
At what cost? Also - having textures stored in RAM is MUCH quicker and has no hit on the CPU, correct? So, while texture streaming is a great thing (many current 360 and PS3 engines implement it to varying degrees), it is not as efficient or as desirable as having physical RAM to spare, due to the hits taken elsewhere in engine performance (where the SPEs come in handy - but they require RAM as well to use)

Correct me if I'm wrong (and I've made my share of mistakes) but The PS3 has 22.4 GB/s of GDDR3 bandwidth and 25.6 GB/s of RDRAM bandwidth for a total system bandwidth of 48 GB/s where Xbox 360 has 22.4 GB/s of GDDR3 bandwidth and a 256 GB/s of EDRAM bandwidth for a total of 278.4 GB/s total system bandwidth.

This has been addressed earlier within this thread. That's internal bandwidth for the Xenos' daughter chip; similarly, the Cell has an even greater internal bandwidth. They can be used for similar purposes (of course the Cell serves many other purposes in addition), but the approach to doing so is very different. The 360 approach is far easier to get into for developers using legacy Windows-oriented game engines; the PS3 approach is far more powerful.

With regard to shared memory vs 'dedicated memory', for the PS3's CPU (Cell) the 'dedicated memory' approach is crucial, especially with regard to latencies. The XDR has much lower latencies; for the GPU (RSX) latencies are less important (it also has larger-than-usual caches), so the GDDR3 is just fine.

With regard to bandwidth, note that the PS3's GPU can use GDDR3 and XDR RAM simultaneously; such an approach increases graphics memory as well as bandwidth!

On the Xbox 360's GPU you ideally don't want to resort to tiling (tiling is when you only fully render a portion of a frame/screen at a time, constantly feeding parts of the data from main RAM to the eDRAM for adding effects like AA), which drains a lot of performance when you want to achieve solid framerates - but sadly it can't be avoided at high resolutions, even 720p, due to the eDRAM's 10 MB size. The eDRAM is used for adding effects like AA; without tiling this can be done with very little performance penalty, but if you have to tile it becomes a performance drain. Hence even third-generation exclusive 360 games like Halo 3 or Forza 2 have no AA, yet cheap (performance-wise) AA is one of the eDRAM's most touted advantages. For Halo 3 they added HDR instead, but also reduced the rendering resolution to 640p.

Because they are so different, games fully designed for the 360's architecture can easily result in weak PS3 ports. Multi-platform games led on the PS3 (taking into account the 360's abilities) can be expected to result in good PS3 as well as 360 versions. But PS3 exclusives can be pushed far beyond anything the 360 will be able to achieve.
 

vdo

Member
SRG01 said:
No, like I said, both have their advantages and disadvantages.

There was one disadvantage (or maybe a better word would be "consideration") to the unified memory architecture that I remember hearing about a while back (keep in mind this was a comment from only one dev), and it seemed to make sense on the surface.

It had to do with textures having larger memory requirements and a preference for being stored in continuous blocks, while the other, non-texture items the game engine stores in memory have smaller size needs (and don't benefit as much from being stored in continuous blocks).

With both of these types of requirements sharing the same memory, there is a need to manage how and where you store them in relation to each other, which can be more complicated than with separate memory banks. If you have a large texture to store, you need to have made sure that your other data has left a large enough block of memory free for it, and as you swap both types of data in and out, you have to avoid ending up with many small bits of data separated by small gaps, which would force you to rearrange them to fit the larger blocks. Management of this type is required for non-unified memory as well, but it sounded like it was more significant for unified memory, with the two very different types together.

The analogy that seemed to fit was hard drive fragmentation, where over time, as you store and remove data, if you don't watch it you can end up with small bits of space that are not really usable, and they can add up to a significant amount of unusable space until you defrag. So keeping the edge the 360 has in extra memory means not ending up with too much fragmentation in memory usage.

The one comment the dev made that was interesting was that he had run into memory usage problems on the 360 because of this but had, at that time at least, not run out of memory on the PS3 for the same types of things.

Again, this was one dev (at least he claimed to be a dev - he posted a decent amount, and the posters and mods seemed to accept that he was one and used him as a reference) and it was a while ago, but it seems the concept would still hold true even now: if true, it would be inherent to the one architecture vs. the other, and it would be up to the skill of the developer to mitigate those effects as best as possible.
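
To make the fragmentation idea concrete, here's a toy sketch I put together (C++, purely illustrative and not from the dev's post): after some churn, half the memory is free, yet no contiguous run is big enough for a large texture.

Code:
#include <cstdio>

int main() {
    // Toy "RAM" of 16 slots. Allocate everything, then free every other
    // slot, the way small, short-lived engine data churns in and out.
    const int kSlots = 16;
    bool used[kSlots];
    for (int i = 0; i < kSlots; ++i) used[i] = true;
    for (int i = 0; i < kSlots; i += 2) used[i] = false;

    // Measure total free space vs. the longest contiguous free run.
    int freeTotal = 0, longestRun = 0, run = 0;
    for (int i = 0; i < kSlots; ++i) {
        if (!used[i]) { ++freeTotal; ++run; if (run > longestRun) longestRun = run; }
        else run = 0;
    }

    // Prints "free: 8 slots, longest contiguous run: 1" - half the memory
    // is free, but a "texture" needing 4 contiguous slots fits nowhere
    // without rearranging everything first.
    printf("free: %d slots, longest contiguous run: %d\n", freeTotal, longestRun);
    return 0;
}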
 

SRG01

Member
vdo said:
There was one disadvantage (or maybe a better word would be "consideration") to the unified memory architecture that I remember hearing about a while back (keep in mind this was a comment from only one dev), and it seemed to make sense on the surface.

It had to do with textures having larger memory requirements and a preference for being stored in continuous blocks, while the other, non-texture items the game engine stores in memory have smaller size needs (and don't benefit as much from being stored in continuous blocks).

With both of these types of requirements sharing the same memory, there is a need to manage how and where you store them in relation to each other, which can be more complicated than with separate memory banks. If you have a large texture to store, you need to have made sure that your other data has left a large enough block of memory free for it, and as you swap both types of data in and out, you have to avoid ending up with many small bits of data separated by small gaps, which would force you to rearrange them to fit the larger blocks. Management of this type is required for non-unified memory as well, but it sounded like it was more significant for unified memory, with the two very different types together.

The analogy that seemed to fit was hard drive fragmentation, where over time, as you store and remove data, if you don't watch it you can end up with small bits of space that are not really usable, and they can add up to a significant amount of unusable space until you defrag. So keeping the edge the 360 has in extra memory means not ending up with too much fragmentation in memory usage.

The one comment the dev made that was interesting was that he had run into memory usage problems on the 360 because of this but had, at that time at least, not run out of memory on the PS3 for the same types of things.

Again, this was one dev (at least he claimed to be a dev - he posted a decent amount, and the posters and mods seemed to accept that he was one and used him as a reference) and it was a while ago, but it seems the concept would still hold true even now: if true, it would be inherent to the one architecture vs. the other, and it would be up to the skill of the developer to mitigate those effects as best as possible.

That's actually a very good point. Thanks for bringing it up.

That does bring up a very interesting question though: with lots of memory management techniques available, why are developers resorting to large contiguous blocks of memory for textures?
 

qirex99

Banned
MikeB said:
This has been addressed earlier within this thread. That's internal bandwidth for the Xenos' daughter chip; similarly, the Cell has an even greater internal bandwidth. They can be used for similar purposes (of course the Cell serves many other purposes in addition), but the approach to doing so is very different. The 360 approach is far easier to get into for developers using legacy Windows-oriented game engines; the PS3 approach is far more powerful.

With regard to shared memory vs 'dedicated memory', for the PS3's CPU (Cell) the 'dedicated memory' approach is crucial, especially with regard to latencies. The XDR has much lower latencies; for the GPU (RSX) latencies are less important, so the GDDR3 is just fine.

With regard to bandwidth, note that the PS3's GPU can use GDDR3 and XDR RAM simultaneously; such an approach increases graphics memory as well as bandwidth!

On the Xbox 360's GPU you ideally don't want to resort to tiling (meaning you have to constantly feed data from main RAM to the eDRAM), which drains a lot of performance when you want to achieve solid framerates - but sadly it can't be avoided at high resolutions, due to the eDRAM's size. The eDRAM is used for adding effects like AA; without tiling this can be done with very little performance penalty, but if you have to tile it becomes a performance drain. Hence even third-generation exclusive 360 games like Halo 3 or Forza 2 have no AA, yet cheap AA is one of the eDRAM's most touted advantages. For Halo 3 they added HDR instead, but also reduced the rendering resolution to 640p.

Because they are so different, games fully designed for the 360's architecture can easily result in weak PS3 ports. Multi-platform games led on the PS3 (taking into account the 360's abilities) can be expected to result in good PS3 as well as 360 versions. But PS3 exclusives can be pushed far beyond anything the 360 will be able to achieve.

Thanks for the objective, non-biased tech info!

Maybe next year, when the PS3 has been the lead platform for a while, the multiplatform releases will start looking and playing better on PS3.

However, despite all the tech jargon and excuses (from forum fappers and Sony), I find it amazing that for all its purported superiority, so many multiplatform releases aren't significantly better (and are usually WORSE) on PS3.

Even more alarming is the fact that devs are planning to make the PS3 the lead platform - yet nobody seems concerned re: the 360 "handling it".

I'm still holding out for my "full HD" next-gen gaming - screw 640p!
 

SRG01

Member
qirex99 said:
Thanks for the objective, non-biased tech info!

Maybe next year, when the PS3 has been the lead platform for a while, the multiplatform releases will start looking and playing better on PS3.

However, despite all the tech jargon and excuses (from forum fappers and Sony), I find it amazing that for all its purported superiority, so many multiplatform releases aren't significantly better (and are usually WORSE) on PS3.

Even more alarming is the fact that devs are planning to make the PS3 the lead platform - yet nobody seems concerned re: the 360 "handling it".

I'm still holding out for my "full HD" next-gen gaming - screw 640p!

As was mentioned in some of the Cell development course slides I posted about a week or so ago, it's partially because traditional programming -- Java, the DirectX API, or even C++ -- is generally removed from directly interacting with the hardware. You can't get away from hardware considerations in this kind of environment, and I think that's where some traditional programmers have difficulty compared to low-level (i.e. embedded) programmers.
 

Raistlin

Post Count: 9999
vdo said:
There was one disadvantage (or maybe a better word would be "consideration") to the unified memory architecture that I remember hearing about a while back (keep in mind this was a comment from only one dev), and it seemed to make sense on the surface.

It had to do with textures having larger memory requirements and a preference for being stored in continuous blocks, while the other, non-texture items the game engine stores in memory have smaller size needs (and don't benefit as much from being stored in continuous blocks).

With both of these types of requirements sharing the same memory, there is a need to manage how and where you store them in relation to each other, which can be more complicated than with separate memory banks. If you have a large texture to store, you need to have made sure that your other data has left a large enough block of memory free for it, and as you swap both types of data in and out, you have to avoid ending up with many small bits of data separated by small gaps, which would force you to rearrange them to fit the larger blocks. Management of this type is required for non-unified memory as well, but it sounded like it was more significant for unified memory, with the two very different types together.

The analogy that seemed to fit was hard drive fragmentation, where over time, as you store and remove data, if you don't watch it you can end up with small bits of space that are not really usable, and they can add up to a significant amount of unusable space until you defrag. So keeping the edge the 360 has in extra memory means not ending up with too much fragmentation in memory usage.

The one comment the dev made that was interesting was that he had run into memory usage problems on the 360 because of this but had, at that time at least, not run out of memory on the PS3 for the same types of things.

Again, this was one dev (at least he claimed to be a dev - he posted a decent amount, and the posters and mods seemed to accept that he was one and used him as a reference) and it was a while ago, but it seems the concept would still hold true even now: if true, it would be inherent to the one architecture vs. the other, and it would be up to the skill of the developer to mitigate those effects as best as possible.

I love this thread ... hadn't heard this before, but it makes sense. Very interesting.


SRG01 said:
That's actually a very good point. Thanks for bringing it up.

That does bring up a very interesting question though: with lots of memory management techniques available, why are developers resorting to large contiguous blocks of memory for textures?

I would assume for performance reasons? Wouldn't jumping around in memory for a given texture slow things down when doing operations with it?

Also, it's possible the GPUs are designed with this expectation (just a guess)? If you're doing a shader operation with a texture, I'm guessing the shader just gets its address in memory ... it would be a debacle if it had to keep jumping around to find parts of it.
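
To illustrate the guess (C++; both layouts here are hypothetical, not any specific GPU's): with one contiguous block a texel fetch is pure address arithmetic, while a scattered texture adds an extra lookup to every single fetch.

Code:
#include <cstdint>
#include <cstddef>

// Contiguous texture: one base pointer, so any texel is a single
// address calculation and a single read.
uint32_t fetchTexel(const uint32_t* base, int pitch, int x, int y) {
    return base[static_cast<size_t>(y) * pitch + x];
}

// Scattered texture: the data lives in separate pieces, so every fetch
// first has to look up which piece holds the texel - an extra
// indirection on what may be billions of fetches per second.
uint32_t fetchTexelScattered(const uint32_t* const* pieces, int rowsPerPiece,
                             int pitch, int x, int y) {
    const uint32_t* piece = pieces[y / rowsPerPiece];
    return piece[static_cast<size_t>(y % rowsPerPiece) * pitch + x];
}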
 

vdo

Member
SRG01 said:
That's actually a very good point. Thanks for bringing it up.

That does bring up a very interesting question though: with lots of memory management techniques available, why are developers resorting to large contiguous blocks of memory for textures?

Contiguous - that was the word I was looking for!

Yes, I am not sure if the need was for the entire texture to be completely contiguous, or if he meant that the smallest pieces they wanted to divide the textures into still needed larger contiguous blocks than the other, non-texture data. And again, I don't know how many other developers would agree with this point.
 

Raistlin

Post Count: 9999
qirex99 said:
However, despite all the tech jargon and excuses (from forum fappers and Sony), I find it amazing that for all its purported superiority, so many multiplatform releases aren't significantly better (and are usually WORSE) on PS3.

The same thing happened with the DC versus PS2 as well during the first year.
 

MikeB

Banned
Onix said:
The same thing happened with the DC versus PS2 as well during the first year.


I still remember the heated CD vs "costly and useless (other than for movie enthusiasts)" DVD debates. I never owned a PS2 myself, but my vote went to Sony's choice of DVD. ;-)
 

MikeB

Banned
MikeB said:
This has been addressed earlier within this thread. That's internal bandwidth for the Xenos' daughter chip; similarly, the Cell has an even greater internal bandwidth. They can be used for similar purposes (of course the Cell serves many other purposes in addition), but the approach to doing so is very different. The 360 approach is far easier to get into for developers using legacy Windows-oriented game engines; the PS3 approach is far more powerful.

With regard to shared memory vs 'dedicated memory', for the PS3's CPU (Cell) the 'dedicated memory' approach is crucial, especially with regard to latencies. The XDR has much lower latencies; for the GPU (RSX) latencies are less important (it also has larger-than-usual caches), so the GDDR3 is just fine.

With regard to bandwidth, note that the PS3's GPU can use GDDR3 and XDR RAM simultaneously; such an approach increases graphics memory as well as bandwidth!

On the Xbox 360's GPU you ideally don't want to resort to tiling (tiling is when you only fully render a portion of a frame/screen at a time, constantly feeding parts of the data from main RAM to the eDRAM for adding effects like AA), which drains a lot of performance when you want to achieve solid framerates - but sadly it can't be avoided at high resolutions, even 720p, due to the eDRAM's 10 MB size. The eDRAM is used for adding effects like AA; without tiling this can be done with very little performance penalty, but if you have to tile it becomes a performance drain. Hence even third-generation exclusive 360 games like Halo 3 or Forza 2 have no AA, yet cheap (performance-wise) AA is one of the eDRAM's most touted advantages. For Halo 3 they added HDR instead, but also reduced the rendering resolution to 640p.

Because they are so different, games fully designed for the 360's architecture can easily result in weak PS3 ports. Multi-platform games led on the PS3 (taking into account the 360's abilities) can be expected to result in good PS3 as well as 360 versions. But PS3 exclusives can be pushed far beyond anything the 360 will be able to achieve.

I read in a past Beyond3D article that 2xAA can be done at 1024x600 and fit within the 10 MB eDRAM. I think that's the reason why COD4 is 600p (60 FPS!); at 720p + AA you already have to resort to tiling, considerably impacting framerates. I tried to lure the dev into making a statement earlier within the thread; sadly he didn't take the bait. ;-)
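
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch (C++; it assumes the common RGBA8 colour + D24S8 depth/stencil formats, i.e. 8 bytes per sample - an assumption for illustration, not a published spec):

Code:
#include <cstdio>

int main() {
    // 10 MB of eDRAM on the Xenos daughter die.
    const double kEdram = 10.0 * 1024 * 1024;

    // Assumed: 4-byte colour + 4-byte depth/stencil = 8 bytes per sample.
    struct Mode { const char* name; int w, h, samples; };
    const Mode modes[] = {
        {"1024x600, 2xMSAA", 1024, 600, 2},
        {"1280x720, no AA ", 1280, 720, 1},
        {"1280x720, 2xMSAA", 1280, 720, 2},
    };

    for (const Mode& m : modes) {
        double bytes = 8.0 * m.w * m.h * m.samples;
        printf("%s: %5.2f MB -> %s\n", m.name, bytes / (1024 * 1024),
               bytes <= kEdram ? "fits, no tiling" : "too big, must tile");
    }
    return 0;
}

Under those assumptions, 1024x600 with 2xAA comes to about 9.4 MB (fits), 720p without AA to about 7.0 MB (fits), and 720p with 2xAA to about 14.1 MB (has to tile).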
 

Raistlin

Post Count: 9999
MikeB said:
I read in a past Beyond3D article that 2xAA can be done at 1024x600 and fit within the 10 MB eDRAM. I think that's the reason why COD4 is 600p (60 FPS!); at 720p + AA you already have to resort to tiling, considerably impacting framerates. I tried to lure the dev into making a statement earlier within the thread; sadly he didn't take the bait. ;-)

I haven't been paying attention to COD4 ... didn't realize the 360 version is at 600p. Is that the same for the PS3 version?
 

MikeB

Banned
Onix said:
I haven't been paying attention to COD4 ... didn't realize the 360 version is at 600p. Is that the same for the PS3 version?

Yes, the game has been designed to run identically on both platforms. But 600p is really a 360-oriented resolution, allowing devs to get more out of the eDRAM's advantages. Oblivion and various other games on the 360 are also 600p; the PS3 version of Oblivion is improved.
 

spwolf

Member
SRG01 said:
That's actually a very good point. Thanks for bringing it up.

That does bring up a very interesting question though: with lots of memory management techniques available, why are developers resorting to large contiguous blocks of memory for textures?


As you said in the post after it, I would say it is because it's easier. I wonder if this will change as Sony shares more and more of its tools with 3rd party developers...
 

test_account

XP-39C²
commariodore64 said:
oh what the hell....I'll bite.

My question wasn't a troll question or a question to start a PS3 vs 360 debate, if that is what you meant by "I'll bite" (as in taking the troll bait). It was a question I asked since I really wanted to know more about how the hardware stuff works :)


SRG01 said:
Let me try explaining it again:


Code:
360:

          CPU                    GPU
            |                      |
            \/                     \/
|---------------------------------------------|


PS3:

          CPU                    GPU
            |                      |
            \/                     \/
|---------------------| |----------------------|

Sorry for the horrible ASCII art, but I hope that makes the point clear.

Thanks for the explanation, but I kinda understood that that's how it worked. I think I misunderstood what you said. When you said unified memory banks, did you mean the PS3 or the 360? I thought you meant the 360 (sorry, English isn't my first language :\).
 

Raistlin

Post Count: 9999
spwolf said:
As you said in the post after it, I would say it is because it's easier. I wonder if this will change as Sony shares more and more of its tools with 3rd party developers...

I have a suspicion it is needed due to how shaders work.

Even if this is not the case, the fragmentation issue isn't really a problem on the PS3 since it has dedicated video memory. It's a problem on the 360.


test_account said:
Thanks for the explanation, but I kinda understood that that's how it worked. I think I misunderstood what you said. When you said unified memory banks, did you mean the PS3 or the 360? I thought you meant the 360 (sorry, English isn't my first language :\).

He did mean the 360. It was one 'chunk' of memory that is shared between the CPU and GPU ... and only one can get access to the bus at a time.
 

qirex99

Banned
Onix said:
I haven't been paying attention to COD4 ... didn't realize the 360 version is at 600p. Is that the same for the PS3 version?

yeah, they're both the same.

pretty funny too - nobody said anything, yet when Halo 3 was revealed to be "only" 660p the whole internets ground to a halt... :lol
 

Hesemonni

Banned
qirex99 said:
yeah, they're both the same.

pretty funny too - nobody said anything, yet when Halo 3 was revealed to be "only" 660p the whole internets ground to a halt... :lol
People talked about it. But I guess the biggest difference is that one is a multiplatform title and the other is, I don't know, one of the most anticipated games of recent years and a flagship title for its platform?

And it's 640p.
 

Raistlin

Post Count: 9999
qirex99 said:
yeah, they're both the same.

pretty funny too - nobody said anything, yet when Halo 3 was revealed to be "only" 660p the whole internets ground to a halt... :lol

Don't know if people talked about it a lot or not (I haven't been in the CoD4 threads) ... but the situation is obviously different. One uses AA and achieves satisfactory IQ ... while the other, not so much :p

Beyond that, Halo 3 was obviously the more hyped title regardless of CoD4's press exposure.


edit: also, see the post above me :p
 

test_account

XP-39C²
Onix said:
He did mean the 360. It was one 'chunk' of memory that is shared between the CPU and GPU ... and only one can get access to the bus at a time.

Ah ok, so the Xbox 360 has to swap between the GPU and CPU accessing the RAM all the time, while the PS3 can access both GPU and CPU RAM at the same time due to 2 separate RAMs?
 
BruceWayneIII

test_account said:
Ah ok, so it has to swap between the GPU and CPU accessing the RAM all the time, while on the PS3 it can access both GPU and CPU RAM at the same time due to 2 separate RAMs?

Yes, that's the difference. I think I remember Insomniac saying that - eventually - the split memory will be an advantage. Something along the lines of "bandwidth will play a big role in next-gen games"...
 

test_account

XP-39C²
BruceWayneIII said:
Yes, that's the difference. I think I remember Insomniac saying that - eventually - the split memory will be an advantage. Something along the lines of "bandwidth will play a big role in next-gen games"...

Ah ok, then I understand. Thanks for the help and info, everyone :)
 

cedric69

Member
Leidenfrost said:
However, I'm not really sure why you seem so angry about the 'waste' of all this. Kojima isn't cutting stuff out to include 7.1 PCM. The whole point of Blu-ray is that developers can spend an enormous bit budget without having to compromise in other areas.
I am not angry at all. I'm just saying that *if* a claim is made that a game requires 50GB of space, and *if* later that game uses that space mostly for uncompressed PCM, then and only then I would call the first statement (that the game requires 50GB of space) an ignorant one.

MikeB, what you say regarding people buying better equipment is mostly irrelevant. People's brains work in a certain way, regardless of the equipment they use to listen to music. Psychoacoustics is not psychobabble; it's a science. The grand majority of people claiming that uncompressed PCM is the only way to enjoy a game are kidding themselves and just falling for marketing hype.

I'd also like to point out that I came here to read about SPE usage, and I found people discussing PS3 superiority in texture handling and storage capacity. Maybe this thread ought to be renamed "PS3 above all" and be done with the SPE facade, n'est-ce pas?
 

cedric69

Member
Durante said:
I think you can't equate these two issues. The difference is that even self-proclaimed audiophiles will fail to distinguish, say, a 256 kbit/s audio file compressed with a modern codec from the uncompressed source in a double-blind test. Self-proclaimed graphics whores (like me), on the other hand, will always easily distinguish HD from SD in such a setup. (Well, if you don't show a single-color plane ;))
Thank you for bringing some sound reasoning to the table.
 

MikeB

Banned
cedric69 said:
MikeB, what you say regarding people buying better equipment is mostly irrelevant. People's brains work in a certain way, regardless of the equipment they use to listen to music. Psychoacoustics is not psychobabble; it's a science.

I am unconvinced until I read some in-depth study showing that a majority of people can't distinguish the difference.

The grand majority of people claiming that uncompressed PCM is the only way to enjoy a game are kidding themselves and just falling for marketing hype.

I notice the difference in combination with a good audio setup; that's not to claim anything less isn't enjoyable. I use cheap earphones for my Nintendo DS, but I have different expectations for DS games to begin with.

I'd also like to point out that I came here to read about SPE usage and I found people discussing about PS3 superiority in texture handling and storage capacity.

That's normal; this is a multi-platform forum and people make comparisons all of the time. I noticed this in endless discussions at other forums as well - actually, I got to learn a lot more about the 360 than would otherwise have been the case. ;-)
 

cedric69

Member
MikeB said:
I am unconvinced until I read some in-depth study showing that a majority of people can't distinguish the difference.
I suggest you read Hydrogenaudio.org.

It's illuminating, really.
 
CowboyAstronaut

MikeB said:
I read in a past Beyond3D article that 2xAA can be done at 1024x600 and fit within the 10 MB eDRAM. I think that's the reason why COD4 is 600p (60 FPS!); at 720p + AA you already have to resort to tiling, considerably impacting framerates. I tried to lure the dev into making a statement earlier within the thread; sadly he didn't take the bait. ;-)

Fable 2 is tiling with 4xMSAA, last I looked.

A Fable 2 programmer says the tiling issues have been resolved on the Xbox 360, and there are many successful implementations available; devs only need to use them.
 

test_account

XP-39C²
cedric69 said:
I'd also like to point out that I came here to read about SPE usage, and I found people discussing PS3 superiority in texture handling and storage capacity. Maybe this thread ought to be renamed "PS3 above all" and be done with the SPE facade, n'est-ce pas?

I was the one who started the RAM discussion. I didn't mean to go off-topic, sorry for that. I actually was supposed to ask a follow-up question about SPE usage and RAM, but I forgot :\ The question is: can the SPEs, to some extent, be a replacement for RAM?
 

MikeB

Banned
CowboyAstronaut said:
Fable 2 is tiling with 4xMSAA, last I looked.

A Fable 2 programmer says the tiling issues have been resolved on the Xbox 360, and there are many successful implementations available; devs only need to use them.

The tiling issues will remain (impacting framerates). A direct comparison with COD4 (60 FPS) may not be valid, as they are two different games. A lot can still be achieved on the 360 in 720p at 30 FPS; a lot will depend on the complexity and size of the environment (and many other factors, including movement speed and framerate target). IMO, for example, a game like Gran Turismo 5 will be well out of reach for the 360, but on the PS3 there will still be a lot of headroom for Gran Turismo 6.
 
vdo said:
There was one disadvantage (or maybe a better word would be "consideration") to the unified memory architecture that I remember hearing about a while back (keep in mind this was a comment from only one dev), and it seemed to make sense on the surface.

It had to do with textures having larger memory requirements and a preference for being stored in continuous blocks, while the other, non-texture items the game engine stores in memory have smaller size needs (and don't benefit as much from being stored in continuous blocks).

With both of these types of requirements sharing the same memory, there is a need to manage how and where you store them in relation to each other, which can be more complicated than with separate memory banks. If you have a large texture to store, you need to have made sure that your other data has left a large enough block of memory free for it, and as you swap both types of data in and out, you have to avoid ending up with many small bits of data separated by small gaps, which would force you to rearrange them to fit the larger blocks. Management of this type is required for non-unified memory as well, but it sounded like it was more significant for unified memory, with the two very different types together.

The analogy that seemed to fit was hard drive fragmentation, where over time, as you store and remove data, if you don't watch it you can end up with small bits of space that are not really usable, and they can add up to a significant amount of unusable space until you defrag. So keeping the edge the 360 has in extra memory means not ending up with too much fragmentation in memory usage.

The one comment the dev made that was interesting was that he had run into memory usage problems on the 360 because of this but had, at that time at least, not run out of memory on the PS3 for the same types of things.

Again, this was one dev (at least he claimed to be a dev - he posted a decent amount, and the posters and mods seemed to accept that he was one and used him as a reference) and it was a while ago, but it seems the concept would still hold true even now: if true, it would be inherent to the one architecture vs. the other, and it would be up to the skill of the developer to mitigate those effects as best as possible.

Maybe I'm misreading this, but I think I can see what you were being told - that memory fragmentation is a huge issue, which it always has been - but I'm thinking they shouldn't be using memory in that fashion to begin with.

Let me see if I can break it down. The example you're describing is that whenever you allocate memory, you just grab a chunk of memory that's available. On the 360 you'd just call allocateMem() and grab the next available amount of memory. On the PS3, you'd call something like allocateSystemMem() or allocateGPUMem(), depending on what it was. And because you're pulling from one pool of memory or the other, you're limited in how badly you can fragment it, right?

If that's the case, then they shouldn't be doing it like that to begin with. What they should be doing is splitting memory into several heaps and pulling from different heaps. This way you segment the memory into smaller pools, which in turn reduces fragmentation. If done right, there shouldn't be much of a difference in fragmentation when working between the two systems.
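
Here's a minimal sketch of what I mean (C++; allocateMem() above and the pool sizes/names here are made up for illustration, not any real SDK's API):

Code:
#include <cassert>
#include <cstddef>
#include <cstdlib>

// One simple bump-allocated pool; a real engine heap would add free
// lists, alignment, debugging, and so on.
class Pool {
public:
    explicit Pool(size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), used_(0) {}
    ~Pool() { std::free(base_); }

    void* alloc(size_t bytes) {
        if (used_ + bytes > size_) return nullptr;  // this pool is full
        void* p = base_ + used_;
        used_ += bytes;
        return p;
    }
    void reset() { used_ = 0; }  // e.g. wipe per level load

private:
    char*  base_;
    size_t size_;
    size_t used_;
};

int main() {
    Pool texturePool(256u * 1024 * 1024);  // big, long-lived blocks only
    Pool gameplayPool(64u * 1024 * 1024);  // small, churny allocations only

    void* bigTexture = texturePool.alloc(8u * 1024 * 1024);
    void* aiScratch  = gameplayPool.alloc(4096);
    assert(bigTexture && aiScratch);

    // Small allocations never punch holes between textures, so a large
    // contiguous block stays available no matter how gameplay data churns.
    gameplayPool.reset();
    return 0;
}

The point is just segregation by size and lifetime; the exact pool scheme matters less than keeping the big, contiguous consumers away from the churn.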
 

Raistlin

Post Count: 9999
Durante said:
I think you can't equate these two issues. The difference is that even self-proclaimed audiophiles will fail to distinguish, say, a 256 kbit/s audio file compressed with a modern codec from the uncompressed source in a double-blind test. Self-proclaimed graphics whores (like me), on the other hand, will always easily distinguish HD from SD in such a setup. (Well, if you don't show a single-color plane ;))

I would disagree, depending on the source material.
 

Raistlin

Post Count: 9999
CowboyAstronaut said:
Fable 2 is tiling with 4xMSAA, last I looked.

A Fable 2 programmer says the tiling issues have been resolved on the Xbox 360, and there are many successful implementations available; devs only need to use them.


Whether or not they've improved the libraries for handling tiling, it still takes longer to tile than not to.

So depending on the resource requirements of a given game, it may or may not be able to use tiling (or more than one tiling pass).
 

Truespeed

Member
vdo said:
There was one disadvantage (or maybe a better word would be "consideration") to the unified memory architecture that I remember hearing about a while back (keep in mind this was a comment from only one dev), and it seemed to make sense on the surface.

It had to do with textures having larger memory requirements and a preference for being stored in continuous blocks, while the other, non-texture items the game engine stores in memory have smaller size needs (and don't benefit as much from being stored in continuous blocks).

With both of these types of requirements sharing the same memory, there is a need to manage how and where you store them in relation to each other, which can be more complicated than with separate memory banks. If you have a large texture to store, you need to have made sure that your other data has left a large enough block of memory free for it, and as you swap both types of data in and out, you have to avoid ending up with many small bits of data separated by small gaps, which would force you to rearrange them to fit the larger blocks. Management of this type is required for non-unified memory as well, but it sounded like it was more significant for unified memory, with the two very different types together.

The analogy that seemed to fit was hard drive fragmentation, where over time, as you store and remove data, if you don't watch it you can end up with small bits of space that are not really usable, and they can add up to a significant amount of unusable space until you defrag. So keeping the edge the 360 has in extra memory means not ending up with too much fragmentation in memory usage.

The one comment the dev made that was interesting was that he had run into memory usage problems on the 360 because of this but had, at that time at least, not run out of memory on the PS3 for the same types of things.

Again, this was one dev (at least he claimed to be a dev - he posted a decent amount, and the posters and mods seemed to accept that he was one and used him as a reference) and it was a while ago, but it seems the concept would still hold true even now: if true, it would be inherent to the one architecture vs. the other, and it would be up to the skill of the developer to mitigate those effects as best as possible.

The problem of RAM fragmentation is an issue on any system, regardless of whether the RAM is contiguous or not. There are entire books devoted to memory management algorithms alone. But on a console I don't think it's as serious as you seem to make it out, since the game basically takes over the system and has free rein over whatever it wants to do, with little intervention from the OS. The reason I believe so is that the memory usage map for a well-developed game is highly optimized, and I would be surprised to learn about chunks of memory not being garbage collected and placed back into the heap.

Also, it's important to remember that data does not need to be contiguous, and often it's not.
 

Truespeed

Member
SRG01 said:
As was mentioned in some of the Cell development course slides I posted about a week or so ago, it's partially because traditional programming -- Java, the DirectX API, or even C++ -- is generally removed from directly interacting with the hardware. You can't get away from hardware considerations in this kind of environment, and I think that's where some traditional programmers have difficulty compared to low-level (i.e. embedded) programmers.

Well, yes and no. Compilers from RapidMind that are optimized for the Cell and the parallel programming paradigm really do all of the low-level work for you, so you don't have to. Check out their work. It's really quite impressive, and I'm surprised more developers aren't using these tools.
 

chris0701

Member
External fragmentation is inevitable, whether in a unified memory system or a dedicated memory system.

Any operating systems textbook in the CS field would tell you as much. Do not spread FUD.
 

vdo

Member
Truespeed said:
The problem of RAM fragmentation is an issue on any system, regardless of whether the RAM is contiguous or not. There are entire books devoted to memory management algorithms alone. But on a console I don't think it's as serious as you seem to make it out, since the game basically takes over the system and has free rein over whatever it wants to do, with little intervention from the OS. The reason I believe so is that the memory usage map for a well-developed game is highly optimized, and I would be surprised to learn about chunks of memory not being garbage collected and placed back into the heap.

Also, it's important to remember that data does not need to be contiguous, and often it's not.

Yes, I don't think it was necessarily that the data absolutely had to be contiguous - I probably emphasized that a little too much. Also, it was the game programmer taking over and having free rein over the system that I was talking about, and that is why I mentioned that the game programmer had to do this management.

And I mentioned that the fragmentation was an issue with both:

Management of this type is required for non-unified memory as well, but it sounded like it was more significant for unified memory, with the two very different types together.

The main takeaway I got from it was that it was the two types of uses of the memory - one for smaller blocks and more frequent access, the other for larger blocks and less frequent access - that made the management of a unified memory architecture (UMA) more complicated. Again, not that there is no management of this kind at all with a non-unified memory architecture (NUMA), but that it reduces the need.

And like I mentioned a couple of times, this is only from one dev. I am sure other devs would have different opinions and could show some sophisticated algorithms to deal with it.

I know this is supposed to be a thread more about SPE usage, but I only mentioned it because there was a comment about advantages/disadvantages to both, and most of the time I see UMA automatically assumed to be the more advantageous, so I thought I would put out that one consideration with UMA that I had come across. It may still be that UMA has other advantages that outweigh this, but with some of the comments above about bandwidth in NUMA, the compromises both have to make could come out about even.
 

chris0701

Member
A modern operating system takes care of memory allocation for a process.

A program (more precisely, a process) always appears to be "contiguous". But in fact it is only contiguous in its logical address space; paging technology maps logical addresses to physical ones.

This is a very common and basic issue that every modern computer system encounters, so 1) it could not be an issue only for texture usage on the 360's unified memory system, 2) the PS3 has the same problem too, and 3) the problem is trivial.
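
A minimal sketch of the paging idea (C++; the 4 KB page size and the table contents are just for illustration):

Code:
#include <cstdio>

int main() {
    // Virtual pages are contiguous; the physical frames backing them
    // need not be. The page table maps one to the other.
    const unsigned kPage = 4096;
    const unsigned pageTable[4] = {7, 2, 9, 4};  // virtual page -> physical frame

    unsigned vaddr = 2 * kPage + 123;  // byte 123 of virtual page 2
    unsigned paddr = pageTable[vaddr / kPage] * kPage + vaddr % kPage;

    // The program sees one contiguous range; physically it is scattered.
    printf("virtual 0x%x -> physical 0x%x\n", vaddr, paddr);
    return 0;
}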
 
commariodore64 said:
Correct me if I'm wrong (and I've made my share of mistakes) but The PS3 has 22.4 GB/s of GDDR3 bandwidth and 25.6 GB/s of RDRAM bandwidth for a total system bandwidth of 48 GB/s where Xbox 360 has 22.4 GB/s of GDDR3 bandwidth and a 256 GB/s of EDRAM bandwidth for a total of 278.4 GB/s total system bandwidth.


omg, you didn't just add up the bandwidths of two different types of memory into one total?! :lol

very clever.
 

Raistlin

Post Count: 9999
chris0701 said:
A modern operating system takes care of memory allocation for a process.

A program (more precisely, a process) always appears to be "contiguous". But in fact it is only contiguous in its logical address space; paging technology maps logical addresses to physical ones.

This is a very common and basic issue that every modern computer system encounters, so 1) it could not be an issue only for texture usage on the 360's unified memory system, 2) the PS3 has the same problem too, and 3) the problem is trivial.

are you sure shaders support paging?
 

chris0701

Member
Onix said:
are you sure shaders support paging?

Shaders have to be loaded into memory for use. It's just like when a program/process needs more memory: the operating system finds a memory section for it.

Fragmentation is an OS-level issue, though, not an important issue at the application programming level.
 

diddlyD

Banned
chris0701 said:
A modern operating system takes care of memory allocation for a process.

A program (more precisely, a process) always appears to be "contiguous". But in fact it is only contiguous in its logical address space; paging technology maps logical addresses to physical ones.

This is a very common and basic issue that every modern computer system encounters, so 1) it could not be an issue only for texture usage on the 360's unified memory system, 2) the PS3 has the same problem too, and 3) the problem is trivial.

yeah, but no. Windows on your computer does tricks to hide memory not being physically contiguous and to help avoid fragmentation problems, and that's great for web browsers and word processors, but no console does this because it's slow and sloppy. Fragmentation is still a major issue in console game development, and even for PC games that are concerned with getting the best performance.
 

vdo

Member
chris0701 said:
A modern operating system takes care of memory allocation for a process.

A program (more precisely, a process) always appears to be "contiguous". But in fact it is only contiguous in its logical address space; paging technology maps logical addresses to physical ones.

This is a very common and basic issue that every modern computer system encounters, so 1) it could not be an issue only for texture usage on the 360's unified memory system, 2) the PS3 has the same problem too, and 3) the problem is trivial.

OK, I searched and found the original post I was talking about from back then. Now, instead of me trying to remember what the post said in my own words, you can see it in its original form. He mentions the small blocks and contiguous blocks like I thought, but from his second-to-last sentence it looks like he says it's not the actual memory fragmentation that gets worse (which is what I had been recalling), but that it causes other problems more easily - which kind of confuses me, as the previous sentence seems to be about fragmentation, and that's probably what made it stick in my mind. Actually, his mention of the two devices vying for the same pool also fits what some others said about the differences above in this thread. Hopefully you can get what he is trying to say by reading it in his words.

You can get a feel from his second sentence for the type of dev he is. (Ignore the 96 MB references - those were about OS memory footprint questions.) Whether or not you agree with any of it is another matter, but you can at least see what was stated:

cpiasminc from E-Empire forums said:
I'm still curious where this figure came from. Granted, I'm in the land of art tools and shaders and offline computations, but this stuff ultimately has to come out and run on the platforms, and I've yet to see this 96 MB RAM figure anywhere. In fact, while I've dealt with datasets that have run out of memory on the 360, this has never once happened on the PS3. Yes, that's partially the UMA vs. NUMA difference (memory fragmentation), but that alone isn't going to make up for a difference of 64 MB.

Crossbar from E-Empire forums said:
Why is memory fragmentation a larger problem on UMA than NUMA in this case? Please elaborate.
Quite simply because you've got two devices vying for the same memory pool on the same bus; neither one really knows what the other needs, and they will often invade each other's address space. This is especially magnified by the fact that one device is a CPU, which tends to have much more variety in the types of accesses it makes (quite often dealing with small chunks of data, often in small data collections), while the other is a GPU, which very frequently operates on small sections of VERY big data blocks.

So oftentimes, it means that even though you work in small blocks, you want to leave big contiguous blocks available, which is no fun. To be accurate, the problem isn't so much that memory fragmentation gets worse with UMA -- it's that it causes other problems more easily. With NUMA at least, the two devices don't fight so much.

http://www.forums.e-mpire.com/showthread.php?p=1280432#post1280432
 
chris0701 said:
Shaders have to be loaded into memory for use. It's just like when a program/process needs more memory: the operating system finds a memory section for it.

Fragmentation is an OS-level issue, though, not an important issue at the application programming level.

Memory fragmentation is an important issue when you program "to the metal".

It's not an important issue for programmers who just rely on an OS and expect there to be lots of RAM.

I should know - I program to the metal, and high performance requirements can be quite tricky.

Writing to and accessing memory within tight CPU cycle budgets is not something a typical office programmer worries about, but for someone doing a tight network stack? Yes, it is very important.

Game programmers also quite often program to the metal, so what you are saying is not quite accurate.
 

tfur

Member
commariodore64 said:
RSX only has a 22.4GB/s link to its local memory bandwidth, which is less than 60% of the memory bandwidth of the GeForce 7800 GTX. In other words, it needs that additional memory bandwidth from the Cell’s memory controller to be able to handle more texture-bound games. If a good portion of the 15GB/s downstream link from the Cell processor is used for bandwidth between the Cell’s SPEs and the RSX, the GPU will be texture bandwidth limited...

I always took from this that the bandwidth issue was a catch-22? Please explain how it is not.

The article you cut and pasted from is an old Anandtech article from 2005 (http://www.anandtech.com/video/showdoc.aspx?i=2453&p=9). It seems to be lacking any information regarding the Rambus FlexIO system.

Most of these articles (2005) were born from people attending IBM press conferences and then summarizing - thus the lack of detail in the article's diagram, and the missing bus portion.
 
chris0701 said:
A modern operating system takes care of memory allocation for a process.

A program (more precisely, a process) always appears to be "contiguous". But in fact it is only contiguous in its logical address space; paging technology maps logical addresses to physical ones.

This is a very common and basic issue that every modern computer system encounters, so 1) it could not be an issue only for texture usage on the 360's unified memory system, 2) the PS3 has the same problem too, and 3) the problem is trivial.

As others have stated, Windows and other OS-based systems are not the same as working on a console. Memory fragmentation is a very important issue that constantly crops up in game development. Most development teams write their own memory management into their game or use a pre-existing one that comes with an engine. When working on both the PS3 and the 360, you have to be concerned about memory fragmentation all the time.
 

Argyle

Member
chris0701 said:
A modern operating system takes care of memory allocation for a process.

A program (more precisely, a process) always appears to be "contiguous". But in fact it is only contiguous in its logical address space; paging technology maps logical addresses to physical ones.

This is a very common and basic issue that every modern computer system encounters, so 1) it could not be an issue only for texture usage on the 360's unified memory system, 2) the PS3 has the same problem too, and 3) the problem is trivial.

Not everything goes through the memory mapping unit - so this problem is often not as trivial as you may think. There are also performance implications when you virtualize memory (TLB misses, for example).

As for textures, as I understand it there are differences between the systems in the way the different mip levels are stored; one is more efficient (less dead space) than the other. The mip levels need to have this space in there because the GPU expects it (for efficiency reasons).
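
A rough sketch of that dead-space point (C++; the 4 KB per-level alignment is an invented figure for illustration, not either console's actual rule):

Code:
#include <cstdio>
#include <cstddef>

int main() {
    // Sum the mip chain of a 1024x1024 RGBA8 texture twice: tightly
    // packed, and with each level rounded up to a 4 KB boundary.
    const size_t kAlign = 4096;
    size_t tight = 0, padded = 0;

    for (int dim = 1024; dim >= 1; dim /= 2) {
        size_t level = static_cast<size_t>(dim) * dim * 4;  // RGBA8
        tight  += level;
        padded += (level + kAlign - 1) / kAlign * kAlign;   // round up
    }

    // The tiny tail mips are where the dead space piles up: each 4-byte
    // 1x1 level still consumes a whole aligned slot.
    printf("tight: %zu bytes, padded: %zu bytes\n", tight, padded);
    return 0;
}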
 

MikeB

Banned
For some reason the original post seems to be lost? This morning it was still there... Anyone still got it in cache? If so please post the content here...
 
MikeB said:
I read in a past Beyond3D article that 2xAA can be done at 1024x600 and fit within the 10 MB eDRAM. I think that's the reason why COD4 is 600p (60 FPS!); at 720p + AA you already have to resort to tiling, considerably impacting framerates. I tried to lure the dev into making a statement earlier within the thread; sadly he didn't take the bait. ;-)

Beyond3D is a great forum - and the above info is from that forum - but I rarely see anyone touting the RSX over Xenos.
Perhaps you can explain why there are some games on the 360 (NBA Street) that run at 1080p w/ 4xMSAA while their counterparts on the PS3 run with no MSAA? Is it all the devs' fault, or a limitation of the RSX?
 