RAM thread of Next Generation

No, we know from Nintendo themselves that 1GB is OS only, 1GB is for games. Games cannot use the 1GB dedicated to the OS; Nintendo themselves said this, so why would they be wrong and this site with questionable sources be right? What your site may be referring to is that the PS3 and 360 don't have quite the full 512MB (or 256/256) available because some is used by the OS, but those run far slimmer operating systems at 45-60MB.

Honestly, what the site is saying is laughable. When you launch a game, can you shut down Windows and keep playing it? Games need the OS; that's how computing devices work. Even if the IO chip handles the OS, the OS still has to be stored in RAM.

Yeah, I said I didn't know if the site was reliable. However, you can run a "lite" version of the OS which uses less RAM while playing games.
 
Yeah, I said I didn't know if the site was reliable. However, you can run a "lite" version of the OS which uses less RAM while playing games.

It's still Nintendo's word against whoever says it does not dedicate 1GB to the OS, leaving 1GB for games. Why would Nintendo publicly lie about that? Why tell a lie which hurts them, when it would be better if all 2GB were available?
 
If MS uses the blitter, eSRAM and DSP, do they really need all that much bandwidth for everything else? From what I know, most of the bandwidth is needed for the frame buffer. It seems like Sony needs GDDR5 because they want to target 1080p 60fps 3D. 128MB of eSRAM should be able to do that fine.
 
Performance will have to be different. Ironically, right now, at face value, I think the PS4 sounds faster somehow.

But MS went through the trouble of designing this complex piece of hardware, so they must have something up their sleeve. How will the GPUs compare? The CPUs? Overall RAM performance?

So many questions.
Can't the same be said about the PS3? They went through all the trouble with Cell, but did it matter that much in the end?
 
It's still Nintendo's word against whoever says it does not dedicate 1GB to the OS, leaving 1GB for games. Why would Nintendo publicly lie about that? Why tell a lie which hurts them, when it would be better if all 2GB were available?

Well, it was probably old information, reported before Nintendo officially announced the OS details.
 
If MS uses the blitter, eSRAM and DSP, do they really need all that much bandwidth for everything else? From what I know, most of the bandwidth is needed for the frame buffer. It seems like Sony needs GDDR5 because they want to target 1080p 60fps 3D. 128MB of eSRAM should be able to do that fine.

I guess you want a $900 system? Most talk of 1T-SRAM is 32-50MB; 64MB is probably too expensive or too many transistors.
 
That is complete bollocks. :p
NO, it's a fact.
Is this a serious post? lol

yes

Regardless of whether the PS4 has faster RAM than the 720, and the advantages that brings, having extra RAM to use, slow or not, in the 720 can only be a good thing for developers, right? I mean, it's nice that the PS4's will be faster, but how is more slow RAM a bad thing?

And 4GB seems low now. We still have to worry about the OS. I think MS is smart going with 8GB of slow RAM.
"Usable memory amount is very much tied to available memory bandwidth. More bandwidth allows the games to access more memory. So it's kind of counter-intuitive to swap faster smaller memory to a slower larger one. More available memory means that I want to access more memory, but in reality the slower bandwidth allows me to access less. So the percentage of accessible memory drops radically."

You can only access 1.08GB per frame at 60fps with the slow RAM, compared to 3.2GB with the fast RAM.

That quote was posted by a dev over at Beyond3D. Great read BTW... http://forum.beyond3d.com/showthread.php?t=62108
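For anyone checking those numbers: they fall out of simple division of bus bandwidth by frame rate. A quick sketch; the 65GB/s and 192GB/s figures are the rumoured specs quoted in this thread, not confirmed hardware:

```python
# Per-frame memory reach: bus bandwidth divided by frame rate.
# The bus figures below are the rumoured specs quoted in this thread.

def gb_per_frame(bandwidth_gb_s: float, fps: float) -> float:
    """GB that can cross the bus during a single frame."""
    return bandwidth_gb_s / fps

ddr3_bus = 65.0    # rumoured Xbox 720 DDR3 bandwidth (GB/s)
gddr5_bus = 192.0  # rumoured PS4 GDDR5 bandwidth (GB/s)

print(round(gb_per_frame(ddr3_bus, 60), 2))   # 1.08 GB per frame
print(round(gb_per_frame(gddr5_bus, 60), 2))  # 3.2 GB per frame
```

Note this is a ceiling on bytes moved per frame, not a statement about how much resident memory a game can usefully hold.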
 
NO, it's a fact.


yes


"Usable memory amount is very much tied to available memory bandwidth. More bandwidth allows the games to access more memory. So it's kind of counter-intuitive to swap faster smaller memory to a slower larger one. More available memory means that I want to access more memory, but in reality the slower bandwidth allows me to access less. So the percentage of accessible memory drops radically."

You can only access 1.08GB per frame at 60fps with the slow RAM, compared to 3.2GB with the fast RAM.

That quote was posted by a dev over at Beyond3D. Great read BTW... http://forum.beyond3d.com/showthread.php?t=62108

Read my answer on your last post.

Your interpretation is wrong.
 
Yes, because most of that 8GB is not usable for gaming on a 65GB/s bus. Only 1.08GB is readable per frame at 60fps.

Compare that to the 3.2GB usable per frame of the 192GB/s bus.

So it's like the PS4 has 3x the RAM of the X720.

However, you have to consider that the frame buffer operations would be done in the eDRAM, like on the Xbox 360, thus compensating for the lower bandwidth. The PS4 most likely will not have eDRAM.

From what I've read, if you take 3D out of the picture, 32-50MB should be fine for 1080p 60fps.

The Wii U does have 32MB of eDRAM, meaning we can expect the Xbox 720 to have at least 32MB, if not more.
 
It's about getting stuff off the memory to your APU. Roughly: you fill the 8 gigs with data and can stream 1 gig of it every frame (in theory). Truth is, you won't need that amount of (new) data every frame, but it's good to have some headroom here. You can also stream stuff in parallel if unified RAM is used. Stupid example: 500MB to your CPU, 500MB to the GPU.

Read my answer on your last post.

Your interpretation is wrong.

What "interpretation"? I posted facts.

However, you have to consider that the frame buffer operations would be done in the eDRAM, like on the Xbox 360, thus compensating for the lower bandwidth. The PS4 most likely will not have eDRAM.
Yes, but this does not change the amount of RAM the Xbox 720 can use. Are we sure it's going to have eDRAM? It makes sense with this slow RAM, and MS is really good at building a balanced system. It seems silly to go with 8GB of slow RAM.
 
Can't the same be said about the PS3? They went through all the trouble with Cell, but did it matter that much in the end?

The PS3 is a really powerful system. But they pulled the plug on using 2 Cells too late, and the RSX ended up being a poor companion to Cell.

Also, it had poor documentation and poor tools; devs didn't know what to do with it. Still, the results in some titles are undeniable.
 
NO, it's a fact.


yes


"Usable memory amount is very much tied to available memory bandwidth. More bandwidth allows the games to access more memory. So it's kind of counter-intuitive to swap faster smaller memory to a slower larger one. More available memory means that I want to access more memory, but in reality the slower bandwidth allows me to access less. So the percentage of accessible memory drops radically."

You can only access 1.08GB per frame at 60fps with the slow RAM, compared to 3.2GB with the fast RAM.

That quote was posted by a dev over at Beyond3D. Great read BTW... http://forum.beyond3d.com/showthread.php?t=62108

Thank you for that info and the link. That explained it perfectly. Now I understand.
 
Best quote from that link:

My conclusion: Usable memory amount is very much tied to available memory bandwidth. More bandwidth allows the games to access more memory. So it's kind of counterintuitive to swap faster smaller memory to a slower larger one. More available memory means that I want to access more memory, but in reality the slower bandwidth allows me to access less. So the percentage of accessible memory drops radically.
 
Best quote from that link:
Interesting, but yeah, I think most people would say they'd rather have the 4GB that Sony might use over the 8GB that MS might use. But this is all without knowing whether or not it's true, and how much is used for the operating systems.
 
What "interpretation"? I posted facts.


Yes, but this does not change the amount of RAM the Xbox 720 can use. Are we sure it's going to have eDRAM? It makes sense with this slow RAM, and MS is really good at building a balanced system. It seems silly to go with 8GB of slow RAM.

I also think it would be better to go with 4GB of GDDR5, but maybe MS thought it would be too expensive a solution?

An interesting quote about eDRAM from B3D

Relatively large manual high speed "caches" such as the Xbox 360 EDRAM are very good for reducing redundant bandwidth usage (especially for GPU rendering). EDRAM removes all the memory bandwidth waste you get from blending, overdraw, MSAA and z-buffering. Basically you get all these for free. The bandwidth free overdraw of course also helps with shadowmaps as well, but since Xbox 360 cannot sample from EDRAM, you have to eventually copy the shadowmap to main memory (consumes memory bandwidth) and sample it from there (consumes memory bandwidth just like any static texture). Same is true for g-buffer rendering and sampling (must be copied eventually to main memory and sampled from there consuming memory bandwidth).

However no matter how excellent EDRAM is, it cannot increase the maximum total accessible unique memory per frame. It can "only" (drastically) reduce the waste for double (or even higher) access counts to same memory regions, and thus get us more near to the theoretical maximum (= 200 MB unique memory per frame, assuming we still use the current highest end desktop APU unified memory systems as our "system of choice").

http://forum.beyond3d.com/showpost.php?p=1653992&postcount=23
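To get a feel for how much main-memory traffic eDRAM absorbs in that scenario, here is a back-of-the-envelope sketch. The overdraw, MSAA and read-modify-write factors are illustrative assumptions, not measured figures from any console:

```python
# Rough framebuffer traffic that would hit main memory without eDRAM.
# All workload factors here are assumptions for illustration only.

def framebuffer_gb_s(width: int, height: int, bytes_per_pixel: int,
                     overdraw: float, msaa: int, rw_factor: int,
                     fps: int) -> float:
    """GB/s of framebuffer traffic for the given workload."""
    bytes_per_frame = (width * height * bytes_per_pixel
                       * overdraw * msaa * rw_factor)
    return bytes_per_frame * fps / 1e9

# 1080p, 4B colour + 4B depth per pixel, 3x average overdraw, 4xMSAA,
# read-modify-write for blending/z-testing, 60fps
traffic = framebuffer_gb_s(1920, 1080, 8, 3, 4, 2, 60)
print(f"{traffic:.1f} GB/s")  # ~23.9 GB/s that eDRAM would soak up
```

Under these assumed factors, roughly a third of a 68GB/s bus would go to framebuffer operations alone, which is the saving the quote attributes to eDRAM; the per-frame cap on unique memory stays unchanged, as the quote says.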
 
Maybe Microsoft expects that there will be a means to reduce the size footprint of information being written back and forth, and that their unified pool will be more flexible than the RAM solution in the 360.
 
I posted my opinion on how to make sure the PS4 is using all of its high-bandwidth RAM for games and not for storing the OS in the PS4 thread:

The APU with its single high-bandwidth GDDR5 bus still has to communicate with the southbridge, which can have LPDDR stacked on it for some low-bandwidth memory at minimal cost.

The bus between the CPU and the southbridge is 5GB/s in the PS3. The iPhone 4S/iPad 2 have 6.4GB/s of bandwidth and can run many "OS" apps just fine, and that bandwidth on iDevices is for the total system, including the GPU.

I'd say 512MB/1GB of LPDDR2 stacked onto the southbridge would accomplish this task at minimal cost, without an extra memory bus, and you'd have the full 4GB of GDDR5 available for games. Sony is already stacking LPDDR chips in the Vita.

Developers would not have any access to this memory at all. It'd be reserved for the PS OS development team and used solely for PS4 OS/security (impossible to do the geohot hack if there are no traces on the board between memory and CPU)...
 
What "interpretation"? I posted facts.

Yes, you posted facts - but you simply misinterpreted them when you said that the 8GB can't be used for gaming, which is simply wrong. Or do you really think there will always be 3.5GB/frame in movement on the PS4?

To put it simply: even 65GB/s is a lot. You could basically move a current-gen game around 8 times per second. Why would you do that? On the other hand, you are limited by the clock speed of the individual elements that use this memory (CPU/GPU/DSP).
You also have to take care of synchronization and pay attention to how many buses your RAM has (what would be pins on RAM modules in PCs). It's much more complex than that simple description from the B3D guy. Even when you are constantly reading and writing to the RAM, you barely reach the bandwidth threshold. More RAM is basically a good idea, as you don't need to reload assets from disc or drive as often. I think MS is going the right way here.

But I guess you want to believe that 150 GB/s is better for gaming.
 
Maybe Microsoft expects that there will be a means to reduce the size footprint of information being written back and forth.

There's already talk out there about a blitting unit so you can reference that more directly if you want... ;) And yes, it would make sense to include something like that if it reduces the overhead of juggling around with the eDRAM.
 
Ok, newcomers that come late, take your time to rest and sense the power of GDDR5.
There are a lot of unused chairs out there.

You'll feel like you are a whole new human being, fresh and freed from RAM desire.
I know you think more is better, but sometimes less is more, and better.
 
What "interpretation"? I posted facts.


Yes, but this does not change the amount of RAM the Xbox 720 can use. Are we sure it's going to have eDRAM? It makes sense with this slow RAM, and MS is really good at building a balanced system. It seems silly to go with 8GB of slow RAM.

I've been looking for that link for ages, thanks for posting it.

Even though that example doesn't include eDRAM, you'd still be limited to xxMB per frame of unique data read from main RAM, even if eDRAM means you don't need to write it back out again for processing or the framebuffer (as an aside, maybe one of the custom chips mentioned for the 720 is a ROPs unit on die, so you don't need to write the framebuffer out to main memory to display it?)

So 2133MHz DDR3 on a 256-bit bus (the best case from the rumours, it seems - 128-bit might be more likely) gives you the 68GB/s bandwidth figure. That's where USC-fan gets the 1GB-per-frame number.

Now, that's the maximum transferred from/to main memory. In most cases you'll use less than that (as per the example), but let's keep it at 1GB as the best case.

If Sony has 192GB/s, they can in theory transfer about 3x that, so 3GB per frame. Assuming some overhead for the OS, they could transfer practically all of their rumoured memory to the GPU per frame.

Complications arise, though:

1) With a streaming engine (or any other engine) you need some cache to avoid reading from the HDD when you spin around or move through the world. That is likely to significantly reduce the amount of memory available in Sony's example. So although it might be able to transfer 3GB per frame, you wouldn't be able to, because you would need to be storing data for the surrounding area. Changes per frame would be up to 300MB (delta).

2) Lack of eDRAM on Sony's side. This means a lot more reads/writes to main memory than on the MS setup with eDRAM, which starts to level the playing field between the two.

3) How much data can the target GPUs crunch on? Even if you can provide it, the GPUs might not be able to cope. Faster is better, but you might hit a wall way before 1GB/frame, which levels the playing field even more between the two approaches.

So, while the headline figures might make Sony look better, MS is perhaps more able to reach its memory limits vs Sony.
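The 68GB/s figure above can be reproduced directly from the bus parameters: effective transfer rate times bus width in bytes. A small sketch, using the rumoured configurations from this thread:

```python
# Peak bus bandwidth = effective transfer rate (MT/s) x bus width (bytes).
# The configurations below are the thread's rumoured specs, not confirmed.

def bus_bandwidth_gb_s(mt_per_s: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a memory bus."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

print(bus_bandwidth_gb_s(2133, 256))  # DDR3-2133, 256-bit  -> ~68.3
print(bus_bandwidth_gb_s(2133, 128))  # 128-bit fallback    -> ~34.1
print(bus_bandwidth_gb_s(6000, 256))  # 6Gbps GDDR5, 256-bit -> 192.0
```

Dividing any of those by the frame rate gives the per-frame ceilings discussed above (68.3/60 is about 1.1GB per frame, 192/60 is 3.2GB).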
 
I've been looking for that link for ages, thanks for posting it.

Even though that example doesn't include eDRAM, you'd still be limited to xxMB per frame of unique data read from main RAM, even if eDRAM means you don't need to write it back out again for processing or the framebuffer (as an aside, maybe one of the custom chips mentioned for the 720 is a ROPs unit on die, so you don't need to write the framebuffer out to main memory to display it?)

So 2133MHz DDR3 on a 256-bit bus (the best case from the rumours, it seems - 128-bit might be more likely) gives you the 68GB/s bandwidth figure. That's where USC-fan gets the 1GB-per-frame number.

Now, that's the maximum transferred from/to main memory. In most cases you'll use less than that (as per the example), but let's keep it at 1GB as the best case.

If Sony has 192GB/s, they can in theory transfer about 3x that, so 3GB per frame. Assuming some overhead for the OS, they could transfer practically all of their rumoured memory to the GPU per frame.

Complications arise, though:

1) With a streaming engine (or any other engine) you need some cache to avoid reading from the HDD when you spin around or move through the world. That is likely to significantly reduce the amount of memory available in Sony's example. So although it might be able to transfer 3GB per frame, you wouldn't be able to, because you would need to be storing data for the surrounding area. Changes per frame would be up to 300MB (delta).

2) Lack of eDRAM on Sony's side. This means a lot more reads/writes to main memory than on the MS setup with eDRAM, which starts to level the playing field between the two.

3) How much data can the target GPUs crunch on? Even if you can provide it, the GPUs might not be able to cope. Faster is better, but you might hit a wall way before 1GB/frame, which levels the playing field even more between the two approaches.

So, while the headline figures might make Sony look better, MS is perhaps more able to reach its memory limits vs Sony.

Assuming that the GPUs are clocked rather low (~1000MHz) while your RAM is clocked higher, you have to look at what amount of data is readable/writable by the GPU within one cycle. I'm too lazy to do the math.
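That math is a one-liner: bandwidth divided by the GPU clock gives the bytes the bus can deliver per GPU cycle. A sketch using the thread's rumoured figures and the ~1GHz clock assumed above:

```python
# Bytes deliverable over the memory bus per GPU clock cycle.
# Bandwidth figures are the thread's rumoured specs; 1GHz is the
# clock assumed in the post above.

def bytes_per_gpu_cycle(bandwidth_gb_s: float, gpu_clock_hz: float) -> float:
    """Upper bound on bytes the bus supplies in one GPU cycle."""
    return bandwidth_gb_s * 1e9 / gpu_clock_hz

print(bytes_per_gpu_cycle(68.0, 1.0e9))   # DDR3 setup:  68.0 bytes/cycle
print(bytes_per_gpu_cycle(192.0, 1.0e9))  # GDDR5 setup: 192.0 bytes/cycle
```

So per cycle the gap is the same 3x ratio as the headline bandwidth figures; whether the GPU can consume that much per cycle is a separate question.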
 
After reading thuway's original post, I think there are a couple of points to consider: a game's resolution, frames per second, geometry, etc., all impact how much memory you need at any given time.

While there are no 8GB DDR3 graphics cards out there, I did find a review comparing two Nvidia GT 440 cards which were identical except for the RAM configuration: 1GB DDR3 vs 512MB GDDR5.

Throughout the tests, the GDDR5 card performed anywhere between 4-13% faster than the DDR3 variant, which suggests that having less but faster RAM is preferable to having more but slower RAM.

This doesn't paint the whole picture, though, as none of the games were tested at 1920x1080; the highest they went was 1680x1050.

So where does this leave us with regards to playing games at full HD resolution? I found another article comparing the Nvidia GTX 680 2GB card against its 4GB sibling. The conclusion is that, even at 2560x1440, there is no measurable difference between the two cards; in other words, none of the games tested benefited from the additional 2GB of GDDR5 memory. 2GB is enough for today's games.

What does that mean for the next-generation Xbox and PlayStation? I'm not sure, as there are many more variables we aren't currently privy to, such as overall architecture, system memory architecture, cache, OS overhead, bus architecture, physics processing, etc., but it does seem that while it is generally preferable to have a smaller amount of faster RAM, 4GB is overkill for all of today's most demanding PC games.

In other words, it seems to me that both the next-generation PlayStation and Xbox will be well equipped to handle the visual demands of today's most demanding games. Having said that, who knows what kind of games we will have five years down the line?
 
I've been looking for that link for ages, thanks for posting it.

Even though that example doesn't include eDRAM, you'd still be limited to xxMB per frame of unique data read from main RAM, even if eDRAM means you don't need to write it back out again for processing or the framebuffer (as an aside, maybe one of the custom chips mentioned for the 720 is a ROPs unit on die, so you don't need to write the framebuffer out to main memory to display it?)

So 2133MHz DDR3 on a 256-bit bus (the best case from the rumours, it seems - 128-bit might be more likely) gives you the 68GB/s bandwidth figure. That's where USC-fan gets the 1GB-per-frame number.

Now, that's the maximum transferred from/to main memory. In most cases you'll use less than that (as per the example), but let's keep it at 1GB as the best case.

If Sony has 192GB/s, they can in theory transfer about 3x that, so 3GB per frame. Assuming some overhead for the OS, they could transfer practically all of their rumoured memory to the GPU per frame.

Complications arise, though:

1) With a streaming engine (or any other engine) you need some cache to avoid reading from the HDD when you spin around or move through the world. That is likely to significantly reduce the amount of memory available in Sony's example. So although it might be able to transfer 3GB per frame, you wouldn't be able to, because you would need to be storing data for the surrounding area. Changes per frame would be up to 300MB (delta).

2) Lack of eDRAM on Sony's side. This means a lot more reads/writes to main memory than on the MS setup with eDRAM, which starts to level the playing field between the two.

3) How much data can the target GPUs crunch on? Even if you can provide it, the GPUs might not be able to cope. Faster is better, but you might hit a wall way before 1GB/frame, which levels the playing field even more between the two approaches.

So, while the headline figures might make Sony look better, MS is perhaps more able to reach its memory limits vs Sony.
Essentially, the 8GB figure from Microsoft is a better option for games?
 
After reading thuway's original post, I think there are a couple of points to consider: a game's resolution, frames per second, geometry, etc., all impact how much memory you need at any given time.

While there are no 8GB DDR3 graphics cards out there, I did find a review comparing two Nvidia GT 440 cards which were identical except for the RAM configuration: 1GB DDR3 vs 512MB GDDR5.

Throughout the tests, the GDDR5 card performed anywhere between 4-13% faster than the DDR3 variant, which suggests that having less but faster RAM is preferable to having more but slower RAM.

This doesn't paint the whole picture, though, as none of the games were tested at 1920x1080; the highest they went was 1680x1050.

So where does this leave us with regards to playing games at full HD resolution? I found another article comparing the Nvidia GTX 680 2GB card against its 4GB sibling. The conclusion is that, even at 2560x1440, there is no measurable difference between the two cards; in other words, none of the games tested benefited from the additional 2GB of GDDR5 memory. 2GB is enough for today's games.

What does that mean for the next-generation Xbox and PlayStation? I'm not sure, as there are many more variables we aren't currently privy to, such as overall architecture, system memory architecture, cache, OS overhead, bus architecture, physics processing, etc., but it does seem that while it is generally preferable to have a smaller amount of faster RAM, 4GB is overkill for all of today's most demanding PC games.

In other words, it seems to me that both the next-generation PlayStation and Xbox will be well equipped to handle the visual demands of today's most demanding games. Having said that, who knows what kind of games we will have five years down the line?

That's 512MB vs. 1024MB, not 4GB vs 8GB - that's a huge influence when it comes to bandwidth.
 
Yes, because most of that 8GB is not usable for gaming on a 65GB/s bus. Only 1.08GB is readable per frame at 60fps.

Compare that to the 3.2GB usable per frame of the 192GB/s bus.

So it's like the PS4 has 3x the RAM of the X720.
Oh dear.
 
It will be interesting to see how much of the RAM is reserved for the OS and apps. If I look at the rumored Xbox 3 specs, everything screams "multitasking": an 8-core CPU, 8GB of DDR3 RAM - I wouldn't be surprised if a not-so-small amount of this is reserved for non-gaming usage. And if Sony uses additional stacked LPDDR2 RAM for those functions, this comparison will look a lot different.
 
Essentially, the 8GB figure from Microsoft is a better option for games?

No.

1.) Of course more memory can hold more STATIC items - but the cars driving by, birds flying around, NPCs talking, explosions, etc. are not static. It is not as if you can just buffer the disc into main memory and that's it. I can't give an exact figure because this is not my field of expertise, but caching everything is only a viable option if there is not much change and the slow bandwidth can keep up with what's happening dynamically. So more bandwidth helps in having better and more sophisticated visuals, because if something changes (e.g. a big explosion) you don't have to reduce the quality to deliver the dynamic data - you have enough bandwidth.

2.) It depends on the eDRAM - 64MB of eDRAM at 256GB/s might not level the playing field as much as some hope. It helps a lot, but it is not a miracle - you can only fit so much "image quality" into the buffer you pay for.

3.) I always thought modern GPUs are bandwidth-starved, not vice versa. Again, I am not an expert, so please feel free to correct me where I am wrong. What I took away from some talks and conventions is that the underlying bus system is often the bottleneck.
 
I'm not asking which is better, but does this mean one of the platforms is gonna get shafted again in multiplatform games, like the PS3 this gen, depending on the lead platform?
 
1) with a streaming engine (or any other engine) you need some cache to avoid reading from the HDD when you spin around or move through the world. That is likely to significantly reduce the amount of memory available on Sony's example. So although it might be able to transfer 3GB per frame, you wouldn't be able to because you would need to be storing data for the surrounding area. changes per frame would be up to 300MB (delta)

This is true for many games, but we also need to consider that the amount of data in next-gen games will increase.

The Witcher 2's assets showed that fast HDDs and fast RAM are now crucial to avoiding those bad pop-ins. If anything, next gen will only increase the need for high-speed RAM.

Also, I'm not that educated, so I want to ask: what would be the point of eDRAM in the PS4 if GDDR5 will be used? I mean, 150 to 190 is not that big a difference. Timings?
 
I'm not asking which is better, but does this mean one of the platforms is gonna get shafted again in multiplatform games, like the PS3 this gen, depending on the lead platform?

Well, I wouldn't say shafted. But it is probable that there will be differences, even if they are smaller details.
 
I'm not asking which is better, but does this mean one of the platforms is gonna get shafted again in multiplatform games, like the PS3 this gen, depending on the lead platform?

Let's just say the 256/256 split RAM was a bad idea for ambitious multiplats.

Both setups should be better.
 
I'm not asking which is better, but does this mean one of the platforms is gonna get shafted again in multiplatform games, like the PS3 this gen, depending on the lead platform?

For multiplats, I would expect the least common denominator: memory usage based on the PS4, assets/image complexity based on the 720 (assuming a slightly slower GPU).

Unless it's relatively trivial to bump, e.g., texture res on the 720, in which case you might see a small advantage there.

You'll want to look to the first party games to really see either machine pushed.
 
People really need to stop trying to apply a 2005 console scenario to 2013; it's just foolish.

You do know that in open-world games the entire world isn't stored in memory, right? And GDDR5 allows for A LOT more than simply AF... which really, at this point in time, doesn't even register as a performance hit on hardware, let alone VRAM.

Right, so there are other things that affect open-world games more heavily than a linear game. There is just more stuff to process in an open-world game: more characters on screen, more objects, more textures (each object has its own, after all), and IIRC tessellation also uses up memory to work its magic, so there is that too... Still, we say slow memory, but that is only in relation to PC GPUs and the rumored PS4 GDDR5; it's still ~3 times faster than the Xbox 360's memory, and again, IIRC that memory is locked at ~11GB/s write and ~11GB/s read. There is also the expected embedded RAM seated with the GPU, which would allow for better IQ than the DDR3 RAM speculated here would generally allow.

So yes, open-world games would be handled better with more RAM, even if they stream. It's pointless to bring up GTA:SA as well (not that you did, but it was brought up earlier), because the resolution and IQ, not to mention visual fidelity, are pointless to mention when talking about next-gen game requirements.

Basically, developers will find uses for both setups that differ from each other, and if you don't think developers will enjoy more RAM at least as much as fast RAM with half the capacity, then I don't know what to tell you. Again, it's really hard to say what this will mean for the XB3 when we still don't know the size or speed of the embedded RAM being used; it could well be enough to closely match the PS4's IQ benefits. The good news is the PS4 would likely not have any issues with pop-ins, and it would likely keep a higher frame rate, though I figure the XB3 will be the lead platform, so I'm not sure the latter point will matter.
 
From the sounds of it, developers need to target a 4GB memory limit and a 60GB/s transfer rate. I feel as if the PS4 version will have a better chance at maintaining the 60 FPS we all so adore, and the next Box will be key in having more things on screen?
 
What if the PS4 used a fast SSD? I know it is not as fast as dedicated RAM, but SSDs are now very fast with low latencies, especially since they improved the speed of reading small files.

I mean, for a time I was using an HDD (sick) to use high textures in GTA4 (modded) where I didn't have much VRAM or RAM. There were terrible pop-ins, but it was doable. With an SSD, something like that would work xx times better, especially if it is better with small files.

This would mean small installs like the PS3 has.

This may sound stupid, but I am just shooting blindly (without deeper knowledge).
 
From the sounds of it, developers need to target a 4GB memory limit and a 60GB/s transfer rate. I feel as if the PS4 version will have a better chance at maintaining the 60 FPS we all so adore, and the next Box will be key in having more things on screen?

Why would they have to limit it to 60GB/s? And increasing texture resolution, shadow resolution, etc. is much easier than this past gen has shown.

On the PC versions, for example, devs don't seem to have any problem allowing better image quality, better shadows, better AA...

Sure, they can't let the core game change, but visually they can allow differences.
 
Why would they have to limit it to 60GB/s? And increasing texture resolution, shadow resolution, etc. is much easier than this past gen has shown.

On the PC versions, for example, devs don't seem to have any problem allowing better image quality, better shadows, better AA...

Sure, they can't let the core game change, but visually they can allow differences.

Slightly OT: MS capped the bandwidth on the 360 S so it performs like a fat 360.
 
I find it weird that a next-gen console focusing on performance won't have GDDR...
Are there even any performance graphics cards on the market that come with anything but GDDR?
 
Why is everybody assuming the 8GB of Xbox RAM is not stacked? Everything people know about Durango says it is a monster. I don't think MS would fail in the RAM department. Stacking would allow something like a 512-bit bus width interface, which would give it nearly 150GB/s of bandwidth. The eSRAM inside the APU is for other things, unlike the X360's eDRAM, which was a framebuffer.
 
This statement is kinda contradictory. It can't use such a huge amount of RAM and be heavily optimized. The Xbox 360 OS is what I'd call heavily optimized, as it uses 32MB of RAM. I think the PS4 will have an OS similar to the PS3's, and thus will use something around 50-64MB of RAM.

The idea is to be able to open the system clock and play a song at the same time without running out of memory this time....

The OS itself might only require 100MB of RAM, but they still need to reserve tons of extra RAM for when you have your browser open, some music playing, or a game patching in the background while you voice chat.
Being able to multitask while gaming (or just multitask in the OS) will finally lift consoles out of the OS and UI ghetto. Now consoles will be able to do what PCs could do 15 years ago, hopefully :p
My current Firefox window alone is using up 250MB of RAM.
 
I find it weird that a next-gen console focusing on performance won't have GDDR...
Are there even any performance graphics cards on the market that come with anything but GDDR?

Well, Durango will have eDRAM, which will slice away the overhead of framebuffer operations from the DDR3. So that's not really apples-to-apples.

From a memory bandwidth point of view, the rumoured Durango setup should:

- offer a higher upper bound on bandwidth for buffer output vs Orbis
- if x is the pipeline input bandwidth requirements for a game, Durango's bandwidth setup should have a beneficial impact on performance relative to Orbis when x <= 56GB/s and buffer output requirements are > 180-xGB/s *

Orbis setup should:

- offer a higher upper bound on bandwidth for pipeline input vs Durango
- if x is the pipeline input bandwidth requirements for a game, Orbis's bandwidth setup should have a beneficial impact on performance relative to Durango when x > 56GB/s and buffer output requirements are <= 180-xGB/s *
- be more flexible than Durango's setup (you can trade bandwidth for any type of task against any other)
- be simpler to manage than Durango's (one pool vs one pool + eDRAM + an additional memory management unit to handle?)

* Assuming CPU is saturating 12GB/s. Paper figures, but ratios ought to hold. There are other scenarios where one would be better than the other but these two scenarios are the most clear-cut.
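The two clear-cut scenarios in those bullets can be written as a tiny decision function. The 56GB/s input cap and the 180GB/s total are the post's own paper figures (68GB/s and 192GB/s minus the assumed 12GB/s CPU share), taken here as givens:

```python
# Sketch of the post's two clear-cut bandwidth scenarios.
# INPUT_CAP and TOTAL are the post's paper figures, assumed as given.

INPUT_CAP = 56.0  # GB/s of Durango DDR3 left for pipeline input (68 - 12)
TOTAL = 180.0     # GB/s of Orbis unified bandwidth left (192 - 12)

def favoured_setup(input_gb_s: float, output_gb_s: float) -> str:
    """Which rumoured setup a workload favours, per the post's rules."""
    if input_gb_s <= INPUT_CAP and output_gb_s > TOTAL - input_gb_s:
        return "Durango"  # eDRAM absorbs the heavy buffer output
    if input_gb_s > INPUT_CAP and output_gb_s <= TOTAL - input_gb_s:
        return "Orbis"    # unified pool feeds the hungrier pipeline input
    return "unclear"      # outside the two clear-cut scenarios

print(favoured_setup(40.0, 150.0))  # output-heavy workload -> Durango
print(favoured_setup(90.0, 60.0))   # input-heavy workload  -> Orbis
```

The hypothetical workloads in the two prints are made up for illustration; as the post notes, everything in between these two regimes depends on the actual game.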
 
eDRAM covering for a potential bandwidth bottleneck?

One wonders why graphics cards don't do something similar... :/

No. I find it hard to believe that the Xbox 3 will be lumbered with slow memory.
 