VGLeaks: Details on multiple devkits, evolution of Orbis

Theoretically they can be, a fact a lot of people seem to be rushing to gloss over.

Although after OS reserves it should be 5 GB vs 3.5 GB currently. Still a very big difference, comparable to the FLOPS difference in favor of Orbis. It would be the same ratio as if the 360 had had 500 MB this gen and the PS3 only 350 MB.
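A quick sanity check on those ratios, using the rumoured figures (all of which are speculative leaks, not confirmed specs):

```python
# Rumoured usable RAM after OS reserves (GB) -- speculative leak figures.
durango_ram, orbis_ram = 5.0, 3.5
print(durango_ram / orbis_ram)   # ~1.43x in Durango's favour

# The 360/PS3 analogy above gives exactly the same ratio:
print(500 / 350)                 # ~1.43x

# Commonly leaked TFLOPS figures (unconfirmed): Orbis ~1.84 vs Durango ~1.23.
print(1.84 / 1.23)               # ~1.5x in Orbis' favour, a comparable gap
```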


5GB vs 3.5GB won't matter for third-party games. They will use the lowest common denominator. I would argue a GPU advantage is better because, no matter what, it will get used, even if it's just ensuring an extra margin so the frame rate doesn't drop below 60 or 30, whatever they're targeting. They would have to create different sets of assets to take advantage of more memory, and that seems unlikely for third-party games. Although MS exclusives could use the extra RAM to do something "impossible" on PS4.
 
I feel like a lot of people are pushing the assumption that a large chunk of it will go towards the OS, and secondly that MS won't close the bandwidth gap through other means such as eSRAM.

Let's assume the vast majority of that RAM goes to games: 3.5 GB in the PS4, 7.5 GB in Durango. Isn't Sony suddenly at a huge disadvantage?

The chances of MS only reserving 512MB for the OS are extremely low. There's a reason they went with 8GB, and it's not for games.

PS4 was designed with input from third parties. If Sony thought devs would be using more than 4GB for games, they would have gone with a similar RAM setup as MS.
 
If they feel like delaying the console 6 months and making it much more expensive.

Yeah, but 8 GB of GDDR5 could be expensive.

Yeah, figured it was late in the game for that. Still, I think Microsoft studios will do some magic and take advantage of their strengths. However, Sony's first-party studios are just in a different league than everyone else, and they too will do wonders with the strengths of Orbis.
 
The chances of MS only reserving 512MB for the OS are extremely low. There's a reason they went with 8GB, and it's not for games.

PS4 was designed with input from third parties. If Sony thought devs would be using more than 4GB for games, they would have gone with a similar RAM setup as MS.

Source?
 
The chances of MS only reserving 512MB for the OS are extremely low. There's a reason they went with 8GB, and it's not for games.

PS4 was designed with input from third parties. If Sony thought devs would be using more than 4GB for games, they would have gone with a similar RAM setup as MS.

There has been talk of the OS being inflatable or deflatable (depends on PoV) depending upon whether you are playing a game or not, akin to PS Vita.


Pertaining to third-party input, Kagari confirmed it, and we all know that some of Sony's first-party studios have an exceptional talent for churning out astounding visuals.
 
There has been talk of the OS being inflatable or deflatable (depends on PoV) depending upon whether you are playing a game or not, akin to PS Vita.



Pertaining to third-party input, Kagari confirmed it, and we all know that some of Sony's first-party studios have an exceptional talent for churning out astounding visuals.

What about Durango? Was it designed for non-third-party gaming? xD Microsoft asked developers when the Xbox 360 was being designed.
 
I don't know what I was expecting, but seeing HDD is disappointing.

When I think about it logically, an SSD big enough to give the consumer any meaningful amount of space would make the console prohibitively expensive, at least ahead of a nearly 100% DD transition by the end of the console's life... But realizing that I'm going to be stuck with a magnetic disk drive for the next 7 years (pending YLOD) when all other forms of tech have abandoned it is sad.
 
PS4 was designed with input from third parties. If Sony thought devs would be using more than 4GB for games, they would have gone with a similar RAM setup as MS.

Well, the ICE team are godlike, so if the PS4 had input from them, I'm more excited about it than Durango.

Like everyone else said, Sony's first-party developers are probably some of the best out there now.
 
I'll be sorely disappointed if the next Xbox gets left in the dust because a) it will water down the quality of most 3rd party offerings (lowest common denominator) and b) I really, really like Halo and Crackdown (two of my faves from this gen).
 
Yes, but it reads like "Sony designed the PS4 with third parties in mind and Microsoft didn't", and "if devs wanted 8 GB, then Sony would have put it in the PS4".

No, you are just twisting words. Both manufacturers would have had an idea of what they wanted to do in terms of power and economic feasibility. After that, it is a matter of consulting both first and third parties for input so that the basic needs can be outlined. Hence, the lowest common denominator has to be easily attainable by third parties, yet the bar needs to be high enough to convincingly beat the Wii U and look "next gen".
 
It's not 3 GB/s, it's 3 GB a frame at 60 FPS. And no, the extra RAM wouldn't be useless, because it could be used for faster loading times, or you could keep data in memory so you wouldn't have to see any loading screens.

Thanks, I got pretty much the same answer at B3D.

...although faster loading is great, it's not as important as being able to use 3 GB per frame. If Sony has 4 GB of GDDR5, I think I'm satisfied with that now. I was worried that Sony would need more RAM, but it seems it's only a bonus unless they also increased bandwidth. Hopefully this will show a big difference in real-world results.
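The arithmetic behind the "3 GB a frame" figure, a minimal sketch using the rumoured bandwidths (speculative, not confirmed specs):

```python
# Per-frame bandwidth budget: total bytes that can cross the bus in one frame.
def gb_per_frame(bandwidth_gb_s: float, fps: int = 60) -> float:
    return bandwidth_gb_s / fps

print(gb_per_frame(176))  # rumoured Orbis GDDR5: ~2.93 GB per frame
print(gb_per_frame(68))   # rumoured Durango DDR3: ~1.13 GB per frame
```

Note this is a ceiling on traffic, not a working set: the same bytes read twice count twice.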
 
I don't know what I was expecting, but seeing HDD is disappointing.

When I think about it logically, an SSD big enough to give the consumer any meaningful amount of space would make the console prohibitively expensive, at least ahead of a nearly 100% DD transition by the end of the console's life... But realizing that I'm going to be stuck with a magnetic disk drive for the next 7 years (pending YLOD) when all other forms of tech have abandoned it is sad.

I would love a hybrid: 16 GB of flash storage for the OS and extras, while the rest is a regular HDD for storing media and games, which would be cheaper.
 
But there's a serious bottleneck when accessing data from the HDD or BRD. The RAM caches the data so it can be fed to the GPU. The more RAM, the more efficiently this can be done.

You don't understand how games work. When people say there is 1 GB of data per frame, that doesn't mean they fetch 1 GB of texture data in that frame. Modern graphical effects require multiple passes, like tessellation. That 1 GB is also shared between reading data and writing into memory, and the CPU shares the same bandwidth as the GPU. All this adds up. The PS3 has ~750 MB of bandwidth per frame at 30 FPS and was STILL bandwidth limited, so you'd better believe that bandwidth makes a difference.
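The PS3 figure above checks out if you take the RSX's GDDR3 bus on its own (~22.4 GB/s is the widely cited number; the XDR pool is separate):

```python
# PS3 per-frame bandwidth check at 30 FPS (RSX GDDR3 ~22.4 GB/s, XDR not counted).
rsx_bw_gb_s = 22.4
print(rsx_bw_gb_s / 30 * 1024)  # ~765 MB per frame, i.e. the "~750 MB" cited above
```

And that per-frame budget covers reads and writes together.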
 
lol wtf, then what is the point of high bandwidth? Not even a GTX 680 can process that? What if Sony has some special sauce or a custom GPU, would it be possible?

Basically, the bandwidth is used to access the same data most of the time, and/or the bus just sits idle waiting for the GPU. But when the GPU wants data, a burst of bandwidth is needed. Much of the time the data is very localized, so 3 GB of different data is never actually passed over.

Basically, adding a lot of bandwidth brings gains with heavily diminishing returns.

You get seriously starved of bandwidth below about 100 GB/s on a GPU of the PS4's performance level, but you will gain very little moving north of 160 GB/s.
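One way to picture that diminishing-returns point is a simple roofline-style model, where throughput is capped by either the compute peak or memory bandwidth. Illustrative only: the ~1.8 TFLOPS figure is the rumoured Orbis peak, and the arithmetic intensity is an arbitrary assumption.

```python
# Roofline-style sketch: attainable throughput = min(compute peak, intensity * bandwidth).
def attainable_tflops(peak_tflops: float, flops_per_byte: float, bw_gb_s: float) -> float:
    return min(peak_tflops, flops_per_byte * bw_gb_s / 1000.0)

# Hypothetical workload doing ~12 FLOPs per byte fetched, on a ~1.8 TFLOPS GPU:
for bw in (68, 100, 160, 176, 250):
    print(f"{bw} GB/s -> {attainable_tflops(1.8, 12, bw):.2f} TFLOPS")
```

Below ~100 GB/s the workload is bandwidth-bound; past ~150 GB/s the compute peak caps it and extra bandwidth buys almost nothing.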
 
Do people not realize that the GPU can't physically go through 3 GB of data in one frame?

One of the things that we had heard about the PS4 chip, or should we say PS4 SoC, is that Sony is really keen on the idea of TSVs. The other bit is that they are going to have lots of extras, we have heard about sensors, but that could just be part of the other odd bit, FPGAs. Yeah, there is a lot of weird talk coming out of Sony engineers, and programmable logic, aka an FPGA, is just one of the things. Additional media processing blocks, DSPs, and similar blocks are all part of the concept.

To do all of this, and I do realize how odd it sounds, you would need some monumental memory bandwidth for it not to starve.

http://semiaccurate.com/2012/03/02/...l-be-an-x86-cpu-with-an-amd-gpu/#.UQIJAR1EGSo
 
You don't understand how games work. When people say there is 1 GB of data per frame, that doesn't mean they fetch 1 GB of texture data in that frame. Modern graphical effects require multiple passes, like tessellation. That 1 GB is also shared between reading data and writing into memory, and the CPU shares the same bandwidth as the GPU. All this adds up. The PS3 has ~750 MB of bandwidth per frame at 30 FPS and was STILL bandwidth limited, so you'd better believe that bandwidth makes a difference.

I am aware of multiple passes; you need to access the frame buffer multiple times, but it's the same data, and the data isn't large enough to even reach peak bandwidth. The bandwidth doesn't magically allow faster access to the frame buffer; there is also latency. Even if you had infinite bandwidth, you'd still be limited by latency.
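A toy model of that latency point (the numbers are made up for illustration, not real hardware specs):

```python
# Each memory transaction pays a fixed latency plus size / bandwidth transfer time.
def transfer_time_us(size_bytes: int, bw_gb_s: float, latency_us: float) -> float:
    return latency_us + size_bytes / (bw_gb_s * 1e9) * 1e6

# For small, dependent accesses the fixed latency dominates:
print(transfer_time_us(4096, 176, 0.5))  # ~0.52 us
print(transfer_time_us(4096, 352, 0.5))  # ~0.51 us -- doubling bandwidth barely helps
```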
 
Basically, the bandwidth is used to access the same data most of the time, and/or the bus just sits idle waiting for the GPU. But when the GPU wants data, a burst of bandwidth is needed. Much of the time the data is very localized, so 3 GB of different data is never actually passed over.

Basically, adding a lot of bandwidth brings gains with heavily diminishing returns.

You get seriously starved of bandwidth below about 100 GB/s on a GPU of the PS4's performance level, but you will gain very little moving north of 160 GB/s.

Based on your last paragraph, would you say Orbis' rumoured specs are quite a good balance in bandwidth then?

Also, can you explain why or how GDDR5 can access 3 GB per frame at 60 FPS at 176 GB/s, while DDR3 in the same scenario can only access 1 GB at its 68 GB/s bandwidth? I get that it's very simple math, but my question is, why would they put in so much DDR3 if they could only access 1 GB per frame?
 
So 3D stacking really is the best solution for consoles, as it would produce high bandwidth and low latency... Damn, and to think it's only one year away; we could have gotten a beast, but instead we may be stuck with a rushed product for the next decade.
 
Can someone explain to me why 4 GB of faster RAM is better than 8 GB of slower RAM? Do the speeds solely have to do with loading times? But of course that's also affected by Blu-ray/hard drive read rates.

Maybe I don't know enough about tech, but 8 GB of DDR3 sounds better than 4 GB of GDDR5.
 
Based on your last paragraph, would you say Orbis' rumoured specs are quite a good balance in bandwidth then?

Also, can you explain why or how GDDR5 can access 3 GB per frame at 60 FPS at 176 GB/s, while DDR3 in the same scenario can only access 1 GB at its 68 GB/s bandwidth? I get that it's very simple math, but my question is, why would they put in so much DDR3 if they could only access 1 GB per frame?

Yep, the new 176 GB/s is much more reasonable than 192 and should be entirely achievable.

The RAM serves as a buffer for the game data on the Blu-ray or HDD, which reads at about ~100 MB/s or less. Since that is often slower than the game needs, there have to be sufficient assets stored in RAM to render the game for a good while until new data can be brought in from the HDD. This is what RAM is doing for the most part in any computer: any time the CPU or GPU wants data, it goes to the RAM, and if it's not there, it's really, really slow to get.

XB3 HDD =(100MB/s)=> 8GB DDR3 =(68GB/s)=> 32MB SRAM =(102GB/s)=> Processing
PS4 HDD =(100MB/s)=> 4GB GDDR5 =(176GB/s)=> processing

Each stage caches for the previous one.
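Another way to read that chain: the slower the upstream stage, the longer the downstream buffer has to last. A rough sketch with the rumoured pool sizes (speculative leak figures):

```python
# Time to (re)fill each rumoured RAM pool from a ~100 MB/s HDD -- i.e. how far
# ahead of the player the cached assets must stretch. Speculative figures.
hdd_mb_s = 100
for name, pool_gb in (("Orbis 4 GB GDDR5", 4), ("Durango 8 GB DDR3", 8)):
    print(f"{name}: ~{pool_gb * 1024 / hdd_mb_s:.0f} s to fill from disk")
```

So the bigger pool holds more assets, but it also takes proportionally longer to repopulate from disk.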
 
I am curious about what happens in a PC. Does the GPU receive data from system RAM? Because I do know that system RAM is filled from the HDD.
 
But there's a serious bottleneck when accessing data from the HDD or BRD. The RAM caches the data so it can be fed to the GPU. The more RAM, the more efficiently this can be done.

Couldn't 16GB of flash help with that? That was a rumour initially, but I'm not sure at all if there's any truth to it.
 
Couldn't 16GB of flash help with that? That was a rumour initially, but I'm not sure at all if there's any truth to it.

It could, but flash is still orders of magnitude slower than RAM; it's barely faster than a 7200 RPM HDD, IIRC.

I am curious about what happens in a PC. Does the GPU receive data from system RAM? Because I do know that system RAM is filled from the HDD.
The GPU is fed by the VRAM, which I think comes directly from the HDD through PCI Express.
 
So 3D stacking really is the best solution for consoles, as it would produce high bandwidth and low latency... Damn, and to think it's only one year away; we could have gotten a beast, but instead we may be stuck with a rushed product for the next decade.

Eh... you can only fit so much food in your mouth at a single time. Ordering more food from faster chefs isn't going to increase how much you can tank down.

EGBOK. E3 will silence everyone's fears. Both Orbis and Durango will impress.
 
Can someone explain to me why 4 GB of faster RAM is better than 8 GB of slower RAM? Do the speeds solely have to do with loading times? But of course that's also affected by Blu-ray/hard drive read rates.

Maybe I don't know enough about tech, but 8 GB of DDR3 sounds better than 4 GB of GDDR5.
From my understanding, it's like having a bigger fuel tank vs. having a more fuel-efficient car. Just having a bigger tank doesn't mean you'll fill up less often.
 
Yep, the new 176 GB/s is much more reasonable than 192 and should be entirely achievable.

The RAM serves as a buffer for the game data on the Blu-ray or HDD, which reads at about ~100 MB/s or less. Since that is often slower than the game needs, there have to be sufficient assets stored in RAM to render the game for a good while until new data can be brought in from the HDD. This is what RAM is doing for the most part in any computer: any time the CPU or GPU wants data, it goes to the RAM, and if it's not there, it's really, really slow to get.

XB3 HDD =(100MB/s)=> 8GB DDR3 =(68GB/s)=> 32MB SRAM =(102GB/s)=> Processing
PS4 HDD =(100MB/s)=> 4GB GDDR5 =(176GB/s)=> processing

Each stage caches for the previous one.

Cool.
 
Yep, the new 176 GB/s is much more reasonable than 192 and should be entirely achievable.

The RAM serves as a buffer for the game data on the Blu-ray or HDD, which reads at about ~100 MB/s or less. Since that is often slower than the game needs, there have to be sufficient assets stored in RAM to render the game for a good while until new data can be brought in from the HDD. This is what RAM is doing for the most part in any computer: any time the CPU or GPU wants data, it goes to the RAM, and if it's not there, it's really, really slow to get.

XB3 HDD =(100MB/s)=> 8GB DDR3 =(68GB/s)=> 32MB SRAM =(102GB/s)=> Processing
PS4 HDD =(100MB/s)=> 4GB GDDR5 =(176GB/s)=> processing

Each stage caches for the previous one.

We don't know how the memory works in Durango.
 
I'm not a tech person, but that doesn't sound quite right. SSDs are faster than even the fastest HDDs, aren't they?

It's a bit unfair to call SSDs barely faster than 7200 RPM drives. But the fastest SSD, at ~740 MB/s, is still a long way off 192 GB/s or whatever.

EDIT: Didn't catch the SSD/flash distinction.
 
Are SSDs pretty much guaranteed for next gen? More specifically, some 16 or 32 GB just for the OS, at least to make it fly.
 
SSDs have amazing controllers in them to make them fast. Normal flash does not.

Not every computer has an SSD, so that's beside the point. If the GPU streams its data from the HDD as well, then it is obvious (and even you would know) that the entire 1 or 2 GB of GDDR5 is not used in one frame, flushed, and then refilled with a whole new set of data, because the streaming rate would be nowhere near fast enough. Thus, most of the data in constant use is already cached; the bits that change are far smaller and, as mentioned, are continually calculated in or out, with pertinent data streamed in to compensate.

At the end of the day, I really do not see visual data exceeding 2 GB (for now), especially considering the resolution (1080p at most) and FPS (30 or 60). A Pitcairn-level GPU performs best with bandwidth north of 150 GB/s (going by the stock models). In that regard, MS's decision to pair a Cape Verde-esque GPU with DDR3 makes perfect sense, given that the stock card's bandwidth requirement matches very closely the bandwidth of 256-bit 2133 MHz DDR3.

Both consoles are equipped to take advantage of their specific design quirks.
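The 68 GB/s figure implied above is easy to verify from the rumoured 256-bit DDR3-2133 configuration (again, leaked rather than confirmed):

```python
# Peak bandwidth = transfer rate (transfers/s) * bus width (bytes per transfer).
bus_width_bits = 256
transfers_per_s = 2133e6          # DDR3-2133
bandwidth_gb_s = transfers_per_s * (bus_width_bits / 8) / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # ~68.3 GB/s, the figure used throughout the thread
```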
 
Are SSDs pretty much guaranteed for next gen? More specifically, some 16 or 32 GB just for the OS, at least to make it fly.
Unlikely. Maybe some flash for storing the OS, but probably not an SSD controller, which would cost money to license.


Not every computer has an SSD, so that's beside the point. If the GPU streams its data from the HDD as well, then it is obvious (and even you would know) that the entire 1 or 2 GB of GDDR5 is not used in one frame, flushed, and then refilled with a whole new set of data, because the streaming rate would be nowhere near fast enough. Thus, most of the data in constant use is already cached; the bits that change are far smaller and, as mentioned, are continually calculated in or out, with pertinent data streamed in to compensate.

At the end of the day, I really do not see visual data exceeding 2 GB (for now), especially considering the resolution (1080p at most) and FPS (30 or 60). A Pitcairn-level GPU performs best with bandwidth north of 150 GB/s (going by the stock models). In that regard, MS's decision to pair a Cape Verde-esque GPU with DDR3 makes perfect sense, given that the stock card's bandwidth requirement matches very closely the bandwidth of 256-bit 2133 MHz DDR3.

Both consoles are equipped to take advantage of their specific design quirks.

True, but that's while we are dealing with 7 GB games. Next gen we could have 50 GB games with amazing textures as well as other data. I can see them using 4+ GB of data, especially for large games with uncompressed audio and textures.
 
Eh... you can only fit so much food in your mouth at a single time. Ordering more food from faster chefs isn't going to increase how much you can tank down.

EGBOK. E3 will silence everyone's fears. Both Orbis and Durango will impress.

I suppose, but it seems right now that we don't have a full plate of food even though the chef seems fast enough. We are still a bit hungry, but not by much. I believe the restaurant will improve enough next year that it can provide plenty of quality food from a fast enough chef, and remember, we will have gained some size by then, which means we will be able to tank down more, although I'm not sure by how much.

All that food would help build quality muscle that will keep you strong and satisfied for many years. Now that's 5-star cuisine. Microsoft's restaurant has a slow chef whose assistant is very fast with appetizers, but I'm not so sure that's enough to fill people up while they're waiting for the main meal.
 
I am aware of multiple passes; you need to access the frame buffer multiple times, but it's the same data, and the data isn't large enough to even reach peak bandwidth.

You should check real-world data. Increased bandwidth has a definite performance impact that can be exploited. I will simply leave the data below:

[Image: gpu-memory-perf-3.png — HD4870 vs. HD5850 spec and 3DMark comparison table]


As you can see in the top two rows (and all the rows shaded in blue), the HD5850 has higher theoretical performance across the board, the only exception being memory bandwidth, which is halved (since the memory interface width is halved), yet the HD4870 performs better in 3DMark despite being theoretically weaker. The Orbis GPU is rumoured to be faster than the HD4870 (roughly twice as fast), and you are telling me developers would be unable to exploit the extra bandwidth if they wanted to? Needless to say, I am extremely skeptical.
 