PS4 has 8 GB of GDDR5 RAM

What does 8 GB of GDDR5 actually mean, in a conventional sense? I know developers were craving more RAM, but it's still not completely clear how they benefit beyond enormous space for texture work and enormous worlds. I can't exactly picture what can be achieved. Does anyone have a simple example?

It's pretty much pointless to have this much RAM. I'm doing some calculations here, but it's not looking pretty.

For one thing, load times will be crazy for this device.
 
It's pretty much pointless to have this much RAM. I'm doing some calculations here, but it's not looking pretty.

[gif: deepdown-16-7fps-290-1iosm.gif]
 
I am fucking gobsmacked.

Two questions. People are shitting their pants over 8 GB GDDR5. Is there a current equivalent graphics card on the market? How many 680s slapped together would achieve the same power? Clearly I'm no tech head, but in layman's terms, what are we looking at here?

Also, with regard to the gif: is it confirmed to be in-game?

Regardless, I am buying a PS4. Yay!

You can't compare it to a PC card because this is RAM being used for both the system and the GPU. A PC doesn't use its GPU RAM in the same way because it's not as easily accessible, due to the limitations of having to communicate through the system bus and yadda yadda.

We just know that bandwidth for the GPU won't be an issue. In a direct comparison the 680 > PS4 GPU, but the PS4 GPU is no slouch and is very good at compute as well.
 
Sooo... is this an accurate representation for us laymen who don't know jack shit about hardware specs? Because if so... I said god damn!

Sony cray....

No, that PS4 bar is too large if it's just comparing size. Here is an accurate bar in terms of size.

[image: compare.png]
 
Then I think it lends credence to some people's assumption that what we saw today was based on older code and older specs (3.5GB GDDR5), especially KZ4 (and personally, perhaps even Watch Dogs). It means games are about to go through some big changes by the time E3 rolls around. It also means that second-generation games built from the ground up to take advantage of 3.5+ GB of RAM will be the first proper showcase for the system.


Ugh, RAM has little to do with graphical capabilities. The PS4 still has the same graphics card even if it has more RAM to play with.

All this means is developers can store more in memory without the need to load from the disc.
 
Once again, like it or not, Watch Dogs wasn't running on a PS4. Devs confirmed on GT Live that it was running on PC and set to what they believe the PS4 version will look like.

I personally don't mind; this is standard procedure for games in development.
 
Once again, like it or not, Watch Dogs wasn't running on a PS4. Devs confirmed on GT Live that it was running on PC and set to what they believe the PS4 version will look like.

I personally don't mind; this is standard procedure for games in development.

So it will be like that original Killzone 2 trailer?
 
Ugh, RAM has little to do with graphical capabilities. The PS4 still has the same graphics card even if it has more RAM to play with.

All this means is developers can store more in memory without the need to load from the disc.

It's all about efficient use of resources. The type of RAM dictates bandwidth, which dictates the amount of information on screen. The amount of RAM dictates how much data can be stored at any given time. 8 GB of fast RAM will be a dream for engines that rely on streaming very high-quality textures for the GPU to digest at any given time.
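To put rough numbers on "how much data", here's a back-of-the-envelope sketch in C. The 5 GB game-usable figure is purely my assumption (the OS reservation isn't public), and real engines use compressed formats, so treat these as upper bounds on footprint:

```c
#include <stdio.h>

/* Back-of-the-envelope texture footprints. All inputs are assumptions
 * for illustration, not published PS4 numbers. */
int main(void) {
    const double budget_mb = 5.0 * 1024.0;    /* assumed game-usable RAM */
    const int sizes[] = { 1024, 2048, 4096 }; /* square texture widths   */

    for (int i = 0; i < 3; i++) {
        double texels = (double)sizes[i] * sizes[i];
        /* RGBA8 is 4 bytes per texel; a full mip chain adds ~1/3. */
        double mb = texels * 4.0 * (4.0 / 3.0) / (1024.0 * 1024.0);
        printf("%4dx%-4d RGBA8 + mips: %6.1f MB, ~%.0f of them in %.0f MB\n",
               sizes[i], sizes[i], mb, budget_mb / mb, budget_mb);
    }
    return 0;
}
```

Even uncompressed, that's room for about sixty 4096x4096 textures resident at once, which was unthinkable on 512 MB consoles.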
 
I think as the generation winds down, engines will take advantage of every drop of BW and every crumb of RAM.

I believe 3GB of GDDR5 plus 4GB of much cheaper system RAM, with the money saved going to more stream processors on the GPU, would have been preferable.

I do see the appeal of unified RAM though.
 
Ugh, RAM has little to do with graphical capabilities. The PS4 still has the same graphics card even if it has more RAM to play with.

All this means is developers can store more in memory without the need to load from the disc.
The same graphics card family as the Durango, yes, but significantly more powerful:
1.8 TFLOPS and 18 CUs vs 1.2 TFLOPS and 12 CUs.

All in all, the PS4 spanks the Durango.
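For anyone wondering where those TFLOPS numbers come from, it's just the usual GCN arithmetic: CUs x 64 lanes x 2 ops per clock (FMA) x clock speed. A quick sketch, assuming the rumored 800 MHz clock for both machines:

```c
#include <stdio.h>

/* GCN throughput: CUs x 64 lanes x 2 FLOPs per clock (FMA) x clock.
 * The 800 MHz clock is the rumored figure, not a confirmed spec. */
int main(void) {
    const double clock_ghz = 0.8;
    const int cu_counts[] = { 18, 12 };   /* PS4 / rumored Durango */

    for (int i = 0; i < 2; i++)
        printf("%2d CUs -> %.2f TFLOPS\n", cu_counts[i],
               cu_counts[i] * 64 * 2 * clock_ghz / 1000.0);
    return 0;
}
```

Which is also why 1.2 TFLOPS lines up with 12 CUs rather than 16.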
 
Ugh, RAM has little to do with graphical capabilities. The PS4 still has the same graphics card even if it has more RAM to play with.

All this means is developers can store more in memory without the need to load from the disc.

I know the graphics are processed on the GPU, but it's the RAM that keeps it fed. That's why I wondered whether things like LOD transitions and object pop-in would be resolved to a much greater extent this time.
 
Why wouldn't technology prices drop by 50% or more over 3 years?

Even a 50% drop will be a huge cost. We're looking at ~$100 just for RAM. Compare this to the original Xbox 360 (which is arguably the closest comparison): http://www.xbitlabs.com/news/multimedia/display/20051123214405.html

Xbox 360 Premium Costs $525 to Manufacture – iSuppli.
Microsoft Loses Hundreds of Dollars on Every Xbox 360, Research Says
[11/23/2005 09:45 PM]
by Anton Shilov
Microsoft Corp. loses hundreds of U.S. dollars on every Xbox 360 game console it sells, according to market analysis firm iSuppli. According to the company, the most expensive components of the console are the graphics processor developed by ATI and the central processing unit designed by IBM. The company expects Microsoft to lower the product cost to the level at which the consoles are sold today within a year from now.

Factoring in costs for the hard disk, the DVD drive, enclosures, the Radio Frequency (RF) receiver board, power supply, wireless controller, cables, literature, and packaging – the total bill of materials (BOM) cost for the Xbox 360 Premium reaches $525, well above the retail price of $399, iSuppli said. The high BOM cost for the Xbox 360 is not unusual. In the video-game business, equipment producers often market games consoles as loss leaders for more lucrative software and licensing fees.



IBM designed and co-manufactures the custom microprocessor that powers the Xbox 360. The microprocessor is a triple-core PowerPC that runs at a frequency of 3.20GHz. At a cost of $106, this single part accounts for 20.2% of the total BOM cost for the Xbox 360 Premium, according to preliminary findings from the firm.

Other key semiconductors in the Xbox 360 include the graphics processing unit (GPU), the memory and a Southbridge I/O controller. The GPU, designed by ATI Technologies to provide high-definition (HD) graphics, costs an estimated $141, including embedded DRAM from NEC.

The main memory, 512Mbytes of GDDR DRAM from Samsung Electronics Co. Ltd., accounts for another $65 of the BOM. The SiS Southbridge chip costs an estimated $12. Other semiconductors and electronic components make up the remaining cost of the $370 mainboard.

Factoring in costs for the hard disk, the DVD drive, enclosures, the Radio Frequency (RF) receiver board, power supply, wireless controller, cables, literature, and packaging – the total BOM cost for the Xbox 360 Premium reaches $525, well above the retail price of $399.

“It’s really not surprising for the initial cost of the console to approach or even exceed the retail price. The good news for Microsoft is that during the next year, improved yields for the IBM microprocessor and the ATI GPU should save at least $50 per unit, in addition to other cost reductions,” said Andrew Rassweiler, manager of iSuppli’s teardown analysis service.

IOW, the 360 had just $65 in RAM and cost $525 total. The PS4 will easily trump that at $100 or so, possibly significantly more if GDDR5 stays expensive. Swap the 360's $65 of RAM for $100 of GDDR5 and the equivalent launch BOM is about $560; at $165 of RAM it's $625. When you factor in inflation, you're easily looking at a launch console manufacturing cost greater than $600. Possibly as bad as the original PS3, which was something like $800 IIRC.
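Spelling that arithmetic out (every figure here is an estimate quoted or guessed in this thread, not real cost data):

```c
#include <stdio.h>

/* The BOM scenarios from the post above. All inputs are thread
 * estimates, not actual manufacturing figures. */
int main(void) {
    const double x360_bom = 525.0;   /* iSuppli's 2005 estimate       */
    const double x360_ram = 65.0;    /* 512 MB of GDDR3 in that BOM   */
    const double ps4_ram[] = { 100.0, 165.0 }; /* guessed GDDR5 range */

    for (int i = 0; i < 2; i++)
        printf("GDDR5 at $%.0f -> hypothetical BOM ~$%.0f\n",
               ps4_ram[i], x360_bom - x360_ram + ps4_ram[i]);
    return 0;
}
```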

So this is shaping up to be one very expensive console. PS3 and Five Hundred and Ninety-Nine US Dollars bad all over again.
 
Ugh, RAM has little to do with graphical capabilities. The PS4 still has the same graphics card even if it has more RAM to play with.

All this means is developers can store more in memory without the need to load from the disc.

Wrong.

Greater amounts of RAM, and fast RAM at that, allow for far greater texture quality and also lend themselves greatly to open-world type games.
 
Please elaborate?

Dedicated pools of memory tend to restrict freedom. In the PS3, the machine was split 50/50 between VRAM and main RAM. This limited most games to only 256 MB of texture memory, while an Xbox 360 game could use more OR less as needed. Now, IIRC the PS3 could still dabble into main memory for textures, but then you need to move that data across another memory bus, and that starts hitting bandwidth performance.

Having a big, fast unified pool is sort of the K.I.S.S. method of hardware design.
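A toy illustration of the difference (PS3-style 256/256 split; the workload sizes are made up purely to show the point):

```c
#include <stdio.h>

/* Toy model: a fixed 256/256 MB split vs one 512 MB unified pool.
 * Workload numbers are invented for illustration. */
int main(void) {
    const int vram = 256, unified = 512;        /* MB */
    const int textures = 300, game_data = 150;  /* MB a game wants */

    printf("split pool:   %d MB of textures in %d MB VRAM -> %s\n",
           textures, vram, textures <= vram ? "fits" : "does NOT fit");
    printf("unified pool: %d + %d MB in %d MB total -> %s\n",
           textures, game_data, unified,
           textures + game_data <= unified ? "fits" : "does NOT fit");
    return 0;
}
```

Same total memory, but the unified pool lets a texture-heavy game take whatever split it actually needs.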
 
I've been reading more about GDDR5 vs DDR3, and one thing many other sites are talking about, but which isn't being discussed here, is latency. GDDR5 actually has higher latency than DDR3 in exchange for its increased bandwidth. In fact, GDDR5 is actually based on DDR3 memory!

Remember that the "G" in GDDR5 stands for graphics. The reason why this type of memory is found in GPU's is because a GPU is typically performing lots of calculations in parallel, making the latency almost a non-issue. However, when you use that same memory when dealing with a CPU, it actually starts performing worse than DDR3, since CPU's act in a linear fashion (execute instruction X, then Y, then Z, etc...). Of course, you can split up your compute jobs between multiple cores, but each core still executes in a linear fashion. This means that when the CPU needs additional instructions to execute, GDDR5 will actually be slower to respond than DDR3. Bandwidth doesn't matter as much as latency in this scenario, because you aren't shifting a lot of data, but rather you want the next instruction to come as soon as possible so you can move onto the next one.

This latency issue is actually one of the reasons they don't sell GDDR5 as system RAM. It's not that they can't; it's just that DDR3 is better for that job. GDDR5 is mostly useful for graphics cards, where almost all of the work can be done in parallel, while DDR3 is more useful with CPUs, where instructions are linear and latency matters more.
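If anyone wants to feel latency vs bandwidth on their own PC, here's a minimal C sketch. It runs on ordinary DDR, so it only demonstrates the general principle (dependent reads are latency-bound, streaming reads are bandwidth-bound), not PS4 hardware:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)   /* 16M entries, ~128 MB of size_t */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: shuffle into one big cycle so every access
     * is a dependent cache miss, i.e. pure memory latency. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* crude, but fine for a demo */
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    double t0 = now_sec();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];    /* serial chase  */
    double chase = now_sec() - t0;

    t0 = now_sec();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++) sum += next[i]; /* streaming read */
    double stream = now_sec() - t0;

    printf("dependent reads: %.3fs, streaming reads: %.3fs (p=%zu sum=%zu)\n",
           chase, stream, p, sum);
    free(next);
    return 0;
}
```

On a typical desktop the dependent chase should come out many times slower than the streaming pass, which is exactly the CPU-side cost being described.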

Perhaps that's why Sony put extra compute units on the GPU: they want to offload as much as they can from the CPU due to GDDR5's latency issues?

After looking at all of this, I'm actually not 100% sure that GDDR5 is always better than DDR3. It seems like an apples-and-oranges comparison. You need to take the rest of the system into account, not just focus on an individual piece (in this case, the type of RAM).

Thoughts?
 
I've been reading more about GDDR5 vs DDR3, and one thing many other sites are talking about, but which isn't being discussed here, is latency. GDDR5 actually has higher latency than DDR3 in exchange for its increased bandwidth. In fact, GDDR5 is actually based on DDR3 memory!

Remember that the "G" in GDDR5 stands for graphics. The reason this type of memory is found in GPUs is that a GPU typically performs lots of calculations in parallel, making the latency almost a non-issue. However, when you pair that same memory with a CPU, it can actually perform worse than DDR3, since CPUs act in a linear fashion (execute instruction X, then Y, then Z, etc.). Of course, you can split your compute jobs between multiple cores, but each core still executes in a linear fashion. This means that when the CPU needs additional instructions to execute, GDDR5 will actually be slower to respond than DDR3. Bandwidth doesn't matter as much as latency in this scenario, because you aren't shifting a lot of data; rather, you want the next instruction to come as soon as possible.

This latency issue is actually one of the reasons they don't sell GDDR5 as system RAM. It's not that they can't; it's just that DDR3 is better for that job. GDDR5 is mostly useful for graphics cards, where almost all of the work can be done in parallel, while DDR3 is more useful with CPUs, where instructions are linear and latency matters more.

Perhaps that's why Sony put extra compute units on the GPU: they want to offload as much as they can from the CPU due to GDDR5's latency issues?

After looking at all of this, I'm actually not 100% sure that GDDR5 is always better than DDR3. It seems like an apples-and-oranges comparison. You need to take the rest of the system into account, not just focus on an individual piece (in this case, the type of RAM).

Thoughts?

I've read that graphics in general are not anywhere near as latency sensitive as general compute functions.
 
Wrong.

Greater amounts of RAM, and fast RAM at that, allow for far greater texture quality and also lend themselves greatly to open-world type games.

Ugh, this again is the TMU (Texture Mapping Unit), and it's part of the GPU.

The number of TMUs and the texture fill rate determine the quality of the textures. Just because you have more RAM and faster RAM doesn't change the fact that the GPU is still the same.
 
So I'm copying a post I made in another thread, since I wasn't aware of this one:

"I'm a little confused, so maybe the tech heads can help me understand: Sony now has 8gigs of GDDR5, but the bandwidth speed appears to be the same as it was with 4 gigs. Is that right? I thought in one of the other spec threads it was said that PS4 could read 4 gigs a frame with that speed, but I also thought that having more ram of that type would lead to a bandwidth increase of some sort. If its the same, why should MS be concerned that Sony is packing 8 gigs, if there is no corresponding increase in bandwidth?"

I should add that I'm pleased with the PS4 specs, and I'm just asking an honest question - the last part was just due to the thread the post was made in, "Should MS be worried?".
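Rough numbers on the question, assuming the widely reported 176 GB/s figure (which comes from the 256-bit bus width and memory clock; capacity doesn't change it):

```c
#include <stdio.h>

/* Per-frame bandwidth budget. 176 GB/s is the widely reported PS4
 * figure; it depends on bus width and clock, not on capacity. */
int main(void) {
    const double bw_gb_per_s = 176.0;
    const int fps[] = { 30, 60 };

    for (int i = 0; i < 2; i++)
        printf("at %d fps the GPU can touch at most %.2f GB per frame\n",
               fps[i], bw_gb_per_s / fps[i]);
    return 0;
}
```

So a "4 gigs a frame" figure roughly matches something between 30 and 60 fps. Doubling the capacity changes how much can be resident at once, not how much can be touched per frame.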
 
Ugh, this again is the TMU (Texture Mapping Unit), and it's part of the GPU.

The number of TMUs and the texture fill rate determine the quality of the textures. Just because you have more RAM and faster RAM doesn't change the fact that the GPU is still the same.

So if you have shit textures stored in memory, you'll get beautiful textures on screen? It's a balance.
 
I think as the generation winds down, engines will take advantage of every drop of BW and every crumb of RAM.

Nope. Not a chance, if I've figured correctly. Take load times, for example. A 6x Blu-ray drive reads at only 6 * 36 Mbps = 216 Mbps, or 27 MB/s, maximum. To fill the whole memory, you'd need 8192 / 27 = 303 seconds, or about 5 minutes (at best!). Real-world load times will be worse!
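The same sum in code, with a couple of other (ballpark, assumed) source speeds for comparison:

```c
#include <stdio.h>

/* Time to fill 8 GB at different read speeds. The HDD and SSD
 * figures are ballpark assumptions, not PS4 specs. */
int main(void) {
    const double ram_mb = 8192.0;
    const struct { const char *src; double mb_per_s; } srcs[] = {
        { "6x Blu-ray", 27.0 },
        { "2.5\" HDD",  80.0 },
        { "SATA SSD",  400.0 },
    };

    for (int i = 0; i < 3; i++)
        printf("%-10s %6.0f s (%4.1f min)\n", srcs[i].src,
               ram_mb / srcs[i].mb_per_s,
               ram_mb / srcs[i].mb_per_s / 60.0);
    return 0;
}
```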

In other words, the PS4 will be only for the patient gamer, or games will never come close to maxing out RAM usage. Most games will likely just use main memory as a way to stream data and store textures currently unseen, but you'll never see a game that uses all of that memory at any one time.

But it would have been far smarter just to put an SSD in every PS4 and stream from there! It makes no sense to have so much RAM when it would be incredibly slow to fill, and there are better ways to improve texture streaming.

$150 per unit is my estimate, based on a drop in price from those (last year's) figures and bulk buying (in amounts greater than any graphics card manufacturer).

I would hope you get a discount buying 10+ million gigs, but who knows.

That's what they have to hope for at this point.
 
We have no idea what kind of deal Sony will get on that RAM. They can put more units in homes than any of those graphics card manufacturers.
 
So if you have shit textures stored in memory, you'll get beautiful textures on screen? It's a balance.

Basically :)

I just think 8GB of GDDR5 memory won't make a noticeable difference for graphical output.

The decision to go with 8GB GDDR5 probably has more to do with the OS needing more memory and keeping the architecture simple at the same time.

Adding more types of memory and controllers would probably increase the cost and complexity of the system more than just going with 8GB of GDDR5.
 
Ugh, this again is the TMU (Texture Mapping Unit), and it's part of the GPU.

The number of TMUs and the texture fill rate determine the quality of the textures. Just because you have more RAM and faster RAM doesn't change the fact that the GPU is still the same.


So how come my VRAM consumption increases when I raise the graphical detail in a PC game?

Also, as I understand it, that specific part of the GPU processes texture information for display, but having a large pool of fast RAM also allows greater amounts of more detailed texture data to be fed to the GPU for processing.
 
Here's another point to be made: a top-of-the-line GPU only costs around $80-120 to make, compared to something like $50 for a midrange GPU. So why didn't they put in a 7970 or 680 with 3-4GB of GDDR5 RAM instead? Wouldn't it produce much better graphics and cost about the same?
 
Nope. Not a chance, if I've figured correctly. Take load times, for example. A 6x Blu-ray drive reads at only 6 * 36 Mbps = 216 Mbps, or 27 MB/s, maximum. To fill the whole memory, you'd need 8192 / 27 = 303 seconds, or about 5 minutes (at best!). Real-world load times will be worse!

In other words, the PS4 will be only for the patient gamer, or games will never come close to maxing out RAM usage. Most games will likely just use main memory as a way to stream data and store textures currently unseen, but you'll never see a game that uses all of that memory at any one time.

But it would have been far smarter just to put an SSD in every PS4! It makes no sense to have so much RAM when it would be incredibly slow to fill.

That's what they have to hope for at this point.

You don't need to have all the RAM filled to start playing, as more assets are streamed into RAM while you play.

You don't have to wait for all 8 gigs of RAM to be filled before you start playing.
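A toy sketch of what that looks like (every number here is invented for illustration):

```c
#include <stdio.h>

/* Toy streaming model: load enough to start, then trickle the rest
 * in between frames. All sizes and speeds are invented. */
int main(void) {
    const double total_mb     = 4096.0;  /* what the level wants    */
    const double startup_mb   = 512.0;   /* enough to begin playing */
    const double disc_mb_s    = 27.0;    /* 6x Blu-ray, as above    */
    const double frame_budget = disc_mb_s / 60.0; /* MB per frame   */

    double loaded = startup_mb;
    printf("start of play: %.0f MB resident after ~%.0f s\n",
           loaded, startup_mb / disc_mb_s);

    int frames = 0;
    while (loaded < total_mb) { loaded += frame_budget; frames++; }
    printf("remaining %.0f MB streamed over %d frames (~%.0f s of play)\n",
           total_mb - startup_mb, frames, frames / 60.0);
    return 0;
}
```

So a ~19 second initial load gets you playing, and the rest of the level arrives over the next couple of minutes while you play.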
 
You don't need to have all the RAM filled to start playing, as more assets are streamed into RAM while you play.

You don't have to wait for all 8 gigs of RAM to be filled before you start playing.

Then you are simply streaming textures. Better to have an SSD for that purpose in every machine than this.
 
Here's another point to be made: a top-of-the-line GPU only costs around $80-120 to make, compared to something like $50 for a midrange GPU. So why didn't they put in a 7970 or 680 with 3-4GB of GDDR5 RAM instead? Wouldn't it produce much better graphics and cost about the same?


Cost to make does not take into account profit margins for the manufacturer/designer, R&D costs, etc.


Why does a GTX 670 cost £300 if it only costs a third of that or less to make?
 
Could you imagine if Sony came out tomorrow and said it was a typo (that it's really 4GB), or a few months down the line said the yields aren't good enough to continue, or that it will be delayed by 6 months?

I am just scaring myself, but what if...
 