
IGN rumour: PS4 to have '2 GPUs' - one APU based + one discrete

I think they were talking about home consoles. And by the end of this year, that's 2.
Ancient was also a terrible word to use, because irrespective of how many ALUs and CPU threads the Wii U has, it will be using newer tech than the CPUs/GPUs inside the Vita.

Does that mean the Vita is using "ancient" tech? Heck no, the Vita is an awesome marriage of practical and available tech and great engineering.

I think everyone is using old tech from now on. Nintendo just uses even older stuff and makes even more feature-poor hardware, more akin to previous-gen consoles. Hence "ancient".
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
The bulk of the memory usage next gen is going to be tied to the GPU, and if these rumors are true and the console has 2GB of RAM, Sony has again effectively halved the practical memory you can use, because the GPU would certainly have to go through the APU's buses to reach the main system RAM pool. Ask any developer this gen how fun that is.

I think you are conflating issues. If the APU has 2GB for CPU and GPGPU tasks, then 1GB sounds like a good number for graphics only. The PS3 was a different beast: the main drawback was not the lack of VRAM, it was the lack of system RAM (~210MB) and the fact that the Cell was doing graphics-related work, thus having to shuffle data back and forth to the GPU memory. Ask any PC developer how fun it is to develop with 1GB of dedicated VRAM. Do they complain? Is 1GB enough? I'd argue it is plenty, with streaming and texture compression.

It's kind of an elegant solution if true. It solves the bandwidth/cost issue of having to use one giant GDDR memory pool (or eDRAM) but still exploits the strength of having a GPU next to your CPU with shared memory for GPGPU tasks.
 

Fafalada

Fafracer forever
onQ123 said:
I'm guessing that OpenGL will be hardware level & not software based
That doesn't even mean anything. The only thing "OpenGL is native" would imply is that's what you'll be stuck with: no direct-to-hardware, no low-level APIs. Which wouldn't really come off as too much of a surprise to me; it's just another nail in the coffin confirming we're in for an extremely boring console generation.

theBishop said:
My understanding is that LibGCM allows "close-to-the-metal" access to the RSX.
LibGCM was essentially a push-buffer API, which pretty much controls everything in the GPU rendering pipeline.
Whatever you implemented OpenGL with on a particular platform, at some level there would be a push-buffer layer, so saying "emulated with LibGCM" is sort of like saying DirectX is emulated with NVidia drivers on NVidia hardware.
 

StevieP

Banned
I think everyone is using old tech from now on. Nintendo just uses even older stuff and makes even more feature poor hardware more akin to previous gen consoles. Hence "ancient".

Using the word "features" to define how ancient something is is a slippery slope. You could make the argument that the DualShock controller, for instance, is older than the extremely old circa-2000 CPU/GPU in the Wii, and that therefore the PS3 is ancient. In the same breath, Move was a response to the cutting-edge (at the time) Wii Remote.

If you're talking strictly CPU-GPU, however, the CPU/GPU inside the Wii U will be newer than that of the Vita.

It's basically certain companies choosing where their R&D goes, more in one direction or another. As an example, Microsoft is certainly toying with tablet controllers for the next Xbox.
 
So, for people that understand this more.

So... OpenCL... as long as the hardware is using it, the code will work on ANY hardware right?

So... if people drop x86 ISA... going forward OpenCL will allow much wider varieties of CPU architectures? Theoretically something like... Cell and AMD's Fusion? Write code once and it'll work on both as long as they support OpenCL? This can be fantastic for consoles going forward... IF I'm understanding this correctly.
And according to Globox_82 they are using OpenGL, so OpenCL and OpenGL, and it's forward compatible and sideways compatible with handhelds like the Vita (barring minor differences between handheld and desktop OpenGL that are soon to disappear).
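Rough sketch of the "write once" idea: a minimal OpenCL host program in C. Nothing in it refers to a specific vendor or architecture; the kernel source is compiled at runtime by whatever driver the platform ships, so the same code path would cover an x86 APU, a Cell-like chip, or a discrete GPU, as long as a conformant OpenCL driver exists (error handling trimmed, names purely illustrative):

/* The same kernel source string runs on any conformant OpenCL device;
   the driver compiles it for whatever hardware is actually present. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    v[get_global_id(0)] *= k;"
    "}";

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL); /* CPU, GPU or APU, we don't care */

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);   /* compiled here, for this device */
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float data[256];
    for (int i = 0; i < 256; i++) data[i] = (float)i;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float factor = 2.0f;
    size_t n = 256;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);
    printf("data[2] = %f\n", data[2]); /* 4.0 on whichever device ran the kernel */
    return 0;
}

The catch is that "runs everywhere" doesn't mean "runs equally fast everywhere"; the kernel still has to be tuned per architecture (work-group sizes, memory access patterns), which is where the console-specific work would remain.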
 
Using the word "features" to define how ancient something is is a slippery slope. You could make the argument that the DualShock controller, for instance, is older than the extremely old circa-2000 CPU/GPU in the Wii, and that therefore the PS3 is ancient. In the same breath, Move was a response to the cutting-edge (at the time) Wii Remote.

If you're talking strictly CPU-GPU, however, the CPU/GPU inside the Wii U will be newer than that of the Vita.

It's basically certain companies choosing where their R&D goes, more in one direction or another. As an example, Microsoft is certainly toying with tablet controllers for the next Xbox.

Why are you comparing handheld with home console? You can't fit console components in a handheld...and the technology is completely different and evolves at a different pace.

My point is that Nintendo is behind the curve compared to its competition. There's no reason the Wii U could not be similar to the PS4/720. Ditto for 3DS vs Vita. Instead they decided to make a Wii-sized box using some old-ass tech from 2008, which they might "customize" to become even less capable than the current consoles in some respects.
 

Triple U

Banned
I think you are conflating issues. If the APU has 2GB for CPU and GPGPU tasks, then 1GB sounds like a good number for graphics only. The PS3 was a different beast: the main drawback was not the lack of VRAM, it was the lack of system RAM (~210MB) and the fact that the Cell was doing graphics-related work, thus having to shuffle data back and forth to the GPU memory. Ask any PC developer how fun it is to develop with 1GB of dedicated VRAM. Do they complain? Is 1GB enough? I'd argue it is plenty, with streaming and texture compression.

It's kind of an elegant solution if true. It solves the bandwidth/cost issue of having to use one giant GDDR memory pool (or eDRAM) but still exploits the strength of having a GPU next to your CPU with shared memory for GPGPU tasks.

How likely is it that you are gonna get 2GB solely for the APU on top of a GB for VRAM? I think you're shooting a bit too high considering the current state of RAM.

The majority of developers (multiplatform) didn't use Cell to offload GPU tasks for a while. And I have never heard of lack of system RAM being a major issue; do you mind providing some quotes saying as much, because I can find a lot complaining about VRAM. Also, I think you're confused on the offloading part: Cell doesn't send graphics data to the GPU's memory pool, it sends it to the GPU (which then might store it in memory), and that's no slower than the data it regularly sends there.


As for your point about 1GB being enough, we have developers asking for 8x that. I'm sure that 1GB of VRAM would be disappointing to most who work on that machine.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
How likely is it that you are gonna get 2GB solely for the APU on top of a GB for VRAM? I think you're shooting a bit too high considering the current state of RAM.

DDR5 is much cheaper than GDDR5. The alternative is 4GB GDDR5 or something like the 360 with eDRAM and DDR.

The majority of developers (multiplatform) didn't use Cell to offload GPU tasks for a while.

So? They still had system RAM constraints, even more so because the PS3 OS used to take much more memory in the early years.

And I have never heard of lack of system RAM being a major issue; do you mind providing some quotes saying as much, because I can find a lot complaining about VRAM.

See Bethesda and the Skyrim debacle. That had nothing to do with VRAM. With streaming and texture compression they had enough. They had enough to do things like triple buffering (UC2/3), which takes even more VRAM than normal. Early engines loaded the entire level into memory, so you often had downsized textures. Now they all stream mid-level.

Also I think you're confused on the offloading part, Cell doesn't send graphics data to the GPU's memory pool, it sends it to the GPU, and that's no slower than the data that it regularly sends there.

The Cell was used to do post-processing, typically done on a GPU, which means it had to eventually send the information back to the GPU memory to be used to render the frame buffer. I never mentioned memory bandwidth constraints.

As for your point about 1GB being enough, we have developers asking for 8x that. I'm sure that 1GB of VRAM would be disappointing to most who work on that machine.

You think developers can use 8GB of VRAM? PC developers rarely fully tap out a 1GB card now. A 4x increase over the PS3 is fine; frame buffer sizes are hardly increasing (maybe by a few tens of MB) and they can continue to use streaming and compression for assets.
 
Yeah, I wasn't too sure what they meant with all that stuff...
PSGL for PS3 uses OpenGL ES 1.0 with 2.0 features along with the mentioned LibGCM...

Maybe they'll keep LibGCM but include a new OpenGL instead of ES? o_O



SNES came out two years after the Genesis... so of course it'll be more powerful in some ways even though the Genesis was a beast in itself.

N64 came out two years after the PS1... so of course it'll be more powerful in some aspects there as well...

GC/Xbox came out a year and a half after the PS2... after major shader/GPU breakthroughs happened...

See where I'm going? Just because it was more powerful doesn't mean anything when you put it into the context of time...


Okay, so considering what you just said, explain to me how Nintendo's consoles are consistently ancient tech aside from the Wii... which is the main point I was arguing in the first place. Unless all the consoles released in previous gens were ancient tech from day 1. You're latching onto the one small comment I made about the PlayStation and ignoring the fact that Nintendo's consoles in previous gens were usually at or above the power of the consoles released in the same gen.
 
Okay, so considering what you just said, explain to me how Nintendo's consoles are consistently ancient tech aside from the Wii... which is the main point I was arguing in the first place. Unless all the consoles released in previous gens were ancient tech from day 1. You're latching onto the one small comment I made about the PlayStation and ignoring the fact that Nintendo's consoles in previous gens were usually at or above the power of the consoles released in the same gen.

I never said they were ancient, but they weren't the powerhouses you are trying to imply. ...you know I was going to type something but never mind. I really don't care for this discussion especially in this thread.
 

Triple U

Banned
AgentP said:
DDR5 is much cheaper than GDDR5. The alternative is 4GB GDDR5 or something like the 360 with eDRAM and DDR.
What does this have to do with anything? Also, 4GB of GDDR5 would cost Sony an arm and a leg and require an insane number of chips. It's highly unlikely.



AgentP said:
See Bethesda and the Skyrim debacle. That had nothing to do with VRAM. With streaming and texture compression they had enough. They had enough to do things like triple buffering (UC2/3), which takes even more VRAM than normal. Early engines loaded the entire level into memory, so you often had downsized textures. Now they all stream mid-level.

Skyrim was a very, very unique issue and is not really applicable to general design questions. Streaming/compression is not the fix-all you are making it out to be; they employ the same strategies on the 360, and it is twice as hard on PS3 because of the VRAM split and the BD read speed.


AgentP said:
The Cell was used to do post-processing, typically done on a GPU, which means it had to eventually send the information back to the GPU memory to be used to render the frame buffer. I never mentioned memory bandwidth constraints.

It doesn't go to the GPU's memory, it goes to the GPU (which then decides what to do with the data) via IOIF0. For post-processing it might go straight to the FB (through the GPU of course), but that's one specific instance and not the case all the time. A lot of times the data is further processed before it hits memory.

http://www.ps3devwiki.com/wiki/RSX



AgentP said:
You think developers can use 8GB of VRAM? PC developers rarely fully tap out 1GB card now. A 4x increase over the PS3 is fine, frame buffer sizes are hardly increasing (maybe by a few tens of MB) and they can continue to use streaming and compression for assets.
They rarely tap out 1GB because they have no real incentive to on PC. The majority of their customers won't reap the benefits, so it's pointless. By no coincidence, when the next consoles launch you will start seeing more and more impressive graphics on the PC.
 
I never said they were ancient, but they weren't the powerhouses you are trying to imply. ...you know I was going to type something but never mind. I really don't care for this discussion especially in this thread.

I never used or implied they were powerhouses either...but yeah, let's end this tangent to the thread
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
What does this have to do with anything? Also, 4GB of GDDR5 would cost Sony an arm and a leg and require an insane number of chips. It's highly unlikely.

So you are agreeing with me? It's hard to tell.



Skyrim was a very, very unique issue and is not really applicable to general design questions. Streaming/compression is not the fix-all you are making it out to be; they employ the same strategies on the 360, and it is twice as hard on PS3 because of the VRAM split and the BD read speed.

Well, I brought an example as evidence; assertions alone don't add any value. How do you quantify things like "twice as hard"? Difficulty isn't the issue. You either have enough memory for parity, or you have to make compromises. If you had some data, like all textures being half res on all MP PS3 games, then I would concede. But with games like UC2/3 with triple buffering, 720p 2xAA and stunning textures, you need some sort of counter-data.


It doesn't go to the GPU's memory, it goes to the GPU (which then decides what to do with the data) via IOIF0. For post-processing it might go straight to the FB (through the GPU of course), but that's one specific instance and not the case all the time. A lot of times the data is further processed before it hits memory.

http://www.ps3devwiki.com/wiki/RSX

Thanks for the link.

Because of the VERY slow Cell Read speed from the 256MB GDDR3 memory, it is more efficient for the Cell to work in XDR and then have the RSX pull data from XDR and write to GDDR3 for output to the HDMI display.

Isn't that what I was alluding to? This kind of thing would not be needed for the leaked design; the APU (CPU+GPU) would have its own 2GB pool, with no need to touch the VRAM, and vice versa. This is why the dual-GPU design is cool: one GPU for rendering and one for GPGPU.


They rarely tap out 1GB because they have no real incentive to on PC. The majority of their customers won't reap the benefits, so it's pointless. By no coincidence, when the next consoles launch you will start seeing more and more impressive graphics on the PC.

True, but 1GB of data purely for the GPU is a massive leap over what we have now. I'm not so sure PC devs are begging for 2-4GB of VRAM (let alone 8GB!). If a dev is making a game for Wii U, PS4, PC and 720, 1GB of VRAM is probably the right target. So even if the 720 has more memory, who is going to use it? It would not be cost effective unless the game was exclusive.
 

KageMaru

Member
Well, I brought an example as evidence; assertions alone don't add any value. How do you quantify things like "twice as hard"? Difficulty isn't the issue. You either have enough memory for parity, or you have to make compromises. If you had some data, like all textures being half res on all MP PS3 games, then I would concede. But with games like UC2/3 with triple buffering, 720p 2xAA and stunning textures, you need some sort of counter-data.

You can't really use an exclusive, a game that would have every aspect tailored around a system's limitations and weaknesses, as an example. It's easy to say that exclusive A doesn't have a problem with textures when there's no frame of reference to compare it to (who's to say how the textures would look if running on the 360, for example). Besides, UC games have plenty of repeated or lower-res textures covered up by detail maps.

True, but 1GB of data purely for the GPU is a massive leap over what we have now. I'm not so sure PC devs are begging for 2-4GB of VRAM (let alone 8GB!). If a dev is making a game for Wii U, PS4, PC and 720, 1GB of VRAM is probably the right target. So even if the 720 has more memory, who is going to use it? It would not be cost effective unless the game was exclusive.

I could easily see 1GB being short if both the PC and xbox have more memory. Developers will use the extra memory, make no mistake on that one.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
You can't really use an exclusive, a game that would have every aspect tailored around a system's limitations and weaknesses, as an example. It's easy to say that exclusive A doesn't have a problem with textures when there's no frame of reference to compare it to (who's to say how the textures would look if running on the 360, for example). Besides, UC games have plenty of repeated or lower-res textures covered up by detail maps.

Well, if it is a weakness, it is a weakness; it doesn't matter who the developer is, right? Your last sentence is pure nonsense, it is true of every game made on every platform. Do you know of a 360 game with 100% unique high-res textures?

I could easily see 1GB being short if both the PC and xbox have more memory. Developers will use the extra memory, make no mistake on that one.

Well they better start selling 2GB cards on the PC front soon, otherwise it will continue to be 1GB as it has been for years. And remember 2x memory does not allow for 2x texture resolution, that requires 4x memory.

The current Steam hardware survey is 43% 1GB, 20% 512MB, 10% 256MB, ~7% >1GB.
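To put rough numbers on the "2x resolution needs 4x memory" point, a back-of-the-envelope calculation (the formats and figures are my own illustration, not anyone's actual budget):

/* Doubling texture resolution in both dimensions quadruples the footprint;
   block compression (DXT/BC) claws a lot of that back. Illustrative only. */
#include <stdio.h>

static double tex_mb(double w, double h, double bytes_per_texel) {
    double base = w * h * bytes_per_texel;
    return base * (4.0 / 3.0) / (1024.0 * 1024.0); /* +~33% for the mip chain */
}

int main(void) {
    printf("1024x1024 RGBA8: %5.1f MB\n", tex_mb(1024, 1024, 4.0)); /*  ~5.3 MB */
    printf("2048x2048 RGBA8: %5.1f MB\n", tex_mb(2048, 2048, 4.0)); /* ~21.3 MB, 4x */
    printf("2048x2048 DXT1:  %5.1f MB\n", tex_mb(2048, 2048, 0.5)); /*  ~2.7 MB */
    return 0;
}

So a straight 2x bump in memory only buys roughly a 1.4x bump in texture resolution across the board.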
 

i-Lo

Member
See Bethesda and the Skyrim debacle. That had nothing to do with VRAM. With streaming and texture compression they had enough. They had enough to do things like triple buffering (UC2/3), which takes even more VRAM than normal. Early engines loaded the entire level into memory, so you often had downsized textures. Now they all stream mid-level.

I thought the Skyrim issue had more to do with them not being able to clear the memory cache that accounts for object placement and actions in the virtual world. That said, it could have been due to the divided memory pool :S


$600. Guess someone hasn't learned their lesson yet.

Show me where the price of the PS4 is $600. Posts like this are trolling. Pulling unverified numbers out of thin air to make baseless accusations with no redeeming quality should be kept to oneself.
 

KageMaru

Member
Well, if it is a weakness, it is a weakness; it doesn't matter who the developer is, right? Your last sentence is pure nonsense, it is true of every game made on every platform. Do you know of a 360 game with 100% unique high-res textures?

It may not matter who the developer is, but depending on how a game is designed, the memory limitations can be a bigger factor.

Also, where did I claim anything about the 360? I agree that what I said applies to every platform, but my point was that what you see as a high-res texture may actually not be that high quality a texture.

Basically you using UC2/3 with triple buffering, etc. doesn't really support your point well.

Well they better start selling 2GB cards on the PC front soon, otherwise it will continue to be 1GB as it has been for years. And remember 2x memory does not allow for 2x texture resolution, that requires 4x memory.

The current Steam hardware survey is 43% 1GB, 20% 512MB, 10% 256MB, ~7% >1GB.

I can see more cards using >1GB when density improves, especially after next gen systems launch.
 

i-Lo

Member
Let me preface by saying, I would love the next gen PS to have 4GB of RAM.

Now let's for one moment assume that Sony is going to limit the VRAM to 1GB GDDR5 for PS4. Let's also make a few other assumptions (based on what we've been hearing):

1. Most games will still run at 720p
2. Texture streaming will still be implemented
3. Tessellation will be used extensively (and improved in efficiency as the gen wears on)
4. 30 fps will still be the base
5. DX11/OpenGL 4.0 will be used with the bells and whistles that come with them
6. Some form of AA (FXAA or Temporal AA, MLAA etc) will be implemented in all games (I hope this one is true)

With these points, how limiting is 1GB of VRAM, given what we have seen achieved on PS3 so far with a quarter of what is being proposed for PS4?

PS: Does anyone know how much memory on average is dedicated for sound effects and soundtracks?
 
With these points, how limiting is 1GB of VRAM, given what we have seen achieved on PS3 so far with a quarter of what is being proposed for PS4?

I believe at 720p, 1GB of VRAM should be plenty. Like, more than adequate. Though for 1080p it's probably not enough to do everything you want. But the GPU might not really be powerful enough to target next-gen graphics at 1080p anyway, so in that case more than 2GB of total RAM would be mostly wasted and definitely beyond the point of diminishing returns.

Even with only 1GB of VRAM, GDDR5 on a 256-bit bus will seem better than only 4x the 256MB VRAM capacity of the PS3.

That said, 2 GB unified seems the most likely configuration to me. But I guess they could go 1GB VRAM and 1GB SYS RAM.

I don't think sound takes a lot of memory at all. Maybe 10-20MB tops.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
The change from 720p to 1080p makes a minimal difference to the amount of VRAM used, like 10-30MB depending on what you are doing. Texture sizes are not related to frame buffer sizes. You can play Quake 2 at 1080p; the Q2 textures don't change size.
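For anyone wondering where figures like that come from, a rough sketch of the render-target arithmetic (the formats here are assumptions for illustration; real engines add G-buffers, AA targets and so on):

/* Back-of-the-envelope framebuffer sizes: resolution affects the render
   targets, not the texture pool. RGBA8 colour + D24S8 depth assumed. */
#include <stdio.h>

static double fb_mb(int w, int h, int bytes_per_px, int buffers) {
    return (double)w * h * bytes_per_px * buffers / (1024.0 * 1024.0);
}

int main(void) {
    /* 4 bytes colour + 4 bytes depth/stencil = 8 bytes per pixel, double buffered */
    printf(" 720p: %5.1f MB\n", fb_mb(1280,  720, 8, 2)); /* ~14 MB */
    printf("1080p: %5.1f MB\n", fb_mb(1920, 1080, 8, 2)); /* ~32 MB */
    /* the 720p -> 1080p jump is on the order of 15-20 MB, not hundreds */
    return 0;
}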
 
I get what you're saying. I know it depends on the IQ, color depth and palette. Now I remember the 10MB of eDRAM on the X360 wasn't quite large enough to fit a 720p 2xAA image, hence the need for tiling on many titles; ergo they can fit a 1080p image into like 20-30MB of eDRAM. So simply going from a 720p to a 1080p render in a game doesn't use a lot of extra VRAM. It's more connected to how high-resolution the textures the game uses are, etc.

So what does a higher-resolution screen buffer use a lot more of? Bandwidth and computational power, right?

I guess they will balance it with whatever amount of VRAM is appropriate to the GPU. 1GB for whatever mid-range HD 7000 or HD 8000 GPU goes into the PS4 should be OK. I mean, if it's only as powerful as a GTX 560 Ti, it doesn't need 2GB of VRAM.
 

CLEEK

Member
I agree with you, just not that the PC has a small selection of games.

I didn't claim the PC has a small selection of games. But the idea that all gamers can ditch consoles for a PC and play the same games is patently wrong.

I don't know what 250 games you own, but if they are on the PS360 consoles, they have to be pretty niche to only have 25 or so on PC.

Of all my favourite games this gen, I can only think of a couple that eventually got PC ports. Bayonetta, Vanquish, Mario Galaxy 1 & 2, NSMB Wii, Super Street Fighter IV, Tactics Ogre, Halo 3 and Reach, GT5, Forza, Uncharted, Journey, Demons and Dark Souls amongst others all remain PC free. My last dozen or so purchases have been on PSN, Vita and 3DS, all obviously console exclusives. They're hardly 'niche' games.

The games I love the most tend to hail from Japan (and aren't niche oddities, but AAA games), yet rarely appear on PC. The majority of the Western developed games are console exclusives (Halos, Forzas, Uncharteds etc).

If I was to give up consoles now or next gen, I wouldn't be able to play the vast, vast majority of games I love. I'll hardly be unique in this. But yeah, if you do mostly play Western developed multi format PS360 games, you're golden and I have no idea why you wouldn't be a PC gamer already.
 
I think he knows that (and so do I). I don't think that's what he was saying, and it's definitely not what I was saying (that CPU was to be used for physics calculations).
Yup, only modern GPUs (some Nvidia since 2008; the Vita GPU specs in 2009 stated it would support OpenCL, etc.) can support a "compute language" with functions similar to CPUs. It does not require OpenCL... the idea of OpenCL is like OpenGL: you can program to the metal, but an upper-level "standard" makes it easier to port code to other platforms and GPUs. OpenGL and OpenCL make it easier to program, and efforts in OpenCL have brought it nearly equal in performance to low-level, to-the-metal programming like that provided in AMD's C++ compiler libraries.


This is a nice and seemingly well-informed article, but it neglects the fact that the Vita used fairly old hardware, just the high-end version of it. He's assuming Sony would use whatever latest 'budget' hardware is available at the time of the launch, but he neglects the fact that even such hardware is going to be more expensive than the years-old budget hardware they'd have according to these specs.
If Sony was buying off the shelf from AMD, possibly. We are assuming that AMD and Sony are working together and Sony is purchasing the IP from AMD to manufacture their own SoC. It may contain parts that are exact copies of an AMD APU but reworked for a console.

Further, a key point I bolded was that HSA designs are only more efficient if used properly. AMD released HSA as an OPEN STANDARD and is attending conferences pushing code that takes advantage of HSA efficiencies. It is in their interest that these new efficiencies be used by all, that programmers get used to using them. What better way than a game console, which can 100% use those efficiencies and must have them to fit in the power/heat envelope of a game console.

I'm guessing that AMD will want Sony to have the 2013-2014 APU and GPU designs that have all the latest HSA efficiencies so that programmers will start using and become familiar with them. Sony, for its part, will want to push developers to use open standards as much as possible so that forward compatibility will be easier from PS4 to PS5. This includes OpenGL, OpenCL, OpenMAX IL, OpenCV and more. The PS3 was too early to benefit from OpenCL or OpenGL but did use OpenMAX IL.
 

Fafalada

Fafracer forever
jeff_rigby said:
PS3 was too early to benefit from OpenCL or OpenGL but...
The last time OpenGL "benefited" anything was SGI workstations 20 years ago. It's a corpse of an API that should have been left for dead a decade ago, but then the mobile industry decided to play necromancy with.
As silly as that rumour about DX on PS4 was, I actually wish it were true if the alternative is a platform restricted to nothing but OpenGL (which indeed, all signs are pointing to).
 
The last time OpenGL "benefited" anything was SGI workstations 20 years ago. It's a corpse of an API that should have been left for dead a decade ago, but then the mobile industry decided to play necromancy with.
As silly as that rumour about DX on PS4 was, I actually wish it were true if the alternative is a platform restricted to nothing but OpenGL (which indeed, all signs are pointing to).
Things are changing (finally). Khronos is now working on new standards for OpenGL to fold in some OpenGL ES forks and to create an X Windows-lite version of OpenGL that would be more efficient for game consoles. We might then go back to one standard rather than having two, one for desktop and one for handhelds, a split that exists because of the overhead X Windows caused (20-year-old SGI workstation APIs still supported).

I'd guess that a new OpenGL standard will be released by Khronos at about the time the PS4 is launched. Sony, AMD, Nvidia and others will have a hand in the final specs.

There are some interesting developments, like Wayland:

In recent years, GNU/Linux desktop graphics has moved from having "a pile of rendering interfaces... all talking to the X server, which is at the center of the universe" towards putting the Linux kernel "in the middle", with "window systems like X and Wayland ... off in the corner". This will be "a much-simplified graphics system offering more flexibility and better performance".[31]

Høgsberg could have added an extension to X as many recent projects have done, but preferred to "[push] X out of the hotpath between clients and the hardware" for reasons explained in the project's FAQ:[32]

What's different now is that a lot of infrastructure has moved from the X server into the kernel (memory management, command scheduling, mode setting) or libraries (cairo, pixman, freetype, fontconfig, pango etc) and there is very little left that has to happen in a central server process. ... [An X server has] a tremendous amount of functionality that you must support to claim to speak the X protocol, yet nobody will ever use this. ... This includes code tables, glyph rasterization and caching, XLFDs (seriously, XLFDs!) Also, the entire core rendering API that lets you draw stippled lines, polygons, wide arcs and many more state-of-the-1980s style graphics primitives. For many things we've been able to keep the X.org server modern by adding extension such as XRandR, XRender and COMPOSITE ... With Wayland we can move the X server and all its legacy technology to an optional code path. Getting to a point where the X server is a compatibility option instead of the core rendering system will take a while, but we'll never get there if [we] don't plan for it.
That's the X server for X Windows, but the Linux/Unix OpenGL GPU driver software (in its present form) has to support it. "What's different now is that a lot of infrastructure has moved from the X server into the kernel (memory management, command scheduling, mode setting) or libraries (cairo, pixman, freetype, fontconfig, pango etc) and there is very little left that has to happen in a central server process."
 

Durante

Member
The last time OpenGL "benefited" anything was SGI workstations 20 years ago. It's a corpse of an API that should have been left for dead a decade ago, but then the mobile industry decided to play necromancy with.
As silly as that rumour about DX on PS4 was, I actually wish it were true if the alternative is a platform restricted to nothing but OpenGL (which indeed, all signs are pointing to).
You know, with bindless textures (or, hopefully, bindless everything) and all the stuff introduced in 4.2 it's not such a bad API. Just ignore all the legacy fluff. Less overhead than DirectX even, at least in some microbenchmarks on PC.
(http://timothylottes.blogspot.com/2012/03/dx11-vs-gl-driver-overhead.html)
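For anyone curious what "bindless" actually changes on the application side, a rough sketch using the bindless-texture extension entry points (GL_ARB_bindless_texture / GL_NV_bindless_texture; treat these calls as belonging to that extension rather than core 4.2, and names like some_loaded_texture and loc_diffuse as hypothetical):

/* Classic path: rebind a texture unit for every draw that needs a different texture.
   Bindless path: fetch a 64-bit handle once, make it resident, and pass it straight
   to the shader (as a uniform or packed into a UBO/SSBO), skipping per-draw binds. */

GLuint tex = some_loaded_texture;            /* hypothetical, created elsewhere */

/* classic */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);

/* bindless */
GLuint64 handle = glGetTextureHandleARB(tex);
glMakeTextureHandleResidentARB(handle);
glUniformHandleui64ARB(loc_diffuse, handle); /* loc_diffuse: hypothetical uniform location */

Per-draw bind/validate work is a big chunk of typical driver overhead, which is the sort of thing microbenchmarks like the linked one poke at.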
 
Interesting thread:

AMD Southern Islands (Next Core) is new Cell.

"The *x86* PlayStation 4..." is quite inaccurate since Southern Islands is architecture for itself - x86 is there only because compatibility with current Windows platform, real power does not come from x86 instructions. Sony could skip this x86 thing, and use bare-bone CoreNexts.

IBM/Toshiba/Sony took the right path 10 years ago with Cell: a path to "bend the curve" and overtake the x86 world by an order of magnitude in speed.

Since then, Intel has tried to make the Cell-like Larrabee, but they eventually failed.

On the other side, it looks like AMD will pull off the trick and take the risk.

Why is this important? Look at Aperture today: if CPU power only doubles every 1.5 years, it sucks! How do you make software 10x faster? Port it to the GPU. This will happen. And at that time, x86 will become "obsolete" or *less important*.

And AMD sees this: nobody needs a 10x faster CPU for everyday work in Microsoft Office, but we surely need a 10x faster CPU for applications like Aperture, FinalCut, Premier, 3D Max, CuBase, PhotoShop... AMD CoreNext will rock in the next "GPU"-driven generation of software, and the x86 side of CoreNext will be "fast enough" for "Office".


BTW, regarding "unoptimized software" and one more problem with x86: all x86 processors, starting with the Pentium, are optimized to run existing code faster. The Pentium 4 broke this rule and we all know what happened. Cell-like CPUs do not have this "problem".

A console based on AMD's CPU+GPU will be anything but "conventional." It may use off-the-shelf CPU and GPU cores, but the resulting platform (HSA) will enable programming styles and techniques currently unavailable, and achieve performance currently untouchable. It will also offer much more flexible programming, more compatible with PC games, than the PS3 (Cell BE). In fact, I'd say such a system is the only chance for a console maker to compete with the PC on gaming performance.

Nevertheless, the article's conclusion may be truer than its arguments. If Sony really decides to use an AMD CPU+GPU for the PS4, it would indeed signal a sea change, where the console maker shifts away from proprietary hardware to focus on specialized software. AMD's HSA offers great opportunities for application acceleration via multi-core FMA and XOP and the general programmable GPU. It would be exciting to all PC users, gamers or not, because it means there will finally be vendors (AMD and Sony) who will seriously work on general-purpose heterogeneous acceleration. The result will benefit the entire gaming and PC industries.

Virtual textures will receive hardware support with AMD 7XXX GPUs and be supported with an extension to OpenGL:

2.2.1   Texture Streaming is Becoming a Necessity

Texture mapping has been commonplace and highly efficient on consumer GPUs for over a decade. Many challenges have been solved by hardware support for mip-mapping, advanced texture filtering, border clamp/mirror rules and compressed texture formats. Modern real-time rendering engines are faced with another challenge: screen resolution and higher quality standards now require high resolution textures, and for draw call efficiency it's even advised to share one texture across multiple objects [NVIDIA04].

Some graphics hardware already supports texture resolutions up to 8K (8192), but that might not be enough for some applications, and, more of a problem, the memory requirements grow rapidly with texture size. Because the simulated world size is also expected to be much larger, it is no longer possible to keep all textures in graphics card memory (a typical limit is 512MB) and not even in main memory. Having more main memory doesn't help when being limited by the 32-bit address space (2GB on a typical 32-bit OS). A 64-bit OS allows using more main memory but most installed OS and
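A rough sketch of what the software side of such a virtual-texture scheme looks like, for context; every name, size and helper below is illustrative rather than taken from the quoted paper:

/* Software virtual texturing in a nutshell: a feedback pass reports which pages
   the frame touched, and the CPU streams just those tiles into a small physical
   cache texture, updating an indirection table the shader uses to remap UVs.
   All hooks below are placeholders for illustration. */
#include <stdint.h>
#include <stdbool.h>

#define PAGE_TEXELS 128                              /* tile size (assumption) */
#define PAGE_BYTES  (PAGE_TEXELS * PAGE_TEXELS / 2)  /* BC1 at 0.5 byte/texel = 8 KB */

typedef struct { uint16_t x, y, mip; } PageId;       /* coordinate in the huge virtual texture */

int  read_feedback_buffer(PageId *out, int max);     /* page IDs written by the GPU last frame */
bool page_resident(PageId id);
void load_page_from_disk(PageId id, void *dst);
void upload_to_cache(PageId id, const void *texels);
void update_indirection_table(PageId id);            /* virtual page -> cache location */

void stream_virtual_texture_pages(void) {
    PageId needed[1024];
    uint8_t staging[PAGE_BYTES];
    int n = read_feedback_buffer(needed, 1024);
    for (int i = 0; i < n; i++) {
        if (page_resident(needed[i]))
            continue;                                /* already in the physical cache */
        load_page_from_disk(needed[i], staging);     /* in practice: async + transcode */
        upload_to_cache(needed[i], staging);
        update_indirection_table(needed[i]);
    }
}

The partially resident texture extension discussed further down essentially moves the indirection step of a scheme like this into the GPU's own page tables, while the application still decides which pages to commit.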
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Framerate doesn't change buffer size. ;)

I know, but when something has to give to render two frames, sometimes they go from 60fps to 30fps and keep the same frame buffer size, thus 3D does not take more frame buffer memory.
 

gofreak

GAF's Bob Woodward
Rigbreezy can you break this down into layman's terms?

Just an echo of what others have been saying here - that GPU is the 'new' processing pack-horse. The CPU in next gen consoles will take a smaller role, be a facilitator and for the traditional branchy stuff that doesn't fit so well on a GPU. GPGPU is such now that the heavy (fp) computation is probably better put on a relatively big gpu than a cpu that will eat into your gpu budget. Last gen and before there was a case for that kind of CPU, but today, given the type of processor GPUs are now, GPU is 'the new Cell'.

And then just goes on to make the point that a closed box, an exemplary implementation of AMD HSA will yield software specialisation and experimentation that you might be less likely to get in another context like PC, but that will benefit other contexts too. If PS4 is AMD HSA and is reasonably powerful I've no doubt AMD will hold it up as an example of what to-the-metal coding can do on that kind of architecture.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Just an echo of what others have been saying here - that GPU is the 'new' processing pack-horse. The CPU in next gen consoles will take a smaller role, be a facilitator and for the traditional branchy stuff that doesn't fit so well on a GPU. GPGPU is such now that the heavy (fp) computation is probably better put on a relatively big gpu than a cpu that will eat into your gpu budget. Last gen and before there was a case for that kind of CPU, but today, given the type of processor GPUs are now, GPU is 'the new Cell'.

And then just goes on to make the point that a closed box, an exemplary implementation of AMD HSA will yield software specialisation and experimentation that you might be less likely to get in another context like PC, but that will benefit other contexts too. If PS4 is AMD HSA and is reasonably powerful I've no doubt AMD will hold it up as an example of what to-the-metal coding can do on that kind of architecture.

I'm getting excited!
 
Just an echo of what others have been saying here - that GPU is the 'new' processing pack-horse. The CPU in next gen consoles will take a smaller role, be a facilitator and for the traditional branchy stuff that doesn't fit so well on a GPU. GPGPU is such now that the heavy (fp) computation is probably better put on a relatively big gpu than a cpu that will eat into your gpu budget. Last gen and before there was a case for that kind of CPU, but today, given the type of processor GPUs are now, GPU is 'the new Cell'.

And then just goes on to make the point that a closed box, an exemplary implementation of AMD HSA will yield software specialisation and experimentation that you might be less likely to get in another context like PC, but that will benefit other contexts too. If PS4 is AMD HSA and is reasonably powerful I've no doubt AMD will hold it up as an example of what to-the-metal coding can do on that kind of architecture.
Said better than I could have, except that the PS4 will most likely be OpenGL and OpenCL with support for HSA, not to-the-metal; OpenGL with performance nearly equal to to-the-metal coding in spite of OpenGL and OpenCL being higher level. Edit: Given message 1095 below, gofreak might be more correct, and an OpenGL, GPU-bound model may not be as important.

The virtual textures quote explains why more memory is needed (higher resolutions and richer, truer-to-life, larger game worlds). That there will be hardware support for streaming virtual textures in the 7XXX AMD GPUs and up might make that route more attractive to Sony; I'd then guess that a PS4 GPU would be 7XXX series or later, which would in part support a smaller, cheaper memory footprint (the rumored 2 gigs).

Edit: StevieP was correct (below); I misremembered and have edited my posts from 7900 to 7XXX.

http://devgurus.amd.com/message/1275047 said:
Right, partially resident textures aren't part of DX11(.1), nor part of core OpenGL. They will be exposed as an OpenGL extension. The extension will be made available on Radeon HD 7xxx products in an upcoming driver release.
I can't really speak for the DX side of things (I'm the OpenGL guy), so I'm not sure what the plans are for PRT in DX or the timing for DX11.1.

Cheers,
Graham
Graham Sellers
Manager, OpenGL Driver Team, AMD
 

StevieP

Banned
I really don't think you're going to see the number 9 in reference to any of the next consoles' GPUs (i.e. what they are based on).
 
This may seem slightly off topic: Brimstone on B3D brought up a 2009 Epic presentation, "THE END OF THE GPU ROADMAP", about what Epic expects for the next generation, in response to a new rumor that the Xbox Durango would have 16 PPUs for a CPU.

The tie-in:

1) Epic's Tim Sweeney in 2009 expected that a 16-core Intel Larrabee CPU and/or an Nvidia GPGPU (CUDA is mentioned) would make it into a next-generation game console. "CONCLUSION: CPU, GPU architectures are getting closer."
2) Game engines take 5 years to develop, so starting in 2009, given the above, an engine would be finished in 2014.
3) Not in the article, but another developer is rumored to have had to change in midstream because Sony jumped from a 24-SPU CPU to the AMD APU, which has 4 x86 cores and a GPU with 400 CL-ready cores.

Tim Sweeney mentions dropping the inflexible OpenGL and DirectX APIs and moving to more CPU rather than GPU pipelines. He also mentions ray tracing for at least reflections and possibly some indirect lighting.

Memory speed is an issue... too much to list here, and I need to think more before I post. The 2009 Epic presentation does look like it nails some of the coming issues, but advances in OpenGL and DirectX might impact his more-CPU-than-GPU model, which is contrary to what we expected (less CPU use next generation). That expectation appears to be in error, since the rumored PS4 AMD APU could be VERY CPU-capable given an OpenCL GPU with 400 cores and 4 x86 processors, plus the new rumor about the next Xbox. It's possible that the GPU is not as important next generation.

My guess is we have two ways a game developer can go:

1) A traditional extension of last generation, with OpenGL and a more GPU-bound model, and
2) Limited ray tracing (CPU-bound) and more CPU use, resulting in less of the problem Tim Sweeney described: "all game engines work the same so the products all look the same as they are using the same APIs (OpenGL/DirectX)". This is what I got from the Epic presentation; he wanted to differentiate his games and engine, and the only way to do that is with the CPU (provided next generation has the CPU power, which rumors might support).

Model 1: the PS4 APU can be used for graphics in combination with the second GPU.
Model 2: the AMD APU is used 100% as a CPU, with the second GPU doing graphics only. In this model the APU is slightly more powerful than 24 SPUs (roughly 1 SPU = 13 GPU elements); this roughly assumes a new Cell 2 would get over scaling issues with memory and more. Also, OpenCL efficiencies for GPUs were nearly 100% while Cell was around 90%. Branch prediction would be nice to have if more CPU-bound, which SPUs and GPUs don't really support well; PPUs and x86 cores do support branching... this might be another reason for an AMD APU.


I guess it doesn't matter whether you use OpenGL (GPU) or OpenCL (CPU) for forward compatibility or cross-platform purposes, so either model above would work if using those open standards. I'd guess that model 1 games will predominate early on.


http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf said:
The Meta-Problem:
- The fixed-function pipeline is too fixed to solve its problems
Result:
- All games look similar
- Derive little benefit from Moore's Law
- Crysis on a high-end NVIDIA SLI solution only looks at most marginally better than top Xbox 360 games

This is a market BEGGING to be disrupted :)

- Bypass the OpenGL/DirectX API
- Implement a 100% software renderer
- Bypass all fixed-function pipeline hardware
- Generate image directly
- Build & traverse complex data structures
- Unlimited possibilities

Could implement this…
- On Intel CPU using C/C++
- On NVIDIA GPU using CUDA (no DirectX)

How close does the AMD APU come to supporting what Tim Sweeney believes future hardware needs? (My annotations, in brackets below, mark the features I'm sure the APU supports, with my limited knowledge.) StevieP, can you fill this in?

Future Hardware:
A unified architecture for computing and graphics

Hardware Model
Three performance dimensions:
- Clock rate
- Cores
- Vector width [256 bit]
Executes two kinds of code:
- Scalar code (like x86, PowerPC)
- Vector code (like GPU shaders or SSE/Altivec)
Some fixed-function hardware:
- Texture sampling [streaming texture sampling in hardware with 7XXX AMD GPUs]
- Rasterization?

Vector Instruction Issues
A future computing device needs…
- Full vector ISA
- Masking & scatter/gather memory access
- 64-bit integer ops & memory addressing
- Full scalar ISA
- Dynamic control-flow is essential
- Efficient support for scalar<->vector transitions
  - Initiating a vector computation
  - Reducing the results
  - Repacking vectors
  - Must support billions of transitions per second

Memory System Issues
Effective bandwidth demands will be huge: typically read 1 byte of memory per FLOP, so 4 TFLOPS of computing power demands 4 TB/s of effective memory bandwidth!

The REYES Rendering Model
 
So the rumored PS4 CPU, how many cores does that have? Just wondering. I'm assuming 4 if I'm reading correctly.

EDIT: Also, how would they get over the coding issues of making 13+ GPU elements equal 1 SPU?
 

KageMaru

Member
This may seem slightly off topic: Brimstone on B3D brought up a 2009 Epic presentation, "THE END OF THE GPU ROADMAP", about what Epic expects for the next generation, in response to a new rumor that the Xbox Durango would have 16 PPUs for a CPU.

The tie-in:

1) Epic's Tim Sweeney in 2009 expected that a 16-core Intel Larrabee CPU and/or an Nvidia GPGPU (CUDA is mentioned) would make it into a next-generation game console. "CONCLUSION: CPU, GPU architectures are getting closer."
2) Game engines take 5 years to develop, so starting in 2009, given the above, an engine would be finished in 2014.
3) Not in the article, but another developer is rumored to have had to change in midstream because Sony jumped from a 24-SPU CPU to the AMD APU, which has 4 x86 cores and a GPU with 400 CL-ready cores.

IIRC, Tim's talk here was about moving back to fully software rendering, much like how PC development was before 3dfx and Glide entered the scene. Unreal Engine 1 was Epic's last engine to fully support software rendering, IIRC. I don't remember if it was Tim or Mark (who I'm always hesitant to listen to), but someone at Epic stated they expect software rendering to make a big return when DX12 rolls around.

Assuming I'm correct, haven't had a chance to check out the links yet, I'm not sure this would apply to next gen very much.
 
So the rumored PS4 CPU, how many cores does that have? Just wondering. I'm assuming 4 if I'm reading correctly.

EDIT: Also, how would they get over the coding issues of making 13+ GPU elements equal 1 SPU?
OpenCL 1.2 can partition a CL-enabled GPU into as many sub-devices as needed.
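For reference, the OpenCL 1.2 mechanism being referred to is device fission via clCreateSubDevices; a minimal sketch (the partition count of 4 is arbitrary, and whether a given GPU driver actually supports partitioning is up to the implementation):

/* OpenCL 1.2 device fission: split one device into equally sized sub-devices,
   each driven by its own command queue. */
#include <CL/cl.h>

void partition_device(cl_device_id dev) {
    cl_device_partition_property props[] = { CL_DEVICE_PARTITION_EQUALLY, 4, 0 };
    cl_device_id sub[4];
    cl_uint num_sub = 0;

    if (clCreateSubDevices(dev, props, 4, sub, &num_sub) != CL_SUCCESS || num_sub == 0)
        return;  /* this partition mode isn't supported on this device/driver */

    /* a context spanning the sub-devices, then one queue per partition */
    cl_context ctx = clCreateContext(NULL, num_sub, sub, NULL, NULL, NULL);
    for (cl_uint i = 0; i < num_sub; i++) {
        cl_command_queue q = clCreateCommandQueue(ctx, sub[i], 0, NULL);
        (void)q;  /* enqueue kernels for this partition here */
    }
}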

I'm roughly giving an idea of the CPU power comparison between what was rumored to be Sony's CPU for the PS4 using SPUs and the rumored current choice, an AMD APU; they appear to be close if using just the 400 GPU cores (within 5 SPUs, and not counting the 4 x86 cores). Real performance numbers?

What throws us off is that the AMD APU can be used for graphics too, so trying to confirm that Sony and Microsoft are supplying massive CPUs for Sweeney's vision is hard.

KageMaru said:
IIRC, Tim's talk here was about moving back to fully software rendering, much like how PC development was before 3dfx and Glide entered the scene. Unreal Engine 1 was Epic's last engine to fully support software rendering, IIRC. I don't remember if it was Tim or Mark (who I'm always hesitant to listen to), but someone at Epic stated they expect software rendering to make a big return when DX12 rolls around.
He alternately mentions the next generation in 2012 and 2012-2020 for a second generation. He is talking about 2012 when he mentions Larrabee and Nvidia CUDA. He mentions OpenCL in passing. Bypassing DirectX and OpenGL entirely would not apply if waiting for DirectX 12.

It looks like he was fully aware of the projected hardware issues in 2009, but he couldn't predict how OpenGL and DirectX would evolve. In one section there was no mention of OpenCL but CUDA was mentioned, and there was no knowledge of AMD Fusion APUs or HSA, although he does mention "a unified architecture for computing and graphics", CPU-GPU combinations, as well as a common memory pool and cache coherence.

This could be interesting if true. It would allow us to confirm some of the rumors as probable. If the rumors are true (16 PPUs for the Durango CPU, and Sony's first RUMORED choice of 24 SPUs), is Sweeney's vision the answer? Is future hardware here now with AMD Fusion and HSA? Are IBM and AMD going to provide something similar for Durango (a common 80MB eDRAM cache, common memory pool and controller)?

http://www.joystiq.com/2011/09/28/epic-games-tim-sweeney-talks-unreal-engine-4-be-patient-until/ said:
Sept 28 2011 Sweeney said: "I spend about 60 percent of my time every day doing research work that's aimed at our next generation engine and the next generation of consoles," Sweeney told IGN, adding that this "technology that won't see the light of day until probably around 2014."

There are two primary technical challenges facing video games today, Sweeney said. The first, and most addressable, is the need to scale up "to tons of CPU cores." While UE3 can divide discrete processes across a handful of cores, "once you have 20 cores" it isn't that simple "because all these parameters change dynamically as different things come on screen and load as you shift from scene to scene." These advancements will help achieve "movie quality graphics" since that outcome has been limited primarily by horsepower. "We just haven't been able to do it because we don't have enough teraflops or petaflops of computer power to make it so," Sweeney said. Less likely to be conquered in the next 10 years: the "simulation of human aspects of the game experience," Sweeney explained. "We've seen very, very little progress in these areas over the past few decades so it leaves me very skeptical about our prospects for breakthroughs in the immediate future."
 
The last time OpenGL "benefited" anything was SGI workstations 20 years ago. It's a corpse of an API that should have been left for dead a decade ago, but then the mobile industry decided to play necromancy with.
As silly as that rumour about DX on PS4 was, I actually wish it were true if the alternative is a platform restricted to nothing but OpenGL (which indeed, all signs are pointing to).

OpenGL has been the API used for Maya, probably the most used 3D animation software across all fields that use that sort of thing, since day one. It's also used by XSI and some others. So while it may not be popular in PC gaming, it's far from a corpse of an API that hasn't benefited anything in 20 years.
 

Panajev2001a

GAF's Pleasant Genius
OpenGL has been the API used for Maya, probably the most used 3D animation software across all fields that use that sort of thing, since day one. It's also used by XSI and some others. So while it may not be popular in PC gaming, it's far from a corpse of an API that hasn't benefited anything in 20 years.

Those CAD vendors, together with IHVs trying to screw each other over and refusing to allow anything that might benefit their competitors by mistake, have helped the OpenGL API stagnate for YEARS while DirectX got better and better.
 