AMD reveals potent parallel processing breakthrough. PS4?

Simply put, what AMD's heterogeneous Uniform Memory Access (hUMA) does is allow central processing units (CPUs) and graphics processing units (GPUs) – which AMD places on a single die in their accelerated processing units (APUs) – to seamlessly share the same memory in a heterogeneous system architecture (HSA). And that's a very big deal, indeed.
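To make that concrete, here is a rough sketch of what pointer sharing looks like through OpenCL 2.0's shared virtual memory API, which is how AMD exposes hUMA-class hardware to programmers. This is only an illustration, not AMD's or Sony's actual code; the context, queue and kernel are assumed to already exist. The point is simply that the CPU and GPU work on the same allocation instead of copying buffers back and forth.

    #include <CL/cl.h>
    #include <stddef.h>

    /* Sketch only: on a hUMA/HSA APU, OpenCL 2.0 shared virtual memory lets
     * the CPU and GPU use one allocation. ctx, queue and kernel are assumed
     * to have been created elsewhere. */
    void run_on_shared_memory(cl_context ctx, cl_command_queue queue,
                              cl_kernel kernel, size_t n)
    {
        /* One allocation visible to both sides - no separate GPU copy. */
        float *data = (float *)clSVMAlloc(ctx,
                                          CL_MEM_READ_WRITE |
                                          CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                          n * sizeof(float), 0);

        for (size_t i = 0; i < n; i++)          /* CPU writes directly */
            data[i] = (float)i;

        /* Hand the same pointer to the GPU kernel - no staging copy. */
        clSetKernelArgSVMPointer(kernel, 0, data);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(queue);

        float first = data[0];   /* CPU reads the GPU's result in place */
        (void)first;

        clSVMFree(ctx, data);
    }

On hardware that only supports coarse-grained sharing, that fine-grain flag simply makes the allocation fail, which is a decent reminder that this kind of sharing is a hardware capability, not just a driver trick.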

In fact, hUMAfication already appears to be on the way – and not necessarily where you might have first expected. AMD is supplying a custom processor for Sony's upcoming PlayStation 4, and in an interview this week with Gamasutra, PS4 chief architect Mark Cerny said that the console would have a "supercharged PC architecture," and that "a lot of that comes from the use of the single unified pool of high-speed memory" available to both the CPU and GPU.

http://www.theregister.co.uk/2013/05/01/amd_huma/
 
Old news.

Sharing the memory space is not something new AMD just came up with; it has been known about for ages.
 
Sony has always been the industry leader when it comes to cutting-edge technology. Always forward thinking.

Sony probably did a lot of customization work, but AMD has been moving in this direction for a while.
 
What's the story here? I thought everyone already knew that the CPU & GPU in the PS4's APU were using unified memory?

[Image: AMD HSA roadmap]
 
If PS4 is the vanguard of this change then we can expect great things (not in terms of outright graphics) for years to come.

Btw, shouldn't the same be expected of xbawx fusion/neverends?
 
Didn't AMD at least have a hand in the design of GDDR5 too?

The JEDEC Solid State Technology Association, formerly known as the Joint Electron Devices Engineering Council (JEDEC), is an independent semiconductor engineering trade organization and standardization body.

Via wiki. So they could be members of JEDEC.

http://www.jedec.org/about-jedec/member-list

Yeyah, they're a member, as are Sony and Apple and pretty much everyone.
 
I thought everyone already knew that the CPU & GPU in the PS4's APU were using unified memory?
On the original Xbox and such (which had unified memory) I believe you had to portion up the memory so that the GPU accessed this part and the CPU accessed another part; they couldn't both read/write to the same range, which an HSA system can.
 
I thought we knew about the PS4 and HUMA a while ago?




Using high latency graphics RAM for non-graphics operations is Sony's design :D

Except measures are being taken so that the consequence of using a high latency design is not the same as it would have been on PC.

But oh well, I'll not stop y'all from havin' fun.
 
...but it's Sony's RAM.

But no...

Sony and Microsoft have no chance at making a system these days. AMD and Nvidia are sitting on tens of thousands of man years and patents and are so far ahead of everybody else it's not funny.

I remember some company tried to create a high-end GPU a few years ago (not Intel, a smaller company) and the thing was a disaster. Just the drivers alone for AMD/Nvidia surely have who knows how many man-years invested in them.

The PS2 days of Sony/Toshiba being able to design major parts of a system are long gone. It's way too complex now. Hell, arguably the PS3 showed that: Sony basically only designed half the CPU along with IBM, and none of the GPU. And the CPU was the bad part of the PS3 according to most.
 
Makes me think AMD will have a head start in graphics card development on PC for next gen.

I'm not nearly knowledgeable enough on the subject to do anything but speculate, but couldn't this mean trouble for Nvidia if the APU/system on a chip design gets powerful enough and the memory sharing capabilities of HSA turn out to be a huge advantage? Could a change in motherboard architecture allow for discrete video cards to share memory in the same way?
 
But no...

Sony and Microsoft have no chance at making a system these days. AMD and Nvidia are sitting on tens of thousands of man years and patents and are so far ahead of everybody else it's not funny.

I remember some company tried to create a high-end GPU a few years ago (not Intel, a smaller company) and the thing was a disaster. Just the drivers alone for AMD/Nvidia surely have who knows how many man-years invested in them.

The PS2 days of Sony/Toshiba being able to design major parts of a system are long gone. It's way too complex now. Hell, arguably the PS3 showed that: Sony basically only designed half the CPU along with IBM, and none of the GPU. And the CPU was the bad part of the PS3 according to most.


XGI.

Except measures are being taken so that the consequence of using a high latency design is not the same as it would have been on PC.

But oh well, I'll not stop y'all from havin' fun.

The only "measures" they have taken is just the natural fact that a lower clocked CPU will have to wait less clocks to get data from RAM than a system with higher clocks and the same RAM.
 
Except measures are being taken so that the consequence of using a high latency design is not the same as it would have been on PC.

But oh well, I'll not stop y'all from havin' fun.
Like what? Because as far as I know, the only "measures" they took could be summarized as "Oh well, guess we'll live with it".
 
What does this mean in layman's terms?


The CPU and GPU are able to access a shared pool of memory instead of the memory having to be partitioned between the two. Which means that if the GPU needs more memory than the CPU in a given situation, that can be accommodated easily, and vice versa.
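A toy illustration of that flexibility (plain C arithmetic, nothing to do with any actual PS4 API): with a hard split each side is stuck with its own slice, while with one shared pool the balance between CPU and GPU can shift as long as the total fits.

    #include <stdio.h>

    #define POOL_MB 8192              /* pretend 8 GB of unified memory, in MB */

    int main(void)
    {
        /* Old-style split: the pool is carved up once, up front. */
        int gpu_slice = POOL_MB / 2;
        int gpu_wants = 6000;         /* a texture-heavy scene, say */
        printf("split model:  GPU wants %d MB of its %d MB slice -> %s\n",
               gpu_wants, gpu_slice, gpu_wants <= gpu_slice ? "ok" : "fails");

        /* Shared pool: CPU and GPU draw from the same budget, so the split
         * can change from scene to scene as long as the total fits. */
        int cpu_wants = 1500;
        printf("shared model: CPU %d MB + GPU %d MB of %d MB total -> %s\n",
               cpu_wants, gpu_wants, POOL_MB,
               cpu_wants + gpu_wants <= POOL_MB ? "ok" : "fails");
        return 0;
    }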


What I'm confused about is why this is news. Cerny's been talking about the PS4 having unified memory architecture since it was announced.

Makes me think AMD will have a head start in graphics card development on PC for next gen.


No, the whole point of this is that it's for APUs, where the GPU and CPU are on the same die and, in the case of the PS4, have access to the same pool of GDDR5 memory. I don't see how this can affect discrete graphics cards; those will always need their own dedicated supply of GDDR memory. System memory in computers is still DDR3.
 
Except measures are being taken so that the consequence of using a high latency design is not the same as it would have been on PC.

But oh well, I'll not stop y'all from havin' fun.

Well? Don't be selfish; share whatever inside information you seem to have. As far as anyone else is aware, there are no measures being taken.
 
I don't think it would mean much unless high-end solutions start using UMA in the future. The Kaveri APU is still part of a lower-end solution. It won't mean anything to people building high-end gaming PCs, at least in the next couple of years.
 
It'll be the most efficient 30fps possible.
This got me good.

In today's CPU-GPU computing schemes, when a CPU senses that a process upon which it is working might benefit from a GPU's muscle, it has to copy the relevant data from its own reservoir of memory into the GPU's – and when the GPU is finished with its tasks, the results need to be copied back into the CPU's memory stash before the CPU can complete its work.
Ah ok, that makes it much easier to understand. It's something neat and will play more of a role later on when APUs are super efficient on smaller processes. Another step closer to everyone with a modern PC being able to play 720p games.
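For contrast with the shared-memory sketch earlier in the thread, this is roughly what the copy round-trip described in that quoted paragraph looks like with a discrete GPU through OpenCL. Again only a sketch, under the assumption that ctx, queue and kernel already exist:

    #include <CL/cl.h>
    #include <stddef.h>

    /* Sketch of the traditional scheme the quote describes: data is copied
     * into the GPU's memory, the kernel runs, and the result is copied back. */
    void run_with_copies(cl_context ctx, cl_command_queue queue,
                         cl_kernel kernel, float *host_data, size_t n)
    {
        cl_int err;
        size_t bytes = n * sizeof(float);

        /* A buffer in the GPU's own memory pool. */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);

        /* Copy host -> GPU before the GPU can touch the data. */
        clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, bytes, host_data,
                             0, NULL, NULL);

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

        /* Copy GPU -> host before the CPU can carry on with the results. */
        clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, bytes, host_data,
                            0, NULL, NULL);

        clReleaseMemObject(buf);
    }

Every one of those copies costs time and memory bandwidth, which is exactly the overhead hUMA is meant to remove.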
 