AMD reveals potent parallel processing breakthrough. PS4?

yep but sorry PC wont remain in the future :(

[Image: tumblr_m5vwfif1Pz1ryfjofo1_500.jpg]
 
yep but sorry PC wont remain in the future :(

The article basically talks about AMD's magic lamp that allows seamless sharing of resources between the GPU and CPU.

Which AMD is planning to release for PCs in 2H 2013. I don't see your point.

I have a feeling you think hUMA is a PlayStation-exclusive technology? It is far from it.
 
What does this mean in layman's terms?

Well, in Dragon Ball terms: it's how Goku and Vegeta fusing into Vegetto makes him the strongest being in the universe, even though each of them alone is weaker than a powered-up Gohan.

That is until SSJ4 Goku comes along.
 
Which AMD is planning to release for PCs in 2H 2013. I don't see your point.

I have a feeling you think hUMA is a PlayStation-exclusive technology? It is far from it.

This is not just for PC; this is for APUs, for both the PS4 and PC. I have a feeling you are reading this as APU-only, right?


Junior,

http://news.msn.com/science-technology/research-firm-pc-sales-fall-14-percent-in-first-quarter

Don't just post a random pic to seek attention.
 
This is not just for PC; this is for APUs, for both the PS4 and PC.

Which is what I am saying. I have no idea what this: "yep but sorry PC wont remain in the future :(" means, but it looks like you're saying the PC won't be relevant as it will be locked out of hUMA.


Lol. You're criticising him for not understanding your cryptic English? Your sentence makes no grammatical sense, and he doesn't know what you're trying to say. The pic seems appropriate.
 
Do we know if Intel Haswell or later will offer something similar? I'm guessing AMD is first to market with this.
 
The CPU and GPU are able to access a shared pool of memory instead of the memory having to be partitioned between the two. This means that if, in certain situations, the GPU needs more memory than the CPU, it can get it easily, and vice versa.

It's better than that. hUMA isn't just about apportioning resources dynamically. It means both the CPU and GPU can work on the same dataset at the same time. So if your AI is running on the CPU and your physics simulation is running on the GPU, both can simultaneously update the same set of data about where all the dynamic objects are in the game world. On PCs this has been the major block to GPU-accelerated physics having any gameplay effect: the GPU's calculations couldn't be efficiently fed back into the CPU simulation, because the two didn't share a memory space and the slow PCIe bus was a huge limiter.
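To make that concrete: hUMA itself is programmed through HSA tooling, but CUDA's (later) managed memory gives a rough feel for the model on today's hardware. A minimal sketch, with made-up names (Body, step_physics), and with the caveat that real hUMA promises genuinely concurrent, coherent access, while this stand-in has to synchronize before the CPU touches the data:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical game-world object shared by "physics" (GPU) and "AI" (CPU).
struct Body { float x, y, z, goal; };

// GPU physics: advance every body's position in the shared array.
__global__ void step_physics(Body* bodies, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) bodies[i].x += 0.016f;    // stand-in for a real integration step
}

int main() {
    const int n = 1024;
    Body* bodies;                         // ONE pointer, usable by both sides
    cudaMallocManaged(&bodies, n * sizeof(Body));
    for (int i = 0; i < n; ++i) bodies[i] = Body{0.f, 0.f, 0.f, 0.f};

    step_physics<<<(n + 255) / 256, 256>>>(bodies, n);
    cudaDeviceSynchronize();              // hUMA's coherence would make this finer-grained

    // CPU "AI" reads the physics results directly -- no copy-back step.
    for (int i = 0; i < n; ++i) bodies[i].goal = bodies[i].x * 2.0f;

    printf("body[0].x = %f\n", bodies[0].x);
    cudaFree(bodies);
    return 0;
}
```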


What I'm confused about is why this is news. Cerny's been talking about the PS4 having a unified memory architecture since it was announced.

It's news because AMD is promoting the tech in their new Steamroller-powered APUs for PC, which feature hUMA as one of their major advances. All the tech sites are just reporting on the new whitepapers about Kaveri.
 

This guy is either incredibly deluded or a Sony plant. Or both.

Let's place bets on how many PS4s North Korea buys to power their missiles.

The US government made a bulk order of PS3s a few years back, but it looks like North Korea is going to have a next-gen military!

Edit:

What's the funny part about that? A platform that is on its demise and soon to be replaced by tablets and smartphones. Where do you see PC games on NPD/PAL charts?

Okay, he's a Sony plant. Either that or this is some next-level trolling.
 
But AMD still can't compete in the PC market; they were destroyed by Intel. I don't see how this will turn that around, though.
 
What does this mean, though? Is it like having GDDR6 RAM? Would it feel like there's more than 8 GB of RAM?
 
But AMD still can't compete in the PC market; they were destroyed by Intel. I don't see how this will turn that around, though.

This won't change anything in terms of market share.

However, if you are a day trader, you can make some quick bucks tomorrow.
 
A lot of people here don't realize that unified memory doesn't by itself mean the CPU and GPU can access the same address space; that's the part hUMA adds. This is kind of a big deal, assuming there aren't any large downsides to this technology.
 
Lol if you think Titan has a chance; the PS4 will blow it out of the water.

Not sure if that's sarcasm, but hUMA is not going to magically make the PS4 as powerful as a 4.5 TFLOPS Titan, which also has higher memory bandwidth than the PS4. It is just going to make the PS4 a lot more efficient than going over PCIe.
 
Old news.

Sharing the memory space is not something new that AMD just came up with; it has been known about for ages.

Yeah, we know that; it's been on their roadmap for years, and on Intel's roadmap for years, etc. What's your point? It's news if their implementation works well and is commercially available.


If hUMA is as effective as NUMA was for Opterons, though, AMD is going to be much better off in the future.
 
Basically, what this does is allow for a change in workflow for getting data to and from the GPU.

Current tech: allocate a CPU memory block -> load data from disk into the CPU memory block -> allocate a GPU memory block -> use a specialized API (cudaMemcpy, for example) to copy from the CPU block to the GPU block. Likewise, once you want to examine the GPU's results on the CPU side, you need to copy them back.
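For reference, here's that workflow spelled out with the actual CUDA runtime API (process_kernel and the doubling it does are placeholders):

```cpp
#include <cstdlib>
#include <cstring>
#include <cuda_runtime.h>

__global__ void process_kernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;          // placeholder for real work
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float* host = (float*)malloc(bytes);  // 1. allocate CPU memory block
    memset(host, 0, bytes);               // 2. stand-in for loading data from disk

    float* dev;
    cudaMalloc(&dev, bytes);              // 3. allocate GPU memory block
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // 4. copy across PCIe

    process_kernel<<<(n + 255) / 256, 256>>>(dev, n);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // 5. copy the results back

    cudaFree(dev);
    free(host);
    return 0;
}
```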

The reason for this is that the GPU has its own set of high-bandwidth memory on board, so it doesn't have to go out over the PCIe bus during a frame/compute job. Granted, there are ways to pin CPU memory so that it's directly accessible from the GPU, but going out over the PCIe bus has a much higher latency cost and much lower bandwidth than the on-board memory.

But an APU is different. The CPU and GPU are on the same piece of silicon, and thus share the same bus(es?) to memory. On-board GPU RAM is not an issue because there is none; the GPU just uses the CPU's RAM pool.

Thus, the workflow for getting data to the GPU will probably just be: allocate memory -> map it for CPU read/write and GPU read/write -> load data from disk into memory, and you're done!
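A rough approximation of that allocate -> map -> load flow already exists today in the form of zero-copy pinned memory (the runtime calls below are real; scale_kernel is made up). On a discrete card every access still crosses PCIe, which is why it's rarely worth it there, but on an APU the same mapping is nearly free:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale_kernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 0.5f;
}

int main() {
    cudaSetDeviceFlags(cudaDeviceMapHost);     // enable mapped host memory

    const int n = 4096;
    float *host, *devView;

    // Allocate once, mapped for both CPU and GPU access.
    cudaHostAlloc((void**)&host, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) host[i] = (float)i;       // stand-in for "load from disk"

    cudaHostGetDevicePointer((void**)&devView, host, 0);  // GPU's view of the same pages
    scale_kernel<<<(n + 255) / 256, 256>>>(devView, n);
    cudaDeviceSynchronize();

    printf("%f\n", host[123]);                 // CPU reads the result, no memcpy
    cudaFreeHost(host);
    return 0;
}
```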

However, as the GPU will now be pulling from the CPU's RAM pool, I'm not sure how the latency/bandwidth will turn out. Having on-board GDDR5 memory on a discrete video card does have the benefit of accommodating the high-bandwidth, high-latency memory access patterns that games typically have, whereas CPU memory access patterns typically call for lower latency and consume less bandwidth.

Additionally, this means that this style of computing will only be available on APUs. Discrete video cards with on-board memory won't benefit from this.
 
But no...

Sony and Microsoft have no chance of making a system like that on their own these days. AMD and Nvidia are sitting on tens of thousands of man-years and patents, and are so far ahead of everybody else that it's not funny.

I remember some company tried to create a high-end GPU a few years ago (not Intel, a smaller company), and the thing was a disaster. The drivers alone for AMD/Nvidia must have who knows how many man-years invested in them.

The PS2 days of Sony/Toshiba being able to design major parts of a system are long gone; it's way too complex now. Hell, arguably the PS3 showed that: Sony basically only designed half the CPU, along with IBM, and none of the GPU. And the CPU was the bad part of the PS3, according to most.

It was difficult, not bad.

The Cell is what carried the PS3.
 
So, is this just for GPUs on CPU dies, like Intel's HD crap? Why even bother, unless they intend to eventually offer their flagship GPU cores in such solutions for gaming setups? Would those even be small enough? Not that I'd jump on that and lose the ability to upgrade the components separately and at will. My current PC should last me through the next gen well enough with just a GPU and cooling/OC upgrade somewhere down the line.

It seems like the PC has to live with this memory bottleneck as long as it remains the open, customizable platform it is, unless they find a way to share memory quickly through the motherboard. It's not like consoles are without bottlenecks, though (look at the PS4's CPU), so they aren't going to be as superior from utilizing this technology as some seem to believe. I suppose with many titles being made primarily for consoles, the game engines could end up unoptimized for PC hardware.

That already happens in some cases, though, like GTA IV running like shit on PC (if you want it to look good, unlike on consoles), or Skyrim launching without proper CPU optimizations, which cost it 40% performance (yet it still performed fine on decent PCs), until some modder fixed it and Bethesda adopted his work in a patch (rather than doing it properly with their own access to the engine, which would have yielded even further boosts).
 
Skyrim launching without proper CPU optimizations, which cost it 40% performance (yet it still performed fine on decent PCs), until some modder fixed it and Bethesda adopted his work in a patch (rather than doing it properly with their own access to the engine, which would have yielded even further boosts).

The problem was that the idiots at Bethesda did not set the compiler flags to use SSE2 and above (a beyond-simple thing to do), forcing the game to run with only scalar (x87) floating point support.
I doubt they just used the mod to fix it (that would have taken more work than just setting the flags).
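To illustrate how small that fix is (dot3 below is a made-up stand-in for the game's float-heavy inner loops; the flags themselves are real MSVC/GCC options):

```cpp
// A float-heavy hot path of the kind Skyrim spends its CPU time in.
float dot3(const float* a, const float* b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// MSVC of that era defaulted 32-bit builds to x87 scalar floating point:
//   cl /O2 /c game.cpp
// The one-flag fix, emitting scalar SSE2 instead of x87:
//   cl /O2 /arch:SSE2 /c game.cpp
// GCC equivalent:
//   g++ -O2 -msse2 -mfpmath=sse -c game.cpp
```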
 
The problem was that the idiots at Bethesda did not set the compiler flags to use SSE2 and above (a beyond-simple thing to do), forcing the game to run with only scalar (x87) floating point support.
I doubt they just used the mod to fix it (that would have taken more work than just setting the flags).
Well, I think they did use the mod, because the patch had the exact same performance boost and the exact same lighting bug on my PC. Lights appeared to dim a tiny bit as you first approached them within a certain distance, but didn't do it again if you went back and forth, so it's not a LOD issue. I know the mod first introduced that bug because I tested with and without it, and I actually held off on using the mod because I found it jarring, though in the end I settled for it. I guess it could still be a different implementation. But the modder did say he expected Bethesda could get much bigger performance boosts (100% was cited, vs. the mod's roughly 40%) if they did it right.

Anyway, it shows how little some devs care about the PC versions. This and other similarly easy-to-fix issues have probably shipped in tons of games that you'd think should perform far better on lower-end hardware; the devs just didn't care to optimize, and maybe even liked saying their game needs a monster PC, never mind that nothing justified it. We just never knew until a guy like that was able to investigate.
 
So, is this just for GPUs on CPU dies, like Intel's HD crap? Why even bother, unless they intend to eventually offer their flagship GPU cores in such solutions for gaming setups? Would those even be small enough? Not that I'd jump on that and lose the ability to upgrade the components separately and at will. My current PC should last me through the next gen well enough with just a GPU and cooling/OC upgrade somewhere down the line.

It seems like the PC has to live with this memory bottleneck as long as it remains the open, customizable platform it is, unless they find a way to share memory quickly through the motherboard. It's not like consoles are without bottlenecks, though (look at the PS4's CPU), so they aren't going to be as superior from utilizing this technology as some seem to believe. I suppose with many titles being made primarily for consoles, the game engines could end up unoptimized for PC hardware.

That already happens in some cases, though, like GTA IV running like shit on PC (if you want it to look good, unlike on consoles), or Skyrim launching without proper CPU optimizations, which cost it 40% performance (yet it still performed fine on decent PCs), until some modder fixed it and Bethesda adopted his work in a patch (rather than doing it properly with their own access to the engine, which would have yielded even further boosts).

My personal assumption is that, with the stagnation of CPU performance (you can still pop up in the PC thread with a four-year-old enthusiast Intel chip and be told there's no reason to upgrade), we will eventually reach a point, via a combination of stacking and diminishing single-core returns, where an APU with an x870/x60 Ti-equivalent GPU will be common. In fact, the next gen of Intel APUs is supposed to compete with a 650 Ti, and the PS4 is running a custom APU competitive with a 7870, correct? At that point, this tech should pay off quite well.
 
On the original Xbox and similar systems (which had unified memory), I believe you had to partition the memory so that the GPU accessed one part and the CPU accessed another; they couldn't both read/write the same range, which an HSA system can.
GPUs have had MMUs capable of reading pages as seen by the OS/app for generations now. I suppose HSA has some extra sauce; otherwise this is a non-story.
 
GPUs have had MMUs capable of reading pages as seen by the OS/app for generations now. I suppose HSA has some extra sauce; otherwise this is a non-story.

Well, that depends. Suppose the extra sauce is that they can access it without additional latency or bandwidth loss, and all it requires is a remapping within their own page tables. Would that not be significant enough?
 
Old news.

Sharing the memory space is not something new that AMD just came up with; it has been known about for ages.

It was never implemented. This year's APUs will be the first to fully use this architecture [together with the next-gen consoles].
 
Really hard to take AMD seriously anymore.

Before anyone gets hyped, just remember Bulldozer. Remember AMD's CPU history for the past 5 years or so.

They have been making mediocre CPUs and GPUs for the past 5 years; they need to get their stuff together.

Nvidia and Intel need real competition.
 
Really hard to take AMD seriously anymore.

Before anyone gets hyped, just remember Bulldozer. Remember AMD's CPU history for the past 5 years or so.

They have been making mediocre CPUs and GPUs for the past 5 years; they need to get their stuff together.

Nvidia and Intel need real competition.

The fact that some people actually believe this.
 