AMD reveals potent parallel processing breakthrough. PS4?

Really hard to take AMD seriously anymore.

Before anyone gets hyped, just remember Bulldozer. Remember AMD's CPU history for the past 5 years or so.

They have been making mediocre CPUs and GPUs for the past 5 years; they need to get their stuff together.

Nvidia and Intel need real competition.

Their GPUs aren't mediocre. Nvidia seems to have the upper hand, but that does not make AMD's cards mediocre.

They're in the same performance bracket.
 
Really hard to take AMD seriously anymore.

Before anyone gets hyped, just remember Bulldozer. Remember AMD's CPU history for the past 5 years or so.

They have been making mediocre CPUs and GPUs for the past 5 years; they need to get their stuff together.

Nvidia and Intel need real competition.

It's easy to tell when someone hasn't used AMD hardware in a long time.
 
Really hard to take AMD seriously anymore.

Their best CPU is an 8-core that costs less than an i5.
The i5 is better now because games prefer single-core performance, but in multicore applications this 8-core is a beast.
For years they have been making competitive hardware at much lower prices.

Their HSA APUs may be their big win, and featuring strong GPUs (like a cut-down version of the rumored PS4 APU) may get them a lot of money in the notebook market.
 
What's the story here? I thought everyone already knew that the CPU and GPU in the PS4's APU use unified memory?

[Image: amd_hsa_roadmap.jpg]
Both of us were seeing that 2014 features were needed, but the PS4 was going to be released in 2013.

The changes to the GPU and the Onion+ bus might be what is described in the following, and could give us 2014 designs:

[Image: CPU-GPU-640x347.jpg]


Really good article found by statham: http://www.extremetech.com/gaming/1...u-memory-should-appear-in-kaveri-xbox-720-ps4

[Image: amd-cpu-apu-huma-640x353.jpg]


The 2014 Pennar and Samara Jaguar APU designs were ported from TSMC to GlobalFoundries. They have the GNB and are more advanced than Kabini, which is a 2013 Jaguar APU and does not have the hUMA that is in the 2013 Kaveri (Steamroller, 28nm on HP silicon). Both Sony and Microsoft moved from Steamroller to Jaguar either because of yield issues at GlobalFoundries that delayed Kaveri, or, as Sweetvar26 said, for the 10-year life and refreshes, since everything is going mobile multi-processor rather than raw performance. This was shown by H. Goto of PC Watch: no plans for a performance CPU after Steamroller (which might have been premature).

The article above mentions that hUMA is likely in the PS4 according to Cerny's interview (the Onion+ bus?), along with the other 2014 features mentioned by AMD. I think that's likely. Why does the 2013 Kaveri Steamroller design have 2014 features? Because Kaveri was supposed to be the base design for the game consoles until they moved to Jaguar.
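For anyone wondering what "unified memory" buys in practice, here is a minimal sketch using CUDA's managed-memory API, which expresses the same idea the article attributes to hUMA: one pointer valid on both the CPU and the GPU, with no explicit staging copies. This is an NVIDIA-side illustration of the concept, not AMD's actual API or the PS4's.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: the GPU increments data the CPU wrote in place.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;
    int *data;
    // One allocation visible to both CPU and GPU -- the unified-memory
    // model hUMA describes. Without it you would need a separate device
    // buffer plus two cudaMemcpy calls per round trip.
    cudaMallocManaged((void**)&data, n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;   // CPU writes directly
    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();                   // then CPU reads the same pointer
    printf("data[0] = %d\n", data[0]);         // prints 1
    cudaFree(data);
    return 0;
}
```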
 
Hell, arguably the PS3 showed that: Sony basically only designed half the CPU (along with IBM) and none of the GPU, and the CPU was the bad part of the PS3 according to most.

Haha, what? No. The GPU is the shitty part of the PS3 (along with the split memory pool). The Cell is complex to work with, yes, but it's a very powerful CPU if you know what you're doing. It's the reason games like Uncharted 2-3, Killzone 2-3 and GoW 3 are possible on the machine. Was it a great decision for a gaming console CPU? Probably not. But it most definitely isn't a bad CPU.
 
Sure, if you say so... Hyped up much?

Fine, explain to me why adding coherency between the CPU and GPU memory spaces, at the cost of increased latency, is a good idea. Explain to me why you need a 'strong CPU' to take advantage of the better coherency and snooping.
 
Theoretically, would a PC based on an APU architecture be upgradable? i.e., plug-and-play graphics cards?

I don't doubt that the architecture could have serious performance potential and efficiency, but even if it does and proves itself to be "the architecture of the future", how likely is it that PC manufacturers would switch to APU-based designs in the future?
 
IMO this is an intelligent roadmap for AMD, since Intel basically trounced them in the CPU space but they've persevered in the GPU space (while this is debatable on several counts, cost/performance-wise AMD has kept up with NVIDIA).

If AMD does push this architecture it will be interesting to see if Intel adapts and if the performance increase is worth it (we know that on-die memory controllers were a good move when AMD did it, and it took Intel some time to adapt). Yeah, AMD did have architectural hiccups in that time (more related to technology, yields, and being behind the curve manufacturing-wise), so I'm not arguing that they made all the right decisions, just postulating that this is their chance to change PC architecture in a pretty significant way going forward.

Note: I am kind of an AMD fanboy, but mainly because for the longest time they had the best cost/performance ratio from what I saw (though sacrificing efficiency for it). On-die memory controllers, the x86-64 instruction set being adopted over IA-64, APUs being adopted for the next major console releases; I can dream, right? So just take what I say with a grain of salt. Just musing.
 
Really hard to take AMD seriously anymore.

Before anyone gets hyped, just remember Bulldozer. Remember AMD's CPU history for the past 5 years or so.

They have been making mediocre CPUs and GPUs for the past 5 years; they need to get their stuff together.

Nvidia and Intel need real competition.

Lol, you must be into salt distribution.
I have been installing AMD since the HD 3000 series. My friends and I are super happy with the price/performance we get from their cards.
Nvidia has been living off the 8800 GTX's success for way too long.
 
Lol, you must be into salt distribution.
I have been installing AMD since the HD 3000 series. My friends and I are super happy with the price/performance we get from their cards.
Nvidia has been living off the 8800 GTX's success for way too long.

I've been really happy with my 7850, outside of one driver issue that caused problems with shadows (which has been fixed for some time now). It runs at low temperatures with a really high overclock and is able to run most current games close to max. While Nvidia has the edge when it comes to high-end cards, AMD offers far superior low-to-mid-range GPUs.
 
This might finally bring GPUs to bear on CPU tasks on PCs... until now, memory has been a big problem: anything too big was useless to work on due to latency issues, so in reality you couldn't use GPUs for much.
 
This might finally bring GPUs to bear on CPU tasks on PCs... until now, memory has been a big problem: anything too big was useless to work on due to latency issues, so in reality you couldn't use GPUs for much.

This reminds me of AMD's issues with OpenCL and the Blender Cycles rendering engine. It just destroys your computer's memory when you hack it to get it to work. The Blender devs can't easily fix it because it would require undesirable modifications to the code (having separate CUDA and OpenCL branches). AMD has discussed ways to fix the OpenCL compiler, so the issue may be resolved before this new architecture is ever developed (if it ever gets big, I should say; could be pipe dreaming). And of course, with large scenes even CUDA suffers from this problem. In that event, having full access to main memory, which should be 16 GB standard for high-performance rigs in a few more years (according to Steam, 4-8 GB is standard now), will be huge, just wonderful for graphics performance. And it could indicate why AMD isn't terribly concerned about chopping OpenCL's compiler up to fit the GPU-RAM / CPU-RAM model (which, as Blender shows, NVIDIA / CUDA is better at doing).

Note: grain-of-salt rambling here; corrections to my observations are welcome.
 
This reminds me of AMD's issues with OpenCL and the Blender Cycles rendering engine. It just destroys your computer's memory when you hack it to get it to work. The Blender devs can't easily fix it because it would require undesirable modifications to the code (having separate CUDA and OpenCL branches). AMD has discussed ways to fix the OpenCL compiler, so the issue may be resolved before this new architecture is ever developed (if it ever gets big, I should say; could be pipe dreaming). And of course, with large scenes even CUDA suffers from this problem. In that event, having full access to main memory, which should be 16 GB standard for high-performance rigs in a few more years (according to Steam, 4-8 GB is standard now), will be huge, just wonderful for graphics performance. And it could indicate why AMD isn't terribly concerned about chopping OpenCL's compiler up to fit the GPU-RAM / CPU-RAM model (which, as Blender shows, NVIDIA / CUDA is better at doing).

Note: grain-of-salt rambling here; corrections to my observations are welcome.

I am not familiar with that specific problem, but everything you see CUDA or OpenCL being good at is actually tasks that require very small amounts of memory, like 32 KB... for instance encryption or hash mining; it is all tiny... and where you could use it for something bigger than that, latency is an issue and you end up being slower on the GPU because of all the waiting for memory.
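That transfer-bound pattern is easy to see for yourself. Below is a minimal CUDA sketch that times the PCIe copies separately from the kernel; for a memory-bound job like this, the uploads and downloads typically dwarf the compute, which is exactly the overhead a unified-memory APU would remove. (The buffer size and kernel are illustrative, not from any benchmark in this thread.)

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Memory-bound kernel: one multiply per element, so the runtime is
// dominated by how fast bytes move, not by arithmetic.
__global__ void scale(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1 << 24;                         // 64 MB of floats
    const size_t bytes = n * sizeof(float);
    float *h = (float*)malloc(bytes), *d;
    for (int i = 0; i < n; ++i) h[i] = 1.0f;
    cudaMalloc((void**)&d, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // across PCIe
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(d, n);
    cudaEventRecord(t2);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // back across PCIe
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float up, run, down;
    cudaEventElapsedTime(&up, t0, t1);
    cudaEventElapsedTime(&run, t1, t2);
    cudaEventElapsedTime(&down, t2, t3);
    printf("upload %.2f ms, kernel %.2f ms, download %.2f ms\n", up, run, down);
    // On typical PCIe systems the two copies take far longer than the
    // kernel, so offloading this particular job is a net loss.
    cudaFree(d); free(h);
    return 0;
}
```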
 
Lol, you must be into salt distribution.
I have been installing AMD since the HD 3000 series. My friends and I are super happy with the price/performance we get from their cards.
Nvidia has been living off the 8800 GTX's success for way too long.

What a terrible post.

AMD is years behind in both the CPU and GPU markets.

If the 7970 wasn't such a weak piece of shit, then Nvidia wouldn't have been able to release GK104 as the GTX 680 and rebrand GK110 as the Titan and GTX 780 with a year's delay.
There is zero competition in both the CPU and GPU markets, so prices are high and progress is stagnant.
 
Ooh, this is only for APUs, right?

Because as far as I know, current desktop data has to go over the PCIe bus,
and with an APU you don't have a PCIe bus to get data to and from the GPU part of the APU.
I thought new AMD motherboards came with a special high-speed bus to pump data to the GPU and get data back from it.
 
I am not familiar with that specific problem, but everything you see CUDA or OpenCL being good at is actually tasks that require very small amounts of memory, like 32 KB... for instance encryption or hash mining; it is all tiny... and where you could use it for something bigger than that, latency is an issue and you end up being slower on the GPU because of all the waiting for memory.

This is Blender Cycles: http://www.youtube.com/watch?v=8bDaRXvXG0E

Basically it is a realistic renderer that can run on GPUs. And as you say, it's limited in what it can do (and for very large scenes even a very powerful GPU is RAM-constrained and won't beat the CPU).

My explanation was a bit mangled; I was trying to make a connection between OpenCL and a unified architecture like the one the OP is about, and suggesting that OpenCL's poor showing in Blender Cycles might indicate AMD's software priorities are not the CPU/GPU model but rather an APU/GPU model.
 
Their best CPU is an 8-core that costs less than an i5.
The i5 is better now because games prefer single-core performance, but in multicore applications this 8-core is a beast.
For years they have been making competitive hardware at much lower prices.

Their HSA APUs may be their big win, and featuring strong GPUs (like a cut-down version of the rumored PS4 APU) may get them a lot of money in the notebook market.

No, the 8-core AMD CPUs are anything but beasts; even in very heavily multi-threaded loads (and with a clock speed advantage), 4-core i7s beat them most of the time.

Theoretically, would a PC based on an APU architecture be upgradable? i.e., plug-and-play graphics cards?

I don't doubt that the architecture could have serious performance potential and efficiency, but even if it does and proves itself to be "the architecture of the future", how likely is it that PC manufacturers would switch to APU-based designs in the future?

Simple: have the APU's CPU and GPU doing their thing, then use a discrete graphics card to do the rendering.

If the 7970 wasn't such a weak piece of shit, then Nvidia wouldn't have been able to release GK104 as the GTX 680 and rebrand GK110 as the Titan and GTX 780 with a year's delay.


I really would not say that; the Kepler microarchitecture just punches way above its weight.
 
No, the 8-core AMD CPUs are anything but beasts; even in very heavily multi-threaded loads (and with a clock speed advantage), 4-core i7s beat them most of the time.




I really would not say that; the Kepler microarchitecture just punches way above its weight.

1. With 30-50 watts of TDP there is no better CPU.

2. Kepler is the best at rendering but the worst at compute (against GCN and even against Fermi).
 
1. With 30-50 watts of TDP there is no better CPU.

2. Kepler is the best at rendering but the worst at compute (against GCN and even against Fermi).

Lololololol, he was talking about Piledriver and Bulldozer; they wish they could do that at 30 to 50 watts (for them it is nearly 10x that!).

It does not really matter if Kepler has taken a bit of a hit on compute; it is still so fast that Nvidia sold the much smaller and cheaper chip as the high-end model (at least before Titan).
 
What does this mean in layman's terms?

That GPUs embedded into CPUs will be able to do a bunch of things discrete GPUs cannot do efficiently, due to the massive bottleneck of having to copy data around over the PCIe bus.

With direct access to the APU's memory, it's possible to increase the performance of laptop/embedded GPUs by completely skipping data uploads (textures, geometry, shaders, constants, etc.) and writing commands directly to RAM.

For those with discrete GPUs, the APU can be used for physics, AI and other stuff while the GPU handles rendering.
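The "skip the upload" idea can be approximated even today on discrete cards with pinned, mapped host memory. Here is a hedged CUDA sketch of zero-copy mapped memory, a real but slower analogue of what an APU gets from shared physical RAM (every access still crosses PCIe, so it only illustrates the programming model):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Demo kernel: a single thread sums the host buffer it reads in place.
__global__ void sum(const float *x, int n, float *out) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += x[i];
    *out = s;
}

int main() {
    cudaSetDeviceFlags(cudaDeviceMapHost);   // allow mapped host memory
    const int n = 1024;
    float *h_x, *h_out, *d_x, *d_out;
    // Pinned, mapped host memory: the GPU reads system RAM in place
    // instead of staging a copy -- a discrete-card analogue of what
    // an APU gets for free from shared physical memory.
    cudaHostAlloc((void**)&h_x, n * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void**)&h_out, sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void**)&d_x, h_x, 0);
    cudaHostGetDevicePointer((void**)&d_out, h_out, 0);
    for (int i = 0; i < n; ++i) h_x[i] = 1.0f;   // CPU fills the buffer
    sum<<<1, 1>>>(d_x, n, d_out);                // GPU reads it directly
    cudaDeviceSynchronize();
    printf("sum = %.0f\n", *h_out);              // prints 1024
    cudaFreeHost(h_x); cudaFreeHost(h_out);
    return 0;
}
```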
 
Lololololol, he was talking about Piledriver and Bulldozer; they wish they could do that at 30 to 50 watts (for them it is nearly 10x that!).

It does not really matter if Kepler has taken a bit of a hit on compute; it is still so fast that Nvidia sold the much smaller and cheaper chip as the high-end model (at least before Titan).

Well, your lolololol... doesn't change the fact that what I said is true. And for future games Nvidia will have to make something with compute power, as Kepler is shit at compute (not just a little worse).
 
Well, your lol doesn't change the fact that what I said is true. And for future games Nvidia will have to make something with compute power, as Kepler is shit at compute (not just a little worse).

No, it is not!

Piledriver and Bulldozer are slower than SB/IB i7s most of the time, even in extremely multi-threaded workloads, and they also use a fuck ton of power (e.g. when overclocked to 4.8 GHz, an eight-core Bulldozer system pulled 586 watts running Prime95, vs. a 2600K system at 5 GHz pulling only 313 watts!).

Get over it.
 
This is not for the PS4. You need a powerful CPU for this, hence PC.

I don't understand your reasoning. What is described in the article is exactly what is happening in the PS4's system configuration. And the PS4, AFAIK, is still a console (and 8 Jaguar cores at 1.8-2 GHz is the best CPU in performance/TDP on the market).
 
No, it is not!

Piledriver and Bulldozer are slower than SB/IB i7s most of the time, even in extremely multi-threaded workloads, and they also use a fuck ton of power (e.g. when overclocked to 4.8 GHz, an eight-core Bulldozer system pulled 586 watts running Prime95, vs. a 2600K system at 5 GHz pulling only 313 watts!).

Get over it.

Well, Bulldozer was shit and Piledriver a little less shit, but still, I was talking about Jaguar, of course. I supposed you knew that from your lols.
 
I'm starting to think their ultimate goal is to reach a point where the embedded GPU takes over the role of the CPU's SIMD units (which exist specifically to do parallel processing).
 
What a terrible post.

AMD is years behind in both the CPU and GPU markets.

If the 7970 wasn't such a weak piece of shit, then Nvidia wouldn't have been able to release GK104 as the GTX 680 and rebrand GK110 as the Titan and GTX 780 with a year's delay.
There is zero competition in both the CPU and GPU markets, so prices are high and progress is stagnant.

Are you living in a world where the 7970 doesn't beat the 680 overall in games and doesn't utterly destroy it when it comes to compute?

I'm starting to think their ultimate goal is to reach a point where the embedded GPU takes over the role of the CPU's SIMD units (which exist specifically to do parallel processing).

Pretty much when you look at the HSA roadmap.
 
I'm starting to think their ultimate goal is to reach a point where the embedded GPU takes over the role of the CPU's SIMD units (which exist specifically to do parallel processing).

Yes, I think the rumored Durango APU with its 32 MB of ESRAM would be a killer CPU on its own (and more so if it had the PS4's main memory config).
 
Are you living in a world where the 7970 doesn't beat the 680 overall in games and doesn't utterly destroy it when it comes to compute?

While that is certainly true, the 7970 does not compare to Nvidia's original 680: the Titan. If the 7970 had been more competitive, we wouldn't have seen Nvidia holding back their flagship card for later and letting their second-best card compete with AMD's flagship (which required the GHz Edition refresh to slightly exceed the 680).
 
What a terrible post.

AMD is years behind in both the CPU and GPU markets.

If the 7970 wasn't such a weak piece of shit, then Nvidia wouldn't have been able to release GK104 as the GTX 680 and rebrand GK110 as the Titan and GTX 780 with a year's delay.
There is zero competition in both the CPU and GPU markets, so prices are high and progress is stagnant.

The 7970, weak?! It is more powerful than a GTX 680, but weaker than a Titan. But then again, the Titan is much more expensive. And who has the most powerful gaming graphics card on the market? Answer: the ASUS ROG ARES II (Radeon HD 7970 x2, 6 GB GDDR5, PCI-Express 3.0, 1100 MHz).

As you can see, this is a graphics card with AMD GPUs on it. The ARES II is a BEAST!

Anyway, AMD has yet to release their 2013 lineup of graphics cards. The Titan will NOT be able to reign supreme in the single-GPU market for long.
 
Simple: have the APU's CPU and GPU doing their thing, then use a discrete graphics card to do the rendering.

Okay, but would that additional discrete graphics card be able to benefit from or contribute to the unified memory + parallel processing "breakthrough" if that ends up being the basis of how games are optimized?
 
While that is certainly true, the 7970 does not compare to Nvidia's original 680: the Titan. If the 7970 had been more competitive, we wouldn't have seen Nvidia holding back their flagship card for later and letting their second-best card compete with AMD's flagship (which required the GHz Edition refresh to slightly exceed the 680).

How much would it have cost to release the 680 as the Titan at the time? It's a $1k card now.
 
Okay, but would that additional discrete graphics card be able to benefit from or contribute to the unified memory + parallel processing "breakthrough" if that ends up being the basis of how games are optimized?

It does not need to; it would just act as it always has.

Does this mean that the lower clock speed of the "Jaguar" CPU is no big deal?

Only for code that maps well to a GPU (SIMD).
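"Maps well to a GPU" in practice means data-parallel loops with no dependencies between iterations. A rough, hypothetical CUDA sketch of the distinction: SAXPY parallelizes trivially, while a loop-carried dependency stays serial, and there the Jaguar cores' low clock speed would still hurt.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Maps well to a GPU: every element is independent and gets the same
// operation, so thousands of slow cores beat a few fast ones.
__global__ void saxpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Maps poorly: each iteration depends on the previous one, so the work
// is inherently serial and per-core speed (Jaguar's weak spot) decides.
float serial_chain(const float *x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = acc * 0.99f + x[i];   // loop-carried dependency
    return acc;
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged((void**)&x, n * sizeof(float));
    cudaMallocManaged((void**)&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy<<<(n + 255) / 256, 256>>>(3.0f, x, y, n);
    cudaDeviceSynchronize();
    printf("saxpy y[0] = %.1f, serial chain = %.2f\n", y[0], serial_chain(x, n));
    cudaFree(x); cudaFree(y);
    return 0;
}
```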
 
How much would it have cost to release the 680 as the Titan at the time? It's a $1k card now.

Nvidia has massive margins on all their cards (AMD has decent margins too). GK110 yields weren't as good, but Nvidia would still be making a profit on each card if the Titan retailed for $600, let's put it that way.
 
This means we need to buy a new mobo, right? :(

Leaks on the 20nm Volcanic Islands "Hawaii" discrete GPU have CPUs in the GPU to provide Fusion CPU+GPU to run games even on an older PC. The PS4 APU design looks similar to a VI GPU. http://semiaccurate.com/forums/showpost.php?p=182634&postcount=6

So in 2014, VI GPUs could give PC gamers and developers an easy path to port games between PC and console, as well as raise the baseline in games even with older motherboards. Yeah!

Found by mistercteam on SemiAccurate; applies to a PC with APD = Volcanic Islands:

The block diagram on the left is an APD (a VI discrete GPU); the block diagram in the center is an APD connected to a PC; on the right is a future PC with HyperTransport and MMU connections between two APUs and discrete GPU APDs**.

[Image: lu1WuWj.jpg]


"On right is a future PC with HyperTransport and MMU connection between two APUs and discrete GPU APDs**" This shows two APU rumors are possible for Durango. I still think it's more likely that the second APU in Durango is ARM IP for STB and UI with it's own 4 Gb LPGDDR3.


PC = CPU + APD (because of the PCIe port). 100% new AMD designs would be PC = APU + APD. My two-year-old PC with a quad-core AMD CPU could be upgraded to CPU + APD, but the APD would be valuable only if software is written for it.

My understanding is that VI has its own CPUs because it's designed to be a discrete GPU plugged into a PCIe port, and as such it can't be a Fusion APU without its own serial CPU cores, given the limitations of the PCIe port. APD: what does the "D" stand for, discrete? Neither console will have PCIe ports, so they don't need APDs.

Also, Hawaii is the big volcanic island, so we should expect all other VI discrete GPU cards to be smaller, with fewer CPUs. The PS4 and the rumored Durango have much smaller GPUs designed for a $300 price point: no power scaling on LP silicon, designed as cheaply as possible while meeting what both Microsoft and Sony considered a minimum feature set.

What mistercteam has brought to the thread helps us understand how AMD is bringing the 2014 HSA features to their discrete GPUs. What is written for a Sony PS4 APU can be ported to a PC APD. AMD needs both PS4 and Durango programmers writing software that can be easily ported to the PC.


Good article on AMD.
 