AMD Polaris architecture to succeed Graphics Core Next

I wouldn't expect Neo to have any new GPU functionality, just more performance from additional CUs. I guess it depends on the PS4 API though; maybe they can change the architecture without compatibility issues.
 
Hey guys,

I noticed the hype train going full steam about this call on 5/18, but I need to put the brakes on it for now :(.

This is a high-level partner webinar, and there will be no specific technical details discussed.

I want to set expectations here, and while this may seem incredibly disappointing, you'll know more about Polaris soon (no ™).

Pls no shooterino the messenger. While you may be tempted to downvote, please do the opposite to bring awareness.

https://www.reddit.com/r/Amd/comments/4jcf29/psa_the_call_on_518_at_9am_ct_about_polaris_is/

With this plus the June 1st event being about APUs, I'm getting impatient here; I hope there's desktop Polaris at Computex.
 
That price/perf is just nuts, probably too good to be true.

I know I argued on another forum that AMD would not be giving Fury X perf for $299. Maybe $350.

But after the way he compared it to what historically happens on a node shrink and the relative performance between generations, that kind of ball-park looks to be on the money.
 
I know I argued on another forum that AMD would not be giving Fury X perf for $299. Maybe $350.

But after the way he compared it to what historically happens on a node shrink and the relative performance between generations, that kind of ball-park looks to be on the money.
Fury X for $299/350€ would be insane. If I manage to sell my 7950, I could make the jump for <300€ and basically max out any game at 1080p.
 
I know I argued on another forum that AMD would not be giving Fury X perf for $299. Maybe $350.

But after the way he compared it to what historically happens on a node shrink and the relative performance between generations, that kind of ball-park looks to be on the money.

People are finding it difficult to comprehend because AMD, and Nvidia, are moving 2 full nodes forward (and skipping the half nodes in between). It's effectively 3-steps-in-1. That is why people are so impressed by the leaked synthetic benchmarks of the 1080, for example. Meanwhile, if you factor in all the different variables, the performance is simply as expected and, at the very least, not disappointing.
 
People are finding it difficult to comprehend because AMD, and Nvidia, are moving 2 full nodes forward (and skipping the half nodes in between). It's effectively 3-steps-in-1. That is why people are so impressed by the leaked synthetic benchmarks of the 1080, for example. Meanwhile, if you factor in all the different variables, the performance is simply as expected and, at the very least, not disappointing.

16FF+ is more or less the same as 20nm but with FinFETs. The same is somewhat true for GF's 14nm. It's not a "2 full nodes forward" jump.
 
16FF+ is more or less the same as 20nm but with FinFETs. The same is somewhat true for GF's 14nm. It's not a "2 full nodes forward" jump.

It's a lot more complicated than that, but that is the basic explanation. Essentially, neither GF nor TSMC had a "High Performance" node at 20/22nm; they just didn't bother developing one. If I remember correctly, the same thing happened with 32nm. It just got dumpstered because it wasn't a good fit from the ground up.
 
16FF+ is more or less the same as 20nm but with FinFETs. The same is somewhat true for GF's 14nm. It's not a "2 full nodes forward" jump.

I thought 16FF+ was actually smaller than their 20nm node. From what I remember, 20nm with FinFETs was rebranded as 16FF and the actual smaller node is now called 16FF+. It's been a while since I've seen those leaked(?) slides, so I could be wrong.
 
I thought 16FF+ was actually smaller than their 20nm node. From what I remember, 20nm with FinFETs was rebranded as 16FF and the actual smaller node is now called 16FF+. It's been a while since I've seen those leaked(?) slides, so I could be wrong.
Re-posting image. It is true for TSMC, but Samsung/GF's is smaller and Polaris is based on theirs (not sure if LPE or LPP).

dsorQis.png
 
I thought 16FF+ was actually smaller than their 20nm node. From what I remember, 20nm with FinFETs was rebranded as 16FF and the actual smaller node is now called 16FF+. It's been a while since I've seen those leaked(?) slides, so I could be wrong.

It's smaller, but it's not a "node jump" smaller. The biggest benefits 16/14nm bring compared to 20nm are FinFETs and a significant leakage reduction, not the gate size.
 
Re-posting image. It is true for TSMC, but Samsung/GF's is smaller and Polaris is based on theirs (not sure if LPE or LPP).

---

The HP node is 16FF+, which is somewhat closer in size to 14nm. It should also be noted that the 14nm node is based on LPE sizes; we don't know if their HP process differs in size.

gr8DopT.jpg
 
It's smaller, but it's not a "node jump" smaller. The biggest benefits 16/14nm bring compared to 20nm are FinFETs and a significant leakage reduction, not the gate size.

Yeah, I just meant that 16FF+ was the "true" successor of 20nm that's actually (a bit) smaller.

Re-posting image. It is true for TSMC, but Samsung/GF's is smaller and Polaris is based on theirs (not sure if LPE or LPP).

dsorQis.png

I know about this one, but I actually meant a different slide that showed how TSMC fiddled around with their process names. I tried looking for it again, but the closest thing I can find is this. Not that it matters an awful lot since most of it is marketing and beyond my knowledge anyway. As dr_rus said, there's a lot more involved when it comes to area/performance gains than just physical sizes.

Also I think I read LPE and LPP are similar in size. LPP is supposed to have better characteristics for high performance chips though.
 
NVIDIA’s loyal opposition, AMD’s Radeon Technologies Group, has strongly hinted that they’re not going to be releasing comparable high-performance video cards in the near future. Rather the company is looking to make a run at the much larger mainstream market for desktops and laptops with their Polaris architecture, something that GP104 isn’t meant to address.
http://anandtech.com/show/10326/the-nvidia-geforce-gtx-1080-preview/3

Welp.
 
That seems to be the optimistic outlook, along with a 40 CU card. Most rumors seem to be pointing to 32 CUs and 390X performance.

Afaik most rumors actually said 36/40 CUs. I think people started running with 32 because of their mobile cards.

I hate repeating myself again, but Fury (X) performance still seems way too optimistic. If we assume 40 CUs to be true, then that puts it at 390X performance as a baseline. The rest is up to architectural improvements, frequency and bandwidth efficiency. It's entirely possible to come close to Fiji (which isn't THAT much faster than Hawaii), but I'd rather be pleasantly surprised than end up with another meh.

And P10 not competing with GP104? I'm shocked. Shocked I tell you!
 
Afaik most rumors actually said 36/40 CUs. I think people started running with 32 because of their mobile cards.

I hate repeating myself again, but Fury (X) performance still seems way too optimistic. If we assume 40 CUs to be true, then that puts it at 390X performance as a baseline. The rest is up to architectural improvements, frequency and bandwidth efficiency. It's entirely possible to come close to Fiji (which isn't THAT much faster than Hawaii), but I'd rather be pleasantly surprised than end up with another meh.

And P10 not competing with GP104? I'm shocked. Shocked I tell you!

I think we were the only guys not expecting performance cards. I'm gutted, but I'm not upgrading until the next wave of AAA games has hit and the lay of the land is more ascertainable.
 
I know I argued on another forum that AMD would not be giving Fury X perf for $299. Maybe $350.

But after the way he compared it to what historically happens on a node shrink and the relative performance between generations, that kind of ball-park looks to be on the money.

This is what I said; it also lines up with the rumoured die size and perf/mm² improvements, and with the rumoured power envelope and the perf/watt improvements.

Afaik most rumors actually said 36/40 CUs. I think people started running with 32 because of their mobile cards.

I hate repeating myself again, but Fury (X) performance still seems way too optimistic. If we assume 40 CUs to be true, then that puts it at 390X performance as a baseline. The rest is up to architectural improvements, frequency and bandwidth efficiency. It's entirely possible to come close to Fiji (which isn't THAT much faster than Hawaii), but I'd rather be pleasantly surprised than end up with another meh.

And P10 not competing with GP104? I'm shocked. Shocked I tell you!

Assuming no IPC improvements, 40 CUs would need to be clocked at around 1.7 GHz to reach Fury X TFLOPS. That clock speed seems possible when you look at the clock speed increase NV achieved, but it is at the high end. A 1.5 GHz clock seems more likely, and that would need a 10% IPC increase.

A rough ballpark guess for IPC can be made by looking at the perf/watt increase claims. The 2.5x claim is saying a 150% increase over current values, and AMD said it is roughly 70/30 process/architecture. 30% of 150 is 45, so taking the numbers at face value we have a claimed 45% IPC increase. Very rough, and large error bars surround that calc though.

Taking a 30% IPC increase, a 40 CU part at 1.3 GHz would be around Fury X level, which seems like a very reasonable clock speed. 36 CUs would need 1.45 GHz, which also seems reasonable, so either of those core counts could work.

Admittedly, using TFLOPS numbers to predict performance is very hit and miss, but at the moment it is the best we have to go on.
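
A minimal Python sketch of the ballpark maths above, assuming the standard GCN layout of 64 shaders per CU and FP32 throughput = shaders × 2 ops × clock. The Fury X baseline (4096 shaders at ~1050 MHz) is its stock spec; the CU counts and IPC uplifts are the rumours and claims quoted in the post, not confirmed figures.

```python
# Fury X stock FP32 throughput: shaders * 2 FLOPs/clock * clock (~8.6 TFLOPS)
FURY_X_TFLOPS = 4096 * 2 * 1.05e9 / 1e12

def clock_needed_ghz(cus, ipc_uplift):
    """Clock (GHz) a GCN-style part would need to match Fury X throughput,
    assuming 64 shaders per CU and the given IPC uplift over Fiji."""
    shaders = cus * 64
    target_tflops = FURY_X_TFLOPS / (1 + ipc_uplift)  # higher IPC -> less raw throughput needed
    return target_tflops * 1e12 / (shaders * 2) / 1e9

for cus in (36, 40):
    for uplift in (0.0, 0.10, 0.30, 0.45):
        print(f"{cus} CUs, +{uplift:.0%} IPC -> ~{clock_needed_ghz(cus, uplift):.2f} GHz")

# Matches the post: 40 CUs needs ~1.7 GHz with no IPC gain, ~1.5 GHz at +10%,
# ~1.3 GHz at +30%; 36 CUs needs ~1.45 GHz at +30%. The +45% figure is the
# architecture share of the claimed 150% perf/watt gain: 0.30 * 150% = 45%.
```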
 
I locked myself into AMD by using one of their proprietary technologies that nVidia doesn't support, so this news of no high-performance card hit really hard. I'm currently on a 290X; I guess my best choice is to wait for a price drop on the 390X or Fury?
 
I locked myself into AMD by using one of their proprietary technologies that nVidia doesn't support, so this news of no high-performance card hit really hard. I'm currently on a 290X; I guess my best choice is to wait for a price drop on the 390X or Fury?

Nooo, just wait for Polaris/Vega. Why would you buy a 28nm GPU when the new cards are around the corner?
 
I locked myself into AMD by using one of their proprietary technologies that nVidia doesn't support, so this news of no high-performance card hit really hard. I'm currently on a 290X; I guess my best choice is to wait for a price drop on the 390X or Fury?

When you say locked into an AMD proprietary tech, do you mean Freesync?

Also, what resolution are you looking to upgrade for?

I don't believe the 390X is a huge step up from the 290, so your best bet if staying with AMD is Fury, I suppose, which should see a price drop with the 1070 launch.

There are rumors of Vega dropping later this year though. So keep that in mind.
 
390X-to-Fury performance at $249 would change the game. If they're not going to address Nvidia at the high end until Vega, they really need to shake things up on price/performance with Polaris. All signs point to them doing this too, so fingers crossed.
 
I locked myself into AMD by using one of their proprietary technologies that nVidia doesn't support, so this news of no high-performance card hit really hard. I'm currently on a 290X; I guess my best choice is to wait for a price drop on the 390X or Fury?

A 390X would be more of a sidegrade as the difference isn't that great. Polaris 10 would probably be more of an upgrade and even that would hardly be worth it. Either you're going to have to wait for Vega or get a Fury.
 
Maybe I'm reading this wrong, so I'd like someone to clarify this for me and tell me what I'm not getting. I know that GlobalFoundries (although it's Samsung tech) will be making Polaris, but according to TSMC, their 16nm chips can provide 60% power savings over 28nm. Now, I know that TSMC isn't Samsung, but I doubt the difference will be huge. However, with the rumor that P10 will perform at 390 levels at half the power draw, is it right of me to assume that P10 is more or less a 390 die shrink?

Link to where TSMC says 60% power saving?
 
I locked myself into AMD by using one of their proprietary technologies that nVidia doesn't support, so this news of no high-performance card hit really hard. I'm currently on a 290X; I guess my best choice is to wait for a price drop on the 390X or Fury?

Assuming you bought a Freesync/adaptive sync monitor?

Freesync isn't really proprietary, as it simply leverages adaptive sync in the monitor. I did the same as I have a 970 and refused to pay the premium for G-Sync. The nice thing is that the monitor has adaptive sync for when (hopefully) Nvidia finally supports it. In addition, Freesync monitors really are barely more expensive than any other decent quality monitor. So it's just like buying a solid monitor that can support adaptive sync if you want it to, for a nominal fee. I figured if I end up going Nvidia again (I don't want to) I'll just use it like any other monitor, but I didn't have to pay an extra $200-300.

My plan is to ride out my 970 until the Vega card if the mid-level AMD offerings don't spark my interest.
 
Maybe I'm reading this wrong, so I'd like someone to clarify this for me and tell me what I'm not getting. I know that GlobalFoundries (although it's Samsung tech) will be making Polaris, but according to TSMC, their 16nm chips can provide 60% power savings over 28nm. Now, I know that TSMC isn't Samsung, but I doubt the difference will be huge. However, with the rumor that P10 will perform at 390 levels at half the power draw, is it right of me to assume that P10 is more or less a 390 die shrink?

Link to where TSMC says 60% power saving?

Not at all. If we are to believe AMD's PR slides, they've made a lot of improvements to their architecture. How many of those are actually implemented and how big a difference they're going to make is yet to be seen. Besides that, they're using a completely different memory controller and have a different shader layout.


Just because a next-gen chip performs similarly to an older one, doesn't mean there aren't any improvements. It's always been the case in the past that the new mid-range chip would end up in the performance bracket of the previous high end while being smaller, cheaper and using less power. Just look at what the HD 7870 was compared to the HD 6970 or even GTX 580. Or as seen with nVidia, GTX 1080 vs 980 Ti.
 
When you say locked into an AMD proprietary tech, do you mean Freesync?

Also, what resolution are you looking to upgrade for?

I don't believe the 390X is a huge step up from the 290, so your best bet if staying with AMD is Fury, I suppose, which should see a price drop with the 1070 launch.

There are rumors of Vega dropping later this year though. So keep that in mind.

Assuming you bought a Freesync/adaptive sync monitor?

Freesync isn't really proprietary, as it simply leverages adaptive sync in the monitor. I did the same as I have a 970 and refused to pay the premium for G-Sync. The nice thing is that the monitor has adaptive sync for when (hopefully) Nvidia finally supports it. In addition, Freesync monitors really are barely more expensive than any other decent quality monitor. So it's just like buying a solid monitor that can support adaptive sync if you want it to, for a nominal fee. I figured if I end up going Nvidia again (I don't want to) I'll just use it like any other monitor, but I didn't have to pay an extra $200-300.

My plan is to ride out my 970 until the Vega card if the mid-level AMD offerings don't spark my interest.


Not Freesync, but mixed-resolution Eyefinity (Surround).

20140910_214555zwjp7.jpg


Outside monitors are 1920x1200 (16:10) and the center is 2560x1080 (21:9). AMD handles it admirably well; nVidia doesn't have any support for something like this.

It hurts to wait till the end of the year, as most modern games are starting to not run at decent frame rates at my resolution :( 6400x1080
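
For a sense of why that resolution is heavy to drive, here is a quick pixel-count comparison in Python, using only the 6400x1080 surface figure from the post (1920 + 2560 + 1920 wide at the 1080 common height):

```python
# Pixel-count comparison for the poster's 6400x1080 Eyefinity surface.
surface = 6400 * 1080
common = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}

print(f"Eyefinity surface: {surface / 1e6:.1f} megapixels")
for name, pixels in common.items():
    print(f"  vs {name}: {surface / pixels:.2f}x the pixels")
# Roughly 3.3x a single 1080p screen and 1.9x 1440p, i.e. closer to 4K than to 1080p in rendering load.
```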
 
Not Freesync, but mixed-resolution Eyefinity (Surround).

20140910_214555zwjp7.jpg


Outside monitors are 1920x1200 (16:10) and the center is 2560x1080 (21:9). AMD handles it admirably well; nVidia doesn't have any support for something like this.

It hurts to wait till the end of the year, as most modern games are starting to not run at decent frame rates at my resolution :( 6400x1080

Oh, when you said a proprietary tech that locked you in, I assumed you meant a hardware component that literally forced you not to use Nvidia. I had no idea Nvidia didn't support something like this. That's a sweet setup; I can see why you would be disappointed. It looks like it would take a hefty rig to drive properly.
 
Anyone thinking that the AMD cards that will be out this summer will only rival the 980 Ti, and then they will push as hard as they can to have HBM2 cards out by the end of the year (before Nvidia's next year) to get a possible upper hand? Just a thought.
 
Anyone thinking that the AMD cards that will be out this summer will only rival the 980 Ti, and then they will push as hard as they can to have HBM2 cards out by the end of the year (before Nvidia's next year) to get a possible upper hand? Just a thought.




Don't count on that. It'll be lucky to rival the 980 imo.
If the leaks are true, we're getting 36 CUs in a 150W TDP. What that means is we'll be looking at R9 390X performance for less than 300 dollars and at a smaller TDP. It's meant to replace the R9 380.
 
Anyone thinking that the AMD cards that will be out this summer will only rival the 980 Ti, and then they will push as hard as they can to have HBM2 cards out by the end of the year (before Nvidia's next year) to get a possible upper hand? Just a thought.

Kind of. Yes, they are probably trying to get their higher-performance part out later this year instead of 2017, but who knows if that will happen.

I doubt P10 will be on par with the 980 Ti though.
 
An illustration of why you can't multiply Fury X performance by the 2.5x power efficiency gain figure:

AMDpolaris_01.png


And another one, unrelated to AMD but still relevant to this discussion:

3_575px.PNG
 
Not Freesync, but mixed-resolution Eyefinity (Surround).

20140910_214555zwjp7.jpg


Outside monitors are 1920x1200 (16:10) and the center is 2560x1080 (21:9). AMD handles it admirably well; nVidia doesn't have any support for something like this.

It hurts to wait till the end of the year, as most modern games are starting to not run at decent frame rates at my resolution :( 6400x1080

Funny, I am using this feature as well :)

2 x 1080p 22" screens on the sides and 1 x 1440p Freesync 27" screen in the center.

Works really well.
 
Not Freesync, but mixed-resolution Eyefinity (Surround).

20140910_214555zwjp7.jpg


Outside monitors are 1920x1200 (16:10) and the center is 2560x1080 (21:9). AMD handles it admirably well; nVidia doesn't have any support for something like this.

It hurts to wait till the end of the year, as most modern games are starting to not run at decent frame rates at my resolution :( 6400x1080

I had no idea this was even possible, that's cool as fuck.
 
AMD's "Polaris 10" GPU will feature 32 compute units (CUs) which TPU estimates &#8211; based on the assumption that each CU still contains 64 shaders on Polaris &#8211; works out to 2,048 shaders. The GPU further features a 256-bit memory interface along with a memory controller supporting GDDR5 and GDDR5X (though not at the same time heh). This would leave room for cheaper Polaris 10 derived products with less than 32 CUs and/or cheaper GDDR5 memory. Graphics cards would have as much as 8GB of memory initially clocked at 7 Gbps. Reportedly, the full 32 CU GPU is rated at 5.5 TFLOPS of single precision compute power and runs at a TDP of no more than 150 watts.

PCPer: New AMD Polaris 10 and Polaris 11 GPU Details Emerge
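
A small Python sketch of what those reported numbers would imply, taking the article's figures (32 CUs, 64 shaders per CU, 5.5 TFLOPS, 256-bit bus, 7 Gbps GDDR5) at face value; none of these are confirmed specs:

```python
# All inputs are the rumoured Polaris 10 figures quoted above, not confirmed specs.
cus        = 32
shaders    = cus * 64          # TPU's assumption of 64 shaders per CU -> 2,048
tflops     = 5.5               # reported single-precision throughput
bus_bits   = 256               # reported memory interface width
gddr5_gbps = 7                 # reported initial memory speed per pin

implied_clock_ghz = tflops * 1e12 / (shaders * 2) / 1e9  # FP32 = shaders * 2 ops * clock
bandwidth_gbs     = bus_bits / 8 * gddr5_gbps            # bus width in bytes * per-pin Gb/s = GB/s

print(f"{shaders} shaders, implied clock ~{implied_clock_ghz:.2f} GHz, "
      f"{bandwidth_gbs:.0f} GB/s with 7 Gbps GDDR5")
# -> 2048 shaders, ~1.34 GHz, 224 GB/s (GDDR5X would push the bandwidth figure higher).
```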
 