AMD Polaris architecture to succeed Graphics Core Next

Oops, yeah, I meant GPU. But from the physical size in the pre-release pictures, I could have sworn Polaris 10 was a smaller die than Polaris 11.

Maybe I am misremembering.

AMD designed the bigger chip first and named it P10.

Then they worked on the smaller chip and named it P11.
 
I doubt we will see a 40 CU/2560-shader part until yields are better. Probably next series, as a 575 or something.

Ah, OK. So the P10 pushes more power than the P11?

Meaning it's a more powerful part performance-wise, or are they comparable?
P11 is the small chip, P10 is the medium chip, Vega is the big one. Price, performance, and power consumption will all correlate.
 
That puts them in a significantly worse spot than they are in right now. If that turns out to be the situation, then it's clear AMD had serious problems transitioning GCN to 14nm. Pascal is nothing more than Maxwell at higher clocks. GCN 1.2 was already faring better than Maxwell, so with further improvements to the architecture (something completely absent from GP104) they should have pulled ahead.

I don't think that's necessarily true, though. AMD was ahead on node transitions for years, even with their flagship cards, and nVidia still came out ahead. Even during the Fermi days, when nVidia was like 6 months behind, AMD still only managed to get like 50% market share, and that's when they had nVidia beat on every single metric except raw performance. People like to harp on AMD's power consumption, but Fermi was way worse, relatively speaking. So even when they have a competitive product with better price/performance and performance/W, they still get swallowed up by nVidia's marketing and mindshare. Either that or people just wait for nVidia's response.

Now they're focusing on the mainstream, where you move more volume and can get OEM/mobile wins. They just don't have the money for a complete top-to-bottom refresh, so they start with the smaller chips that are (I assume) cheaper to design and validate, while taking their time so they don't botch their high-end cards like they did with Fiji and, to an extent, Hawaii.

I don't know whether this is a good strategy or not. I guess time will tell.
 
No there isn't, if we're talking about the mid range. Did you not notice my note about the top end being a different story? Polaris is already limited by the 1070's $380 price; they can't "gouge" us any higher than that. And a $350 ceiling for a new chip on a new process isn't very much.

If this is true and the 1070 turns out to be faster than a Titan X for $370, AMD's product won't be very compelling at $299.

The 1070 is $450 until we see otherwise.
 
I don't think that's necessarily true though. AMD was ahead of the node transitions for so many years, even with their flagship cards, and nVidia still came ahead. Even during the Fermi days when nVidia was like 6 months behind AMD still only managed to get like 50% marketshare and they had nVidia beat on every single metric except for raw performance. People like to perpetuate AMD's power consumption, but Fermi was way worse relatively speaking. So even when they have a competitive product with better price/performance and performance/W, they still get swallowed up by nVidia's marketing and mindshare. Either that or people just wait for nVidia's responce.

Now they're focusing on the mainstream where you move more volume and can get OEM/mobile wins. They just don't have the money to do a complete top to bottom refresh, so they get the smaller chips that are (I assume) cheaper to design and validate while they take their time to not botch their end high end cards like they did with Fiji and to an extent Hawaii.

I don't know whether this is a good strategy or not. I guess time will tell.

AMD had some serious supply issues with the 5800 series. Unless you checked every single day, it was borderline impossible to get one for months. By the time that was sorted out, Fermi was close.
 
Except for the fact that Nvidia is gouging the shit out of us (just like AMD gouged the shit out of everyone when they were first to market at 28nm).

There's PLENTY of room for price wars and then some.

What? How are you supposed to start a price war at 1/2 the performance of the competitor's top-shelf part, and presumably 30% below their mainstream part if the 1070 really does match the 980 Ti?

You are literally in a lower performance bracket. You ain't starting a war with jack shit until the 1060 shows up.
 
No there isn't, if we're talking about the mid range. Did you not notice my note about the top end being a different story? Polaris is already limited by the 1070's $380 price; they can't "gouge" us any higher than that. And a $350 ceiling for a new chip on a new process isn't very much.

What?

Maybe I misunderstood your previous post:

What I thought you were saying: "AMD can't offer better performance/dollar at the midrange."
Now you're saying the opposite.

So either I'm misunderstanding you now, or I was then.
 
The gaming hardware market is still growing, so that isn't it.

I'd imagine the market for PC gaming hardware as a whole may be growing, but the market for $200+ video cards has stopped growing or maybe even shrunk, because minimum specs aren't inflating as quickly anymore. Hell, most of the more popular PC games (LoL, WoW, etc.) are playable on integrated Intel chipsets if you're okay with super-low settings. >$400 cards are pretty much only relevant if you insist on 60+ fps and max settings and supersampling and whatnot.
 
That puts them in a significantly worse spot than they are in right now. If that turns out to be the situation, then it's clear AMD had serious problems transitioning GCN to 14nm. Pascal is nothing more than Maxwell at higher clocks. GCN 1.2 was already faring better than Maxwell, so with further improvements to the architecture (something completely absent from GP104) they should have pulled ahead.

Ehm, not really. It puts them in exactly the same spot they've been occupying, with some remissions, since the G80 launch basically: they'll provide a bit more performance for the same price, or ask a bit less for the same performance, while completely abandoning some market segments for the time being.

GCN 1.2 wasn't faring better than Maxwell in any metric imaginable. Unless Polaris is a big architectural change from GCN3, nobody should expect the situation between it and Pascal to differ significantly from how it was between Maxwell and the 200/300 series.

What?

Maybe I misunderstood your previous post:

What I thought you were saying: "AMD can't offer better performance/dollar at the midrange."
Now you're saying the opposite.

So either I'm misunderstanding you now, or I was then.

No, I'm not saying the opposite. AMD has no way of offering better perf/dollar in the mid range. Both companies are tied to production costs there, and those costs are comparable for Pascal and Polaris, as both chips use new production lines at external foundries. The difference in perf/dollar in the mid range has never been big enough to justify choosing one vendor over another; there's no reason why it suddenly will be with Polaris vs Pascal.

The top end is a different story, as there is basically no price ceiling; prices there are regulated mostly by demand and competition, and that's where AMD may easily undercut NV with their $1000 Titan or $800 1080Ti or even $600 1080. But that won't happen until Vega, which is October at best, a year from now at worst.
 
I don't think that's necessarily true, though. AMD was ahead on node transitions for years, even with their flagship cards, and nVidia still came out ahead. Even during the Fermi days, when nVidia was like 6 months behind, AMD still only managed to get like 50% market share, and that's when they had nVidia beat on every single metric except raw performance. People like to harp on AMD's power consumption, but Fermi was way worse, relatively speaking. So even when they have a competitive product with better price/performance and performance/W, they still get swallowed up by nVidia's marketing and mindshare. Either that or people just wait for nVidia's response.

Fermi managed to take the performance crown, which gave Nvidia the performance-at-all-costs market, and it also had quite nice compute capabilities, which helped when we had the big BOINC/folding boom. And most importantly, Nvidia had the GTX 460, which offered amazing value and overclocking potential.
 
Ehm, not really. Puts them in exactly the same spot they've been occupying with some remissions since G80 launch basically - they'll provide a bit more performance for the same price or will ask a bit cheaper price for the same performance while completely abandoning some of market segments for the time being.

GCN 1.2 wasn't faring better than Maxwell in any metric imaginable. Unless Polaris is a big architectural change from GCN3 nobody should expect the situation to change significantly between it and Pascal compared to how it was between Maxwell and 200/300 series.

No, I'm not saying the opposite. AMD has no way of offering better perf/dollar in the mid range. Both companies are tied to production costs in the mid range and they are comparable for both Pascal and Polaris as both chips use new production lines of external foundries. The difference in perf/dollar in the mid range has never been big enough to justify a choice of one vendor over another. No reason why it suddenly will be with Polaris vs Pascal.

Top end is a different story as there are no price ceiling basically as the prices are regulated by demand and competition mostly and this is where AMD may easily undercut NV with their $1000 Titan or $800 1080Ti or even $600 1080. But that won't happen until Vega which is October at best, a year from now at worst.

Right now the situation is:

380/380X > 960
390 > 970
390X > 980
Fury X < 980 Ti

In the market segments they are addressing, they have gone from faster (sometimes significantly) and cheaper to being significantly slower and only slightly cheaper. AMD being better in 3 out of the top 4 GPU tiers, to me, means GCN is faring better than Maxwell.
 
AMD had some serious supply issues with the 5800 series. Unless you checked every single day, it was borderline impossible to get one for months. By the time that was sorted out, Fermi was close.

Fermi managed to take performance crown which gave Nvidia - performance at all cost market, and also it had quite nice compute capabilities which also helped when we had big boinc/folding boom. And most importantly Nvidia had gtx 460 which offered amazing value and overclocking potential.

Yeah, I know I was being reductive, but my point mostly was that in a head-on fight AMD just kept losing. They dropped their small-die strategy to focus on compute, and that's when GTX 680 vs 7970 happened. They got beat by a smaller die that used less power, because nVidia decided to beef up their second-tier, gaming-focused chip while AMD had a more compute-heavy card that only managed to beat it after months. nVidia has the resources to keep making monster dies they mainly use for HPC (and halo cards), but now they're also making really big gaming-focused dies like GM200 and the now-rumored GP102.

AMD doesn't have that kind of luxury to focus on compute and gaming while staying competitive and also keeping power consumption in check. For all we know small Vega might be able to compete with GP104, but I wonder at what cost. If it's going to be bigger, more expensive (if it uses HBM2), and more power hungry, the Maxwell cycle will just repeat. And then there's this hypothetical GP102 that might be GM200's successor, which will make matters worse. I can see it being similar to what they did with GM200 and GK210: since GM200 had its DP cut, nVidia did a respin of GK110 (GK210) for the HPC market and used GM200 for desktops.

I don't really know where I'm going with all this. I don't want to come across as another harbinger of doom and gloom, as I'd love to see AMD do well. I guess they have to find their niche and execute well. If they price Polaris right, they'll probably be interesting products. Let's just hope Vega ends up doing the same thing.
 
Why do desktop users really care about power consumption anyway? It's hard to believe this type of thing could really make a dent in overall electricity bills.
 
Why do desktop users really care about power consumption anyway? It's hard to believe this type of thing could really make a dent in overall electricity bills.

Personally, if I have two options that both would achieve my performance goals in a similar price range, I'd prefer the more efficient option. In my case, it's less about the money and more about wanting to reduce how much electricity I'm using from a community power load and environmental perspective. Even if it's not a huge difference and just something minuscule, I'm of an "optimizer" mindset and often in the "every ____ counts" camp, so I make these kinds of decisions when I can. However, I'm sure the fact that I'm even thinking about it is due to marketing, and I bet I'm making terrible decisions elsewhere in my life.

Also, I might be wrong about this (I'll be building my first desktop later this year and haven't done too much research yet), but I thought that lower power consumption correlates to lower temperatures. If that's the case, that's also appealing to me.
 
It was heavily cited in the hundreds of Hawaii vs Maxwell forum arguments, since Maxwell was more efficient.

More efficient, to me, means I am getting similar or higher performance with less energy use. And when you reduce power consumption, you're likely to reduce heat. This in turn means I can increase core and memory clocks to much higher levels, increase the number of cards within my case without fear of melting steel beams (which current AMD GPUs can do), or mount a high-performance card in a smaller enclosure with a lower-wattage PSU.
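To put rough numbers on what I mean by efficiency (completely made-up figures for illustration, not benchmarks of any real card):

    # Hypothetical cards with invented numbers -- illustration only, not benchmarks.
    cards = {
        "card_a": {"fps": 60, "watts": 250},
        "card_b": {"fps": 60, "watts": 165},
    }
    for name, spec in cards.items():
        # "Efficiency" here = performance delivered per watt drawn.
        print(f"{name}: {spec['fps'] / spec['watts']:.2f} fps per watt")

Same performance at roughly two-thirds the power is exactly the headroom that lets you clock higher, stack more cards, or drop to a smaller PSU and enclosure.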
 
Why do desktop users really care about power consumption anyway? It's hard to believe this type of thing could really make a dent in overall electricity bills.

Because the market leader says that it's important. If AMD overtakes Nvidia in power efficiency again, power consumption won't matter anymore.

I know how bitter that sounds, but that's really what it comes down to.
 
I want to believe. My guess is that the center result is somewhere around where stock P10 will realistically land for $249-$279, with a shit ton of OC headroom thanks to the node shrink, letting it knock heads with the Fury X on average.

If it plays out like I just listed I'll buy two.

Indeed. Crossfire those two up and it appears performance is fantastic.
 
Why do desktop users really care about power consumption anyway? It's hard to believe this type of thing could really make a dent in overall electricity bills.

I have a mini ITX case. A case that small is liable to heat up quickly, especially if you introduce a power-hungry graphics card that produces a lot of heat. More heat with nowhere to go would work the fans more, making it noisy, which I don't want.
 
More efficient, to me, means I am getting similar or higher performance with less energy use. And when you reduce power consumption, you're likely to reduce heat. This in turn means I can increase core and memory clocks to much higher levels, increase the number of cards within my case without fear of melting steel beams (which current AMD GPUs can do), or mount a high-performance card in a smaller enclosure with a lower-wattage PSU.
This.

The end of power (Dennard) scaling around 2006 has forced IHVs to get better energy efficiency out of their hardware.
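For context, the standard first-order CMOS dynamic-power relation behind that (textbook stuff, nothing GPU-specific):

    P_dyn ≈ α · C · V² · f

where α is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency. Under classic Dennard scaling, each node shrink cut C and V enough that you could raise f and transistor count while power density stayed flat; once V stopped scaling (roughly mid-2000s, largely due to leakage), extra transistors and clocks started costing real watts, so perf/W gains had to come from architecture instead.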
 
It was heavily cited in the hundreds of Hawaii vs Maxwell forum arguments, since Maxwell was more efficient.

And amusingly, just a couple of generations before, when it was Fermi, the same arguments occurred but in the other camp's favor. Oh well.

Indeed. Crossfire those two up and it appears performance is fantastic.

Yeah, but then you have to deal with Crossfire. Nobody in their right mind picks 2 weaker GPUs to get performance similar to 1 more powerful GPU. You just don't do this. Multi-GPU is just not very good. The only "good" multi-GPU that ever existed was the original 3dfx Voodoo SLI, which actually interleaved the scan lines and gave the user essentially double the performance in all cases.
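A toy sketch of why scan-line interleaving balanced so well (illustrative only, not 3dfx's actual code):

    # Each GPU rasterizes alternating scan lines, so the per-frame work
    # splits almost exactly in half no matter where the geometry lands.
    def rows_for_gpu(gpu_index, height, num_gpus=2):
        return list(range(gpu_index, height, num_gpus))

    gpu0 = rows_for_gpu(0, 480)  # rows 0, 2, 4, ...
    gpu1 = rows_for_gpu(1, 480)  # rows 1, 3, 5, ...
    assert len(gpu0) == len(gpu1) == 240

And because the split happens within each frame rather than across frames (as in AFR), there are no inter-frame dependencies or frame-pacing issues, which is a big part of why it scaled so cleanly.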
 
I have a mini ITX case. A case that small is liable to heat up quickly, especially if you introduce a power-hungry graphics card that produces a lot of heat. More heat with nowhere to go would work the fans more, making it noisy, which I don't want.

Yes. This is me. I can only fit a certain size or wattage into my case. Power-hungry cards are pointless to me, both logistically and budget-wise.
 
Right now the situation is:

380/380X > 960
390 > 970
390X > 980
Fury X < 980 Ti
Right, so all the old Maxwell cards are mystically "worse" than all the new Radeon cards (although that's highly debatable in the case of 380 vs 960, for example; and in the case of GM204 vs Hawaii there are cases where Hawaii's TDP is an issue as well). And the only "new" Maxwell card is better than the new Radeon card. What does that tell you? To me it says that NV hasn't updated their lineup since 2014 while AMD did such an update in 2015.

In general terms Maxwell is a better architecture, as can be seen from 980Ti > Fury X and 960 = 380. The 900 series lacked clocks and VRAM because it launched nearly a year before the Radeon 300 series. With Pascal and Polaris launching rather close to each other, this is unlikely to be the case again.

In the market segments they are addressing, they have gone from faster (sometimes significantly) and cheaper to being significantly slower and only slightly cheaper. AMD being better in 3 out of the top 4 GPU tiers, to me, means GCN is faring better than Maxwell.

They haven't gone anywhere. AMD launched the 300 series.

4870 vs 260? The 4870 was like $150 cheaper when it launched and offered better performance.

And NV quickly reacted with the 260-something-something, which was at the same perf/dollar level if not better. The space for such maneuvers is very limited in the mid range.
 
Right, so all the old Maxwell cards are mystically "worse" than all the new Radeon cards (although that's highly debatable in the case of 380 vs 960, for example; and in the case of GM204 vs Hawaii there are cases where Hawaii's TDP is an issue as well).
What do you mean by 'new'? Apart from the Fury, none of the listed Radeons are new. The 380/380X are Tonga, which first appeared as the 285 (in fact, the 380 is the 285), while the 390/390X are the old Hawaii chips (2013!) with a fresh coat of paint and improved coolers.
 
What do you mean by 'new'? Apart from the Fury, none of the listed Radeons are new. The 380/380X are Tonga, which first appeared as the 285 (in fact, the 380 is the 285), while the 390/390X are the old Hawaii chips (2013!) with a fresh coat of paint and improved coolers.

This. AMD's 2013 chips beating Nvidia's 2014 ones.
 
With much higher power consumption and heat, and a bunch of games seem to prefer the Maxwell cards. Even games like the DX12 Forza 6 Apex.

You mean the Forza 6 Apex in which the R9 390 gets 10 more FPS than a 970?

Yep. Very few games prefer Maxwell; the overwhelming majority run better on GCN. And power consumption is meaningless. It's also worth putting in perspective that the 10 fps advantage the 390 has in Forza is ~20% (a 10 fps lead on a roughly 50 fps baseline).

http://www.techpowerup.com/reviews/Sapphire/R9_390_Nitro/27.html
http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/27.html
http://www.techpowerup.com/reviews/AMD/R9_Nano/36.html
http://www.techpowerup.com/reviews/Sapphire/R9_Fury_Tri-X_OC/34.html

Those super high temps tho..... what will you ever do with all that heat?
 
Power efficiency being something that people now care about makes no sense to me.

I'm imagining a world where everything was exactly the same but AMD had 70% market share, and people insisting they bought AMD because of the superior theoretical compute performance.
 
Power efficiency being something that people now care about makes no sense to me.

I'm imagining a world where everything was exactly the same but AMD had 70% market share, and people insisting they bought AMD because of the superior theoretical compute performance.

What?
You can imagine that world, and I will still prefer cards that consume less power, if I am able to. AMD has been very good to me so far.
 
Power efficiency being something that people now care about makes no sense to me.

Yeah, I never heard anyone complain about this until this past generation of cards. Maybe because AMD's was so much higher, though.

It is a consideration now if you aren't buying everything new. You probably already have a power supply that can cover an Nvidia product; there's a decent chance you'd need to buy a new one to cover a 390 upgrade, for example.
 
The GTX 480 was panned for its high power consumption.
Indeed. And the whole FX series was completely obliterated (rightfully so) for its temperatures and inefficiency.
Generally, I think many people who believe they perceive this overwhelming bias in the market simply lack long-term perspective.

Personally I don't even care about power consumption much, but there have always been people who do, and with the spread of ITX and other SFF systems it would only make sense for this awareness to increase.
 
Indeed. And the whole FX series was completely obliterated (rightfully so) for its temperatures and inefficiency.
Generally, I think many people who believe they perceive this overwhelming bias in the market simply lack long-term perspective.

Personally I don't even care about power consumption much, but there have always been people who do, and with the spread of ITX and other SFF systems it would only make sense for this awareness to increase.

Well, we could consider (features being equal) an architecture with better perf/watt to be more elegant. That's a selling point for some, above all the tech geeks.

That's why people still remember the Kyro 2, AMD R300, Nvidia G80... or the AMD Athlon 64 and Intel Conroe...

Of course, in the end a more efficient architecture will also be capable of giving you more powerful cards.
 
The GTX 480 was panned for its high power consumption.

A lot of people still defended it as a good card, though. Hawaii still gets more shit, even though the noise and overheating issues were fixed by AIBs a month later. Instead of the terrible cooler being attacked, the chip gets blamed.
 
Well, we could consider (features being equal) an architecture with better perf/watt to be more elegant. That's a selling point for some, above all the tech geeks.

That's why people still remember the Kyro 2, AMD R300, Nvidia G80...

Of course, in the end a more efficient architecture will also be capable of giving you more powerful cards.
The Kyro 2 was amazing, too bad rendering tech developments kind of worked against it.
 
Indeed. And the whole FX series was completely obliterated (rightfully so) for its temperatures and inefficiency.
Generally, I think many people who believe to perceive this overwhelming bias in the market simply lack a long-term perspective.

Personally I don't even care about power consumption much, but there have always been people who do, and with the spread of ITX and other SFF systems it would only make sense for this awareness to increase.

Oh man, I still remember the press laughing at the dustbuster.

But yes, the problem with the 290/390 was the really awful reference cooler. It does draw more power, but the only time that will be a problem is when you buy one of those $20 "750W" PSUs or you're doing SFF builds like others here. Then you can't use it, but that precludes other stuff from both camps too.

The Kyro 2 was amazing, too bad rendering tech developments kind of worked against it.
Oh wow, I remember wanting one while reading about it.
 
What do you mean by 'new'? Apart from the Fury, none of the listed Radeons are new. The 380/380X are Tonga, which first appeared as the 285 (in fact, the 380 is the 285), while the 390/390X are the old Hawaii chips (2013!) with a fresh coat of paint and improved coolers.
What I said. It doesn't matter that the chips aren't new; the cards are, and the chips in them are upclocked and repriced to better fight off the 900 series. It's nothing strange that they are a little faster for the same money, as that was the whole point of updating the 200 series a year ago. What matters is that Maxwell gives the same performance with much lower power consumption and far fewer transistors, which means it is factually better than GCN2 or GCN3 architecturally. So unless Polaris (and 14LPE, to some degree) provides a big architectural jump, Pascal can easily end up being faster for the same price not only at the high end but across the lineup.
 
Yep. Very few games prefer Maxwell; the overwhelming majority run better on GCN. And power consumption is meaningless. It's also worth putting in perspective that the 10 fps advantage the 390 has in Forza is ~20% (a 10 fps lead on a roughly 50 fps baseline).

http://www.techpowerup.com/reviews/Sapphire/R9_390_Nitro/27.html
http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/27.html
http://www.techpowerup.com/reviews/AMD/R9_Nano/36.html
http://www.techpowerup.com/reviews/Sapphire/R9_Fury_Tri-X_OC/34.html

Those super high temps tho..... what will you ever do with all that heat?

Dang it, I only saw the 750Ti vs 360 DF video. My bad.

For the perf/watt debate: I really don't give a damn as long as my PSU can support it. But more power = more heat -> louder fans. And I really don't want a vacuum cleaner on my desk, and even less would I like to witness any thermal throttling.
 
Dang it, I only saw the 750Ti vs 360 DF video. My bad.

For the perf/watt debate: I really don't give a damn as long as my PSU can support it. But more power = more heat -> louder fans. And I really don't want a vacuum cleaner on my desk, and even less would I like to witness any thermal throttling.

More power != more heat, as I've just shown. Nor does it create more noise; every card I linked is just as quiet as or quieter than comparable Nvidia cards. A bad cooler = more heat and more noise.

What I said. It doesn't matter that the chips aren't new; the cards are, and the chips in them are upclocked and repriced to better fight off the 900 series. It's nothing strange that they are a little faster for the same money, as that was the whole point of updating the 200 series a year ago. What matters is that Maxwell gives the same performance with much lower power consumption and far fewer transistors, which means it is factually better than GCN2 or GCN3 architecturally. So unless Polaris (and 14LPE, to some degree) provides a big architectural jump, Pascal can easily end up being faster for the same price not only at the high end but across the lineup.

They are upclocked by a whopping 5%, and even the original 200 series cards perform better in most games anyway.
 
What I said. It doesn't matter that the chips aren't new; the cards are, and the chips in them are upclocked and repriced to better fight off the 900 series. It's nothing strange that they are a little faster for the same money, as that was the whole point of updating the 200 series a year ago. What matters is that Maxwell gives the same performance with much lower power consumption and far fewer transistors, which means it is factually better than GCN2 or GCN3 architecturally. So unless Polaris (and 14LPE, to some degree) provides a big architectural jump, Pascal can easily end up being faster for the same price not only at the high end but across the lineup.

At 1440p or 4K, Fury X = 980 Ti; the die size is basically the same, as is power consumption. Maxwell does not really seem that much better. In some DX12 titles the Fury X is far faster than the 980 Ti too, and I bet in a year's time GCN will keep chugging along while Maxwell does a Kepler and starts to look far worse by comparison.
 