AMD Polaris architecture to succeed Graphics Core Next

Power efficiency being something that people now care about makes no sense to me.

I'm imagining a world where all was exactly the same but AMD had 70% market share and people insisting they bought AMD because of the superior theoretical compute performance.

What? No one wants a 300W GPU in their PC unless they really can't avoid it.

I've always bought the lower power consumption card (9800 pro instead of fx5800, hd4870 instead of gtx 260, hd6870 instead of gtx 560ti, gtx 970 instead of r9 290/390)

Power consumption equals fanspeed equals noise.
(and I'm just going to preempt the 'a 390x can be quiet' bullshit, a 980 will be a lot more quiet using the same cooler)

Here's what everyone thought about the geforce 5800 cards back in the day : https://www.youtube.com/watch?v=lmYrYcDp07Q
And here's what people think of the 290x : https://www.youtube.com/watch?v=u5YJsMaT_AE
 
What? No one wants a 300W GPU in their PC unless they really can't avoid it.

I've always bought the lower power consumption card (9800 pro instead of fx5800, hd4870 instead of gtx 260, hd6870 instead of gtx 560ti, gtx 970 instead of r9 290/390)

Power consumption equals fanspeed equals noise.
(and I'm just going to preempt the 'a 390x can be quiet' bullshit, a 980 will be a lot more quiet using the same cooler)

Here's what everyone thought about the geforce 5800 cards back in the day : https://www.youtube.com/watch?v=lmYrYcDp07Q
And here's what people think of the 290x : https://www.youtube.com/watch?v=u5YJsMaT_AE

you have no idea what you are talking about tbh. but do go on and continue to ignore factual noise and heat tests

[attached chart: fan noise under load]
 
What? No one wants a 300W GPU in their PC unless they really can't avoid it.

I've always bought the lower power consumption card (9800 pro instead of fx5800, hd4870 instead of gtx 260, hd6870 instead of gtx 560ti, gtx 970 instead of r9 290/390)

Power consumption equals fanspeed equals noise.
(and I'm just going to preempt the 'a 390x can be quiet' bullshit, a 980 will be a lot more quiet using the same cooler)

Here's what everyone thought about the geforce 5800 cards back in the day : https://www.youtube.com/watch?v=lmYrYcDp07Q
And here's what people think of the 290x : https://www.youtube.com/watch?v=u5YJsMaT_AE

My two XFX 390Xs are among the quietest cards I have ever owned. My EVGA Silent 1000 watt PSU in ECO mode is louder. Think about that for a moment.

Yeah but then you have to deal with Crossfire. Nobody in their right mind picks 2 weaker GPUs to offer similar performance to 1 more powerful GPU. You just don't do this. Multi-GPU is just not very good. The only "good" multi-GPU that ever existed was the original 3dfx Voodoo SLI which actually interleaved the scan lines and gave the user exactly double the performance in all cases.

I think that would depend on the overall cost factor and perceived value. That said, I largely agree with you, but I do think the stink that people make about SLI and Crossfire is largely overplayed. It's a pretty fantastic option to have and the performance benefits are incredible. Keep in mind, I'm still in the honeymoon period with Crossfire right now, but it's been pretty awesome.
 
they are upclocked by a whopping 5% and even the original 200 series cards perform better in most games anyway
A. They are officially "upclocked by a whopping 5%", but in practice the uplift is larger than that: the 300 series cards - especially the 390s - actually hold their stated clock under load far more often than the 290 cards ever did.

B. That 5% happens to be precisely the general advantage the 300 series has over the 900 series, which launched nearly a year earlier.

I'm not saying that GCN cards didn't get better over the last year, but that happened not because NV got worse or because AMD did something specific to achieve it, but because the whole industry is putting its effort into optimizing code for the GCN-based consoles - and this has transferred to PC GCN cards over the last year or so. This isn't an ongoing process, however, and it will run out of steam eventually. What matters is which architecture provides more performance per watt and per transistor - and comparing what we had up to the 16/14nm generation, that's Maxwell, with a very healthy lead which will be rather hard to close without some drastic changes to the GCN architecture - which in turn may mean losing that console code optimization advantage.

The whole thing isn't nearly as simple as you paint it.

At 1440p or 4K, Fury X = 980 Ti; the die size is basically the same, as is power consumption. Maxwell does not really seem that much better. In some DX12 titles the Fury X is far faster than the 980 Ti too, and I bet in a year's time GCN will keep chugging away while Maxwell does a Kepler and starts to look far worse by comparison.

If you use something which plays to the strengths of one architecture while completely ignoring the strengths of another, then all you have is a skewed picture which doesn't represent the actual state of affairs.

The Fury X is 1B transistors more complex than the 980 Ti (~11% difference), uses a next-generation memory type which gives it 50% more bandwidth, and runs under a water-cooling system to achieve the same result as a 980 Ti with 384-bit GDDR5 and a blower. I fail to see how you can call GCN a better architecture from these facts.
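
For what it's worth, the transistor and bandwidth deltas quoted above can be sanity-checked from public spec-sheet figures; a rough sketch (numbers are approximate):

```python
# Rough check of the "1B more transistors / 50% more bandwidth" claim,
# using approximate public spec figures.
fury_x   = {"transistors_B": 8.9, "bandwidth_GBs": 512}  # Fiji, 4096-bit HBM
gtx980ti = {"transistors_B": 8.0, "bandwidth_GBs": 336}  # GM200, 384-bit GDDR5

extra = fury_x["transistors_B"] - gtx980ti["transistors_B"]
print(f"Extra transistors: ~{extra:.1f}B "
      f"(~{extra / gtx980ti['transistors_B'] * 100:.0f}% more)")
print(f"Extra bandwidth: ~{(fury_x['bandwidth_GBs'] / gtx980ti['bandwidth_GBs'] - 1) * 100:.0f}% more")
# -> roughly +0.9B (~11%) and ~52% more bandwidth, in line with the post.
```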

And let's not even start talking about these "some DX12 titles", especially since that part was actually improved in Pascal. Maxwell will "do a Kepler" no sooner than Volta's launch, because Maxwell is actually more advanced than GCN and its optimization guidelines are basically the same as Pascal's.

Here's another thing for you to ponder: while Kepler cards are "doing a Kepler", GCN cards from that era are simply dying of overheating. Which do you prefer?
 
A. They are officially "upclocked by a whopping 5%", but in practice the uplift is larger than that: the 300 series cards - especially the 390s - actually hold their stated clock under load far more often than the 290 cards ever did.

B. That 5% happens to be precisely the general advantage the 300 series has over the 900 series, which launched nearly a year earlier.

I'm not saying that GCN cards didn't get better over the last year, but that happened not because NV got worse or because AMD did something specific to achieve it, but because the whole industry is putting its effort into optimizing code for the GCN-based consoles - and this has transferred to PC GCN cards over the last year or so. This isn't an ongoing process, however, and it will run out of steam eventually. What matters is which architecture provides more performance per watt and per transistor - and comparing what we had up to the 16/14nm generation, that's Maxwell, with a very healthy lead which will be rather hard to close without some drastic changes to the GCN architecture - which in turn may mean losing that console code optimization advantage.

The whole thing isn't nearly as simple as you paint it.



If you use something which plays to the strengths of one architecture while completely ignoring the strengths of another, then all you have is a skewed picture which doesn't represent the actual state of affairs.

The Fury X is 1B transistors more complex than the 980 Ti (~11% difference), uses a next-generation memory type which gives it 50% more bandwidth, and runs under a water-cooling system to achieve the same result as a 980 Ti with 384-bit GDDR5 and a blower. I fail to see how you can call GCN a better architecture from these facts.

And let's not even start talking about these "some DX12 titles", especially since that part was actually improved in Pascal. Maxwell will "do a Kepler" no sooner than Volta's launch, because Maxwell is actually more advanced than GCN and its optimization guidelines are basically the same as Pascal's.

Here's another thing for you to ponder: while Kepler cards are "doing a Kepler", GCN cards from that era are simply dying of overheating. Which do you prefer?

A. Are you talking about the reference cards with the shitty blowers from when they first launched? My 290x has a three-fan cooler and has no issues maintaining clock speeds under load.

B. My 290x is the same clock speed as the average 390x but doesn't hit the same numbers in benchmarks.


As for your statement regarding coding... why are you choosing to omit that, for years, programs and game engines have been tailored to Nvidia's strengths... but suddenly that means something different when AMD cards actually start getting optimizations?


Also, the old ROG Matrix 7970 is still going strong in my wife's PC... no issues so far...
 
A. They are officially "upclocked by a whopping 5%", but in practice the uplift is larger than that: the 300 series cards - especially the 390s - actually hold their stated clock under load far more often than the 290 cards ever did.

B. That 5% happens to be precisely the general advantage the 300 series has over the 900 series, which launched nearly a year earlier.

I'm not saying that GCN cards didn't get better over the last year, but that happened not because NV got worse or because AMD did something specific to achieve it, but because the whole industry is putting its effort into optimizing code for the GCN-based consoles - and this has transferred to PC GCN cards over the last year or so. This isn't an ongoing process, however, and it will run out of steam eventually. What matters is which architecture provides more performance per watt and per transistor - and comparing what we had up to the 16/14nm generation, that's Maxwell, with a very healthy lead which will be rather hard to close without some drastic changes to the GCN architecture - which in turn may mean losing that console code optimization advantage.

The whole thing isn't nearly as simple as you paint it.



If you use something which plays to the strengths of one architecture while completely ignoring the strengths of another, then all you have is a skewed picture which doesn't represent the actual state of affairs.

The Fury X is 1B transistors more complex than the 980 Ti (~11% difference), uses a next-generation memory type which gives it 50% more bandwidth, and runs under a water-cooling system to achieve the same result as a 980 Ti with 384-bit GDDR5 and a blower. I fail to see how you can call GCN a better architecture from these facts.

And let's not even start talking about these "some DX12 titles", especially since that part was actually improved in Pascal. Maxwell will "do a Kepler" no sooner than Volta's launch, because Maxwell is actually more advanced than GCN and its optimization guidelines are basically the same as Pascal's.

Here's another thing for you to ponder: while Kepler cards are "doing a Kepler", GCN cards from that era are simply dying of overheating. Which do you prefer?

The 680 wasn't exactly a cool cucumber, and neither is the 980 Ti or the 1080, at least with a reference cooler. The 290 series with an aftermarket cooler does not get that hot, and neither does a 7950/70/280/280X. At least compare ref to ref and aftermarket to aftermarket if you're going to say AMD cards are dying from overheating. Nvidia products are just as likely to die given a fair temp comparison. The outlier, of course, being the blower 290 series.
 
If you use something which plays to the strengths of one architecture while completely ignoring the strengths of another, then all you have is a skewed picture which doesn't represent the actual state of affairs.

Considering the cost of those cards, using 1440p or 4K resolutions is entirely fair; they were designed for those targets, so it is not really playing to either of their strengths, just the market they were aimed at.

The Fury X is 1B transistors more complex than the 980 Ti (~11% difference), uses a next-generation memory type which gives it 50% more bandwidth, and runs under a water-cooling system to achieve the same result as a 980 Ti with 384-bit GDDR5 and a blower. I fail to see how you can call GCN a better architecture from these facts.

GCN has a hardware command scheduler and support for async compute that Maxwell lacks, so maybe that is why it has 1B more transistors. They also managed to fit it in the same die space as Maxwell, so maybe AMD are better designers as they can get higher transistor density than NV. GCN is also far better for GPU compute than Maxwell, so if you want to mine Ethereum and have your card cost a net zero it is far easier with GCN than with Maxwell.

The point is there are many metrics you can use to say one is better than the other: if all you want to use is fps/watt then the Nano is just as good as the 900 series; if you want to use fps/transistor then sure, Maxwell wins. If you want TFLOPS/watt or TFLOPS/transistor then GCN wins. It is not a straightforward X > Y scenario like you implied.
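
The flop-based metrics in that last paragraph are easy to reproduce from spec sheets; a quick sketch with approximate public figures (fps-based metrics would need actual benchmark data, which is where the ordering tends to flip):

```python
# TFLOPS-per-watt and TFLOPS-per-transistor from rated FP32 throughput,
# board TDP and transistor count (approximate public spec figures).
cards = {
    "Fury X": {"tflops": 8.6, "tdp_w": 275, "transistors_B": 8.9},
    "980 Ti": {"tflops": 5.6, "tdp_w": 250, "transistors_B": 8.0},
}

for name, c in cards.items():
    print(f"{name}: {c['tflops'] / c['tdp_w'] * 1000:.1f} GFLOPS/W, "
          f"{c['tflops'] / c['transistors_B']:.2f} TFLOPS per B transistors")
# On these two metrics GCN comes out ahead; swap in measured fps and the
# picture reverses, which is exactly the point being argued.
```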

And let's not even start talking about these "some DX12 titles", especially since that part was actually improved in Pascal. Maxwell will "do a Kepler" no sooner than Volta's launch, because Maxwell is actually more advanced than GCN and its optimization guidelines are basically the same as Pascal's.

Here's another thing for you to ponder: while Kepler cards are "doing a Kepler", GCN cards from that era are simply dying of overheating. Which do you prefer?

Pascal has a few improvements over Maxwell in some areas, though it does seem to have lost per-shader IPC, and the Fury X is closer to the 1080 in several DX12 titles than you would expect if major improvements had been made. Maxwell has already started to lose performance relative to the competing GCN cards; at launch the 390X was decently slower than the 980 and now they are neck and neck.

What is the sample size of GCN cards dying? Were they overclocked for their lifespan, were the coolers covered in dust, was their operational life spent at full load (GCN has been heavily used for mining), etc.? There are many reasons why cards die, so unless you can show me that GCN cards of that era have a statistically higher failure rate than the Kepler cards of the same era, stop talking shit. Put up or shut up.
 
If those Polaris benches are real, that higher-end Polaris 10 (480X?) with the same performance as a Fury non-X has me interested.

I predict this card will cost no more than $300.

The slower 480, which performs just under a 390X, would also be compelling for many if it costs under $250.

http://videocardz.com/60253/amd-radeon-r9-480-3dmark11-benchmarks

I'm hoping:

480X, Fury performance - $299.99
480, 390X performance - £229.99

There is room for improvement here too, as the clocks are relatively low for 14nm. 1.4 GHz would make it closer to Fury X.
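
Since FP32 throughput scales linearly with clock, the effect of higher clocks is easy to estimate; a sketch, where the shader count is only an assumed illustrative figure, not a confirmed Polaris 10 spec:

```python
# TFLOPS = 2 ops/clock * shaders * clock (GHz) / 1000
def fp32_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

ASSUMED_SHADERS = 2304      # hypothetical Polaris 10 configuration, not confirmed
FURY_X_TFLOPS = 8.6         # public spec figure, for comparison

for clock in (1.0, 1.2, 1.4):
    print(f"{clock:.1f} GHz -> {fp32_tflops(ASSUMED_SHADERS, clock):.2f} TFLOPS "
          f"(Fury X: {FURY_X_TFLOPS})")
```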

Price points seem reasonable and are what I expect if AMD really want to increase the TAM for VR like they said.
 
There is room for improvement here too, as the clocks are relatively low for 14nm. 1.4 GHz would make it closer to Fury X.

Price points seem reasonable and are what I expect if AMD really want to increase the TAM for VR like they said.

Yeah, true, there are all kinds of factors here that mean the performance could go up. These P10 test samples may have been clocked low so that the true performance couldn't be gleaned from these benches, and apparently these benches are from a month ago, so unoptimized drivers, for example.

I'm definitely in the market for a new card that costs around the $300 mark, as that should be £250 in the UK.
 
I've been pretty set on getting a 1080, but I'm really interested now to see the real price and performance of Polaris 10. If the price is really good and it can give ~Fury performance, it could be a stopover card until the 1080 Ti or Vega hit us...

Geez, custom 1080's need to get announced, and AMD need to get Polaris 10 in the hands of reviewers already, damn it.
 
I've been pretty set on getting a 1080, but I'm really interested now to see the real price and performance of Polaris 10. If the price is really good and it can give ~Fury performance, it could be a stopover card until the 1080 Ti or Vega hit us...

Geez, custom 1080's need to get announced, and AMD need to get Polaris 10 in the hands of reviewers already, damn it.

The Polaris 10 480X in crossfire should be faster than a single 1080 if AMD has any sense. If you can get a crossfire set-up for $600 that may sway some from the high-end.
 
The Polaris 10 480X in crossfire should be faster than a single 1080 if AMD has any sense. If you can get a crossfire set-up for $600 that may sway some from the high-end.

Too few games nowadays support Crossfire, and many of those that do have issues (The Division being the latest example). Besides, as it stands SteamVR/OpenVR doesn't currently support Crossfire/SLI at all, and the latest update actually even forcefully disables those functions before launching the games.
 
A. They are officially "upclocked by a whopping 5%", but in practice the uplift is larger than that: the 300 series cards - especially the 390s - actually hold their stated clock under load far more often than the 290 cards ever did.

B. That 5% happens to be precisely the general advantage the 300 series has over the 900 series, which launched nearly a year earlier.

I'm not saying that GCN cards didn't get better over the last year, but that happened not because NV got worse or because AMD did something specific to achieve it, but because the whole industry is putting its effort into optimizing code for the GCN-based consoles - and this has transferred to PC GCN cards over the last year or so. This isn't an ongoing process, however, and it will run out of steam eventually. What matters is which architecture provides more performance per watt and per transistor - and comparing what we had up to the 16/14nm generation, that's Maxwell, with a very healthy lead which will be rather hard to close without some drastic changes to the GCN architecture - which in turn may mean losing that console code optimization advantage.

The whole thing isn't nearly as simple as you paint it.



If you use something which plays to the strengths of one architecture while completely ignoring the strengths of another, then all you have is a skewed picture which doesn't represent the actual state of affairs.

The Fury X is 1B transistors more complex than the 980 Ti (~11% difference), uses a next-generation memory type which gives it 50% more bandwidth, and runs under a water-cooling system to achieve the same result as a 980 Ti with 384-bit GDDR5 and a blower. I fail to see how you can call GCN a better architecture from these facts.

And let's not even start talking about these "some DX12 titles", especially since that part was actually improved in Pascal. Maxwell will "do a Kepler" no sooner than Volta's launch, because Maxwell is actually more advanced than GCN and its optimization guidelines are basically the same as Pascal's.

Here's another thing for you to ponder: while Kepler cards are "doing a Kepler", GCN cards from that era are simply dying of overheating. Which do you prefer?

are you saying resolutions above 1080p are a strength of the Fury X and a weakness of the 980 Ti?
 
At 1440p or 4K, Fury X = 980 Ti; the die size is basically the same, as is power consumption. Maxwell does not really seem that much better. In some DX12 titles the Fury X is far faster than the 980 Ti too, and I bet in a year's time GCN will keep chugging away while Maxwell does a Kepler and starts to look far worse by comparison.





Also, anyone who says Maxwell is more advanced than GCN is in for a sour treat in the next few years.
 
Considering the cost of those cards, using 1440p or 4K resolutions is entirely fair; they were designed for those targets, so it is not really playing to either of their strengths, just the market they were aimed at.
4K is unplayable on any modern card, and 1440p results depend on what you choose to benchmark the cards with and how you set up those benchmarks. Fury's bandwidth isn't the only thing that gives it an advantage.

GCN has a hardware command scheduler and support for async compute that Maxwell lacks, so maybe that is why it has 1B more transistors. They also managed to fit it in the same die space as Maxwell, so maybe AMD are better designers as they can get higher transistor density than NV. GCN is also far better for GPU compute than Maxwell, so if you want to mine Ethereum and have your card cost a net zero it is far easier with GCN than with Maxwell.
So? If GCN has something which doesn't actually help it anywhere but in DX12/Vulkan, then it's a worse architecture. Maxwell runs DX12/Vulkan just fine without "a hardware command scheduler" (I don't think you even know what that means or how it actually affects execution on the GPU tbh) or "support for async compute" (which it actually has, as was shown during the Pascal announcement).

GCN isn't "far better" for GPU compute at all; it's actually worse now. It's only better at running mixed loads of compute and graphics simultaneously, and that's basically the only advantage it has left over NV GPUs at the moment.

The point is there are many metrics you can use to say one is better than the other: if all you want to use is fps/watt then the Nano is just as good as the 900 series; if you want to use fps/transistor then sure, Maxwell wins. If you want TFLOPS/watt or TFLOPS/transistor then GCN wins. It is not a straightforward X > Y scenario like you implied.
Well, sure, you can use a heavily tessellated load, for example, to say this. Or run some ray marching on FL12_1 features which GCN lacks. The fact is that we're talking about the averages seen right now, and in those a 960 is on the same level as a 380 while the 980 Ti is beating water-cooled Fiji. These are the facts of the moment.

Pascal has a few improvements over Maxwell in some areas, though it does seem to have lost per-shader IPC, and the Fury X is closer to the 1080 in several DX12 titles than you would expect if major improvements had been made. Maxwell has already started to lose performance relative to the competing GCN cards; at launch the 390X was decently slower than the 980 and now they are neck and neck.
It didn't lose any "IPC"; it is balanced differently, for workloads which will be prevalent in the coming couple of years, and it's expected that in current workloads it will be somewhat underutilized - that's where this "lower IPC" idea came from, while in fact it's not lower IPC, it's a different ratio of math to bandwidth.

These "several DX12 titles" you keep referring to are nothing more than AMD's technical demos. Until there are titles which aren't sponsored by one of the IHVs or aren't put out on PC in their straight console form (that's MS's efforts mostly), there's nothing to discuss about the DX12 performance of NV cards.

What is the sample size of GCN cards dying? Were they overclocked for their lifespan, were the coolers covered in dust, was their operational life spent at full load (GCN has been heavily used for mining), etc.? There are many reasons why cards die, so unless you can show me that GCN cards of that era have a statistically higher failure rate than the Kepler cards of the same era, stop talking shit. Put up or shut up.
Lol, no, I won't. A hotter part has a higher rate of failure by default. Go ask anyone who has ever worked at a repair shop or retail outlet. That's just basic knowledge.

The 680 wasn't exactly a cool cucumber, and neither is the 980 Ti or the 1080, at least with a reference cooler. The 290 series with an aftermarket cooler does not get that hot, and neither does a 7950/70/280/280X. At least compare ref to ref and aftermarket to aftermarket if you're going to say AMD cards are dying from overheating. Nvidia products are just as likely to die given a fair temp comparison. The outlier, of course, being the blower 290 series.

All NV cards since Kepler are generally cooler than AMD cards, which gives them better thermals in general, which in turn means the chance of these cards dying from overheating is lower. The 680 is way cooler than the 7970. The 780 is way cooler than the 290. The 970 is way cooler than the 390. Aftermarket coolers increase the dissipation, but they are available for both vendors and the relative picture doesn't change at all if you compare Strix or Windforce cards -- Radeons still run hotter comparatively. This statistically leads to a higher failure rate, as this is just simple physics. So Kepler may not be doing too well today, but a lot of 2013 Radeons aren't doing anything at all anymore. Many people omit this from their praise of how Radeons are "aging", while the fact is that 390 cards have a much higher chance of going the way of the dodo because of their thermals compared to 970/980. Everything has its price.
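
For reference, the rule of thumb usually behind "hotter parts fail sooner" arguments is the Arrhenius acceleration factor; a rough sketch with an assumed activation energy (this is a model, not failure-rate data for any specific card):

```python
import math

BOLTZMANN_EV_PER_K = 8.617e-5
ASSUMED_EA_EV = 0.7          # illustrative activation energy, not a measured value

def acceleration_factor(t_cool_c, t_hot_c, ea_ev=ASSUMED_EA_EV):
    """How much faster temperature-driven wear-out proceeds at t_hot vs t_cool."""
    t_cool_k, t_hot_k = t_cool_c + 273.15, t_hot_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV_PER_K * (1 / t_cool_k - 1 / t_hot_k))

# e.g. a card holding 94 C under load vs one holding 80 C:
print(f"~{acceleration_factor(80, 94):.1f}x faster wear-out (model estimate)")
```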
 
oOoooh, if those benchmarks are indeed real (which I somewhat doubt, but hey), then Polaris may be a quick stopgap I'm willing to take before Vega hits.

I need a card with HDMI 2.0 in order to get 4K out to my TV, and my R9 290 just has HDMI 1.4. Honestly, I was hoping to switch to the green team after the problems I had with Doom, but they seem to be price hiking like all hell here in Australia and I can't support that. Plus, the 380X is just $350 here, so if the 480X is faster than a 980 and is, say, $400, I'm sold.
 
4K is unplayable on any modern card, and 1440p results depend on what you choose to benchmark the cards with and how you set up those benchmarks. Fury's bandwidth isn't the only thing that gives it an advantage.

It is based on the latest TPU review, as they test a wide range of games and so give a more general picture.

So? If GCN has something which doesn't actually help it anywhere but in DX12/Vulkan, then it's a worse architecture. Maxwell runs DX12/Vulkan just fine without "a hardware command scheduler" (I don't think you even know what that means or how it actually affects execution on the GPU tbh) or "support for async compute" (which it actually has, as was shown during the Pascal announcement).

So having something that is useful long term and keeps the architecture relevant over a longer period of time makes it worse, wow.

GCN isn't "far better" for GPU compute at all; it's actually worse now. It's only better at running mixed loads of compute and graphics simultaneously, and that's basically the only advantage it has left over NV GPUs at the moment.

Why are the majority of GPU currency mining rigs based around GCN if it is far worse at compute?

Well, sure, you can use a heavily tessellated load, for example, to say this. Or run some ray marching on FL12_1 features which GCN lacks. The fact is that we're talking about the averages seen right now, and in those a 960 is on the same level as a 380 while the 980 Ti is beating water-cooled Fiji. These are the facts of the moment.

On average the Fury X and 980 Ti are about the same if you exclude 1080p, which is only relevant if you are running a 144 Hz display. In that particular niche, sure, the 980 Ti is better. Across the rest of the product stack GCN is ahead in performance at each price point vs Maxwell.

It didn't lose any "IPC"; it is balanced differently, for workloads which will be prevalent in the coming couple of years, and it's expected that in current workloads it will be somewhat underutilized - that's where this "lower IPC" idea came from, while in fact it's not lower IPC, it's a different ratio of math to bandwidth.

In the short term IPC is worse; long term is wait and see.

These "several DX12 titles" you keep referring to are nothing more than AMD's technical demos. Until there are titles which aren't sponsored by one of the IHVs or aren't put out on PC in their straight console form (that's MS's efforts mostly), there's nothing to discuss about the DX12 performance of NV cards.

So Forza, Hitman, AOTS etc. are all AMD tech demos, ok.


Lol, no, I won't. A hotter part has a higher rate of failure by default. Go ask anyone who has ever worked at a repair shop or retail outlet. That's just basic knowledge.

Depends on the design of the GPU and its thermal limits. The lack of evidence for a higher-than-normal failure rate is telling, though.


All NV cards since Kepler are generally cooler than AMD cards, which gives them better thermals in general, which in turn means the chance of these cards dying from overheating is lower. The 680 is way cooler than the 7970. The 780 is way cooler than the 290. The 970 is way cooler than the 390. Aftermarket coolers increase the dissipation, but they are available for both vendors and the relative picture doesn't change at all if you compare Strix or Windforce cards -- Radeons still run hotter comparatively. This statistically leads to a higher failure rate, as this is just simple physics. So Kepler may not be doing too well today, but a lot of 2013 Radeons aren't doing anything at all anymore. Many people omit this from their praise of how Radeons are "aging", while the fact is that 390 cards have a much higher chance of going the way of the dodo because of their thermals compared to 970/980. Everything has its price.

Show the evidence or stop making baseless remarks; each chip has its own thermal limits and these will have been thoroughly tested. Unless you can show GCN has higher failure rates than Kepler or Maxwell, stop spewing nonsense.

I am on mobile so responded in bold in the quote text.
 
dr_Rus, I was willing to hear your argument until you started talking about heat and cards failing because of it.

Are you seriously trying to argue that AMD cards carry some kind of appreciable risk of dying from overheating? You can't be serious. When does that ever happen, in 99.9% of use-case scenarios? What a load of nonsense.
 
All NV cards since Kepler are generally cooler than AMD cards, which gives them better thermals in general, which in turn means the chance of these cards dying from overheating is lower. The 680 is way cooler than the 7970. The 780 is way cooler than the 290. The 970 is way cooler than the 390. Aftermarket coolers increase the dissipation, but they are available for both vendors and the relative picture doesn't change at all if you compare Strix or Windforce cards -- Radeons still run hotter comparatively. This statistically leads to a higher failure rate, as this is just simple physics. So Kepler may not be doing too well today, but a lot of 2013 Radeons aren't doing anything at all anymore. Many people omit this from their praise of how Radeons are "aging", while the fact is that 390 cards have a much higher chance of going the way of the dodo because of their thermals compared to 970/980. Everything has its price.

Given these statistics, hopefully you'll be able to produce them? Also to show that it is statistically significant?

Why are the majority of GPU currency mining rigs based around GCN if it is far worse at compute?
IIRC this is actually because AMD had support for certain integer instructions that Nvidia didn't support until Maxwell v1. One should note that even the VLIW4 architectures were far better than Nvidia GPUs at this. For the longest time AMD GPUs have been used for brute-forcing hashing functions, hence their applicability to things like password cracking. Cryptocurrencies are the same story.
 
All NV cards since Kepler are generally cooler than AMD cards, which gives them better thermals in general, which in turn means the chance of these cards dying from overheating is lower. The 680 is way cooler than the 7970. The 780 is way cooler than the 290. The 970 is way cooler than the 390. Aftermarket coolers increase the dissipation, but they are available for both vendors and the relative picture doesn't change at all if you compare Strix or Windforce cards -- Radeons still run hotter comparatively. This statistically leads to a higher failure rate, as this is just simple physics. So Kepler may not be doing too well today, but a lot of 2013 Radeons aren't doing anything at all anymore. Many people omit this from their praise of how Radeons are "aging", while the fact is that 390 cards have a much higher chance of going the way of the dodo because of their thermals compared to 970/980. Everything has its price.

Wooooow lol. That's not how it works at all. The price you pay is in a slightly higher electric bill and more heat dumped into your case/room. One card running 10C hotter than another isn't going to reduce its lifespan by any appreciable amount so long as it's well below the heat threshold. If you seriously believe this, you should be attacking Nvidia for charging so much for the hot-running Founder's Edition cards that will die much sooner than aftermarket ones, apparently. And don't get me started on overclocking, since that means that nobody who overclocks could possibly have a card last more than two years.
 
A. They are officially "upclocked by a whopping 5%", but in practice the uplift is larger than that: the 300 series cards - especially the 390s - actually hold their stated clock under load far more often than the 290 cards ever did.

B. That 5% happens to be precisely the general advantage the 300 series has over the 900 series, which launched nearly a year earlier.

I'm not saying that GCN cards didn't get better over the last year, but that happened not because NV got worse or because AMD did something specific to achieve it, but because the whole industry is putting its effort into optimizing code for the GCN-based consoles - and this has transferred to PC GCN cards over the last year or so. This isn't an ongoing process, however, and it will run out of steam eventually. What matters is which architecture provides more performance per watt and per transistor - and comparing what we had up to the 16/14nm generation, that's Maxwell, with a very healthy lead which will be rather hard to close without some drastic changes to the GCN architecture - which in turn may mean losing that console code optimization advantage.

The whole thing isn't nearly as simple as you paint it.



If you use something which plays to the strengths of one architecture while completely ignoring the strengths of another, then all you have is a skewed picture which doesn't represent the actual state of affairs.

The Fury X is 1B transistors more complex than the 980 Ti (~11% difference), uses a next-generation memory type which gives it 50% more bandwidth, and runs under a water-cooling system to achieve the same result as a 980 Ti with 384-bit GDDR5 and a blower. I fail to see how you can call GCN a better architecture from these facts.

And let's not even start talking about these "some DX12 titles", especially since that part was actually improved in Pascal. Maxwell will "do a Kepler" no sooner than Volta's launch, because Maxwell is actually more advanced than GCN and its optimization guidelines are basically the same as Pascal's.

Here's another thing for you to ponder: while Kepler cards are "doing a Kepler", GCN cards from that era are simply dying of overheating. Which do you prefer?

why are you using the very first batch of reference 290/X cards to represent the entirety of them? even ignoring the fact that there were plenty of custom cards, even the reference cards were fixed shortly after launch.

the difference is 5% when you include a lot of older game titles as part of an average. it's larger than 5% when you just focus on more recent titles. and i never claimed nvidia got worse relative to itself, so i don't know why you constantly mention that when we are having discussions.

when looking at architectural strong points from a rendering performance perspective, outside of geometry/tessellation (which is quite clearly never going to materialize in actual games), i don't know what other strength you are referring to. i mean i guess you could latch on to the minor 12.1 features, but again those will likely only see use in gameworks.

i also can't agree at all that maxwell is more advanced than GCN. absolutely not.
 
I am on mobile so responded in bold in the quote text.

> It is based on the latest TPU review, as they test a wide range of games and so give a more general picture.

Check other reviews as well. The 980 Ti is faster on average at 1440p in computerbase.de's 1080 review, for example, and in others as well.

> So having something that is useful long term and keeps the architecture relevant over a longer period of time makes it worse, wow.

Yes, of course it makes it worse as it makes it perform worse right now, when it's actually on the market and people are choosing what to buy. No one cares that it will perform better in three years as this isn't something you can even guess beforehand. And you won't be selling these cards in three years so how's that going to benefit you?

> Why are the majority of GPU currency mining rigs based around GCN if it is far worse at compute?

Because GCN provides a better flops/dollar ratio thanks to it being so bad in gaming workloads that AMD has to push higher tier cards in lower price segments to compete with what NV have. This however doesn't mean that GCN is "far better in GPU compute".

> On average the Fury X and 980Ti are about the same if you exclude 1080p, which is only relevant if you are running a 144hz display. In that special niche then sure the 980Ti is better. The rest of the product stack GCN is ahead in performance at each price point vs Maxwell.

On average they're not because you have to include the clocking potential in this comparison as well since that is a part of the architecture too.

> In the short term IPC is worse, long term is wait and see.

Short term it doesn't matter as even 1070 is faster than Titan X and judging from what we know of Polaris it won't be able to reach 1070's level.

I also kinda wonder where that idea of a worse IPC of Pascal is coming from? Is it from these leaked benches comparing GM200 to GP104 at the same clocks? If yes then these were obviously wrong as GM200 has more SPs than GP104.

> So Forza, Hitman, AOTS etc are all AMD tech demos, ok.

Hitman and AoTS yes, Forza a) runs fine on NV h/w and b) is MS's UWP effort. I'm actually surprised by the amount of PC specific optimization Turn 10 put into FM6A as I pretty much expect something like QB from any MS UWP effort at this point.

> Depends on the design of the GPU and its thermal limits. The lack of evidence for a higher-than-normal failure rate is telling, though.

Depends only on a GPU's comparative TDP. Hotter cards overheat more often than cooler cards. All NV cards since 2013 run cooler than their Radeon counterparts when put into the same cooling environment.

> Show the evidence or stop making baseless remarks; each chip has its own thermal limits and these will have been thoroughly tested. Unless you can show GCN has higher failure rates than Kepler or Maxwell, stop spewing nonsense.

You need evidence that a hotter part is more likely to overheat than a cooler one? How about a physics book for beginners? The only nonsense here is your inability to see this fact.

dr_Rus, I was willing to hear your argument until you started talking about heat and cards failing because of it.

Are you seriously trying to argue that AMD cards carry some kind of appreciable risk of dying from overheating? You can't be serious. When does that ever happen, in 99.9% of use-case scenarios? What a load of nonsense.

They carry a higher risk of dying because of overheating. Most heat-related failures happen simply because board components die from high temperatures. The higher the temperature a card runs at, the higher that risk is. It's always been like this, and it was vice versa back in the R300-R700 days, when AMD cards were actually cooler in general (unless some stupid cooling decisions were made) and NV cards could run up to 105C by default.

why are you using the very first batch of reference 290/X cards to represent the entirety of them? even ignoring the fact that there were plenty of custom cards, even the reference cards were fixed shortly after launch.

the difference is 5% when you include a lot of older game titles as part of an average. it's larger than 5% when you just focus on more recent titles. and i never claimed nvidia got worse relative to itself, so i don't know why you constantly mention that when we are having discussions.

when looking at architectural strong points from a rendering performance perspective, outside of geometry/tessellation (which is quite clearly never going to materialize in actual games), i don't know what other strength you are referring to. i mean i guess you could latch on to the minor 12.1 features, but again those will likely only see use in gameworks.

i also can't agree at all that maxwell is more advanced than GCN. absolutely not.

Well, that's your problem if you can't agree with a fact. Maxwell is more advanced than GCN in pretty much everything but mixed-context scheduling - which is what AMD is pushing hard to get used, as it actually benefits GCN's graphics utilization while simultaneously reducing Maxwell's performance (and killing Kepler's).

Here's a thought experiment for you: let's put a Maxwell chip against a GCN chip of the same complexity and flops and see how they compare. This is something you can rather easily do right now, even in the mid range which Polaris is targeting:

A. The R9 380X is a 5000M-transistor GPU with a 359mm^2 die and a 256-bit bus. It's rated at ~4 TFLOPS of math performance. It's built on the latest GCN3 revision.

B. The GTX 970 is a 5200M-transistor GPU with a 398mm^2 die and a 256-bit bus. It has ~20% of that die disabled, which likely puts it quite a bit below Tonga in complexity (~4100M working transistors). The 970 is rated at ~4 TFLOPS of math performance.

For all intents and purposes these chips' production costs should be close, meaning that, other factors aside, AMD and NV should get similar margins from them when selling them at the same price.
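
The working-transistor arithmetic in point B works out as follows (a restatement of the post's own approximation; disabling units doesn't disable a proportional share of every transistor, so treat it as rough):

```python
tonga_full_M   = 5000   # R9 380X (full Tonga), approximate transistor count
gm204_full_M   = 5200   # GTX 970 (GM204), approximate transistor count
disabled_share = 0.20   # the post's approximation for the 970's disabled portion

gtx970_active_M = gm204_full_M * (1 - disabled_share)
print(f"GTX 970 working transistors: ~{gtx970_active_M:.0f}M vs 380X: {tonga_full_M}M")
# -> ~4160M, i.e. the 970 is compared with slightly less working silicon than Tonga.
```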

Which one is faster in general? Let's look at benchmarks known to favor AMD cards:


Nope, not even on the reference 970 level here. DX12 maybe?


Aha, here it is. The only win is happening in a game heavily skewed towards AMD h/w in general in a renderer made under AMD supervision. And it's actually a loss again in another such game.

So when you look at GPUs of comparable complexity and math throughput, NV is winning by quite a lot in performance even in those games which favor GCN h/w. This can be seen in 980 Ti vs Fury X comparisons as well, and it's definitely not a case where Maxwell's geometry performance or any other strength comes into play.

To get the thread back on topic - unless Polaris provides some seriously revolutionary changes compared to GCN3, it will end up in the same position against Pascal as GCN3 did against Maxwell 2: quite a bit slower on average (up to 50% slower in fact) and on par at best in those titles which use DX12 with an AMD-provided/funded/whatever renderer. I don't see how you could expect anything else really, going off what we have at the moment.

If we take the 234mm^2 P10 die, it should end up somewhere below a comparable Pascal die in average performance. GP104 is cut down by about 1/4 (less, actually, but it's hard to get an accurate figure) for the 1070, which puts the actually working die of the 1070 around P10 territory. That in turn would put P10 vs the 1070 in the same position as the 380X vs the 970. That's my expectation at the moment, which can only be wrong if Polaris is a big architectural change from GCN3.
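
The die-area reasoning in that last paragraph can be spelled out numerically; a sketch (the cut fraction is an approximation, and disabling units obviously doesn't shrink the physical die, this is only about comparable amounts of working logic):

```python
gp104_mm2    = 314       # GP104 die size (public figure)
p10_mm2      = 234       # Polaris 10 die size as cited in the post
cut_fraction = 0.25      # approximate share of GP104 disabled for the GTX 1070

print(f"GTX 1070 'working' area: ~{gp104_mm2 * (1 - cut_fraction):.0f} mm^2 "
      f"vs P10: ~{p10_mm2} mm^2")
# -> ~236 mm^2, which is why the post pits P10 against the 1070.
```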
 
Rus, the 380X is a £200 card and the 970 a £250 one.

It's not a fair comparison; the R9 390 is closer in price (you can find it for £255) to a GTX 970, and that matches or outperforms the 970 in the majority of situations.

That you get less performance per flop with AMD cards is beside the point when they scale their pricing to accommodate for that.
 
Goddamn! How old is the 290x? Looks like this could be the new 8800GT. This is a rebranded 7950 or something, right?

Would you believe the 290X can get 390X performance with a simple BIOS flash? It's true.

Would you believe that with a simple BIOS tweak and a bit of OC, the 290X can perform between the GTX 980 and the Fury (non-X) 24/7? It's true.



This is the world 290X owners are living in.
 
Goddamn! How old is the 290x? Looks like this could be the new 8800GT. This is a rebranded 7950 or something, right?



Nah, not at all. The R9 290X is a 44-CU GPU while the 7950 is a 28-CU one. The R9 290X was a new chip, and a far bigger one at that.


Would you believe the 290X can get 390X performance with a simple BIOS flash? It's true.

Would you believe that with a simple BIOS tweak and a bit of OC, the 290X can perform between the GTX 980 and the Fury (non-X) 24/7? It's true.



This is the world 290X owners are living in.



Can you believe that an R9 290 can become an R9 290X? :")
Well... that was the case in the beginning :"""""")
Anyway, what are you talking about with this BIOS tweak and OC? You got me curious.
 
Would you believe the 290X can get 390X performance with a simple BIOS flash? It's true.

Would you believe that with a simple BIOS tweak and a bit of OC, the 290X can perform between the GTX 980 and the Fury (non-X) 24/7? It's true.



This is the world 290X owners are living in.

The 290X absolutely was the best value buy, and has been for a while. Thing is, I don't think anyone thought it'd still be punching well above its weight class this long after release.
 
My two XFX 390Xs are among the quietest cards I have ever owned. My EVGA Silent 1000 watt PSU in ECO mode is louder. Think about that for a moment.

Each card is outputting >275W under load (more than that at peak).

A GTX 980 will output around 160-170W.

The same cooler on the 980 will be able to run at significantly lower fan speeds to keep it at the same temp.

AMD GPUs are not made of unicorn poop; more TDP = more heat that needs to be dissipated, that is literally all there is to it.
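
A first-order way to see why the same cooler ends up louder on the hotter card: with a fixed heatsink and fan speed, the temperature rise over ambient scales roughly with dissipated power (delta_T ≈ R_th × P). A sketch with an assumed thermal resistance, purely illustrative:

```python
ASSUMED_R_TH = 0.18   # deg C per watt for a beefy triple-fan cooler (illustrative only)
AMBIENT_C = 25

for name, power_w in [("390X, ~275 W", 275), ("GTX 980, ~165 W", 165)]:
    temp = AMBIENT_C + ASSUMED_R_TH * power_w
    print(f"{name}: ~{temp:.0f} C at the same fan speed")
# The hotter card has to spin its fans faster to close that gap, hence the noise.
```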

Rus, the 380X is a £200 card and the 970 a £250 one.

It's not a fair comparison; the R9 390 is closer in price (you can find it for £255) to a GTX 970, and that matches or outperforms the 970 in the majority of situations.

That you get less performance per flop with AMD cards is beside the point when they scale their pricing to accommodate for that.

He isn't arguing price, though; he is arguing about the potential of Polaris 10 based on its die size, with GCN as context.

And he is right in what he claims:
Maxwell provides greater performance per die size vs GCN (and this is my addition: people saying the Fury X compares to the 980 Ti are being incredibly dishonest; the 980 Ti overclocks like crazy and the Fury X obviously doesn't, non-reference 980 Tis shit all over the Fury X, and the 980 Ti overclocks so much that it approaches a stock 1080 in performance).

And he is right that from there it is logical to conclude that one of these scenarios will happen:

A - Polaris 10 will not be changed much from GCN 1.3: Pascal will keep this advantage and Polaris will be behind in performance again. That's the beginning and the end of that reasoning.
Ofc they can sell their GPUs at smaller margins (which is easy, considering Nvidia is charging out the ass for Pascal) and offer better value than Nvidia (this fact makes dr_rus super mad in the other thread because reasons, even though his own logic contradicts him here).

B - Polaris 10 is a big departure from GCN 1.3, so it is no longer disadvantaged compared to Pascal the way GCN was to Maxwell when it comes to perf/mm², and Polaris 10 CAN compete with the GTX 1070 performance-wise.


In case A, R9 290 owners can keep enjoying decent driver support, but it sucks for people who buy Polaris as the card will be underpowered for its size.

In case B, GCN 1.3 cards will be "keplered" (or hd5870'd, or hd6870'd, or radeon 9800 pro'd), with faltering driver focus causing them to fall behind performance-wise (again, dr_rus would disagree here and solely blame console ports being designed for GCN, which reminds me we should compare PC exclusives' performance between the 780 Ti and GTX 970 to settle that argument and see who is right). Which will put GCN users' feet back on the ground of the reality of owning a GPU that is no longer sold.



I think I'm being objective here but somehow I'll piss off both dr rus and amd users :p
 
Anyway, what are you talking about with this BIOS tweak and OC? You got me curious.

There is a 390X BIOS we can flash onto the 290X. From doing this you will gain ~5% over stock.

http://www.overclock.net/t/1564219/modded-r9-390x-bios-for-r9-290-290x-updated-02-16-2016/0_50


290X memory timings (think CAS latency of normal RAM such as DDR3) are separated into "straps".

801-900MHz strap

901-1000Mhz strap

1001-1125MHz strap

1126-1250MHz strap

1251-1375MHz strap

1376-1500MHz strap

1501-1625MHz strap

1626-1750MHz strap

Each strap has its own set of memory timings.

The lower the strap mhz, the tighter the timings are.

We edit the BIOS by copying the timings from the 1126-1250MHz strap into all the straps above it.

Now when you overclock your RAM from 1250MHz (stock) to 1500MHz, your performance will be another 5~10% higher than a stock-BIOS 290X also running at 1500MHz.
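
A toy model of the strap-copy trick described above, just to make the idea concrete (the timing values are placeholders, not real Hawaii BIOS data, and the real edit is done on the BIOS image with a Hawaii BIOS editor, not in Python):

```python
# Memory straps: clock range -> timing set. Copy the tight 1126-1250 MHz
# timings into every higher strap, so an overclock past 1250 MHz keeps them.
straps = {
    (1001, 1125): "timings_C",
    (1126, 1250): "timings_D",   # strap used at the stock 1250 MHz clock
    (1251, 1375): "timings_E",
    (1376, 1500): "timings_F",
    (1501, 1625): "timings_G",
    (1626, 1750): "timings_H",
}

donor = straps[(1126, 1250)]
for clock_range in straps:
    if clock_range[0] > 1250:
        straps[clock_range] = donor   # replace the looser high-frequency timings

print(straps)
```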

http://www.overclock.net/t/1561372/hawaii-bios-editing-290-290x-295x2-390-390x/0_50




Here is my 290x @ 1150mhz / 1580mhz (conservative clocks)

http://www.3dmark.com/fs/7825736
 
Rus, the 380X is a £200 card and the 970 a £250 one.

It's not a fair comparison; the R9 390 is closer in price (you can find it for £255) to a GTX 970, and that matches or outperforms the 970 in the majority of situations.

That you get less performance per flop with AMD cards is beside the point when they scale their pricing to accommodate for that.

Prices are set by the vendors depending on how their h/w performs compared to the competition. The reason the 380X is a cheaper card than the 970 is that it's slower even though it has approximately the same complexity and die size - which means that AMD is getting lower margins than NV from the same production costs because the GCN architecture isn't as efficient as Maxwell. Which is what that thought experiment is about. And if Polaris doesn't radically improve the GCN architecture it will be the same with Polaris vs Pascal, because Pascal is basically Maxwell @ 16nm.

Ofc they can sell their GPUs at smaller margins (which is easy, considering Nvidia is charging out the ass for Pascal) and offer better value than Nvidia (this fact makes dr_rus super mad in the other thread because reasons, even though his own logic contradicts him here).

What? This fact doesn't make me anything. I fully agree that AMD cards can have better value, and most of the 300 series actually does compared to the 900 series. But this costs AMD profits, and AMD isn't in a good position to lose any of those. Hence why I hope that Polaris will actually be a big improvement and will actually be on the same level as Pascal - this will allow AMD to compete in the long run. Selling 512-bit cards on 440mm^2 chips for $300 doesn't.
 
The reason why AMD's die sizes seem so bloated compared to Maxwell is that nVidia ditched their DP units and used the silicon purely for gaming. If AMD hadn't been forced to rebrand two previous compute-heavy flagship cards (Tonga is basically Tahiti redone), then GCN would've looked a lot better perf/mm²-wise compared to Maxwell. Hell, look at Pitcairn; it still holds up on that metric. Not to mention it's still GCN 1.0, smaller, and released nearly three years before the GTX 960, and it's like what, 10-15% slower? If Fiji weren't severely bottlenecked, it probably would've done a lot better against GM200 because there's a lot of unused grunt.

I find it more surprising how well some of the old GCN chips actually manage to hold up against Maxwell because it was Kepler's competitor.

AMD has a history of delivering small lean and mean chips back when no one cared because nVidia was still the fastest in town. If they don't mess up, Polaris will probably do just fine for what it is.
 
The reason why AMD's die sizes seem so bloated compared to Maxwell is that nVidia ditched their DP units and used the silicon purely for gaming.
Both Fiji and Tonga have a 1/16 DP rate, which is just two times higher than what Maxwell and GP10x have. So in essence AMD has "ditched" its DP units as well in GCN3 cards. This, btw, is the most likely reason for the perf/watt improvements seen in GCN3 (that and the HBM used on Fiji, obviously).
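
To put those DP ratios in absolute terms, a quick sketch with approximate spec-sheet FP32 figures:

```python
cards = {
    "Fury X (GCN3, 1/16 DP)":    {"fp32_tflops": 8.6, "dp_divisor": 16},
    "980 Ti (Maxwell, 1/32 DP)": {"fp32_tflops": 5.6, "dp_divisor": 32},
}
for name, c in cards.items():
    print(f"{name}: ~{c['fp32_tflops'] / c['dp_divisor']:.2f} TFLOPS FP64")
# Neither is a serious FP64 part; the 1/2 and 1/3 rate units of earlier
# compute flagships (Hawaii FirePro, GK110 Tesla) are what really cost die area.
```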
 
Too few games nowadays support Crossfire, and many of those that do have issues (The Division being the latest example). Besides, as it stands SteamVR/OpenVR doesn't currently support Crossfire/SLI at all, and the latest update actually even forcefully disables those functions before launching the games.

Am I missing something here? I just installed Crossfire the other day (2nd R390x) and have encountered zero issues playing the following games:

  • Doom
  • Far Cry 4
  • Divinity Original Sin
  • Crysis 3
  • Fallout 4

In fact, the only issue I did encounter was with Warhammer--which was fixed that day?

I swear, I always see people spreading stuff like this about Crossfire and SLI and my experience so far has been incredible. Granted, I'm still in the honeymoon period, but sheesh, it seems to be working pretty well for me right now.

Plus, with DX12, the multi-GPU stuff becomes easier to implement.
 