AMD Polaris architecture to succeed Graphics Core Next

It seems like the 470 and 480 are rumored to be unveiled soon. As someone who is unfamiliar with AMD's naming but a bit more familiar with how Nvidia labels theirs, which models in Nvidia's lineup are the x70 and x80 generally comparable to? Based on GPUBoss, it seems like Nvidia's numbers are generally 20 below AMD's (so 970 vs. 390, 960 vs. 380, 950 vs. 370, etc.). Is that right?

It varies per gen. The x70 and x80 will be replacements for the 970 and 980. The 970 and 980 are the competition for the 390 and 390X currently, but the majority of users would be foolish to purchase the Nvidia cards over the AMD ones.

I'm basing that on their marketing, and on the fact that Pascal has changes to the way it handles compute, which will close the DX12 gap a bit, and will probably be more energy efficient by virtue of going from 28nm to 16nm. Polaris 11 is touting energy efficiency above all and is a smaller die than Hawaii/Grenada XT; it is clearly intended as an iterative improvement on the 390 rather than a competitor to Fury.

Sorry, dude, but you have it backwards.
 
Although that is possible I guess, why would Sony pay AMD to design a new GPU based on old tech when Polaris is right there?

Also, it is claimed the Jaguar already bottlenecks the PS4, so how would doubling GPU performance be a good idea?
Full PS4 to PS4K compatibility.

I don't think Sony is looking to increase CPU performance drastically... I guess they will work with only a 200-400 MHz clock increase (about 20-30% more performance).

Using Polaris could end up giving Sony more trouble with the SDK and game compatibility between the PS4 and PS4K.
 
To me it's almost too obvious and perfect how the leaks line up as semi-custom parts: Polaris 11 = NX and Polaris 10 = PS4K.

Polaris 10 [Ellesmere]
target - 1440p60 DX12 gaming for desktop
36CU 2304 SPs & 40CU 2560 SPs
~100-110W

Exactly 2X PS4 GPU for the 36CU version

Polaris 11 [Baffin]
target - 1080p60 gaming for desktop & laptops
2 GPUs
16CU 1024 SPs & 20CU 1280 SPs
4GB of GDDR5/X
~50W

16CU + new architecture = a bit more powerful than the PS4, plus nice and low-powered.

This makes a lot of sense actually.
I just hope that it will indeed be GDDR5X, as otherwise bandwidth would be bottlenecked by a memory interface only 128 bits wide (the PS4 has 256-bit GDDR5, as does Pitcairn). Even with the latest compression tech this would be a step back.
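To put rough numbers on both the "exactly 2X PS4" claim above and the bandwidth worry, here's a quick back-of-the-envelope sketch (the 7 Gbps GDDR5 and 10 Gbps GDDR5X per-pin rates are assumed typical figures, not anything confirmed for these parts):

```python
# Rough sanity checks on the quoted specs. Two known quantities: a GCN CU has
# 64 stream processors, and peak bandwidth = bus width (bits) / 8 * per-pin
# data rate (Gbps). The 7 and 10 Gbps rates below are assumptions.

SP_PER_CU = 64  # stream processors per GCN compute unit

def sps(cus):
    return cus * SP_PER_CU

def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(sps(18))                   # PS4 GPU: 18 CUs -> 1152 SPs
print(sps(36))                   # rumored Polaris 10: 2304 SPs, exactly 2x PS4
print(bandwidth_gbs(256, 5.5))   # PS4: 256-bit GDDR5 @ 5.5 Gbps -> 176 GB/s
print(bandwidth_gbs(128, 7.0))   # 128-bit GDDR5 @ 7 Gbps (assumed) -> 112 GB/s
print(bandwidth_gbs(128, 10.0))  # 128-bit GDDR5X @ 10 Gbps (assumed) -> 160 GB/s
```

So a 128-bit GDDR5 card really would sit well below the PS4's 176 GB/s, while GDDR5X would close most of that gap.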
 
It varies per gen. The x70 and x80 will be replacements for the 970 and 980. The 970 and 980 are the competition for the 390 and 390X currently, but the majority of users would be foolish to purchase the Nvidia cards over the AMD ones.

Why do you say that? GPUBoss made Nvidia the winner of all those comparisons when I was looking at them.

I find it interesting that the numbering varies. Thanks for letting me know!
 
It seems like the 470 and 480 are rumored to be unveiled soon. As someone who is unfamiliar with AMD's naming but a bit more familiar with how Nvidia labels theirs, which models in Nvidia's lineup are the x70 and x80 generally comparable to? Based on GPUBoss, it seems like Nvidia's numbers are generally 20 below AMD's (so 970 vs. 390, 960 vs. 380, 950 vs. 370, etc.). Is that right?

Yeah, that's right, but please don't use that crappy site; you'd be better served even by Wikipedia.
 
Why do you say that? GPUBoss made Nvidia the winner of all those comparisons when I was looking at them.

I find it interesting that the numbering varies. Thanks for letting me know!

I'm not familiar with GPUBoss, but Nvidia performance is generally poor in all the marquee titles these days:

Far Cry 4
Evolve
Far Cry Primal
The Division
Battlefront
Ryse
Black Ops 3
Quantum Break
Rainbow Six Siege
Killer Instinct 3
Gears of War Ultimate Edition
Need for Speed
Hitman
Ashes of the Singularity
XCOM 2
Arkham Knight

To list some.
 
Yeah, that's right, but please don't use that crappy site; you'd be better served even by Wikipedia.

Oh, okay. It just popped up in search results. I'll keep that in mind.

I'm not familiar with GPUBoss, but Nvidia performance is generally poor in all the marquee titles these days:

Far Cry 4
Evolve
Far Cry Primal
The Division
Battlefront
Ryse
Black Ops 3
Quantum Break
Rainbow Six Siege
Killer Instinct 3
Gears of War Ultimate Edition
Need for Speed
Hitman
Ashes of the Singularity
XCOM 2
Arkham Knight

To list some.

Hmm. Okay. I'll look at game-specific benchmarks once the new chips launch. Thanks for the response.
 
Oh, okay. It just popped up in search results. I'll keep that in mind.



Hmm. Okay. I'll look at game-specific benchmarks once the new chips launch. Thanks for the response.

Almost every big game announced with DX12 support is partnering with AMD, to boot. I see rough times ahead for Nvidia until Volta.
 
Full PS4 to PS4K compatibility.

I don't think Sony is looking to increase CPU performance drastically... I guess they will work with only a 200-400 MHz clock increase (about 20-30% more performance).

Using Polaris could end up giving Sony more trouble with the SDK and game compatibility between the PS4 and PS4K.

Honestly, I don't know how all that works, but I would think compatibility would be something worked into the design? I doubt putting a new Polaris-based GPU into a five-year-old PC would be a complete failure. Sure, it might need driver updates early on, but it wouldn't be completely broken.

I'm sure that whatever Sony's plans are for the PS4K, compatibility was close to the top of the list of must-haves.

disap.ed said:
This makes a lot of sense actually.
I just hope that it will indeed be GDDR5X, as otherwise bandwidth would be bottlenecked by a memory interface only 128 bits wide (the PS4 has 256-bit GDDR5).

GDDR5X would be nice. I think the PS4K would be a semi-custom Polaris 10 though (256-bit).
 
As an AMD owner I have to say that I'm more interested in frame pacing at this point, since I'm at least semi-certain that AMD can come up with cards that are competitive in the 1070/490 price range from a pure performance point of view.

But frame pacing has been a problem for AMD for the last couple of GPU generations, so I wonder if that'll change.

AFAIR they announced improved frame times back when the Crimson driver came out, but I didn't check back with games thoroughly enough to verify that.
 
I'm not familiar with GPUBoss, but Nvidia performance is generally poor in all the marquee titles these days:

Far Cry 4
Evolve (hardly marquee)
Far Cry Primal
The Division
Battlefront
Ryse (really?)
Black Ops 3
Quantum Break
Rainbow Six Siege
Killer Instinct 3 (horribly optimized)
Gears of War Ultimate Edition (barely works)
Need for Speed
Hitman
Ashes of the Singularity (niche RTS game developed for Mantle, then Vulkan)
XCOM 2
Arkham Knight (pretty looking dumpster fire)

Wut?

As for some of the others, the 980 Ti handily outperforms the Fury X in Quantum Break (even at 4K), and non-reference variations push it past the Fury X in most of the games you mentioned.
 
Although that is possible I guess, why would Sony pay AMD to design a new GPU based on old tech when Polaris is right there?

Also, it is claimed the Jaguar already bottlenecks the PS4, so how would doubling GPU performance be a good idea?

Because while AMD was busy doing things like taping out Polaris, they were also working on the PS4K, the Xbox 1.5 and the NX. AMD's not able to integrate an unfinished design into three new consoles, let alone keep the price where it needs to be.

The 2x statement isn't accurate; people need to accept that.
 
I'm sure that whatever Sony's plans are for the PS4K, compatibility was close to the top of the list of must-haves.

While multiple GCN generations aren't an insurmountable or "super big" problem, they are a definite issue with trying to cram Polaris in mid-gen. It gets weird when you start taking advantage of Polaris's new capabilities and then want to down-port that from the PS4K for PS4 compatibility. You're also just adding complexity to a situation that still doesn't seem to be accepted as unilaterally good by the people who will have to work with these hardware-profile disparities. You probably don't want to introduce even more disparities.

It's easier, and probably cheaper, to just use a custom version of a standard, mature part. The size rumors also rule out Polaris 10, leaving only Polaris 11.

I also seriously doubt that AMD can tape out Polaris for multiple consoles; that's just asking for too much too quickly.
 
Hmm. Okay. I'll look at game-specific benchmarks once the new chips launch. Thanks for the response.

I don't want to enter into Nvidia vs. AMD arguments because I really don't care about that; I just choose whatever is best for me in the current generation. (7 out of 10 cards I've owned over the years were ATI/AMD.)

The pattern is usually AMD cards having slightly better performance at the same price point but worse day-one driver support. So if you play a lot of AAA games on day one, Nvidia would be a better choice, whereas AMD drivers would catch up at a later date if waiting a bit doesn't bother you. That said, things have changed a bit lately, as there are games with good AMD driver support from day one, so it's a bit more complicated than it used to be.

We'll see how the new generation changes things around, especially with DX12, which seems to favor AMD's current cards at the moment.
 
Wut?

As for some of the others, the 980 Ti handily outperforms the Fury X in Quantum Break (even at 4K), and non-reference variations push it past the Fury X in most of the games you mentioned.

You can make all the excuses you want for titles being broken/unoptimized, but the performance is what it is. That QB benchmark is an outlier; we already know the game runs like shit on Nvidia. But yes, the 980 Ti is competitive in some (keyword: some) of the listed titles if overclocked enough. That's but one Nvidia card.

Oh yeah, here's Ryse:

http://www.techpowerup.com/reviews/Gigabyte/GTX_980_Ti_Waterforce/16.html
 
Because while AMD was busy doing things like taping out Polaris, they were also working on the PS4K, the Xbox 1.5 and the NX. AMD's not able to integrate an unfinished design into three new consoles, let alone keep the price where it needs to be.

Maybe you're right. I just think that it would be easier to manage a single architecture for multiple lines (dGPU, PS4/Xbox and NX) than possibly 3 different ones. Price and availability are two strong reasons this won't be the case, I agree.

The 2x statement isn't accurate; people need to accept that.

Didn't OsirisBlack make it clear that the person from Sony relating the info was unequivocal?
 
I don't want to enter into Nvidia vs. AMD arguments because I really don't care about that; I just choose whatever is best for me in the current generation. (7 out of 10 cards I've owned over the years were ATI/AMD.)

The pattern is usually AMD cards having slightly better performance at the same price point but worse day-one driver support. So if you play a lot of AAA games on day one, Nvidia would be a better choice, whereas AMD drivers would catch up at a later date if waiting a bit doesn't bother you. That said, things have changed a bit lately, as there are games with good AMD driver support from day one, so it's a bit more complicated than it used to be.

We'll see how the new generation changes things around, especially with DX12, which seems to favor AMD's current cards at the moment.

Thanks for the response! I don't have much interest in running AAA games on day one on PC, as I'm more likely to buy them on console (this is probably the wrong thread to admit that... haha). What I'm watching the Nvidia and AMD product announcements for is mostly a computer to be built in late 2016/early 2017 that will allow me to play:

  • 7th-gen (console-timeline) games that I missed because I didn't have a PS360 or gaming PC, at max settings/1080p/60 (basically AAA through 2013)
  • Various indie and smaller titles for the next 5-7 years that only come out on PC or were very clearly designed with PC in mind
  • The random PC library of games I've built up over the years and/or have been wanting to play (e.g., Starcraft 2, Audiosurf 2, Torchlight 2, maybe Guild Wars 2)
So, for me, it's going to come down to cost/value and being an (energy) efficient design.

This news/speculation thread probably isn't the place for my kind of questions, so when it gets closer, I'll definitely poke my head into the PC building thread to get some advice after doing some research.

Thanks again for your thoughts!
 
It seems like the 470 and 480 are rumored to be unveiled soon. As someone who is unfamiliar with AMD's naming but a bit more familiar with how Nvidia labels theirs, which models in Nvidia's lineup are the x70 and x80 generally comparable to?
Supposedly:
970 -> x70
980 -> x80

This means Nvidia may have the advantage because AMD Polaris will debut in the 950 and 960 segment.
 
Supposedly:
970 -> x70
980 -> x80

This means Nvidia may have the advantage because AMD Polaris will debut in the 950 and 960 segment.

Polaris 10 MIGHT be able to come close to a 980. Purely looking at shaders, at 40 CUs it's pretty much a 290. Throw in those rumored clock speeds and architectural improvements and who knows where it ends up. The thing is that Hawaii needed a 512-bit memory bus to keep the chip fed and Polaris 10 will be using 256-bit. I'm sure that it won't be too much of a problem if they use higher clocked memory and there's obviously delta compression.

But yeah, if GP104 is going to launch around Computex, people are going to have to wait until Vega to get something comparable in performance. If that one leaked picture of its die wasn't a fake, GP104 ends up at around 300mm² which is a decent amount bigger than P10's ~230mm².
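To put rough numbers on that, theoretical FP32 throughput is just shaders × 2 ops (FMA) × clock. A minimal sketch, where the Polaris 10 clock is an assumed figure from the rumor mill rather than anything confirmed:

```python
# Peak FP32 throughput: shaders * 2 ops per clock (fused multiply-add) * GHz -> GFLOPS.
def gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(gflops(2560, 0.947))  # R9 290 (2560 SPs @ 947 MHz): ~4849 GFLOPS
print(gflops(2048, 1.126))  # GTX 980 (2048 cores @ 1126 MHz base): ~4612 GFLOPS
print(gflops(2560, 1.15))   # 40-CU Polaris 10 @ an assumed 1.15 GHz: ~5888 GFLOPS
```

Paper FLOPS ignore the feeding problem, of course, which is exactly where the 256-bit bus question comes in.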
 
Polaris 10 MIGHT be able to come close to a 980. Purely looking at shaders, at 40 CUs it's pretty much a 290. Throw in those rumored clock speeds and architectural improvements and who knows where it ends up. The thing is that Hawaii needed a 512-bit memory bus to keep the chip fed and Polaris 10 will be using 256-bit. I'm sure that it won't be too much of a problem if they use higher clocked memory and there's obviously delta compression.

But yeah, if GP104 is going to launch around Computex, people are going to have to wait until Vega to get something comparable in performance. If that one leaked picture of its die wasn't a fake, GP104 ends up at around 300mm² which is a decent amount bigger than P10's ~230mm².

Hawaii didn't exactly need a 512-bit memory bus per se.

It's just that AMD could design a smaller and simpler memory controller for lower-clocked VRAM on a 512-bit bus than they could have for a 384-bit bus with higher memory clocks.

So they went with it.
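A small sketch of that tradeoff, using Hawaii's actual 5 Gbps memory speed: the same 320 GB/s from a 384-bit bus would have required roughly 6.7 Gbps memory and a controller built for those higher clocks:

```python
# Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps) -> GB/s.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

hawaii = bandwidth_gbs(512, 5.0)  # 512-bit @ 5 Gbps -> 320 GB/s
needed = hawaii / (384 / 8)       # per-pin rate a 384-bit bus would need
print(hawaii, needed)             # 320.0 GB/s, ~6.67 Gbps
```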
 
Maybe you're right. I just think that it would be easier to manage a single architecture for multiple lines (dGPU, PS4/Xbox and NX) than possibly 3 different ones. Price and availability are two strong reasons this won't be the case, I agree.

Most of us aren't privy to the internal workings of AMD. Perhaps working with a single architecture would be more efficient. On the other hand, doing so might disrupt operations for a period of time that management finds untenable.

I'd add that you have to remember confidentiality practices: the PS4 and Xbox One GPU SoC teams were separate teams and were supposed to operate without knowledge of what the other was doing. Even organized under the same division, each of the three console teams is probably working separately from the others.

Didn't OsirisBlack make it clear that the person from Sony relating the info was unequivocal?

In Killer Instinct a 290X is as fast as a 980 Ti, so yeah.

Icecold's post is the only way I can see something like this being true -- jumping from a cut-down 7870 to a cut-down 290X.
 
AMD

and

Nvidia

Seems like Polaris 11 and GP104 will both be announced at Computex 2016. The difference is that I expect Polaris 11 (R7 470 and R9 480) to be R7 380[X]- and R9 390-level cards with better energy efficiency, while I imagine GP104 (GTX 1080 and GTX 1070) will most likely bring noticeable performance increases; Polaris is more a revision to GCN, whereas Pascal is a new architecture.

That is definitely worth waiting for; R7 470 here I come.
 
Sorry not to have browsed through the entire thread, but if the numbers above are correct, can we expect something like GTX 950-level performance at a 50 W TDP from Polaris 11? That would be a nice upgrade for someone looking for a GTX 750 Ti (read: PCIe-powered GPU) replacement.
 
Sorry not to have browsed through the entire thread, but if the numbers above are correct, can we expect something like GTX 950-level performance at a 50 W TDP from Polaris 11? That would be a nice upgrade for someone looking for a GTX 750 Ti (read: PCIe-powered GPU) replacement.

Absolutely. Chances are, we already saw the card in action. A few months ago, AMD demoed a card with lower power usage and better frame rate against a GTX 950. Both ran Star Wars side by side in the demo.
 
Technically speaking, the performance itself doesn't actually "fall off a cliff". It's just that nVidia adheres quite strictly to an aggressive engineered-obsolescence model, turning various things on and off in their software packs/drivers (or removing them altogether). It's why you end up with Kepler Titan Blacks running worse than one would expect from reading the specs.

Technically speaking, the reason Kepler cards aren't performing well these days is that most devs aren't optimizing their engines for Kepler (or even NV cards in general) any more, as their main optimization target is GCN in the modern consoles. NV software isn't really to blame here.

However, there is also the fact that GCN, when conceived, was generally more forward-looking than Kepler.

That's true only if you treat the console design wins, which happened only after GCN was made, as something AMD was planning from the start to rely on as the main optimization force for its architecture; otherwise GCN isn't really "forward-looking" toward anything at all. GCN's main power is in how it handles compute alongside graphics, but to get that power exploited you have to force devs into a situation where using compute optimizations in the graphics pipeline is the only viable option for getting peak performance out of the h/w. Hence GCN was rather shit until it was in all the modern consoles, because up to that moment most devs didn't actually need to rely on compute hacks to reach higher h/w utilization on GCN.

One could argue that compute usage in games will go up with time and that's why GCN was "forward-looking", and this is somewhat true. It is also true, however, that GCN's other part, the graphics h/w, isn't forward-looking at all and is actually the main reason this architecture underperformed in previous years. Hence my hope that they'll finally fix the most glaring issues here in Polaris, as this would lead to nice performance increases across the board without the need to push developers into new programming models (async compute and compute optimizations of graphics in general) to achieve them.
 
Technically speaking, the reason Kepler cards aren't performing well these days is that most devs aren't optimizing their engines for Kepler (or even NV cards in general) any more, as their main optimization target is GCN in the modern consoles. NV software isn't really to blame here.

Oh wow is this actually a thing now? I feel like people had that discussion of "AMD will perform better because console stuff is optimized for AMD" but that was quickly dismissed.

Is there a source confirming this somewhere?
 
Technically speaking, the reason Kepler cards aren't performing well these days is that most devs aren't optimizing their engines for Kepler (or even NV cards in general) any more, as their main optimization target is GCN in the modern consoles. NV software isn't really to blame here.
So suddenly all the already-created Kepler optimizations have vanished from every engine.

Fact is that NV's architecture doesn't seem as future-proof as AMD's. The customer will see the results and blame the card; they won't go "Oh, but this game isn't optimized for my GPU's specific architecture."
 
Oh wow is this actually a thing now? I feel like people had that discussion of "AMD will perform better because console stuff is optimized for AMD" but that was quickly dismissed.

Is there a source confirming this somewhere?
What proof do you need? For the devs to come out and say "yeah, we decided to forget about 80% of PC graphics h/w because of budgets"? That won't happen, obviously. It's also pretty obvious that when a GPU that is slower by every possible metric outperforms a faster one of a different architecture (like a 380X beating an OG Titan in several modern games), it's a sign that optimizations for the latter are absent, above anything else.

So suddenly all the already-created Kepler optimizations have vanished from every engine.

Fact is that NV's architecture doesn't seem as future-proof as AMD's. The customer will see the results and blame the card; they won't go "Oh, but this game isn't optimized for my GPU's specific architecture."
When a game is running on a somewhat older version of its renderer, Kepler is suddenly doing fine (example: DS3). When a game is obviously using a new console-optimized renderer, Kepler (and Maxwell, which actually proves the point here beyond any doubt) suddenly dies (example: QB).

Fact is, you don't know what you're talking about when you call one architecture "future-proof" and another not. If not for the console design wins, GCN would still be underperforming in the exact same way it did earlier, because there would be no incentive for devs to fix GCN's bad graphics performance with the various compute tricks they are using right now to get console engines up to speed. From a purely technical point of view, GCN is certainly more advanced than Kepler in compute, and both Kepler and Maxwell are actually more advanced than GCN in graphics. Maxwell is arguably on the same level as GCN in compute as well. The somewhat strange results we're seeing currently are mostly down to developers omitting renderer optimizations for anything but console GCN h/w, and much less to some architecture not being "future-proof".
 
What proof do you need? For the devs to come out and say "yeah, we decided to forget about 80% of PC graphics h/w because of budgets"? That won't happen, obviously. It's also pretty obvious that when a GPU that is slower by every possible metric outperforms a faster one of a different architecture (like a 380X beating an OG Titan in several modern games), it's a sign that optimizations for the latter are absent, above anything else.


When a game is running on a somewhat older version of its renderer, Kepler is suddenly doing fine (example: DS3). When a game is obviously using a new console-optimized renderer, Kepler (and Maxwell, which actually proves the point here beyond any doubt) suddenly dies (example: QB).

Fact is, you don't know what you're talking about when you call one architecture "future-proof" and another not. If not for the console design wins, GCN would still be underperforming in the exact same way it did earlier, because there would be no incentive for devs to fix GCN's bad graphics performance with the various compute tricks they are using right now to get console engines up to speed. From a purely technical point of view, GCN is certainly more advanced than Kepler in compute, and both Kepler and Maxwell are actually more advanced than GCN in graphics. Maxwell is arguably on the same level as GCN in compute as well. The somewhat strange results we're seeing currently are mostly down to developers omitting renderer optimizations for anything but console GCN h/w, and much less to some architecture not being "future-proof".

Do you think this will affect how NV designs their GPUs in the future?
 
AMD

and

Nvidia

Seems like Polaris 11 and GP104 will both be announced at Computex 2016. The difference is that I expect Polaris 11 (R7 470 and R9 480) to be R7 380[X]- and R9 390-level cards with better energy efficiency, while I imagine GP104 (GTX 1080 and GTX 1070) will most likely bring noticeable performance increases; Polaris is more a revision to GCN, whereas Pascal is a new architecture.

They're both revisions, and Polaris seems to be a much bigger revision while Pascal is basically 16nmFF Maxwell.
 
Absolutely. Chances are, we already saw the card in action. A few months ago, AMD demoed a card with lower power usage and better frame rate against a GTX 950. Both ran Star Wars side by side in the demo.

...aaaaaand nVidia noticed and reacted:
Anandtech said:
GIGABYTE has quietly added a low-power GeForce GTX 950 video card to its lineup. The product does not require auxiliary PCIe power connector and can be powered entirely by a PCIe x16 slot. Low-power graphics cards featuring the GM206 graphics chip were released by multiple manufacturers recently, GIGABYTE’s board will compete against similar products by three other makers.

Depending on what Polaris 11's price/performance ratio turns out to be, these GTX 950s could eat some market share in this segment. AMD would be well advised to launch a low-profile card that would fit in HTPCs and business towers.
 
...aaaaaand nVidia noticed and reacted:


Depending on what Polaris 11's price/performance ratio turns out to be, these GTX 950s could eat some market share in this segment. AMD would be well advised to launch a low-profile card that would fit in HTPCs and business towers.

I know about these. I don't know if they upped the performance. Don't forget, Polaris will be DX12-compatible with async shaders and whatnot, something the 950 can't do.
 
As an AMD owner I have to say that I'm more interested in frame pacing at this point, since I'm at least semi-certain that AMD can come up with cards that are competitive in the 1070/490 price range from a pure performance point of view.

But frame pacing has been a problem for AMD for the last couple of GPU generations, so I wonder if that'll change.

AFAIR they announced improved frame times back when the Crimson driver came out, but I didn't check back with games thoroughly enough to verify that.

I agree, and that's definitely one area they're improving, both with Crimson and by hiring Scott Wasson from The Tech Report to tackle frame pacing:

http://techreport.com/blog/29390/into-a-new-era

Some months ago, I got a phone call from Raja Koduri, who heads up the newly formed Radeon Technologies Group at AMD. Raja asked me if I'd be interested in coming to work at AMD, to help implement the frame-time-based testing methods for game performance that I've championed here at TR. In talking with Raja, I came to see that he has a passion for doing things the right way, for creating better experiences for Radeon owners. He was offering me a unique opportunity to be a part of that effort, to move across organizational lines and help ensure that the Radeon Technologies Group creates the best possible experiences for gamers.
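For anyone wondering what those "frame-time-based testing methods" look like in practice: instead of an average fps number, you look at the distribution of individual frame times, e.g. the 99th-percentile frame time and the total time spent beyond 16.7 ms (the 60 fps budget). A minimal sketch, with made-up sample numbers:

```python
# Two frame-time metrics Tech Report popularized: the 99th-percentile frame
# time, and total time spent past a threshold (16.7 ms = the 60 fps budget).

def percentile_ms(frame_times_ms, pct=99):
    """Frame time (ms) that pct% of frames come in under."""
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

def time_beyond_ms(frame_times_ms, threshold_ms=16.7):
    """Total milliseconds spent past the threshold, i.e. stutter the player feels."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [14.2, 15.1, 16.0, 33.4, 15.5, 14.9, 41.0, 15.2]  # hypothetical capture
print(percentile_ms(frames))   # 99th-percentile frame time: 41.0 ms
print(time_beyond_ms(frames))  # time spent beyond 16.7 ms: 41.0 ms
```

An average fps figure can look perfectly healthy while those two numbers expose exactly the stutter that frame pacing is about.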
 
Do you think this will affect how NV designs their GPUs in the future?

This is something I've been thinking about lately as well, but for now I don't think we'll see anything beyond the usual annual/once-every-two-years architectural advances from NV. Copying GCN is something they obviously won't do, because it's, well, not possible (patents), and it would actually be a step backwards for NV from a power-efficiency point of view. What they will do instead is try to improve the slow parts of their architecture in such a way that most GCN console optimizations won't slow them down, if not outright help them. I see them already doing this for async compute in Pascal, for example.

And if you're asking more generally whether NV will now design their GPUs for workloads that don't exist on the market at the moment the GPUs are released: I'm pretty sure that won't happen, as NV's way of building an optimal architecture for current workloads, instead of stuffing the chip with parts that won't be used for the next couple of years, is what brought them their 80% market share. It's also a question of what you as a user prefer: to get the most performance out of the card the moment you buy it, or to wait 2-3 years until your card can beat the competition in some benchmarks while already providing subpar performance on average because of its age. Granted, it's not always clear whether a card will even provide such an increase in the future, and GCN's recent wins are mostly because they've got the console designs to themselves; otherwise they'd probably still be losing to NV in all tiers of the market.
 
So I'm a console gamer who's planning out a build for this summer-ish, June-July. I've been focused on the NV 970-980 Ti range, but stumbling into this thread has me rethinking my NV bias.

Are these new AMDs really supposed to be that much better than their NV counterparts? Because of this console partnership? Sorry, complete noob here!
 
So I'm a console gamer who's planning out a build for this summer-ish, June-July. I've been focused on the NV 970-980 Ti range, but stumbling into this thread has me rethinking my NV bias.

Are these new AMDs really supposed to be that much better than their NV counterparts? Because of this console partnership? Sorry, complete noob here!

Sounds like you should wait for a 480
 
So I'm a console gamer who's planning out a build for this summer-ish, June-July. I've been focused on the NV 970-980 Ti range, but stumbling into this thread has me rethinking my NV bias.

Are these new AMDs really supposed to be that much better than their NV counterparts? Because of this console partnership? Sorry, complete noob here!

All the console partnership means is AAA games will likely be more optimized for AMD cards going forward, especially with the newer cards if NX and PS4K are using Polaris 11.

As far as performance, it's too early to tell, we know almost nothing about Nvidia's target here. Strictly generalizing what little we know so far, it seems AMD is focusing on VR-capable cards with low power usage and mainstream price points, whereas Nvidia is probably going to lead with their upper mid range cards as they usually do. We should have some comparisons very soon.
 
Are these new AMDs really supposed to be that much better than their NV counterparts? Because of this console partnership? Sorry, complete noob here!

Nobody knows if that's actually true. Right now they're just excuses.

I suggest you wait and not spend money on a 970/980. See what both companies have to offer later this year, then decide. I decided on AMD because it's more forward-thinking and has better DX12 support.
 
Sounds like you should wait for a 480

Is the 480 expected to be 980/980 Ti-competitive or more like an updated 390?

All the console partnership means is AAA games will likely be more optimized for AMD cards going forward, especially with the newer cards if NX and PS4K are using Polaris 11.

As far as performance, it's too early to tell, we know almost nothing about Nvidia's target here. Strictly generalizing what little we know so far, it seems AMD is focusing on VR-capable cards with low power usage and mainstream price points, whereas Nvidia is probably going to lead with their upper mid range cards as they usually do. We should have some comparisons very soon.

Very cool information, thanks! I wasn't aware all three of the "new" consoles were AMD-powered. Heck, I don't even know who made the GPUs for the current three, lol. I always forget that stuff is at times developed by the PC card manufacturers.

Excited to see both the NV and AMD announcements.
 
So is AMD going to be able to compete with Nvidia at all? I need to be able to troll my Nvidia friends but I can't do that if it doesn't at least have raw power.
 
I must not have been paying attention for a while, are people saying that AMD is the better performer?

I mean, I'm team Red, but I never expected people on GAF to say as much...
 
I definitely want to go AMD for my next card. The problem is that FreeSync monitors are not getting the same treatment as G-Sync ones.

For example, I want a FreeSync version of the Asus PG348Q: ultrawide, 34-inch, curved, 100 Hz, G-Sync.
 
Is the 480 expected to be 980/980 Ti-competitive or more like an updated 390?



Very cool information, thanks! I wasn't aware all three of the "new" consoles were AMD-powered. Heck, I don't even know who made the GPUs for the current three, lol. I always forget that stuff is at times developed by the PC card manufacturers.

Excited to see both the NV and AMD announcements.

As has been mentioned before, it's hard to say at this point. On paper the 480 falls in line with the 390(X). It has slightly fewer shaders but is supposed to run at a higher frequency. The question now is how much extra performance will be gained from GCN4 and overall design tweaks and rebalancing.

I'd say expect around 390X/980 performance while using less power, and if it ends up faster, consider it a nice surprise.
 
I must not have been paying attention for a while, are people saying that AMD is the better performer?

I mean, I'm team Red, but I never expected people on GAF to say as much...

GAF at large is still very pro-Nvidia and there are a few dedicated posters who work very hard to keep it that way. Still, it's more or less been known that AMD cards have had better overall value for quite some time now and they've also made considerable improvements to frame pacing, efficiency, and most of all driver support. The future is looking very bright and many are expecting Polaris to be a huge win on a performance per dollar and performance per watt level.
 