I am on mobile, so I responded in bold within the quoted text.
> it is based on the latest TPU review as they test a wide range of games so give a more general picture.
Check other reviews as well. The 980Ti is faster on average at 1440p in computerbase.de's 1080 review, for example, and in others too.
> So having something that is useful long term and keeps the architecture relevant over a longer period of time makes it worse, wow.
Yes, of course it makes it worse, because it performs worse right now, when it's actually on the market and people are choosing what to buy. No one cares that it will perform better in three years, as this isn't something you can even guess beforehand. And you won't be selling these cards in three years, so how is that going to benefit you?
> Why are the majority of GPU currency mining rigs based around GCN if it is far worse at compute?
Because GCN provides a better flops/dollar ratio, thanks to it being bad enough in gaming workloads that AMD has to push higher-tier cards into lower price segments to compete with what NV has. This, however, doesn't mean that GCN is "far better in GPU compute".
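As a rough sketch of what I mean by flops/dollar (shader counts and clocks are the reference specs; the prices are just illustrative launch MSRPs, not current street prices):

```python
# Rough flops-per-dollar comparison (illustrative launch MSRPs, reference/boost clocks).
cards = {
    # name: (shader_count, clock_mhz, price_usd)
    "GTX 970": (1664, 1178, 330),   # Maxwell GM204, boost clock
    "R9 390":  (2560, 1000, 330),   # GCN Hawaii, reference clock
}

for name, (sps, mhz, price) in cards.items():
    tflops = sps * mhz * 2 / 1e6    # FMA counted as 2 ops per SP per cycle
    print(f"{name}: {tflops:.2f} TFLOPS, {tflops * 1000 / price:.1f} GFLOPS/$")
```

Roughly 12 GFLOPS/$ for the 970 vs 15+ GFLOPS/$ for the 390 at the same price point, which is why miners buy the latter regardless of gaming performance.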
> On average the Fury X and 980Ti are about the same if you exclude 1080p, which is only relevant if you are running a 144hz display. In that special niche then sure the 980Ti is better. The rest of the product stack GCN is ahead in performance at each price point vs Maxwell.
On average they're not, because you have to include the clocking potential in this comparison as well, since that is a part of the architecture too.
> In the short term IPC is worse, long term is wait and see.
Short term it doesn't matter as even 1070 is faster than Titan X and judging from what we know of Polaris it won't be able to reach 1070's level.
I also kinda wonder where that idea of Pascal having worse IPC is coming from. Is it from those leaked benches comparing GM200 to GP104 at the same clocks? If so, the conclusion drawn from them is obviously wrong, since GM200 has more SPs than GP104 - matching it at equal clocks would, if anything, mean higher per-SP throughput for GP104, not worse IPC.
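To put numbers on it (full-chip SP counts from public specs):

```python
# Why an equal-clock GM200 vs GP104 bench says little about per-SP IPC:
# the full GM200 simply has more shader processors to begin with.
gm200_sps = 3072   # full GM200 (Titan X Maxwell)
gp104_sps = 2560   # full GP104 (GTX 1080)

print(f"GM200 has {gm200_sps / gp104_sps - 1:.0%} more SPs than GP104")
# ~20% more units, so merely matching it at the same clock would imply
# HIGHER throughput per SP for GP104, not lower.
```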
> So Forza, Hitman, AOTS etc are all AMD tech demos, ok.
Hitman and AoTS, yes; Forza a) runs fine on NV h/w and b) is MS's UWP effort. I'm actually surprised by the amount of PC-specific optimization Turn 10 put into FM6A, as I pretty much expect something like QB from any MS UWP effort at this point.
> Depends on the design of the GPU and its thermal limits. The lack of evidence for a higher than normal failure rate is telling though.
Depends on a GPU's comparative TDP only. Hotter cards overheat more often than cooler cards do. All NV cards have run cooler than their Radeon counterparts since 2013 when put into the same cooling environment.
> Show the evidence or stop making baseless remarks, each chip has its own thermal limits and these will have been thoroughly tested. Unless you can show GCN has higher failure rates than Kepler or Maxwell stop spewing nonsense.
You need evidence that a hotter part is more likely to overheat than a cooler one? How about a physics book for beginners? The only nonsense here is your inability to see this fact.
dr_Rus, I was willing to hear your argument until you started talking about heat and cards failing because of it.
Are you seriously trying to argue that AMD cards carry some kind of appreciable risk of dying from overheating? You can't be serious. When does that ever happen, in 99.9% of use-case scenarios? What a load of nonsense.
They carry a higher risk of dying from overheating. Most heat-related failures happen simply because board components die from high temperatures. The higher the temperature a card runs at - the higher that risk is. It's always been like this, and it was the other way around back in the R300-R700 days, when AMD cards were actually cooler in general (unless some stupid cooling decisions were made) and NV cards could run up to 105C by default.
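To illustrate the general rule of thumb (this is the usual Arrhenius-style approximation for board components like electrolytic capacitors - the constants here are illustrative, not a measured failure rate for any specific card):

```python
# Rough rule of thumb: component life roughly halves for every ~10 C increase
# in operating temperature. Illustrative constants only, not a reliability claim.
def relative_life(temp_c, ref_temp_c=70.0, halving_step_c=10.0):
    return 0.5 ** ((temp_c - ref_temp_c) / halving_step_c)

for t in (70, 80, 95):
    print(f"{t} C -> {relative_life(t):.2f}x of the life expected at 70 C")
# 70 C -> 1.00x, 80 C -> 0.50x, 95 C -> ~0.18x
```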
Why are you using the very first batch of reference 290/X cards to represent the entirety of them? Even ignoring the fact that there were plenty of custom cards, even the reference cards were fixed shortly after launch.
The difference is 5% when you include a lot of older game titles as part of an average. It's larger than 5% when you just focus on more recent titles. And I never claimed Nvidia got worse relative to itself, so I don't know why you constantly mention that when we are having these discussions.
When looking at architectural strong points from a rendering performance perspective, outside of geometry/tessellation (which is quite clearly never going to materialize in actual games), I don't know what other strength you are referring to. I mean, I guess you could latch on to the minor 12.1 features, but again, those will likely only see use in GameWorks.
I also can't agree at all that Maxwell is more advanced than GCN. Absolutely not.
Well, that's your problem that you can't agree with a fact. Maxwell is more advanced than GCN in pretty much everything but mixed-context scheduling - which is exactly what AMD is pushing hard to get used, as it actually benefits GCN's graphics utilization while simultaneously reducing Maxwell's performance (and killing Kepler's).
Here's a thought experiment for you: let's put a Maxwell chip against a GCN chip of the same complexity and flops and see how they compare. This is something you can rather easily do right now, even in the mid range, which Polaris is targeting:
A. The R9 380X is a 5000M-transistor GPU with a 359mm^2 die and a 256-bit bus. It's rated at ~4 TFlops of math performance and is built on the latest GCN3 revision.
B. The GTX 970 is a 5200M-transistor GPU with a 398mm^2 die and a 256-bit bus. It has ~20% of that die disabled, which likely puts it quite a bit below Tonga in complexity (~4100M working transistors). The 970 is rated at ~4 TFlops of math performance.
For all intents and purposes, these chips' production costs should be close, meaning that, other factors aside, AMD and NV should have roughly the same margin when selling them at the same price.
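For reference, a quick back-of-envelope on those flops figures (unit counts and reference/boost clocks from public specs; FMA counted as 2 ops per cycle, so the numbers are approximate):

```python
# Back-of-envelope for the 380X vs 970 comparison above.
def tflops(sps, clock_mhz):
    return sps * clock_mhz * 2 / 1e6   # FMA = 2 ops per SP per cycle

r9_380x = tflops(2048, 970)    # Tonga XT at its reference 970 MHz -> ~3.97 TFLOPS
gtx_970 = tflops(1664, 1178)   # GM204 with 3 of 16 SMMs disabled, boost clock -> ~3.92 TFLOPS

# ~20% of GM204 disabled on the 970 -> roughly 5200M * 0.8 = ~4160M working transistors
print(f"R9 380X: {r9_380x:.2f} TFLOPS, GTX 970: {gtx_970:.2f} TFLOPS")
```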
Which one is faster in general? Let's look at benchmarks known to favor AMD cards:
Nope, not even on the reference 970 level here. DX12 maybe?
Aha, here it is. The only win comes in a game heavily skewed towards AMD h/w in general, with a renderer made under AMD supervision. And it's actually a loss again in another such game.
So when you compare GPUs of similar complexity and math throughput, NV is winning quite a lot in performance even in those games which favor GCN h/w. This can be seen in 980Ti and Fury X comparisons as well, and it's definitely not a case where Maxwell's geometry performance or any other particular strength comes into play.
To get the thread back on topic - unless Polaris provides some seriously revolutionary changes compared to GCN3, it will end up on the same level against Pascal as GCN3 did against Maxwell 2: quite a bit slower on average (up to 50% slower, in fact) and on par at best in those titles which use DX12 with an AMD-provided/funded/whatever renderer. I don't see how you would expect anything else, really, going off what we have at the moment.
If we take the P10's 234mm^2 die, it should end up somewhere below a comparable Pascal die in average performance. GP104 is cut by roughly 1/4 (less, actually, but it's hard to get an accurate figure) for the 1070, which puts the actually working die of the 1070 around P10 territory. That in turn would put P10 vs the 1070 into the same position the 380X is in vs the 970. That's my expectation at the moment, and it can only be wrong if Polaris turns out to be a big architectural change from GCN3.
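Here's the rough die math behind that guess (GP104's reported ~314mm^2 die size is from the 1080 reviews; the 1070's ~3/4 enabled fraction is my approximation based on its 15 of 20 SMs, since the disabled area is actually somewhat less than 1/4 of the die):

```python
# Rough die-area reasoning behind the P10 vs 1070 expectation above.
gp104_die_mm2 = 314          # reported GP104 die size
gtx1070_enabled = 0.75       # 1070 keeps 15 of 20 SMs; approximation for enabled area
p10_die_mm2 = 234            # the P10 figure discussed above

effective_1070_mm2 = gp104_die_mm2 * gtx1070_enabled
print(f"Effective 1070 die: ~{effective_1070_mm2:.0f} mm^2 vs P10's {p10_die_mm2} mm^2")
# ~236 mm^2 vs 234 mm^2 -> roughly the same silicon budget, hence the 380X vs 970 analogy.
```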