Nvidia GTX 980/970 3DMark Scores Leaked - from Videocardz.com

So as someone who will be building his first gaming PC in the coming months once my Rift DK2 arrives, is it worth getting this new 980? Or should I get a cheap 780 or AMD R9 290?
 
So as someone who will be building his first gaming PC in the coming months once my Rift DK2 arrives, is it worth getting this new 980? Or should I get a cheap 780 or AMD R9 290?

Wait for actual reviews and benchmarks.
 
the performance difference isn't that huge :/

did nvidia skip the 800 series just to make people think it's a huge upgrade over the 700 series?
Apparently they skipped it because someone in their marketing department thought it would be a good idea to name one of their previous mobile parts 8xx.

Really hope the 9xx series finally has some significant changes to the architecture and its fixed-function units (ROP, TEX, etc.).
We haven't really seen new features since Fermi. (Hoping this part will at least be fully DX12 compliant.)
 
Oh hell no, that stock 980 is barely any faster than my plain-jane, overclocked 780... like, at all; 10%. It's no faster than the 780Ti at the same clock speeds, either. Computer parts with "well" in their codenames are just too disappointing. Failwell. I need more for 2560x1440, damnit; TW3 is coming. Can't help feeling I'm going to upgrade anyway because of potential VRAM limitations. That 980/970 will undoubtedly have a full 33% more, at least, than the barely-slower 780/780Tis. Piss off Nvidia (if the article is true), can't even SLI comfortably to fill the void.
 
@Jasec, if you think that new nvidia graph is any more accurate than the old one then you're crazy
Pascal is going to be another 10-15 percent performance increase over big maxwell, not a 1.8x one.
At least initially (in 2017) until they sell you the proper die (in 2018, and for a thousand bucks)

Yes, I realise there's more to GPU performance than just matrix multiplication.
 
For the full experience of CV1 we're going to need a single GPU that can pull off Crysis 3 in stereo 3D at 1440p with a minimum frame rate of 90fps. By my count that's a 4x leap at the absolute minimum, but an 8-16x leap is closer to where we need to be. We won't even get there in a decade with these 10% year-on-year increases.

So frustrating. We're at a point where we need GPU performance to accelerate at the fastest it ever has and yet we're stuck with the slowest progress in history.
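Those multipliers can be sanity-checked with a back-of-the-envelope pixel-rate calculation. A minimal sketch, assuming (my numbers, not anything from the article) a current flagship that manages Crysis 3 at 1080p/60:

```python
import math

# Assumed baseline: a current flagship running Crysis 3 at 1080p / 60 fps.
baseline_pixels_per_sec = 1920 * 1080 * 60

# Target: 1440p per eye (stereo doubles the pixel count) at a 90 fps minimum.
target_pixels_per_sec = 2560 * 1440 * 2 * 90

leap = target_pixels_per_sec / baseline_pixels_per_sec
print(f"required raw pixel-rate leap: {leap:.1f}x")  # ~5.3x

# How long would that take at 10% year-on-year improvement?
years = math.log(leap) / math.log(1.10)
print(f"years needed at +10%/yr: {years:.0f}")  # ~18 years
```

Raw pixel rate understates the problem (stereo also doubles geometry work, and minimum frame rate is harsher than average), which is roughly why an 8-16x figure is the safer estimate.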

So as someone who will be building his first gaming PC in the coming months once my Rift DK2 arrives, is it worth getting this new 980? Or should I get a cheap 780 or AMD R9 290?

If you're planning to use VR then buy the most expensive GPU you can afford. If you can wait then wait. Please appreciate that even the eventual Titan 2 still won't be nearly fast enough for a perfect VR experience with modern games (heck Titan 3 probably won't even get close) but the more horsepower you have, the better.
 
So as someone who will be building his first gaming PC in the coming months once my Rift DK2 arrives, is it worth getting this new 980? Or should I get a cheap 780 or AMD R9 290?
Depends entirely on how Nvidia price these cards.

If you're planning to use VR then buy the most expensive GPU you can afford. If you can wait then wait. Please appreciate that even the eventual Titan 2 still won't be nearly fast enough for a perfect VR experience with modern games (heck Titan 3 probably won't even get close) but the more horsepower you have, the better.
He can't exactly wait for very long when he's got a DK2 on the way!

Being able to wait is always great, but there are plenty of people that do need a new PC or new GPU still.
 
Oh hell no, that stock 980 is barely any faster than my plain-jane, overclocked 780... like, at all; 10%. It's no faster than the 780Ti at the same clock speeds, either. Computer parts with "well" in their codenames are just too disappointing. Failwell. I need more for 2560x1440, damnit; TW3 is coming. Can't help feeling I'm going to upgrade anyway because of potential VRAM limitations. That 980/970 will undoubtedly have a full 33% more, at least, than the barely-slower 780/780Tis. Piss off Nvidia (if the article is true), can't even SLI comfortably to fill the void.

980 being on par with a 780Ti sounds reasonable to me. Assuming pricing is lower than a 780Ti's, you're getting slightly better value for money, and there is still space in the lineup for a 980Ti next year.
 
For the full experience of CV1 we're going to need a single GPU that can pull off Crysis 3 in stereo 3D at 1440p with a minimum frame rate of 90fps. By my count that's a 4x leap at the absolute minimum, but an 8-16x leap is closer to where we need to be. We won't even get there in a decade with these 10% year-on-year increases.

So frustrating. We're at a point where we need GPU performance to accelerate at the fastest it ever has and yet we're stuck with the slowest progress in history.

If you're planning to use VR then buy the most expensive GPU you can afford. If you can wait then wait. Please appreciate that even the eventual Titan 2 still won't be nearly fast enough for a perfect VR experience with modern games (heck Titan 3 probably won't even get close) but the more horsepower you have, the better.

The double-edged sword that is mobile. On the one hand, the emphasis on mobile means nice laptop chips and a focus on power consumption, etc., so there are big gains there but stagnation at the top end. On the other hand, the focus on mobile is pretty much the only way to even get a 5" 1440p screen for VR in the first place.
 
really hope the 980 ships with a decent amount of vram. i'm still sore about the 780ti 6gb getting cancelled because it might take some of the shine off the new cards.
 
really hope the 980 ships with a decent amount of vram. i'm still sore about the 780ti 6gb getting cancelled because it might take some of the shine off the new cards.

I hope all the newer GPUs have a decent amount of vram.

2GB doesn't quite cut it any more.

It's the new Maxwell architecture, but without the die-shrink that was originally expected, right?

That and without the original plan to have Unified Virtual Memory.
 
980 being on par with a 780Ti sounds reasonable to me. Assuming pricing is lower than a 780Ti's, you're getting slightly better value for money, and there is still space in the lineup for a 980Ti next year.

If I have to guess, I'd say it won't be like this in real games on average. Firestrike is a highly GPU-limited benchmark. The 780Ti will be faster, I think, meaning they'll need to price the new cards below it. These GM104-based parts are likely to be cooler and cheaper to produce, meaning they'll be able to clock them higher and sell with a bigger margin (lol at anyone expecting them to set prices lower than those of competing AMD parts), but in the end 28nm is the real limit. Nothing will change much until 20nm/16nm.

It's the new Maxwell architecture, but without the die-shrink that was originally expected, right?

Not really. The 20nm GM2xx parts should've been based on a 2.x version of Maxwell. If these are indeed GM104, then they won't be as advanced as the GM2xx parts were supposed to be.
 
really hope the 980 ships with a decent amount of vram. i'm still sore about the 780ti 6gb getting cancelled because it might take some of the shine off the new cards.

There will be an 8GB version apparently. Probably not at launch though.
 
Yeah, I'll wait for the prices on these new cards to see if it's worth getting.

If you're planning to use VR then buy the most expensive GPU you can afford. If you can wait then wait. Please appreciate that even the eventual Titan 2 still won't be nearly fast enough for a perfect VR experience with modern games (heck Titan 3 probably won't even get close) but the more horsepower you have, the better.

I guess that's true, but even if I decided to hold off and wait a year, it still sounds like the increase in performance would be negligible. I'll get the best I can afford when my DK2 arrives and probably won't be upgrading until this hypothetical huge leap that modern VR needs actually happens. I'm also not expecting to be playing Crysis type experiences at 90 fps. I'll be content with driving/flight sims and abstract stuff for now.
 
980 being on par with a 780Ti sounds reasonable to me. Assuming pricing is lower than a 780Ti's, you're getting slightly better value for money, and there is still space in the lineup for a 980Ti next year.

It's not reasonable, not by historical trends, not by the logic of GPU architecture progression. The 780Ti itself was not reasonable, nor was the 780 before it, nor the Titan before it. The GTX 680, by historical trends, should have been the GK100/110 chip, not a measly GK104. The Titan should never have been $1000, nor the highly-crippled 780 $650. Nor should the 780Ti (read: the full real big Kepler die) have launched at $650 and quite some months late by the trends preceding it. A Maxwell (read: new architecture) part practically no faster than the full-die chip of the preceding architecture should never be branded as an x80 part. Before the 680, it was unheard of. Perhaps the conditioning Nvidia so cleverly carried out with relative prices (the 680 managing to match up to the 7970 despite being the mid-range chip, the Titan making the 780 seem like a steal in comparison) makes you think the 980 is reasonable, but it's not. It's a disgrace. Here's how new architectures are supposed to perform, and did for the longest time before the mess that the current microprocessor market has become:

6800 -> 7800 = ~50% improvement.

7800 -> 8800/9800 (new architecture) = ~200% improvement (and that justified the initial $650 pricetag on the 8800GTXs, not the crap Nvidia pulled with the GK110)

8800/9800 -> 280 (heavily improved Tesla architecture) = ~50-60% improvement

280 -> 480/580 (new architecture) = can't remember rough percentage improvements, but people were comparing 280 SLI to a single 480

The speculated GK110 -> 980 (new Maxwell architecture) jump is a joke in comparison, and no, the $500 pricetag isn't reasonable; it's what should have been the reality for this level of performance for at least a year now, following past card prices/releases.

The second revision of a given architectural lineup isn't supposed to be where the performance comes in (like the supposed "980Ti", which I really think will be an $800-$1000 Titan II, followed by a heavily crippled version of that for $600-$650 with the new naming scheme); it's supposed to be a revision. If these numbers are true for the 980, the 780Ti to 980 is more like a 0-10% improvement (assuming some additional clocking headroom on the 980), and the overpriced-as-hell part we should have gotten in a more reasonable price range will be a huge jump rendering the 980 obsolete and charging a huge premium just to do so... just to progress in a somewhat reasonable manner as new architectures once did. Maybe it can be argued this is just a one-time stop-gap measure because of 20nm production issues or some such, but I'm skeptical because of what Nvidia have been doing with Kepler, and the Kepler architecture has overstayed its welcome. We're at 2.5 years now. Time for a new architecture that actually means something, not just a new code-name for the continued overpriced drip-feeding Nvidia are intent on making the norm.
 
It's not reasonable, not by historical trends, not by the logic of GPU architecture progression.

it depends on your range when measuring historical trends. Disappointingly if you go back a couple of years, then shitty incremental improvements while fleecing consumers *is* the new historical trend.
 
Those are depressing numbers. The GTX 980 can't be that disappointing, right? It doesn't even beat a 780 Ti.
 
Moore's law in full effect here; the hardware industry has been stuck on the 28nm process for far too long. There is barely any improvement from year to year. The GTX Titans and the GTX 780s are very power hungry compared to their predecessors; they just got bigger rather than more efficient.
 
it depends on your range when measuring historical trends. Disappointingly if you go back a couple of years, then shitty incremental improvements while fleecing consumers *is* the new historical trend.

Doesn't mean it's reasonable, doesn't negate that we used to get and arguably deserve much, much, much, much, much better, and frankly, if you're only going back a couple of years in assessing what's reasonable, you're making a huge mistake in omitting the real historical trends and devaluing the worth of your own spent money. No, the fleecing is the new trend that's only been going for a couple years and one architecture, not the historical one that dictated parallel processor progression since its inception before that. Even in comparison to the Fermi -> Kepler jump with the 680 (was something like 30% and cooler-running/better clocking), 0% improvement over the previous flagship with a new architecture is an unprecedented new low. It's a potential trend in the making, yes, but not a historical one and in addressing that and perhaps spending a bit more wisely, consumers can prevent it from becoming a lasting trend. A bit of a tight bind there, certainly, given the lack of competition at times and the fact that people really should buy the best available they can afford when they need it, but it would help to at least make some noise about it.
 
Doesn't mean it's reasonable, doesn't negate that we used to get and arguably deserve much, much, much, much, much better, and frankly, if you're only going back a couple of years in assessing what's reasonable, you're making a huge mistake in omitting the real historical trends and devaluing the worth of your own spent money. No, the fleecing is the new trend that's only been going for a couple years and one architecture, not the historical one that dictated parallel processor progression since its inception before that. Even in comparison to the Fermi -> Kepler jump with the 680 (was something like 30% and cooler-running/better clocking), 0% improvement over the previous flagship with a new architecture is an unprecedented new low. It's a potential trend in the making, yes, but not a historical one and in addressing that and perhaps spending a bit more wisely, consumers can prevent it from becoming a lasting trend.

I didn't say it was reasonable. It is shitty, but other than bullshit margins screwing us over, there isn't much more they can do with GPUs right now. Maybe stacked RAM? But otherwise, we've been through the early days of massive innovation and development, and we've milked the process node shrinks dry.
 
It's the new Maxwell architecture, but without the die-shrink that was originally expected, right?
If I read correctly, nowadays Nvidia and AMD cannot trivially transplant an architecture to a new process or vice versa. They need to redesign their chips around the process node. So whatever was designed for 16nm/20nm isn't fully here yet. Intel gets to do that because they own the fabs.

Maybe it can be argued this is just a one-time stop-gap measure because of 20nm production issues or some such, but I'm skeptical because of what Nvidia have been doing with Kepler, and the Kepler architecture has overstayed its welcome. We're at 2.5 years now. Time for a new architecture that actually means something, not just a new code-name for the continued overpriced drip-feeding Nvidia are intent on making the norm.
It's not too far off from the status quo. Fermi on 40nm hung around for 2 years between the 400 and 500 series, before 28nm Kepler blew them away (the 670 destroyed the 570). Ultimately both vendors are epically hamstrung by TSMC's process improvements.
 
That 970 will be mine.

Titan performance at ~300€? I'd definitely be happy with it given that I play at 1080p and 60fps.
I come from a GTX 570, so it will be a big jump for me, but I understand other people's complaints.
 
I didn't say it was reasonable. It is shitty, but other than bullshit margins screwing us over, there isn't much more they can do with GPUs right now. Maybe stacked RAM? But otherwise, we've been through the early days of massive innovation and development, and we've milked the process node shrinks dry.

Margins are at least half the problem. In the case of Maxwell, maybe not, given node difficulties, but with Kepler, there's no excuse I would accept. That was some ridiculous stuff. I'll reserve some judgment on Maxwell until we actually get these chips, see how they are, how big the dies are, how Nvidia handle the 20nm transition, etc. but certainly, margins are still part of the problem. Perhaps they could cram a few more shaders into the thing, could have priced it lower, perhaps that in turn with more focus on SLI drivers and promotion could make SLI a potentially valid placeholder for people really needing more performance on a somewhat reasonable budget (more VRAM could help too, Nvidia restricted 6GB models). Can't say any of these things until the chips actually get announced/released though, but a lot of my criticism is carrying over from my problems with Kepler too and restrictions Nvidia placed on them while still charging top dollar (voltage restrictions for overclocking, VRAM limitations for SLI-ing). Node shrink problems are a big issue though, I should be more mindful of them, but I do think Nvidia could still certainly do better than they have been with them, even if it has to come a bit more at their own expense (i.e., lower yields for a slightly larger 980 die).
 
Depressing. As an old school 3dfx fan I didn't expect the GPU market to end up in this stagnant situation so soon.

To be fair, there's not much motivation for either side to innovate. AAA games are being developed console-first, and the top-end GPUs handle anything you throw at them with ease (granted that devs port them properly first).

We'll see some drastic improvements once 4K becomes widespread. Pretty similar to when 16:10/16:9 started to catch on.
 
There will be an 8GB version apparently. Probably not at launch though.
ooof, 8gb is probably overkill unless the 980 (or 980ti, if the 8gb ships with that model) is good enough to stick around until 4k starts becoming more widely adopted.
 
Historical trends are based on a manufacturing feature size progression which is simply not happening anymore.

Physics don't care what you or I think we "deserve".

are there any alternatives to one massive chip? e.g. lots of super cheap small ones? Will stacked RAM help, or are we going to be held back by computation rather than bandwidth?
 
I guess I will be sticking with my 770 SLI for now then...

Hoping for a single GPU to beat 2x 770s before I even consider upgrading...
 
The 980 appears to have a 30% increase over the 780, so it seems like a buy for me, although there appears to be some, perhaps driver-based, weakness in SLI scaling relative to the AMD cards.
Much as I hate to go SLI, 4K doesn't come free :P
 
Historical trends are based on a manufacturing feature size progression which is simply not happening anymore.

Physics don't care what you or I think we "deserve".

That still doesn't excuse what Nvidia did with Kepler. Releasing the 680 as their top-end card just because they didn't need to do much to compete with the 7970 doesn't exactly help the consumer.

edit: With that being said, I do realize that we're kinda waiting on 20nm at this point. I'm still hoping the actual benchmarks are a bit more impressive, because I'd like to have a reason to upgrade my 680.
 
The speculated GK110 -> 980 (new Maxwell architecture) jump is a joke in comparison, and no, the $500 pricetag isn't reasonable; it's what should have been the reality for this level of performance for at least a year now, following past card prices/releases.

*sigh*

You were so close to having it correct. Comparing GK110 (GTX 780Ti) to GM204 (GTX 980) is a bad comparison, because they are fundamentally different products. What you would actually need to compare, to fit your other examples, is something like GK104 (GTX 680) to GM204 (GTX 980). Making that comparison, the improvement is substantially better.

GK104 had exactly half as many transistors as GK110, so assuming Maxwell at GM204 is in a similar boat, we're looking at a GPU that has about 30% fewer transistors, uses 30-50% less power, and offers comparable performance. I'd consider that pretty impressive.
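For what it's worth, the perf-per-watt side of that claim is easy to put a rough number on. A sketch using the commonly reported TDP figures (250W for the GK110-based 780Ti, 165W rumored for the GM204-based 980; treat both as assumptions until reviews land):

```python
# Assumed TDPs (watts) -- commonly reported figures, not confirmed specs.
tdp_gk110 = 250.0  # GTX 780Ti
tdp_gm204 = 165.0  # GTX 980 (rumored)

# If performance is roughly comparable, the perf-per-watt improvement
# is just the inverse of the power ratio.
perf_per_watt_gain = tdp_gk110 / tdp_gm204
print(f"perf/W improvement: {perf_per_watt_gain:.2f}x")  # ~1.52x
```

Roughly a 1.5x efficiency jump on the same 28nm node, if the rumored numbers hold, which is the genuinely impressive part.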

Very impressive numbers. GM204 beating the GK110 and drawing less power.

The GM200 will be a monster.

It will be a nice upgrade for anyone using the 600 series or older.

are there any alternatives to one massive chip? e.g. lots of super cheap small ones? Will stacked RAM help, or are we going to be held back by computation rather than bandwidth?

Bandwidth isn't really an issue, hence why Nvidia kind of ignored it willingly with Kepler. The problem is that to get better performance you have to pack more computational units into a given space. If you just keep dumping more stuff in, you end up with a really expensive product to manufacture that probably has a low yield, resulting in an insanely expensive product for the end user. TSMC and GF have had issues going 32nm and lower, so they can't keep doing node shrinks alone, and they are kind of stuck focusing on efficiency and scaling upwards. If they could get to the point where running 2 GPUs uses less power than 1 did traditionally, for a similar cost, they would have no issue producing massive performance gains.
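The die-size/yield trade-off above can be made concrete with the classic Poisson yield model, yield ≈ e^(−D·A). The die areas are the widely reported figures (GK110 ≈ 561 mm², GM204 ≈ 398 mm²); the defect density is purely an illustrative assumption:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.2  # defects per cm^2 -- made-up illustrative value, not TSMC data
y_big = poisson_yield(561, D)  # GK110-sized die
y_mid = poisson_yield(398, D)  # GM204-sized die

print(f"GK110-class yield: {y_big:.0%}, GM204-class yield: {y_mid:.0%}")
```

Whatever the real defect density is, the smaller die always yields better, so each good GM204 costs meaningfully less to make than a good GK110, which is the margin lever the thread keeps circling around.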
 
Disappointment over the pace of technology progress (and wanting prices to be more reasonable) and understanding the technical reasons why it's so aren't necessarily mutually exclusive.
 
Historical trends are based on a manufacturing feature size progression which is simply not happening anymore.

Physics don't care what you or I think we "deserve".
They're still happening, just more slowly. But 20nm got the greenlight for mass production recently. If Nvidia aren't using it for the supposed 900-line, it doesn't bode particularly well for when we'll see actual 20nm GPUs but in any case, if whatever die is used for the 980 is smaller than GK110 by a notable amount as one supposed leak showed, they could have done better. I don't deny that transistor-shrinking is slowing down significantly, but the issue I'm having right now is more of Nvidia's product margins. I think they're ridiculous. We're not strictly seeing the limitations of physics at play here, but rather how those limitations collide with and impact Nvidia's financial decisions in regards to die sizes and yields, product placement/segmentation, and/or MSRP.

*sigh*

You were so close to having it correct. Comparing GK110 (GTX 780Ti) to GM204 (GTX 980) is a bad comparison, because they are fundamentally different products. What you would actually need to compare, to fit your other examples, is something like GK104 (GTX 680) to GM204 (GTX 980). Making that comparison, the improvement is substantially better.

GK104 had exactly half as many transistors as GK110, so assuming Maxwell at GM204 is in a similar boat, we're looking at a GPU that has about 30% fewer transistors, uses 30-50% less power, and offers comparable performance. I'd consider that pretty impressive.

No, I very deliberately used the term "980" for a specific reason. I know the real successor to the GK110 should be the GM200 just as the GK100 should have been the real successor to the GF110 instead of the GK104. The problem is Nvidia themselves are branding the mid-range-intended chip as the flagship and charging flagship money for it. That's my criticism; they're getting it wrong. I agree they're fundamentally different products in their intended design and physical targets, but if Nvidia are going to mash them all together as x80 parts anyway, I'm going to compare them anyway and of course, express my disappointment that a mid-range die is commanding a high-end price. I was intending to highlight how, with all previous architectural debuts, the big-die (for the transistor size, of course) chip was the new flagship whereas now they debut with a mid-range chip. At least with the 680, there was no transistor-shrinking excuse either aside from perhaps Nvidia's decision to stretch out Kepler to maintain a string of new products to tide them over a predicted 20nm drought.
 
If the pricing for the 980 stays under $500, then I'd say it's pretty impressive.

I expected Maxwell to show its biggest leap in mobile GPUs. And they didn't fail in that department. It's actually pretty damn impressive. It's giving almost as much performance as a 290.
 
Historical trends are based on a manufacturing feature size progression which is simply not happening anymore.

Physics don't care what you or I think we "deserve".
What we really need is quantum GPUs that can compute every possible frame simultaneously.
 