Nvidia GTX 980/970 3DMark Scores Leaked - from Videocardz.com

In an ideal situation I would wait for the Rift CV1 to come out before buying a new GPU, but my lovely OC'd 1GB 560 Ti is starting to show its age, even when only gaming at 1080p. I don't need 60 FPS in every game I play, but once newer games require you to tweak graphical settings just to get a stable 30, an upgrade becomes inevitable.

If the price of a 970 gets down to around 300 euro at release or soon after, I'll probably bite. Near-Titan performance is more than enough for someone who doesn't give a crap about extreme resolutions (yet), so it should last me about two years at the very least.
 
If Nvidia can offer split-screen rendering by default (it's in the driver and adds no frame delay), do you really think an API is stopping them from just extending that mode to VR?
There's probably a way to hack an SLI profile into the driver that works somewhat better than the default with some VR games, but that seems like a missed opportunity to me.

If you had a real "VR mode" in the API, you should be able to have two GPUs working on the two viewports of the same scene with basically zero CPU overhead, almost perfect scaling, and lower end-to-end latency than a single GPU (which has to render the two viewports sequentially).

It could make even me buy 2 GPUs!
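
(Just to make the latency argument concrete, here's a back-of-the-envelope sketch in plain Python. The per-eye render time and sync cost are made-up numbers, not measurements, and this isn't any real API, but it shows why one GPU per viewport should roughly halve the GPU-side frame time compared to rendering both eyes back to back.)

```python
# Toy latency model for the "one GPU per eye" idea above.
# Numbers are illustrative assumptions, not benchmarks.

EYE_RENDER_MS = 7.0     # assumed time for one GPU to render one eye's viewport
SYNC_OVERHEAD_MS = 0.5  # assumed cost of transferring/compositing the second eye

def single_gpu_frame_ms(eye_ms: float) -> float:
    """One GPU renders the left eye, then the right eye, sequentially."""
    return 2 * eye_ms

def dual_gpu_frame_ms(eye_ms: float, sync_ms: float) -> float:
    """Each GPU renders one eye in parallel; the stereo frame is ready when
    the slower GPU finishes, plus a small cost to combine the two images."""
    return eye_ms + sync_ms

if __name__ == "__main__":
    print(f"single GPU: {single_gpu_frame_ms(EYE_RENDER_MS):.1f} ms per stereo frame")
    print(f"two GPUs:   {dual_gpu_frame_ms(EYE_RENDER_MS, SYNC_OVERHEAD_MS):.1f} ms per stereo frame")
```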
 
Maybe my perspective is off, but if those are 'graphics scores' then my two 780s (stock cards and coolers, but overclocked by myself) beat SLI 980s by a decent margin.

I didn't really expect that, though of course those 980s are at stock clocks themselves.

I wasn't planning on upgrading to this series; maybe I'll go with the Maxwell refresh next year, depending on how memory requirements pan out (I suspect 3GB will be the new minimum for high-quality settings in a year or two).
 
There's probably a way to hack an SLI profile into the driver that works somewhat better than the default with some VR games, but that seems like a missed opportunity to me.

If you had a real "VR mode" in the API, you should be able to have two GPUs working on the two viewports of the same scene with basically zero CPU overhead, almost perfect scaling, and lower end-to-end latency than a single GPU (which has to render the two viewports sequentially).
Yeah, generalized API support for it makes the most sense, of course. Hopefully DX12 also considers it, to speed up adoption. OpenGL alone may not be enough!

It could make even me buy 2 GPUs!
Oh my
 
Well, I know pretty much nothing about the costs involved here, so I can't really get into an argument about it.


And I will always vehemently disagree with this. As I explained, when I was the manager of a café it was very frustrating having people complain to you about price increases, accusing you of being greedy or of taking advantage of people, when there wasn't a damn thing we could have done about it.

As a result, I have become very understanding of the fact that businesses need to be sustainable if we want them to keep giving us the products and services we want.

And do realize that there is a difference between 'being understanding' and 'being happy'. I'm not happy about any of this either. *IF* Nvidia truly are just being greedy and fucking us over for no good reason, then that sucks, but I don't really know that. Maybe the reality is somewhere in the middle. Either way, I still ultimately vote with my wallet and will buy what I feel is worth my money at any given time.


There's a difference between understanding that businesses have needs too and assuming they are your friend.


I know where you're coming from with the café thing; my mother ran a café 9 years ago (after running one successfully for years a decade before that, but selling it off after my little brother was born) and was in the same situation.
The brewery hiked up its prices, so she charged 1.40 euros for a beer instead of 1.30. The customers didn't like it and stopped coming (and business wasn't too hot to begin with at that second café, compared to the first one, which was always packed).
They went elsewhere to a place that didn't raise their prices but still managed to be profitable.
My mother went out of business.

No one shed a tear for her, and no one should have; she wasn't able to compete. I feel bad for her, obviously, because she's my mom, but she opened a business in a place where she was an outsider and patrons would rather go elsewhere (even before the beer price thing), so it was already struggling. There was no reason for that café to exist.

That's how capitalism is supposed to work: a business has no innate reason to exist; it only exists to meet a demand, and if that demand is no longer there, it has no purpose. Keeping it on life support helps no one.
Businesses are not charities.


It's silly to compare a roughly 8 percent increase in beer prices to a literal doubling of prices in the GPU market due to a lack of competition.
The GPU market is fucking broken: no matter how high prices get, no one can rise to compete, because the barrier to entry is insane.
We have no obligation to feed these two dinosaurs while they choose not to compete; fuck that and fuck them.
This isn't doing good, it's just enabling bad behavior that goes against the whole reason they exist.


Also, on the topic of cafés: you're really going to use fucking horeca as a measuring stick for how businesses should work? The industry that only thrives on the backs of underpaid and overworked employees? If cafés are struggling, it's because there are way too many of them and they only sustain themselves on the backs of hard-working people who don't get fair pay.
It's an industry that needs to shrink.
 
If you had a real "VR mode" in the API, you should be able to have two GPUs working on the two viewports of the same scene with basically zero CPU overhead, almost perfect scaling, and lower end-to-end latency than a single GPU (which has to render the two viewports sequentially).

It could make even me buy 2 GPUs!

Yep, me too; that sounds amazing.
 
Been wanting to upgrade my laptop with a 555M for a while. Wonder if I can hold out long enough to get a 9xxM GPU laptop. Hmmm.
 
This doesn't impress me. I'm still waiting for GPUs that blow the previous gen out of the water. Predictably these cards will be expensive for a small performance upgrade. Bleh.
 
So it sounds like we'll have to wait until 2016/2017 for Pascal, which is made up of features that Maxwell and Volta were supposed to have. I got my GTX 770 4GB for $250, so I don't think I'll have a reason to upgrade unless I can get a similar deal on a decent upgrade.
 
Hope the price is right. I'm pricing out a new build and it's astonishingly pricey at the moment, but I really need to upgrade this Core 2 Quad/Radeon 6870 combo.

Hell, even if it drove the 780s down a fair bit I'd be happy. ;)
 
They're having a huge 24-hour "GAME24" live-stream on Sept 18th, so that's been floating around as the announcement date as well.

Do we expect them to start shipping this month, though? Or just to announce a release date at that point? I presume they're shipping, but obviously I don't know.

And I need to know!
 
Well, not all DX11 GPUs (AMD's 5000 and 6000 series, I think, are not supported).

Yeah, after posting I wasn't sure about that, so I double-checked, and it seems only AMD's GCN GPUs will support DX12, so 7xxx and above. I'll edit my post.

Edit: Done.
 
If I have a GTX 770 2GB, should I get this? I play PCARS and plan on getting W3.

I could probably sell my 770 for $225-250. Going SLI would require a better PSU, so that won't be economical. Plus, I only run 1080p at 60fps.
 
If I have a GTX 770 2GB, should I get this? I play PCARS and plan on getting W3.

I could probably sell my 770 for $225-250. Going SLI would require a better PSU, so that won't be economical. Plus, I only run 1080p at 60fps.
If the price is right, especially for the 8GB versions of the cards.
 
If the price is right, especially for the 8GB versions of the cards.

8GB versions? You believe that baseless OverclockersUK forum rumour? That's assuredly fake. He claims the 980 is only as fast as the 780, not the Ti, while the rumour in the OP puts it ahead of the Ti. Speculation that the new flagship won't be as fast as the 780 Ti is something I've heard a lot on forums.
 
Isn't 224GB/s of memory bandwidth a bit poor, especially for a 4GB card? Why not a 384-bit bus?

It's not an apples-to-apples comparison, as this is a new architecture, and Maxwell, going by the 750 Ti, is much more bandwidth-efficient than Kepler.
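
For what it's worth, the headline number just falls out of bus width times memory speed. A quick sanity check (assuming 7 Gbps effective GDDR5, which is what the rumours point to):

```python
# Peak memory bandwidth = (bus width in bits / 8 bits per byte) * effective data rate.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 7.0))  # 224.0 GB/s -> the rumoured 980 (256-bit bus)
print(bandwidth_gb_s(384, 7.0))  # 336.0 GB/s -> the 780 Ti (384-bit bus)
```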
 
8GB versions? You believe that baseless OverclockersUK forum rumour? That's assuredly fake. He claims the 980 is only as fast as the 780, not the Ti, while the rumour in the OP puts it ahead of the Ti. Speculation that the new flagship won't be as fast as the 780 Ti is something I've heard a lot on forums.

You realise that Overclockers guy is part of the company, so he actually has access to information that most people don't?

And he never said anything about performance; all he talked about was replacing the 780 in the context of price.
 
8GB versions? You believe that baseless OverclockersUK forum rumour? That's assuredly fake. He claims the 980 is only as fast as the 780, not the Ti, while the rumour in the OP puts it ahead of the Ti. Speculation that the new flagship won't be as fast as the 780 Ti is something I've heard a lot on forums.
Considering Nvidia's trend of releasing versions of cards with twice the VRAM they debut with, I see nothing implausible about it.
 
Guys,

Isn't the big story here that the GTX 980M puts out the same performance as an R9 290?
That seems quite impressive(?)
 
I won't buy a new laptop until Fallout 4 or Elder Scrolls VI arrives. That should probably be winter 2015 for the first one.
Perfect!
 
Guys,

Isn't the big story here that the GTX 980M puts out the same performance as an R9 290?
That seems quite impressive(?)

Yep. That's a $400 card in a laptop.

Back in 2011, the GTX 485M was a GTX 460 (a $200 card) with more VRAM and slower clocks. The GTX 980M is massively impressive, even relatively speaking.
 
Sorry for the many quotes:

TDP matters when it limits how powerful you can make a GPU.
The thing is that new generations of GPUs have ALWAYS had better performance per watt; the problem is that they aren't making a new 250W card to replace the old 250W card, but instead just making a midrange one and pricing it at the high end.

Also, 'GPU wars' 'fans'? Ugh, please take that shit to GameFAQs or the console threads, thanks.

You're contradicting yourself: this new Maxwell GPU has only a 170W TDP, while the 780 Ti drew almost 300W. They have a much more power-efficient architecture but aren't doing anything with it at the high end...

thank you!

And this. Also, to chalk it all up to a lack of technological progress alone is dead wrong.
Nvidia clearly seem to have a much improved architecture here; they could sell us a proper high-end card on it, but they won't until next year, because fuck us (and because there's no competition from AMD).

You're giving me a headache. $500 IS flagship money; hell, it's pretty close to what used to be dual-GPU money!
No, it is not necessary for them to charge more money; they do so because they can. (When Kepler came out, people were desperate to move on from old 300W 40nm GPUs to more powerful 28nm ones after all the delays, and Nvidia took advantage of that and made the 680 in its current form.)
The GTX 580 was a 300-watt, giant die with very, very poor yields. It was sold at 500 euros and they still had massive margins on it despite the wide bus, large die, and super low yields (roughly $120 production cost vs. a $500 retail price).

Please, people, at least acknowledge when you're getting fucked instead of making up excuses for these companies.
They can make them up for themselves just fine...
Don't pretend Nvidia haven't doubled GPU prices over the past 3 years, and don't pretend they aren't spreading out their releases within one architecture over a 2-year period just to keep doing that.

The staggered releases are what enable them to manipulate perception and keep these prices doubled.
The Titan was the Kepler generation's GeForce GTX 580, yet they managed to create the perception that it was some kind of über-GPU...

Fact of the matter is, they have a much improved architecture on a very mature 28nm process, and they aren't passing the savings and benefits on to the people who buy their shit.

Pay what you want for these things, but don't pretend they're doing you a favor; it's insulting.


edit: just to spell out some of the rationalisations and misconceptions:

- Kepler released: "It's a new process node, so it's more expensive because the process hasn't 'matured' yet." Reality: the GTX 580 on 40nm had super low yields anyway, and the 680 had a very small die, no doubt making for good yields and negating any difference.
Now the 28nm process is mature, so by the same logic that new cards on a new process node should be more expensive, by now they should be cheaper.

- GTX 580 releases, costs 500 euros; people rationalise paying 500 euros for it because of the large die and 384-bit bus (every time a GPU with a wider bus releases, people go on about how that makes the PCB marginally more expensive and warrants a massive price premium).
GTX 680 releases, costs 500 euros. Hey, wait a minute: a 256-bit bus (wow, it must be so much cheaper to make, right?) and a far smaller die. Why are you paying 500 euros again? Because it's called a 680.

- "We can't have more powerful GPUs because we are running into thermal limits; GPU makers are making bigger and bigger, hotter and hotter graphics cards, so it doesn't follow Moore's law."
(This was the excuse for a $500 GTX 580.)
Except, you know, 28nm was massively more power efficient than 40nm Fermi; we had everything we needed for a proper Moore's-law-style jump in performance per price.
And, you know, apparently Maxwell is also massively more power efficient than Kepler despite being on the same 28nm process (the excuse used for the small performance increase), but we are fed another GTX 680...

All I see are reasons why prices go up, which are then promptly forgotten when they should make prices go down.

We're being sold midrange dies and midrange memory buses, on a very mature process, with a very efficient and much improved architecture, at insane high-end prices.
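
To put rough numbers on that (purely illustrative Python; the efficiency gain and TDPs here are assumptions, and real performance doesn't scale perfectly linearly with power):

```python
# Rough illustration: if performance-per-watt improves, the same 250W budget
# should buy proportionally more performance than a 170W part does.
OLD_FLAGSHIP_PERF = 100.0   # old 250W flagship, normalised to 100
OLD_FLAGSHIP_TDP = 250.0
NEW_CARD_TDP = 170.0        # rumoured TDP of the new card
PERF_PER_WATT_GAIN = 1.5    # assumed architecture-level efficiency improvement

new_card = OLD_FLAGSHIP_PERF / OLD_FLAGSHIP_TDP * NEW_CARD_TDP * PERF_PER_WATT_GAIN
full_power = OLD_FLAGSHIP_PERF * PERF_PER_WATT_GAIN  # same 250W budget, new architecture

print(f"170W new-architecture card: ~{new_card:.0f} (old flagship = 100)")
print(f"250W new-architecture card: ~{full_power:.0f} (the card that isn't being made)")
```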

This is an amazing post, bravo.
 
It's not an apples-to-apples comparison, as this is a new architecture, and Maxwell, going by the 750 Ti, is much more bandwidth-efficient than Kepler.

This. People still don't get that Maxwell can get away with narrower buses.
 