
Nvidia announces the GTX 980 Ti | $650 £550 €605

I don't understand THIS mentality. By the time anything comes out that a 970 can't max at 1080p you'll be able to buy a card more powerful than a 980 Ti at a lower price. "Future proof" should only be used ironically when talking about PC gaming.

What? There's already a dozen games the 970 can't handle maxed out @60fps.
 
This is still at least a whole year away.

Key words: "at least", because the number of unproven factors that all need to work together for that GPU to be possible is huge, and it might lead to a GK100/GK110-type disaster launch where every working unit gets sent to professional customers buying cards for several thousand USD.
 

DBT85

Member
My Zotac 980 Ti Amp! edition arrived yesterday, £500 from OCUK in their weekend sale.

Coming from a 6950, so it's quite a jump when running at 2560x1600.

Firestrike Extreme on both. Couldn't run Ultra on the old one! 4.8x more FPS in the combined test.


Heaven 4.0 scores for both too. Also 4.8x more FPS on average, 4.7x on min and 4.8x on max.


The Final Fantasy benchmark also reported a 4.7x gain in average fps.

Both my 2500k and my 980ti are running at stock speeds. My 2500k was fine at about 4.5 on air without much fiddling when I first got it and even managed a 5.0 SuperPi run so when I get time I'll aim for 4.5 again and try some more tests. My new case should arrive at the end of next month and then some custom WC will follow.

Very happy with things right out of the box though.
 
Can't believe how many people here are upgrading from something like a 780 or 970 to a 980 Ti. Seems like a lot of impatience here, heh.

I'm just glad, because upgrading from a 570 to a 980 Ti is like finding the holy grail. It's getting me back into gaming even more, as I had almost given up and stuck with mobile gaming only.

Well, I sold my 970 after I got my 980 Ti; there's plenty of demand for used 970s because they're the premier price/performance card for 1080p gaming.

The 980 Ti gives me almost double the framerate of the 970 at 4K; it's an incredible performance difference that feels almost generational, even though the 970 and 980 Ti are actually in the same generation of cards.

I have found the upgrade more than worthwhile; it's basically made 4K feasible right now in 2015.

Are Nvidia's hotfix drivers worth trying? I'm currently on 353.06.

They are for fixing the infamous Chrome crashes, if you're getting those.
 

Thorgal

Member
My 780 Ti fan went bust while under warranty, and since the return from repairs was taking too long, they offered me my money back, which I then spent on a 980 Ti.

I almost feel grateful my card went bust, with awesome customer support like that.
 

Lexxism

Member
My 780 Ti fan went bust while under warranty, and since the return from repairs was taking too long, they offered me my money back, which I then spent on a 980 Ti.

I almost feel grateful my card went bust, with awesome customer support like that.
What brand?
 
Has anyone tried the EVGA Hybrid NVIDIA GTX 980 Ti cooler for 980 Ti reference cards?
Thinking of getting one, but I'm not sure how quiet they are.
 
Benchmarking my new 980 Ti...

Every time I increase the memory clock, the score drops... *confused*

Kinda disappointed that I could only push the core clock on this thing to +175. Anything higher and Firestrike crashes within the first 5 seconds. Pushing the voltage up does nothing.
 

Durante

Member
Every time I increase the memory clock, the score drops.... *confused*
On any card which boosts up to a power limit, this can well happen. The higher memory clock (and thus power consumption) will prevent the GPU from reaching the same boost clocks. If a given workload is not memory limited, then the memory BW increase will have no effect and the lack of GPU clock will reduce overall performance.

Should just be a few % though.
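The tradeoff described above can be sketched with a toy model, assuming a fixed power budget shared between core and memory clocks (all numbers here are invented for illustration; real GPU boost algorithms are far more complex):

```python
# Toy model of GPU boost behavior under a fixed power limit.
# Every constant below is made up for illustration.

POWER_LIMIT_W = 250.0
CORE_W_PER_MHZ = 0.15   # assumed marginal power cost of core clock
MEM_W_PER_MHZ = 0.01    # assumed marginal power cost of memory clock

def boost_core_clock(mem_clock_mhz):
    """Core clock sustainable after the memory clock takes its power share."""
    core_budget = POWER_LIMIT_W - mem_clock_mhz * MEM_W_PER_MHZ
    return core_budget / CORE_W_PER_MHZ

def score(mem_clock_mhz, memory_limited):
    """Benchmark-score proxy: the bottleneck resource decides."""
    core = boost_core_clock(mem_clock_mhz)
    if memory_limited:
        # Bandwidth-bound workload: extra memory clock can actually help.
        return min(core, mem_clock_mhz * 0.3)
    # Not memory limited: only the core clock matters, so a higher memory
    # clock just eats power budget and lowers the score.
    return core

print(score(3500, memory_limited=False))  # ~1433 "points"
print(score(4000, memory_limited=False))  # ~1400: the memory OC made it worse
```

A memory overclock lowers the sustainable core boost in both cases; it only pays off when the workload is actually bandwidth-bound.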
 
On any card which boosts up to a power limit, this can well happen. The higher memory clock (and thus power consumption) will prevent the GPU from reaching the same boost clocks. If a given workload is not memory limited, then the memory BW increase will have no effect and the lack of GPU clock will reduce overall performance.

Should just be a few % though.

Interesting.

So it's definitely beneficial to increase the power limit up to 110%?

Also, even though I couldn't push the core too high, I was still able to finally break 16000 in Firestrike. Something I could never do with the TitanX.

http://www.3dmark.com/3dm/7534259?

This was the best bench I could get on the TitanX

http://www.3dmark.com/3dm/6379517
 

Grassy

Member
Has anyone tried the EVGA Hybrid NVIDIA GTX 980 Ti cooler for 980 Ti reference cards?
Thinking of getting one, but I'm not sure how quiet they are.

There's a good thread on the EVGA forums about the cooler, with plenty of info/installs from people who have them - http://forums.evga.com/Titan-X-and-980-hybrid-cooler-official-thread-m2327117.aspx

Benchmarking my new 980 Ti...

Every time I increase the memory clock, the score drops... *confused*

Kinda disappointed that I could only push the core clock on this thing to +175. Anything higher and Firestrike crashes within the first 5 seconds. Pushing the voltage up does nothing.

You have the EVGA SC? +175 on the core is about normal for these cards, I think; it should still boost up to ~1450-1500MHz. Voltage doesn't seem to affect mine at all either, and I'm not willing to push it too hard since I have two cards anyway.
 
There's a good thread on the EVGA forums about the cooler, with plenty of info/installs from people who have them - http://forums.evga.com/Titan-X-and-980-hybrid-cooler-official-thread-m2327117.aspx



You have the EVGA SC? +175 on the core is about normal for these cards, I think; it should still boost up to ~1450-1500MHz. Voltage doesn't seem to affect mine at all either, and I'm not willing to push it too hard since I have two cards anyway.

Yup, the EVGA SC+ w/ACX 2.0 cooler. GPUZ says the boost clock is 1365 MHz.

I thought having an ASIC quality of 79.6% would mean it would go higher on the overclocks, but I also have no idea what that actually means :p

I'm still pretty happy with the fact that it's faster than my TitanX was capable of, and now I can save a bit of money going SLI with these (still need a new power supply though).
 

longdi

Banned
So exciting, and disappointing at the same time, to see almost $1500 leave your account for graphics cards :D

And in another 12 months, that $1500 graphics card will be worth $399 (pascal! pascal!). :'P

Going to order a 980Ti to replace my 290X, AMD Furries X sucks at 1440/1600p and FCAT times.
 

Wag

Member
Wild. I'm able to boost my memory by 500MHz and beyond and it's still stable, but my core craps out somewhere around 220-230MHz tho :(

It doesn't seem to matter how much voltage I up it by, either. Even if I don't boost the voltage it's still pretty much the same. The temp doesn't seem to go over 80C either.
 
Yup, the EVGA SC+ w/ACX 2.0 cooler. GPUZ says the boost clock is 1365 MHz.

I thought having an ASIC quality of 79.6% would mean it would go higher on the overclocks, but I also have no idea what that actually means :p

I'm still pretty happy with the fact that it's faster than my TitanX was capable of, and now I can save a bit of money going SLI with these (still need a new power supply though).

I'm still not sure how valid that ASIC quality rating is. I remember various engineers or other company reps for AMD/Nvidia saying it's pretty useless.
 

Wag

Member
I get coil whine when my card is working hard, like in benchmarks, so there's that (but I game mostly using headphones, so I rarely notice).
 

Wag

Member
I have an ASIC rating of 61.9%; if I try to overclock the core clock more than 200MHz it crashes :/ This is with a reference Gigabyte 980 Ti.

Mine is 70.6, EVGA ACX 2.0+. Tops out at ~220MHz. 200 isn't too bad. You're not going to get anyone to RMA based on ASIC. At least it OCs.
 

Wag

Member
Thanks, that's reassuring to know. If I wasn't using an ITX case, I would have gone for the MSI non-reference.

That's an almost 20% clock boost, which is pretty good. That seems to be what most 980 Tis get; rarely some of them hit the 1500s, but for the most part they hit ~1400.
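For what it's worth, the "almost 20%" figure checks out as rough arithmetic. The stock boost clock below is an assumed value for a factory-OC 980 Ti, not one quoted in the thread:

```python
# Rough check of the "almost 20%" overclock figure.
stock_boost_mhz = 1190       # assumed stock boost clock for this card
stable_offset_mhz = 220      # the stable core offset reported above
gain = stable_offset_mhz / stock_boost_mhz
print(f"+{stable_offset_mhz} MHz on {stock_boost_mhz} MHz = {gain:.1%}")
```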
 

longdi

Banned
So the EVGA 980 Ti Classifieds appeared on Amazon sometime late yesterday and sold out fast. I was planning on picking one up, so I'm sort of annoyed I missed out. There's nothing about it on EVGA's site itself, which is what I'd been checking.

Whoa... how long do Amazon 'temporarily out of stock' listings normally last?

Most of the non-reference 980 Tis have 1-2 month shipping times. :/
 

finalflame

Member
Whoa... how long do Amazon 'temporarily out of stock' listings normally last?

Most of the non-reference 980 Tis have 1-2 month shipping times. :/

Until they get more stock. This will vary based on the manufacturer and overall availability. There's no formulaic way to determine how long any given item on Amazon that is temporarily out of stock will remain out of stock.
 
So guys, I'm contemplating leaving SLI for good (unless DX12 fixes SLI support), and I'm thinking of getting rid of my GTX 970s. However, I can't sell them, as I'm going to give one to my brother, while the other might be used in my backup PC or end up gathering dust.

I'm thinking of buying the following card: Gigabyte GeForce GTX 980 Ti G1 Gaming

6GB is enough, right? I probably won't bother going for 4K that often, but this card should be good for Arkham Knight/Witcher 3 for the foreseeable future, right?

I'll do a proper GPU upgrade when the new line of cards comes out, as hopefully DX12 will improve SLI support.

Is this wise or am I just wasting money and should stick with my GTX 970s?

Specs:

i7 5820K@ 4.3ghz
16GB DDR4@2100MHZ
GTX 970 SLI
1TB+256GB 840 evo SSD
Windows 8.1
 

Gizuko

Member
Quick question: do you guys reckon that, with the new technologies that could potentially be included in Pascal/AMD's next cards (HBM2, die shrink), the resale value of the 980 Ti could go down even more than it usually does when the next generation of cards launches (780 Ti -> 980/980 Ti, for example)?
 

cyen

Member
Quick question: do you guys reckon that, with the new technologies that could potentially be included in Pascal/AMD's next cards (HBM2, die shrink), the resale value of the 980 Ti could go down even more than it usually does when the next generation of cards launches (780 Ti -> 980/980 Ti, for example)?

It depends on the Pascal/AMD next-gen lineup. They normally don't pump out a full line of products on a new technology, so I think the resale value will be there for a few months.
 

Matty8787

Member
Might trade out my 2-month-old 970s for one of these! The VRAM is swaying me.

I only game at 1080p, but my monitors are both 144Hz, and although I do get 120 fps in most games at ultra, that 6GB of VRAM is way more future proof!

The only thing holding me back is the possibility of VRAM stacking on DX12.
 

Gizuko

Member
It depends on the Pascal/AMD next-gen lineup. They normally don't pump out a full line of products on a new technology, so I think the resale value will be there for a few months.

Like they did with the 980 to 980 ti, then? Well, it's better than nothing, I guess.

Thank you for the input.
 

Tovarisc

Member
And in another 12 months, that $1500 graphics card will be worth $399 (pascal! pascal!). :'P

And soon after the Pascal card releases, another iteration on it will release and make that first card drop like a rock in value. That's how the PC hardware market has always worked and will keep working. I don't see any reason to take the piss because someone upgraded now instead of after the "next thing" releases; even then, there's another "next thing" right around the corner.
 

cyen

Member
And soon after the Pascal card releases, another iteration on it will release and make that first card drop like a rock in value. That's how the PC hardware market has always worked and will keep working. I don't see any reason to take the piss because someone upgraded now instead of after the "next thing" releases; even then, there's another "next thing" right around the corner.

Yeah, it always works like that. The only difference this time is that, since the shrink to 20nm failed, 16nm FinFET with HBM2 is still a year away, and AMD/Nvidia are already taping out gigantic pieces of silicon at 28nm, so we will probably have a whole year without new products (unless they rebrand the current series).
 

riflen

Member
Might trade out my 2-month-old 970s for one of these! The VRAM is swaying me.

I only game at 1080p, but my monitors are both 144Hz, and although I do get 120 fps in most games at ultra, that 6GB of VRAM is way more future proof!

The only thing holding me back is the possibility of VRAM stacking on DX12.

Don't base any purchasing decision on a feature of a graphics API. Not only is DirectX 12 an optional API for developers (DirectX 11.3 will exist alongside it), multi-GPU support is an optional feature of that API.

I doubt the multi-GPU situation will change much when DirectX 12 releases next month, because the size of the potential audience has not changed. Multi-GPU owners are a tiny percentage of potential customers.
Also, any kind of pooling of VRAM will likely always require some duplication of assets or datasets across the GPUs, so even if all these features are implemented in your favourite game, you may only see a modest increase in available VRAM.
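To put a hypothetical number on that last point, here is a sketch of how much unique data two "pooled" cards could hold if some fraction of assets still has to live on both GPUs (the 60% duplication figure is made up):

```python
# Hypothetical effective VRAM under multi-GPU "pooling" when a fraction of
# each card's memory must hold assets duplicated on every card.

def effective_vram_gb(per_card_gb, n_cards, duplicated_frac):
    """Unique data capacity: duplicated assets count once, not n times."""
    total = per_card_gb * n_cards
    # Each duplicated GB occupies space on every card but stores one GB of data.
    wasted = per_card_gb * duplicated_frac * (n_cards - 1)
    return total - wasted

print(effective_vram_gb(4.0, 2, 0.0))            # 8.0 -- ideal pooling
print(round(effective_vram_gb(4.0, 2, 0.6), 2))  # 5.6 -- with 60% duplicated
```

Even under explicit multi-GPU control, two 4GB cards would gain well short of the ideal 8GB, which is the "modest increase" caveat above.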
 

longdi

Banned
Pascal is coming Sept '16, yeah? It should be one interesting card: 8GB of HBM2 at 16nm. We may be looking at a 70-80% gain instead of the current ~40% card-on-card increase.
 

Tovarisc

Member
Pascal is coming Sept '16, yeah? It should be one interesting card: 8GB of HBM2 at 16nm. We may be looking at a 70-80% gain instead of the current ~40% card-on-card increase.

If we are lucky, yes. It's yet to be seen whether they encounter yield issues with 16/14nm, and how much HBM2 there is to go around when mass production of cards is supposed to begin. Some are still afraid that 16/14nm will get postponed once again. 28nm => 16/14nm should give a very noticeable, even huge, performance increase right off the bat, yes, but that's assuming Nvidia actually releases full Pascal and not some hacked-up job for milking purposes.
 

mintylurb

Member
So... EVGA sends me a 980 Ti with a chipped PCB. They ask me to send pictures, which I do. Amusingly, they're having issues receiving my pictures due to their company firewall, even though I sent them using their website's attach-picture form. Wth...
So, after about 7 emails going back and forth, my case has been escalated to their management.
Great service, EVGA.
 

dr_rus

Member
Pascal is coming Sept '16, yeah? It should be one interesting card: 8GB of HBM2 at 16nm. We may be looking at a 70-80% gain instead of the current ~40% card-on-card increase.

I'm expecting the first Pascal to hit the market in 2Q16. But it's not a given that it will use HBM2 at all. The GM200 vs Fiji situation clearly shows that there's still some juice left in GDDR5, and considering that HBM2 is a new and expensive tech, the first Pascal that ends up in a GeForce may be using GDDR5 again.

We really don't have anything to even speculate about yet.
 
I'm expecting the first Pascal to hit the market in 2Q16. But it's not a given that it will use HBM2 at all. The GM200 vs Fiji situation clearly shows that there's still some juice left in GDDR5, and considering that HBM2 is a new and expensive tech, the first Pascal that ends up in a GeForce may be using GDDR5 again.

Nvidia's roadmap clearly stated 3D memory is coming with Pascal. Juice left in GDDR5? Meh, yes and no. With current games? They're not anywhere near the bandwidth limit of the cards, even at the highest texture resolutions. But if you slap a theoretical game on a Titan X and start using, say, 9 GB of assets in a frame, despite having a 12GB frame buffer, then you're immediately going to see memory bandwidth constraining performance.
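As back-of-the-envelope arithmetic, that 9 GB-per-frame scenario would indeed be bandwidth-bound long before 60 fps on a Titan X (using its published 336.5 GB/s memory bandwidth):

```python
# Bandwidth sanity check for a hypothetical game touching 9 GB of assets
# every frame on a Titan X.
TITAN_X_BANDWIDTH_GBS = 336.5   # 384-bit bus, 7 Gbps GDDR5
assets_per_frame_gb = 9.0

# GB/s needed just to read each asset once per frame at 60 fps.
required_at_60fps = assets_per_frame_gb * 60
# Best-case fps if memory bandwidth were the only limit.
fps_ceiling = TITAN_X_BANDWIDTH_GBS / assets_per_frame_gb

print(required_at_60fps)        # 540.0 GB/s needed vs 336.5 available
print(round(fps_ceiling, 1))    # 37.4 fps ceiling from bandwidth alone
```

So even with the VRAM capacity to hold the assets, the card would top out well under 40 fps from bandwidth alone, before any other bottleneck.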
 

Seanspeed

Banned
The Fury X clearly shows improved competitiveness the higher the resolution goes. If we can put a lot of that down to HBM, then I think we're already looking at *the* tech to have for high-res gaming. And this is also just the very first generation, with immature driver support and plenty of room for improvement. Once we get past the 4GB limitation, I think it'll put GDDR5 to rest, in the high end at least. Nvidia would be dropping the ball hard by passing on it in 2016.
 

dr_rus

Member
Nvidia's roadmap clearly stated 3D memory is coming with Pascal. Juice left in GDDR5? Meh, yes and no. With current games? They're not anywhere near the bandwidth limit of the cards, even at the highest texture resolutions. But if you slap a theoretical game on a Titan X and start using, say, 9 GB of assets in a frame, despite having a 12GB frame buffer, then you're immediately going to see memory bandwidth constraining performance.

NV's roadmap highlights new tech; it doesn't state anything about the end products. 3D memory is a technology that may or may not be used in a product based on the Pascal architecture. The Pascal generation will have lots of chips in it, and I'm pretty sure that most of them will end up with LPDDR3/GDDR5 and not HBM2. There is a reason why Fiji is the only GPU with HBM in AMD's 300 series, and I think the same reason will apply to Pascal and HBM2 as well.

When you're thinking about bandwidth usage per frame, you have to remember that for the next 3-5 years we'll be limited by the Xbox One's DDR3 memory pool.
 

Tovarisc

Member
I'm expecting a first Pascal to hit the market in 2Q16. But it's not a given that it will use HBM2 at all. GM200 vs Fiji situation is clearly showing that there's still some juice left in GDDR5 and considering that HBM2 is a new and expensive tech the first Pascal which will end up in GeForce may be using GDDR5 again.

We really don't have anything to even speculate about yet.

NV's roadmap highlights new tech; it doesn't state anything about the end products. 3D memory is a technology that may or may not be used in a product based on the Pascal architecture. The Pascal generation will have lots of chips in it, and I'm pretty sure that most of them will end up with LPDDR3/GDDR5 and not HBM2. There is a reason why Fiji is the only GPU with HBM in AMD's 300 series, and I think the same reason will apply to Pascal and HBM2 as well.

When you're thinking about bandwidth usage per frame, you have to remember that for the next 3-5 years we'll be limited by the Xbox One's DDR3 memory pool.

I think 2Q16 is very optimistic for a Pascal release, considering how many issues there have been getting the 16/14nm pipeline up; it has been delayed regularly since 20nm turned out to be a dud. GDDR5 isn't bandwidth limited, but it takes a lot of PCB space, is inefficient to cool, and requires more juice than HBM. HBM2 (even HBM1, already) tossed all worries about bandwidth becoming an issue out of the window, while being more efficient to cool and taking less juice to run. How much more expensive HBM is than GDDR5 I don't know for sure, but afaik it's supposedly the cheaper solution in the long run.

Pascal is already taped out, according to somewhat reliable leaks, and it uses HBM2 in its design. HBM brings better power usage and hugely increased capacity over GDDR5 to the table, while taking up less space on the PCB, since it gets integrated alongside the die. I will be very disappointed in Nvidia's Pascal lineup if, after all the HBM2 hype they have been generating, they use GDDR5 for all of the cards or the mid-to-low-tier ones. HBM will be the future memory tech for the GPU industry, and the introduction of 16/14nm is the best time to embrace that future: we get more powerful and efficient dies with superior memory tech.

Fiji is the only AMD architecture to use HBM because it's the only AMD architecture designed to support such memory. The 300 series is a refresh of the 200 series and is based on the Hawaii architecture, which was designed for GDDR5. You could argue that there isn't enough HBM to go around for a larger-scale release, but we really don't know any factual numbers on that, and like I pointed out, it's about the architecture supporting it. You don't just slap next-gen tech onto a last-gen architecture and expect it to work.

VR development will get a huge boost from the performance potentially offered by Pascal, and by AMD's rival next-gen architecture, allowing higher-resolution solutions etc. Not to mention it very likely will make single-GPU 4K gaming possible in the PC space, as I don't consider 30FPS with lowered settings to be "4K PC gaming".
 