So when will this thing be out? I need to upgrade!
The rumor is Sept 9-10th for the announcement, Sept 19th for the NDA.
There's probably a way to hack an SLI profile into the driver which works somewhat better than the default with some VR games. But that seems like a lost opportunity to me.

If Nvidia can offer split-screen rendering by default (it's in the driver, and it adds no frame delay), do you really think an API is stopping them from just extending that mode to VR?
Yeah, generalized API support for it makes the most sense, of course. Hopefully DX12 also considers it, to speed up adoption; OpenGL alone may not be enough!
If you had a real "VR mode" in the API, you should be able to have two GPUs working on two viewports of the same scene with basically zero CPU overhead, almost perfect scaling, and lower end-to-end latency than a single GPU (which has to render the two viewports sequentially).
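A back-of-the-napkin sketch of that latency claim (a minimal sketch only; the 5ms-per-eye figure is a placeholder, not a measured number):

```python
# Toy latency comparison for stereo rendering.
# Assumption: each eye's viewport costs 5 ms of GPU time (placeholder).
EYE_RENDER_MS = 5.0

# Single GPU: the two viewports are rendered one after the other.
single_gpu_ms = 2 * EYE_RENDER_MS

# Two GPUs in a proper "VR mode": each GPU renders one viewport in
# parallel, so the stereo frame is ready when the slower one finishes.
dual_gpu_ms = max(EYE_RENDER_MS, EYE_RENDER_MS)

print(f"single GPU: {single_gpu_ms:.1f} ms per stereo frame")  # 10.0
print(f"dual GPUs:  {dual_gpu_ms:.1f} ms per stereo frame")    # 5.0
```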
Oh my. It could make even me buy 2 GPUs!
Looking to upgrade to SLI 980s from my single 2GB 670.
Will spare no expense with VR this close.
Well, I know pretty much nothing about the costs involved here, so I can't really get into an argument about it.
And I will always vehemently disagree with this. As I explained, when I was the manager of a café it was very frustrating having people complain to you about price increases, accusing you of being greedy or of taking advantage of people, when there wasn't a damn thing we could have done about it.
As a result, I've become very understanding of the fact that businesses need to be sustainable if we want them to keep giving us the products and services we want.
And do realize that there is a difference between 'being understanding' and 'being happy'. I'm not happy about any of this, either. *IF* Nvidia truly are just being greedy and fucking us over for no good reason, then that sucks, but I don't really know that. Maybe the reality is somewhere in the middle. Either way, I still ultimately vote with my wallet and will buy what I feel is worth my money at any given time.
There's a difference between understanding that businesses have needs too and assuming they are your friend.
They're having a huge 24-hour "GAME24" live-stream on Sept 18th, so that's been floating around as the announcement date as well.
Want to finally upgrade from my 560ti.
C'mon nvidia, I'm ready for it.
So is DX12 really the only notable feature to expect from Maxwell without Unified memory in the picture?
Technically no, as DX12 will be backwards-compatible with all DX11-capable GPUs.
My 780ti will do for now.
Well, not all DX11 GPUs (AMD's 5000 and 6000 series aren't supported, I think).
If I have a GTX 770 2GB, should I get this? I play PCARS and plan on getting W3.
I could probably sell my 770 for $225-250. Going SLI would require a better PSU, so that won't be economical. Plus, I only run 1080p 60fps.
If the price is right, especially for the 8GB versions of the cards.
Would I need 8GB at 1080p outside of Watch Dogs?

Not right now, but it gives the card extra longevity since games seem to be getting hungry for VRAM. It also helps with ultra-high texture mods for some games.
Isn't 224GB/s of memory bandwidth a bit poor, especially for a 4GB card? Why not a 384-bit bus?
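For what it's worth, 224GB/s is exactly what a 256-bit bus gives you with 7Gbps GDDR5 (the memory speed is my assumption, based on what the 770/780 Ti ship with):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective rate per pin).
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 7.0))  # 224.0 GB/s -- the rumored spec
print(bandwidth_gb_s(384, 7.0))  # 336.0 GB/s -- with a 384-bit bus
```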
Because this is a mid-range card :/
8GB versions? You believe that baseless OverclockersUK forum rumour? That's assuredly fake. He claims the 980 is only as fast as the 780, not the Ti, while the rumour in the OP puts it ahead of the Ti. Speculation that the new flagship won't be as fast as the 780 Ti is something I've heard a lot on forums.
Okay, what's with the squinty face?
Considering Nvidia's trend of releasing versions of cards with twice the VRAM they debut with, I see nothing implausible about it.
A GTX 680/770 is close to twice as fast as a 560 Ti.
Guys,
Isn't the big story here that the GTX 980M puts out the same performance as an R9 290?
That seems quite impressive(?)
Lawd, I am going to get a huge upgrade... just hope I don't get bottlenecked too badly by my old 3GHz i5...
Me too, but the 2500k is still nice.
$399 for GTX 970? Wow.
Sorry for the many quotes:
TDP matters when it's what limits how powerful you can make a GPU.
The thing is that new generations of GPUs have ALWAYS had better performance/watt. The problem is that they aren't making a new 250W card to replace the old 250W card; instead they make a midrange one and price it at the high end.
Also, 'GPU wars' 'fans'? Ugh, please take that shit to GameFAQs or the console threads, thanks.
You're contradicting yourself: this new Maxwell GPU has only a 170W TDP, while the 780 Ti had almost 300W power consumption. They have a much more power-efficient architecture but aren't doing anything with it at the high end...
thank you!
And this: chalking it all up to lack of technological progress alone is dead wrong.
Clearly Nvidia seems to have a much improved architecture here; they could sell us a proper high-end card on it, but they won't until next year, because fuck us (and because there's no competition from AMD).
You're giving me a headache. $500 IS flagship money; hell, it's pretty close to what used to be dual-GPU money!
No, it is not necessary for them to charge more money; they do it because they can. (When Kepler came out, people were desperate to move on from old 300W 40nm GPUs to more powerful 28nm ones because of all the delays, and Nvidia took advantage of that and made the 680 in its current form.)
The GTX 580 was a 300W, giant-die chip with very, very poor yields. It was sold at 500 euros and they still had massive margins on it despite the wide bus, large die, and super low yields ($120 production cost vs. $500 retail price).
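Taking those numbers at face value (the $120 production cost is the figure claimed above, not an official one), the margin works out to:

```python
# Rough gross margin on the claimed GTX 580 numbers.
cost_usd = 120.0   # claimed production cost (poster's figure, not official)
price_usd = 500.0  # retail price
gross_margin = (price_usd - cost_usd) / price_usd
print(f"gross margin: {gross_margin:.0%}")  # 76%
```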
Please, people, at least acknowledge when you're getting fucked, instead of making up excuses for these companies.
They can make them up for themselves just fine...
Don't pretend Nvidia haven't doubled GPU prices over the past 3 years, and don't pretend they aren't spreading out their releases within one architecture over a 2-year period just to keep doing that.
The staggered releases are what enables them to manipulate perception and keep these prices doubled.
Titan was the Kepler GeForce 580, yet they managed to create the perception that it was some kind of über-GPU...
Fact of the matter is, they have a much improved architecture on a very mature 28nm process and they aren't passing the savings and benefits on to the people who buy their shit.
Pay what you want for these things, but don't pretend they're doing you a favor; it's insulting.
edit: just to spell out some of the rationalisations and misconceptions:
- Kepler releases: "It's a new process node, so it's more expensive because the process hasn't 'matured' yet." Reality: the GTX 580 on 40nm had super low yields anyhow, and the 680 had a very small die, no doubt making for good yields and negating any difference.
Now the 28nm process is mature, so going by that same excuse for why new cards on a new process node should cost more, by now they should be cheaper.
- GTX 580 releases, costs 500 euros; people rationalise paying 500 euros for it because of the large die and 384-bit bus (every time a GPU with a larger bus releases, people go on about how it makes the PCB marginally more expensive and somehow warrants a massive price premium).
GTX 680 releases, costs 500 euros. Hey, wait a minute: a 256-bit bus (wow, it must be so much cheaper to make, right?) and a far smaller die. Why are you paying 500 euros again? Because it's called a 680.
- "We can't have more powerful GPUs because we're running into thermal limits; GPU makers keep making bigger and hotter graphics cards, so performance doesn't follow Moore's law."
(This was the excuse for a $500 GTX 580.)
Except, you know, 28nm was massively more power efficient than 40nm Fermi; we had everything we needed for a proper Moore's-law-style jump in performance per price.
And, you know, apparently Maxwell is also massively more power efficient than Kepler despite being on the same 28nm process (the excuse used for the small performance increase), yet we are fed another GTX 680...
All I see are reasons why prices go up, which are then promptly forgotten when they should make prices go down.
We're being sold midrange dies and midrange memory buses, on a very mature process, with a very efficient and much improved architecture, at insane high-end prices.
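To put a rough number on that headroom argument (a sketch only: it assumes the 980 roughly matches a 780 Ti, per the rumors, and that performance scales linearly with power at fixed perf/watt, which is optimistic):

```python
# Upper-bound estimate for a hypothetical 250W "big Maxwell".
maxwell_tdp_w = 170.0  # rumored GTX 980 TDP
kepler_tdp_w = 250.0   # GTX 780 Ti TDP

# Same performance at lower power -> perf/watt gain.
perf_per_watt_gain = kepler_tdp_w / maxwell_tdp_w
print(f"perf/watt gain: {perf_per_watt_gain:.2f}x")  # ~1.47x

# Spend the full 250W budget at that efficiency (linear-scaling assumption).
print(f"hypothetical 250W Maxwell vs 780 Ti: ~{perf_per_watt_gain:.2f}x")
```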
What's the source on the price?
The 880 is the 980; they changed the name.
Looks good to me, my 680 scores 6800 so it will be a nice boost.
It's not an apples-to-apples comparison, as this is a new architecture: Maxwell, going by the 750 Ti, is much more bandwidth-efficient than Kepler.
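To put a rough number on "more bandwidth-efficient" (the ~25% traffic saving is Nvidia's own ballpark claim for Maxwell's caching/compression, so treat it as illustrative, not measured):

```python
# If caching/compression avoids ~25% of memory traffic, raw bandwidth
# stretches further than the spec sheet suggests.
raw_gb_s = 224.0      # rumored GTX 980 bandwidth
traffic_saved = 0.25  # assumed fraction of traffic avoided (Nvidia ballpark)

kepler_equiv = raw_gb_s / (1.0 - traffic_saved)
print(f"{raw_gb_s:.0f} GB/s behaves like ~{kepler_equiv:.0f} GB/s of "
      f"Kepler-style bandwidth")  # ~299 GB/s
```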