Sorry for the many quotes:
Consumers are into that stuff nowadays.
In every GPU war thread, fans will tout Nvidia's power efficiency and TDP.
TDP matters when it limits how powerful you can make a GPU.
The thing is, new generations of GPUs have ALWAYS had better performance per watt. The problem is that they aren't making a new 250W card to replace the old 250W card; instead they make a midrange one and price it at the high end.
Also, 'gpu wars'? 'fans'? Ugh, please take that shit to GameFAQs or the console threads, thanks.
That's because Nvidia, AMD, and Intel are hitting the limits of raw power. They're at the point where all they can really do is reduce the power required for an acceptable level of performance, and shrink the die to make it more affordable. At this point the only way they can go with CPUs is to up the core counts and hope software becomes more heavily threaded.
In the case of GPUs, they're shrinking them and making them more cost effective so they can then up the number of SMX/Compute Units/whatever you call them and end up with more raw power. If you remember Fermi, it ran hot and used a lot of energy because it was made on the wrong process node. Once they get designs to a certain point they'll just expand outward, but nobody wants a card that requires 600W to run.
You're contradicting yourself: this new Maxwell GPU has only a 170W TDP, while the 780 Ti drew almost 300W. They have a much more power-efficient architecture but aren't doing anything with it at the high end...
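To put rough numbers on that: a minimal perf-per-watt sketch, assuming the thread's speculation that the 980 roughly matches the 780 Ti in performance. The 250W figure is the 780 Ti's official TDP; the post above cites nearly 300W measured draw, which would only widen the gap.

```python
# Rough perf-per-watt comparison using the numbers thrown around in
# this thread; performance figures are speculation, not benchmarks.
perf_780ti = 1.00    # normalise the 780 Ti to 1.0
perf_980 = 1.05      # thread's speculation: roughly 0-10% faster

power_780ti = 250.0  # watts (official TDP)
power_980 = 170.0    # watts (TDP quoted in this thread)

ppw_780ti = perf_780ti / power_780ti
ppw_980 = perf_980 / power_980

print(f"perf/W improvement: {ppw_980 / ppw_780ti - 1:.0%}")  # ~54%
```

Roughly a 50% perf-per-watt jump, on these numbers, with nothing to show for it at the high end.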
It's not reasonable, not by historical trends, not by the logic of GPU architecture progression. The 780 Ti itself was not reasonable, nor was the 780 before it, nor the Titan before it. The GTX 680, by historical trends, should have been the GK100/110 chip, not a measly GK104. The Titan should never have been $1000, nor the heavily crippled 780 $650. Nor should the 780 Ti (read: the full, real big Kepler die) have launched at $650, and quite some months late by the trends preceding it. A Maxwell part (read: a new architecture) practically no faster than the full-die chip of the preceding architecture should never be branded as an x80 part. Before the 680, it was unheard of.

Perhaps the conditioning Nvidia so cleverly carried out with relative prices (the 680 managing to match the 7970 despite being the mid-range chip, the Titan making the 780 seem like a steal in comparison) makes you think the 980 is reasonable, but it's not. It's a disgrace. Here's how new architectures are supposed to perform, and did for the longest time before the mess that's become the current microprocessor market:
6800 -> 7800 = ~50% improvement.
7800 -> 8800/9800 (new architecture) = ~200% improvement (and that justified the initial $650 price tag on the 8800 GTX, not the crap Nvidia pulled with the GK110)
8800/9800 -> 280 (heavily improved Tesla architecture) = ~50-60% improvement
280 -> 480/580 (new architecture) = can't remember the rough percentage improvement, but people were comparing 280 SLI to a single 480
The speculated GK110 -> 980 (new Maxwell architecture) jump is a joke in comparison, and no, the $500 price tag isn't reasonable; it's what should have been the reality for this level of performance for at least a year now, going by past card prices and releases.
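For a sense of scale, here's a minimal sketch compounding those rough generational gains; the percentages are this thread's ballpark figures, nothing more.

```python
# Each generational gain, expressed as a multiplier over the previous
# card, to make clear what "~200% improvement" actually means (3x).
gains = {
    "6800  -> 7800": 0.50,   # ~50%
    "7800  -> 8800": 2.00,   # ~200%, i.e. 3x the performance
    "8800  -> 280":  0.55,   # ~50-60%
    "780Ti -> 980":  0.05,   # speculated 0-10%
}

for step, gain in gains.items():
    print(f"{step}: {1 + gain:.2f}x the previous card")
```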
The second revision of a given architectural lineup isn't supposed to be where the performance comes in (like the supposed "980 Ti", which I really think will be an $800-$1000 Titan II, followed by a heavily crippled version of that for $600-$650 under the new naming scheme); it's supposed to be a revision. If these numbers are true for the 980, the 780 Ti to 980 jump is more like 0-10% (assuming some additional clocking headroom on the 980), and the overpriced-as-hell part we should have gotten in a reasonable price range will be a huge jump that renders the 980 obsolete, charging a huge premium just to progress in the somewhat reasonable manner new architectures once did.

Maybe it can be argued this is just a one-time stop-gap measure because of 20nm production issues or some such, but I'm skeptical given what Nvidia have been doing with Kepler, and the Kepler architecture has overstayed its welcome. We're at 2.5 years now. Time for a new architecture that actually means something, not just a new code name for the continued overpriced drip-feeding Nvidia are intent on making the norm.
thank you!
Disappointment over the pace of technological progress (and wanting prices to be more reasonable) and understanding the technical reasons why it isn't so aren't necessarily mutually exclusive...
And this. Also, chalking it all up to a lack of technological progress alone is dead wrong.
Clearly Nvidia seem to have a much-improved architecture here. They could sell us a proper high-end card on it, but they won't until next year, because fuck us (and because there's no competition from AMD).
Things have changed. The x80 cards are not true flagship models anymore. You're basing a lot of your judgement on what is ultimately just an arbitrary naming scheme that can be changed as they see fit.
And if the $500 rumor is true for the 980, then they actually aren't charging flagship money for it, if we go by the precedent of the Titan and 780 Ti models.
I have no idea what's going on in the financial world of these GPU makers, but is it possible that, with the slowdown in progress, they're having to spend more and more money to achieve the gains people demand of them? And thus it becomes necessary to charge more for the products in order to justify that increased spending?
You're giving me a headache. $500 IS flagship money; hell, it's pretty close to what used to be dual-GPU money!
No, it is not necessary for them to charge more money; they do so because they can. (When Kepler came out, people were desperate to move on from old 300W 40nm GPUs to more powerful 28nm ones after all the delays, and Nvidia took advantage of them and made the 680 in its current form.)
The GTX 580 was a 300W giant die with very, very poor yields. It sold at 500 euros and they still had massive margins on it despite the wide bus, large die, and super-low yields (~$120 production cost vs. a $500 retail price).
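Taking those claimed figures at face value, the margin arithmetic looks like this. Note the $500 retail price also includes the board partner's and retailer's cut, so Nvidia's actual margin on the chip itself would differ; these are the poster's numbers, nothing more.

```python
# Gross-margin arithmetic on the GTX 580 figures claimed above.
cost = 120.0    # claimed production cost, USD
retail = 500.0  # retail price, USD

margin = (retail - cost) / retail
print(f"gross margin on these numbers: {margin:.0%}")  # 76%
```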
Please, people, at least acknowledge when you're getting fucked, instead of making up excuses for these companies.
They can make them up for themselves just fine...
Don't pretend Nvidia haven't doubled GPU prices over the past three years, and don't pretend they aren't spreading out their releases within one architecture over a two-year period just to keep doing it.
The staggered releases are what enables them to manipulate perception and keep these prices doubled.
The Titan was the Kepler GeForce 580, yet they managed to create the perception that it was some kind of uber-GPU...
Fact of the matter is, they have a much-improved architecture on a very mature 28nm process and they aren't passing the savings and benefits on to the people who buy their shit.
Pay what you want for these things, but don't pretend they're doing you a favor; it's insulting.
Edit: just to spell out some of the rationalisations and misconceptions:
- Kepler releases: "It's a new process node, so it's more expensive because the process hasn't matured yet." Reality: the GTX 580 on 40nm had super-low yields anyway, and the 680 had a very small die, no doubt making for good yields and negating any difference.
Now that the 28nm process is mature, by that same excuse for why cards on a new process node should cost more, cards should by now be cheaper.
- The GTX 580 releases at 500 euros, and people rationalise paying it because of the large die and 384-bit bus (every time a GPU with a wider bus releases, people go on about how that makes the PCB marginally more expensive and warrants a massive price premium).
The GTX 680 releases, also at 500 euros. Hey, wait a minute: a 256-bit bus (wow, it must be so much cheaper to make, right?) and a far smaller die. Why are you paying 500 euros again? Because it's called a 680. (A rough die-cost sketch at the end of this post puts numbers on the gap.)
- "We can't have more powerful GPUs because we're running into thermal limits; GPU makers keep making bigger and hotter graphics cards, so performance can't follow Moore's law."
(This was the excuse for a $500 GTX 580.)
Except, you know, 28nm was massively more power efficient than 40nm Fermi; we had everything we needed for a proper Moore's-law-style jump in performance/price.
And, you know, apparently Maxwell is also massively more power efficient than Kepler despite being on the same 28nm process (the excuse used for the small performance increase), yet we're being fed another GTX 680...
All I see are reasons why prices go up, which are then promptly forgotten when they should make prices go down.
We're being sold midrange dies and midrange memory buses, on a very mature process with a very efficient and much-improved architecture, at insane high-end prices.
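To put rough numbers on the die-size point above: a back-of-the-envelope sketch. The die areas (~520mm² for GF110, ~294mm² for GK104) are the commonly cited figures; the wafer cost and the defect-free, edge-loss-free yield model are purely illustrative assumptions, not real foundry numbers.

```python
import math

# Crude dies-per-wafer comparison: GF110 (GTX 580) vs GK104 (GTX 680).
WAFER_DIAMETER_MM = 300.0
WAFER_COST_USD = 5000.0  # assumed, for illustration only

def dies_per_wafer(die_area_mm2: float) -> float:
    """Wafer area / die area, ignoring edge loss and defects."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return wafer_area // die_area_mm2

for name, area in [("GF110 (580)", 520.0), ("GK104 (680)", 294.0)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n:.0f} candidate dies/wafer, ~${WAFER_COST_USD / n:.0f} each")
```

Even this crude model gives the smaller die nearly twice as many candidates per wafer, and that's before counting the yield advantage a smaller die gets; whatever the 680 was, it wasn't the more expensive chip to make.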