Nvidia GTX 980/970 3DMark Scores Leaked - from Videocardz.com

Yeah... it might be best not to spoil developers with unlimited resources or something. It's like the more you give them to work with, the less optimization they do, resulting in average-looking games that could have run on systems with half the resources.

Shit ports are going to happen no matter how few or how many resources you give them, sadly :\

Look at last-gen consoles: fixed hardware didn't stop developers from releasing Crysis 1 and 2 ports that ran at a consistent 15-24 fps. If even fixed hardware can't prevent shitty ports, then nothing will.
 
Why, because two shitty ports have no texture streaming in the PC version and apparently poor texture compression? There are dozens of games with much better textures than either of those that make do with 1-1.5 GB at 1080p.
Those two are the exception, and it's certainly not because they are pushing graphics in any way.

Blame it on whatever you want, but those two games aren't the first and won't be the last games that aren't perfectly designed around PC hardware. You can't go and recommend a card to someone with the caveat that only well-programmed games will run on it.

As for not "pushing graphics," I don't think you can make that case against Watch Dogs, and Killzone dedicates 3GB of memory to graphics precisely because it is "pushing graphics."

You made the assertion that the PS4 can't effectively use more than 2GB on graphics, yet we know that even launch titles were using more than 3GB. If launch titles could effectively use more than 2GB on 1080p graphics, then that figure is only going to increase.
 
I like how you add up the split RAM pool of the PS3, then fail to do the same for the split RAM pool of a PC.

Most modern midrange/high-end gaming PCs will have 2-4GB of VRAM (with twice the bandwidth of the RAM in the PS4, but shhhhhh, don't mention that) plus 8GB of DDR3 RAM.

In the end you have way more bandwidth, as well as more total memory, to work with on a PC, even with only 2GB of VRAM.
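For anyone who wants to sanity-check the bandwidth comparison, peak memory bandwidth is just bus width times effective data rate. A minimal sketch, assuming the published PS4 memory spec (256-bit GDDR5 at 5.5 GT/s) and typical GDDR5 configurations of the era for the PC examples:

```python
# Peak bandwidth = bus width (bytes) x effective data rate.
# PS4 figures are the published spec; the PC examples are typical
# configurations of the era, not measurements of any specific card.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    return bus_width_bits / 8 * data_rate_gt_s

print(peak_bandwidth_gb_s(256, 5.5))  # PS4 shared GDDR5: 176 GB/s
print(peak_bandwidth_gb_s(256, 7.0))  # 256-bit 7 GT/s GDDR5 card: 224 GB/s
print(peak_bandwidth_gb_s(384, 6.0))  # 384-bit 6 GT/s card (GTX 780-class): 288 GB/s
```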


Yep, and after reserving 3.5GB of that 8GB of shared RAM for the OS on the PS4 (always conveniently left out of these asinine "8GB GDDR5" arguments), and after using a sizeable amount as a cache for data it will need later or data needed by the CPU (just like what you described happens on PC), there is likely not even 2GB of memory left for the GPU to use for rendering and buffering the next frame. And again, VRAM on PC has much higher bandwidth in high-end GPUs, and it doesn't need to share that bandwidth with the CPU.
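A back-of-the-envelope version of that budget, using the figures from this thread (the 3.5GB OS reserve and the cache estimate are assumptions, not official numbers):

```python
# Rough split of the PS4's 8 GB unified pool, per the argument above.
total_gb      = 8.0
os_reserve_gb = 3.5   # commonly cited OS/system reservation (assumption)
cpu_cache_gb  = 2.5   # assumed: game code, CPU-side data, streaming cache

left_for_gpu = total_gb - os_reserve_gb - cpu_cache_gb
print(f"~{left_for_gpu:.1f} GB left for render targets and GPU-resident assets")
```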

In the PS4's defense, it's designed with a separate bus that gives the CPU access to the memory without eating into the GPU's bandwidth, since it's a shared pool. I believe the end result leaves the PS4 with its advertised memory bandwidth, and the CPU with its own (which isn't a large or advertised number, but it's enough).

I also recall reading about a way devs are able to free up a significant amount of that reserved OS memory. It sounded pretty proprietary, and probably has some stipulations - but I imagine devs like Naughty Dog will be using tools like that to push what the system can do further.

Anyway, the point is that you can't compare apples to oranges. It's wrong to do so when people use it to ignorantly say that the PS4 has more VRAM than gaming PCs, but it's also wrong to turn that around and assume that the rules of one platform apply to the other.

http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?page=2
 
Alright so in light of this news, I think I'm going to hold on to my Radeon 6950 until Witcher 3 is released. I'll use the savings on a new CPU and motherboard and when Witcher 3 is ready, I'll see what's available.

Right now, none of the upcoming games I want are particularly graphically intensive. The only question for me is whether I should upgrade to an X99 system and shell out more for RAM or stick to DDR3 for now.

Thoughts?
 
Alright so in light of this news, I think I'm going to hold on to my Radeon 6950 until Witcher 3 is released. I'll use the savings on a new CPU and motherboard and when Witcher 3 is ready, I'll see what's available.

Right now, none of the upcoming games I want are particularly graphically intensive. The only question for me is whether I should upgrade to an X99 system and shell out more for RAM or stick to DDR3 for now.

Thoughts?
IMO, just focus on the GPU. A 2nd/3rd-generation i3/i5 with DDR3 memory should be more than enough for the whole generation. Heck, current-gen consoles have relatively weak CPUs compared to their GPUs.
 
Yup, no streaming whatsoever. "Insane" textures require 3GB of VRAM and they don't look too hot.

The game runs perfectly fine on my card with 1.5GB of VRAM. Cards with more VRAM will use more of it automatically because the room is there anyway; it doesn't mean the game requires it.

It's entirely possible that a PS4 game could call for what would essentially require 5+GB of VRAM in a PC environment, right?

The GPU would then be hopelessly behind in the performance department.

You can bolt 6GB of VRAM onto a turd, but that doesn't make the turd go any faster; it just doesn't get bottlenecked. The 8GB of RAM attached to the GPU inside the PS4 will never be fully used for graphics, because that would be pointless. There's a reason there is no additional RAM besides the VRAM: it's also the system RAM. People only see the numbers but don't understand them. Bigger must be better, that's their logic. Well, with that logic I could pair 64GB of system RAM with a 6GB VRAM GPU in my PC, that's 70GB of RAM in total, so my GPU must suddenly run 40 times faster.

Don't think that once you have more VRAM you can crank up all kinds of visuals without any GPU cost. The GPU still needs to render them, which is why buying budget cards with huge VRAM pools is pointless, for example.

The minimum requirement this generation will probably be 1-2GB of VRAM. The maximum is always the best you can get.

VRAM is only a tool for the video card; it's not where its performance comes from.
 
Shit ports are going to happen no matter how few or how many resources you give them, sadly

That's what I mean, though. While it's obvious that a modern gaming PC wipes the floor with these consoles in aggregate, because of situations like these you can't really claim that a current gaming PC, or even one with 4GB of VRAM, will be better in all cases for the remainder of the generation. Not every project has the budget to rework a game to make better use of a more traditional RAM setup. I'd be more comfortable with 6GB, as brain_stew says, as it'll give me more leeway for ports that haven't received sufficient reworking.
 
In the PS4's defense, it's designed with a separate bus that gives the CPU access to the memory without eating into the GPU's bandwidth, since it's a shared pool. I believe the end result leaves the PS4 with its advertised memory bandwidth, and the CPU with its own (which isn't a large or advertised number, but it's enough).
[Image: PS4-GPU-Bandwidth-140-not-176.png - chart suggesting the PS4's GPU sees roughly 140 GB/s rather than the advertised 176 GB/s when the CPU is also using the memory]

Maybe Sony engineers will find a way to negate all that.
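A minimal sketch of the shared-bus argument that chart implies: whatever bandwidth the CPU consumes (plus some arbitration overhead) comes out of what the GPU can use. All of the numbers below are illustrative assumptions chosen to land near the chart's figure, not measurements:

```python
# Shared GDDR5 bus: CPU traffic and contention overhead reduce what the GPU sees.
advertised_gb_s = 176.0  # published peak bandwidth
cpu_traffic     = 20.0   # assumed CPU-side memory traffic
contention_loss = 16.0   # assumed loss from arbitration / bank conflicts

gpu_effective = advertised_gb_s - cpu_traffic - contention_loss
print(f"GPU sees roughly {gpu_effective:.0f} GB/s")  # ~140 GB/s, in line with the chart
```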
 
I'll be the first to admit I have no idea what PC games use general DDR3 system RAM for. I've asked several times, but have never received anything approaching a technical or detailed response.

I'm not a game developer, but obviously all the data the CPU is working on resides in system memory: game logic, the scene graph, AI, physics... All the content that is to be displayed (models, textures) also has to pass through main memory before it is uploaded to VRAM.
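A minimal sketch of that flow, with a hypothetical upload_to_vram() standing in for a real graphics-API call (e.g. an OpenGL-style texture upload):

```python
# Illustrative only: why assets occupy system RAM before they ever reach VRAM.

def upload_to_vram(data: bytes) -> int:
    """Hypothetical stand-in for the driver copying data over PCIe into VRAM."""
    print(f"uploaded {len(data) / 2**20:.1f} MiB to VRAM")
    return 42  # pretend texture handle

# 1. The asset is loaded/decompressed into system RAM (this is where DDR3 gets
#    used), alongside CPU-side data like game state, AI and physics structures.
texture_in_system_ram = bytes(8 * 2**20)  # stand-in for an 8 MiB texture

# 2. Only then is it handed to the GPU driver, which copies it into VRAM.
texture_handle = upload_to_vram(texture_in_system_ram)
```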
 
As someone who's never bought a video card near release, about how long after an announcement (assuming it's on the 18th) should I expect to be able to get my hands on one?
 
As someone who's never bought a video card near release, about how long after an announcement (assuming it's on the 18th) should I expect to be able to get my hands on one?
I'm wondering the same. Every minute I can't use G-Sync on my ROG Swift kills me. I need this ASAP.
 
In the PS4's defense, it's designed with a separate bus that gives the CPU access to the memory without eating into the GPU's bandwidth, since it's a shared pool. I believe the end result leaves the PS4 with its advertised memory bandwidth, and the CPU with its own (which isn't a large or advertised number, but it's enough).

I also recall reading about a way devs are able to free up a significant amount of that reserved OS memory. It sounded pretty proprietary, and probably has some stipulations - but I imagine devs like Naughty Dog will be using tools like that to push what the system can do further.

Anyway, the point is that you can't compare apples to oranges. It's wrong to do so when people use it to ignorantly say that the PS4 has more VRAM than gaming PCs, but it's also wrong to turn that around and assume that the rules of one platform apply to the other.

http://www.gamasutra.com/view/feature/191007/inside_the_playstation_4_with_mark_.php?page=2
Thanks for the post and info, I didn't know the PS4 had a separate memory bus for the CPU, that's good ^^
Edit: the graph above suggests this is not true?

Blame it on whatever you want, but those two games aren't the first and won't be the last games that aren't perfectly designed around PC hardware. You can't go and recommend a card to someone with the caveat that only well-programmed games will run on it.

As for not "pushing graphics," I don't think you can make that case against Watch Dogs, and Killzone dedicates 3GB of memory to graphics precisely because it is "pushing graphics."

You made the assertion that the PS4 can't effectively use more than 2GB on graphics, yet we know that even launch titles were using more than 3GB. If launch titles could effectively use more than 2GB on 1080p graphics, then that figure is only going to increase.
You're right, Killzone uses 3GB (we don't know if it's like BF3 on PC and just uses whatever you have free because it's there, rather than needing it :p ). My 2GB guess was just a guess; the point was that it's not all 8GB, nor 5GB, and it can't even be all of the 4.5GB not used by the OS, since the CPU still needs some of it too and some is still required to cache data from the HDD.

We'll see in the long run how much VRAM will be useful on PC at 1080p.
As for shitty ports, I won't be coerced by them into spending more money than I reasonably have to :p, but that's just me personally.

My comments were directed at the idea that the PS4 is somehow capable of more because it has up to 4.5, 5, or 8GB of RAM available.

My main reason for never getting a 2GB 970/980 is that you don't buy a GTX 780-class card to settle for 1080p; you buy that kind of GPU to do some serious downsampling in dozens of games.
If I were replacing my 6870 with a 7870 or a GTX 760 or something, I'd be happy with 2GB of VRAM, but not if I'm going to splurge on a powerful GTX 780 (or this new 970, the midrange successor); then I want my glorious PC Screenshot Thread image quality for my money.
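For a rough sense of why downsampling eats VRAM, here's a sketch of render-target memory at native 1080p versus a 4K internal resolution. The render-target count and format are assumptions for illustration; real engines vary, and texture data comes on top of this:

```python
# Render-target footprint grows linearly with pixel count (so 4x when
# rendering internally at 3840x2160 and downsampling to 1920x1080).

def render_target_mb(width: int, height: int, targets: int = 5, bytes_per_px: int = 4) -> float:
    return width * height * targets * bytes_per_px / 2**20

print(f"{render_target_mb(1920, 1080):.0f} MB")  # ~40 MB at native 1080p
print(f"{render_target_mb(3840, 2160):.0f} MB")  # ~158 MB at a 4K internal resolution
```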
 
My main reason for never getting a 2GB 970/980 is that you don't buy a GTX 780-class card to settle for 1080p; you buy that kind of GPU to do some serious downsampling in dozens of games.
If I were replacing my 6870 with a 7870 or a GTX 760 or something, I'd be happy with 2GB of VRAM, but not if I'm going to splurge on a powerful GTX 780 (or this new 970, the midrange successor); then I want my glorious PC Screenshot Thread image quality for my money.

The 970 won't even have a 2GB version from the looks of it, so that wouldn't really be a problem.

And I'm getting a 980 while settling for 1080p, but I do have a 144Hz monitor. :P
 
CRTs are not limited to low resolutions or 4:3 (if you had an FW900 you'd know that).
A CRT does not produce any noise; if it does, it's broken.

CRTs used VGA because, at the time, digital connections like DVI were in their infancy and only had enough bandwidth for a pitiful 1600x1200 @ 60Hz.

I was typing a whole reply explaining how LCD tech is fundamentally flawed and how telling it is that you think your TN panel is superior to a CRT. The biggest feature of your monitor is brightness-destroying backlight strobing, which requires high refresh rates (so a powerful PC) to approach CRT motion clarity.
Whatever hardware you have in your PC, it's not enough for 1440p @ 144Hz (resolutions and refresh rates CRT monitors reached 12 years before your monitor was released, by the way; welcome to last decade), so enjoy your upscaling blur at 1080p 120Hz, or enjoy your awful motion-resolution blur at 1440p 60Hz.

But you can research this shit in detail for yourself

The basic gist is that backlit pixels + sample-and-hold + slow LCD pixel response make for a fundamentally flawed technology that was a huge step back from CRT tech.
Ten years ago you didn't need an unholy monster PC to enjoy pixel-perfect sharp motion at higher resolutions, because CRT tech didn't suck and didn't need asinine band-aids like backlight strobing to produce an acceptable image in motion.
You could also enjoy pixel-perfect contrast, black blacks, no black crush, and superior color accuracy (at all viewing angles).
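To put a rough number on the sample-and-hold point: for an object your eyes are tracking, the perceived blur is roughly its on-screen speed multiplied by how long each frame stays lit. A quick sketch with illustrative values (the ~1.5 ms figure is an assumed strobe/CRT-like flash length):

```python
# Approximate sample-and-hold motion blur: blur extent ~= speed x persistence.

def blur_extent_px(speed_px_per_s: float, persistence_ms: float) -> float:
    return speed_px_per_s * persistence_ms / 1000.0

speed = 1000.0  # object panning across the screen at 1000 px/s
print(blur_extent_px(speed, 16.7))  # 60 Hz sample-and-hold: ~17 px of smear
print(blur_extent_px(speed, 8.3))   # 120 Hz sample-and-hold: ~8 px
print(blur_extent_px(speed, 1.5))   # short strobed/CRT-like flash: ~1.5 px
```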

Sorry man, but if you think a TN panel is worth a shit, you're wrong. I understand it in light of having to justify spending 650 euros on one, though.

One last thing: input lag is not inherent to LCD tech; it's the image processing the monitor does before sending the signal to the panel that causes it. So you couldn't even get the one thing you said in favor of CRT monitors right.

Curious, how much experience do you have with actually owning monitors?
 
Though I wish Nvidia would change the look of the card; while it still looks clean, they have been using this reference design for a while.
Apparently Maxwell's performance improvements are so good that they don't need to replace it, since there's a lot of room for overclocking due to the lower TDP.
 
Wait a sec... the power draw is 180W... holy shit. That means the 970 will use less, and it will be able to run fine on the Alienware X51... oh lord, this is good, because the 670 has a power draw of 170W, which is the max for the X51, I believe.

The 970 is rumored to be right at 150W, I think? Maybe 135W. I don't remember.
 
Hypothetically speaking, I wonder how long it would take to get single cards as powerful as quad 780 Tis at 80W...
 
Hypothetically speaking, I wonder how long it would take to get single cards as powerful as quad 780 Tis at 80W...
That's the dream, isn't it? I believe we will see something on that level before the end of the decade, but it will be a long time coming. A couple more die shrinks and the addition of stacked DRAM will definitely help.

7nm GPUs will be something to behold, I'm sure. If we can just reach that point...
 
That's what I'm thinking. Hopefully my 850W PSU will be enough. It should be for two 980s.

Do you need 850W for two 980s if you have a simple rig? I was thinking my EVGA 600B would be able to handle it: 600W, 80 Plus Bronze, 588W on the 12V rail.

It's 340W for both GPUs. I can't imagine my i5, HDD, motherboard, disc drive, RAM and fans would use more than 200ish watts. Am I being too optimistic?
 
Do you need 850W for two 980s if you have a simple rig? I was thinking my EVGA 600B would be able to handle it: 600W, 80 Plus Bronze, 588W on the 12V rail.

It's 340W for both GPUs. I can't imagine my i5, HDD, motherboard, disc drive, RAM and fans would use more than 200ish watts. Am I being too optimistic?

600W will be fine. I ran two heavily OC'd, overvolted GTX 780s, a 4.7 GHz 3570K, etc. on my 760W PSU.
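For reference, the arithmetic works out with headroom to spare on the 12V rail, using the figures quoted above (the 340W GPU figure and the ~200W system estimate are the posters' stock-clock assumptions; heavy overclocking and overvolting would change the picture):

```python
# Quick PSU headroom check using the thread's own figures (assumptions, not measurements).
gpus_w   = 340  # two GTX 980s at stock, per the post above
rest_w   = 200  # i5, motherboard, drives, RAM, fans (rough estimate)
rail_12v = 588  # the EVGA 600B's rated 12 V capacity quoted above

load = gpus_w + rest_w
print(f"{load} W load vs {rail_12v} W on the 12 V rail -> ~{rail_12v - load} W headroom")
```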
 
In light of higher resolutions, the supposed 256-bit bus (with stagnant memory clocks), and only 32 ROPs on the 980/970, I can only question WTF Nvidia think they're doing... ya know, aside from rebranding a mid-range chip. I bet Hawaii/GK110 holds up especially nicely against these at 4K, unfortunately.

I really don't want to be ripped off and buy into this nonsense, but I'm pushing 2560x1440 and want the best single card available in preparation for TW3. Nvidia aren't putting up much of a fight above 1920x1080; I will be so pissed off if this is the best available short of selling a kidney come February. They'd better overclock like monsters; it's looking difficult to make a case for replacing my 780.
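The ROP worry is easy to quantify: peak pixel fill rate is roughly ROP count times core clock, which is what makes a 32-ROP part look thin for 4K next to a GK110-class card. The clocks below are illustrative, and the 32-ROP figure is the rumour from the post above, not a confirmed spec:

```python
# Peak pixel fill rate ~= ROPs x core clock (Gpixels/s), ignoring bandwidth limits.

def pixel_fillrate_gpix_s(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz

print(pixel_fillrate_gpix_s(32, 1.1))   # rumoured 980/970-style part: ~35 Gpix/s
print(pixel_fillrate_gpix_s(48, 0.93))  # 48-ROP GK110 card (780 Ti-class): ~45 Gpix/s
```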
 
Why are these new cards not pushing forward more from previous versions?

Is there a genuine technical reason, or is it because Nvidia know people just end up buying new cards regardless, so they bring out small iterations? Extract more cash, more often?

I have a GTX 670 running at 1080p and I really cannot see any point in upgrading any time soon. I am waiting for a really significant jump for the money before making monitor or GPU upgrades.
 
Why are these new cards not pushing forward more from previous versions?

Is there a genuine technical reason, or is it because Nvidia know people just end up buying new cards regardless, so they bring out small iterations? Extract more cash, more often?
Tech just hasn't progressed enough for them to efficiently make a much faster card. They could always build something a bit stronger at the cost of power efficiency and higher manufacturing expense, but it's a short-term gain they have no reason to pursue unless forced by the competition.
 
I'm gonna wait for a Ti version with at least 3000 ALUs. I need 4x PS4 power!

You think there will be a 980 Ti? Does anyone really know yet what a full GM204 die even entails?

The 980, like the 680, could in all likelihood be the best die of little Maxwell.
 