maabus1999
Member
Quote: "doesn't CPU matter in online games like battlefield?"
Yes. I've seen many threads in PC forums with people hitting CPU limits.
Quote: "CPU matters a lot, especially for gameplay."
This slight boost is like going from a 2.8 CPU to a 3.1. It matters; it might just be a few fps, but it helps.
Battlefield 3 is definitely a computationally intensive game: very large scale and lots of simulation.
Quote: "This slight boost is like going from a 2.8 CPU to a 3.1. It matters; it might just be a few fps, but it helps."
Yeah, it is just a nice boost.
WTF is this? Are you trying to outdo Thekayle?
Quote: "This slight boost is like going from a 2.8 CPU to a 3.1 [...]"
At least in the PC world, with benchmarks, it is easy to figure out which games are bottlenecked on different components. There are some surprising results sometimes.
Quote: "WTF is this? Are you trying to outdo Thekayle?"
At least he uses some punctuation.
Yes, it matters more if the GPGPU side is weaker.
I don't know how you got gameplay in there, unless you were trying to pull another graphics < gameplay card.
Quote: "Yeah, it is just a nice boost."
Nice? Even my FX 6300 has a 3.5->4.1GHz "boost" (600MHz) out of the box. This, however, is a fucking joke and a troll on loyal MS consumers.
Quote: "These insignificant changes are nothing but chest beating that will have no significant impact on ingame performance at the end of the day."
Yep, but shhhhhhhhht!
Quote: "No, it matters no matter how strong your GPU is. Some code just doesn't work in massively parallel form as in GPGPU. There is no sugar-coating a Jaguar. However, it is basically a good choice for a lower-powered device such as a console that isn't a huge 250W monster."
The context was destruction in BF; you're better off running that code on the GPU.
I am gonna leave this for you confused sony fans and girls.
The principal differences are:
•DDR3 runs at a higher voltage than GDDR5 (typically 1.25-1.65V versus ~1V)
•DDR3 uses a 64-bit memory controller per channel (so, a 128-bit bus for dual channel, 256-bit for quad channel), whereas GDDR5 is paired with controllers of a nominal 32 bits (16 bits each for input and output). But whereas the CPU's memory controller is 64-bit per channel, a GPU can utilise any number of 32-bit I/Os (at the cost of die size) depending upon application (2 for a 64-bit bus, 4 for 128-bit, 6 for 192-bit, 8 for 256-bit, 12 for 384-bit, etc.). The GDDR5 setup also allows for doubled or asymmetric memory configurations. Normally (using this generation of cards as an example) GDDR5 memory uses one 2Gbit memory chip for each 32-bit I/O (i.e. for a 256-bit bus / 2GB card: 8 x 32-bit I/Os, each connected by a circuit to a 2Gbit IC = 8 x 2Gbit = 16Gbit = 2GB), but GDDR5 can also operate in what is known as clamshell mode, where each 32-bit I/O, instead of being connected to one IC, is split between two (one on each side of the PCB), allowing the memory capacity to be doubled. Mixing the arrangement of 32-bit memory controllers, memory IC density, and memory circuit splitting allows for asymmetric configurations (192-bit, 2GB VRAM, for example). A quick sketch of this arithmetic follows right after the list.
•Physically, a GDDR5 controller/IC doubles the I/O of DDR3 - with DDR, I/O handles an input (written to memory) or an output (read from memory), but not both on the same cycle; GDDR handles input and output on the same cycle.
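If you want to sanity-check that bus-width/capacity arithmetic yourself, here's a rough sketch in Python. The controller counts and IC densities are just the examples quoted above, not the layout of any particular card:

```python
# Rough sketch of the GDDR5 bus-width / capacity arithmetic described above.
# The configurations are the illustrative examples from the post (e.g. a
# 256-bit / 2GB card), not the specs of any real product.

def gddr5_config(num_32bit_ios: int, ic_density_gbit: int = 2, clamshell: bool = False) -> dict:
    """Bus width and capacity for a GDDR5 setup built from 32-bit I/Os."""
    bus_width_bits = num_32bit_ios * 32              # each I/O contributes 32 bits to the bus
    chips = num_32bit_ios * (2 if clamshell else 1)  # clamshell splits each I/O across two ICs
    capacity_gbit = chips * ic_density_gbit
    return {
        "bus_width_bits": bus_width_bits,
        "memory_chips": chips,
        "capacity_GB": capacity_gbit / 8,            # 8 Gbit = 1 GB
    }

print(gddr5_config(8))                   # 8 x 32-bit I/Os, one 2Gbit IC each -> 256-bit bus, 2GB
print(gddr5_config(8, clamshell=True))   # same 8 I/Os in clamshell mode -> still 256-bit, but 4GB
print(gddr5_config(6))                   # 192-bit bus, 1.5GB; a 192-bit / 2GB card needs mixed IC densities
```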
The memory is also fundamentally set up specifically for the application it uses:
System memory (DDR3) benefits from low latency (tight timings) at the expense of bandwidth; GDDR5's case is the opposite. Timings for GDDR5 would seem unbelievably slow in relation to DDR3, but the speed of VRAM is blazing fast in comparison with desktop RAM. This has resulted from the relative workloads that a CPU and GPU undertake. Latency isn't much of an issue with GPUs, since their parallel nature allows them to move on to other calculations when latency cycles cause a stall in the current workload/thread. The performance of a graphics card, for instance, is greatly affected (as a percentage) by altering the internal bandwidth, yet altering the external bandwidth (the PCI-Express bus, say lowering from x16 to x8 or x4 lanes) has a minimal effect. This is because there is a great deal of I/O (textures, for example) that gets swapped in and out of VRAM continuously: the nature of a GPU is many parallel computations, whereas a CPU computes in a basically linear way.
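And the bandwidth side of the same arithmetic: peak theoretical bandwidth is just the effective transfer rate times the bus width in bytes. The transfer rates below are the commonly cited ballpark figures for the two consoles' memory pools, so treat them as illustrative assumptions rather than a spec sheet:

```python
# Peak theoretical bandwidth = effective transfer rate (MT/s) x bus width (bytes).
# The rates used are commonly cited ballpark figures, not official spec-sheet values.

def peak_bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return effective_mt_s * (bus_width_bits / 8) / 1000  # MB/s -> GB/s

print(peak_bandwidth_gb_s(2133, 256))  # DDR3-2133 on a 256-bit bus       -> ~68 GB/s
print(peak_bandwidth_gb_s(5500, 256))  # 5500 MT/s GDDR5 on a 256-bit bus -> ~176 GB/s
```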
DDR3 is double data rate and GDDR5 is graphics double data rate. Two different types of hardware for two different purposes. Whoa, whoa, "DDR5"? Where did you get that from? We're not even on DDR4 yet. Don't forget your G in front of the PS4 specs. GDDR5 is correct.
Quote: "I am gonna leave this for you confused sony fans and girls. The principal differences are: [...]"
So how long did you search on Google until you found this (and copy/pasted it)? *CLICK*
Quote: "So how long did you search on Google until you found this? *CLICK*"
Busted.
?
Quote: "doesn't CPU matter in online games like battlefield?"
Sigh... Yet another person championing latency in the DDR3 vs GDDR5 battle...
Quote: "So how long did you search on Google until you found this (and copy/pasted it)? *CLICK*"
?
Quote: "I actually have a theory about this. I think Microsoft is making these incremental hardware upgrades to make their system UI smoother. [...]"
I can't see it doing anything for the UI. If it is properly optimized, then it won't make a difference when we're talking about <200MHz overall system clock gains. I think it really has more to do with realizing they can get away with giving it a crappy overclock out of the box for shits and giggles, OR realizing they can fab them at a higher clock with the same yield. Good on them either way. But I hope they aren't just being desperate and scraping this together in hopes of saving face with those who don't realize how insignificant this is in the first place. Especially if the cooling ends up leaving much to be desired. But given their track record, I'd think they want to make it right this gen.
I didn't want to explain or go into details, so I figured I'd just find a dumbed-down explanation for the people fighting over things they don't understand.
I met this dude at a club in San Diego; he was paid to say "what's up" on the mic and dance for a few songs. I ended up near him a few songs after he got there. I shit you not, this makes Kevin Hart look like a giant; he can't be more than 5'3", it was unbelievable. That makes the dude's dick like half his height.
lol @ this thread. A 150MHz boost isn't going to make a difference one way or the other. It won't hurt anything unless it has poor cooling.
Considering both the Xbone and PS4 have very PC-like architectures, seriously, 150MHz is nothing to brag about. The general rule of thumb when it comes to overclocking x86 CPUs, for example, is that anything under 300-400MHz really isn't worth the time and added stress on the hardware; you barely notice anything. Also, 800->853MHz on the GPU side is even more of a joke. Overclock your CPU by 150MHz and your GPU by 50MHz, run some benchmarks, and note the differences. You will see.
Enjoy your extra ~2-3 average fps on Xbone.
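Back-of-the-envelope, assuming the 150MHz CPU bump is 1.6->1.75GHz and that framerate scales perfectly linearly with clock (it doesn't; games are rarely fully CPU- or GPU-bound, so treat these as best-case upper bounds):

```python
# Best-case framerate gain from a clock bump, assuming fps scales linearly with
# clock speed (i.e. the game is completely bound on that one component).
# Real games scale worse than this, so these numbers are upper bounds.

def best_case_fps(old_fps: float, old_clock_mhz: float, new_clock_mhz: float) -> float:
    return old_fps * (new_clock_mhz / old_clock_mhz)

print(best_case_fps(30, 1600, 1750))  # CPU 1.6 -> 1.75GHz on a 30fps game: ~32.8 fps, under +3 fps
print(best_case_fps(30, 800, 853))    # GPU 800 -> 853MHz on a 30fps game:  ~32.0 fps, about +2 fps
```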
Things like shaders, memory bandwidth, etc. are far more important than raw MHz for gaming, unless MS put a top-secret, world-class 7GHz-or-something CPU in the box all of a sudden.
If nothing else this gives false comfort to fanboys in the ongoing systemwars fiasco that carries on with every passing generation. But something tells me that is part of the idea; make the most popular marketing feature in computing hardware next to memory and operating system, which is -- you guessed it -- clock speed, look higher while disregarding the meat and potatoes of the overall system power which is still lagging behind what PS4 can do.
I am more intimidated by MS's money hats than their laughable hardware "upgrade" band-aids. MS will probably have a stranglehold on the US market because of subsidies through ISP's, acquiring hot exclusive content/games, and so forth. Their power isn't in the discipline of their development culture or the investments they made under the hood (lol) but their brute financial muscle, something Sony does not have.
I actually have a theory about this. I think Microsoft is making these incremental hardware upgrades to make their system UI smoother. We have seen in the past the snap feature lagging and not giving a good experience and I think these upgrades will help with that.
In terms of FPS in games, this ekes out maybe 1-2 FPS (seriously, you can do a similar overclock on your computer and test the difference). Even if we give them extra performance for console optimization, this ends up being like a 2-5 FPS difference. Definitely not worth the upclock if it was specifically for games.
Quote: "Nice? Even my FX 6300 has a 3.5->4.1GHz "boost" (600MHz) out of the box. This, however, is a fucking joke and a troll on loyal MS consumers."
Yep, increased performance is a joke/troll.
Quote: "Basically there isn't going to be a huge advantage between the PS4 and the Xbox One. The reason why Microsoft chose to go with DDR3 instead of GDDR5 is because they wanted a multitasking system that could quick-switch, and the best hardware for holding apps in the background was DDR3. GDDR5 wouldn't have been the best option for that type of task, as it really is geared more toward graphics. Large-scale games like Battlefield or Destiny would take advantage of GDDR5 memory, therefore you would have smoother gameplay."
Dude, give it up.
We know the DDR3/GDDR5 story about MS and SONY.
The "GPU" gap between the PS4 and XBone is still around 40%.
And yet, the PS4 has the better memory solution.
A 1-5fps difference would absolutely be worth an upclock. What would make it not worth it? (It's obviously not having a negative effect on yields or heat.)
5fps can be the difference in making a game playable.
I agree with that. I have clocked up my GPU and it has made games like Crysis 1 playable for me, with just 5fps extra.
Who's "GAF"?CBOAT
"MS are having yield issues and are down clocking the xbone"
GAF
"Ha ha Xbone is worst thing Eva, All hail CBOAT for telling us of M$'s problems"
MS
"We've entered production and we've been able to upclock the CPU and the gpu over original specs"
GAF
"Up clocking doesn't help at all it's all PR, also did you know infamous runs at 1080p 60fps on PS4!"
Quote: "A 1-5fps difference would absolutely be worth an upclock. [...] 5fps can be the difference in making a game playable."
If all of these games end up running at a targeted ~60+fps, it won't matter. If it is in the 30s, it still won't matter. If the framerate is so poor that it dips into the mid/high 20s, then it may very well finally start to matter, sure. But it's not going to do a damn thing toward making it somehow more graphically capable than the competition.
But really, given the power of both the PS4 and the Xbone, there's no reason for any game to be <30fps at any time. 40-60fps should be what to expect at the very least, I'd think, with a constant 60fps being the most practical. But we shall see.
Hell, I get 40-60fps on my FX 6300 with a GTX 660.
Quote: "There are going to be many, many, many, many games this generation that dip below 30fps. You'll see some of them at launch."
Games dip below 30 on my PC at times, but if it isn't consistent then it is hard to notice. Even OCing my CPU to a constant 3.9GHz from 3.5GHz basically keeps the framerate exactly the same, and you sure as hell know a 53MHz OC to the GPU is so insignificant it shouldn't even happen to begin with. Average fps is what matters most. It's not like getting 43fps instead of 41fps on a dip, because you OC'd your CPU by 400MHz or some shit, is going to make a world of difference...
Quote: "Oh, I didn't know GAF was a hivemind... or the Geth or something."
No hivemind here. I'm telling you, as a Sony fanboy, this is literally one of the dumbest things I have ever seen people get excited about. If I were an MS fanboy, I'd be beating my chest over how much fucking money they have over Sony; it's pretty scary. I don't care what Infamous is running at right now; we'll find out closer to release.
Come on.
Just like "There is a Bigfoot! We just haven't seen it!"