
AMD Ryzen Thread: Affordable Core Act

If you're comparing the 3GB version, then the VRAM really is too low in my view, but the 6GB version won't lose to the 8GB RX 480 even in future games... in that case I don't see any issue with the VRAM difference.

That said, GTX 1060 6GB vs RX 480 8GB comes down more to personal preference and price... you won't go wrong with either of them... both are solid cards.

If you don't have a preference, then go with whichever you find cheaper imo.

Ugh, Christ, I'm so indecisive haha. I honestly can't decide.

This Gigabyte 6GB is £242.92 right now... https://www.amazon.co.uk/dp/B01K7I7M00/

UGH.
 
Übermatik;231697701 said:
Not that I'd expect you to, but you'll find I've detailed my intentions earlier in the thread.

The machine is primarily for creation/productivity. 3D modelling and rendering, VFX, game development and digital art. Gaming comes second to these priorities, and I'll be looking at either 1920x1080 60fps or 2560x1080 60fps - and even then, mostly in less demanding games like DOTA 2.

For this, a Ryzen 1700 and RX 480 seem to be making the most sense to me.

Sounds like you have your ducks in a row. Sorry I didn't dig through your post history. The Sapphire probably doesn't offer enough of an advantage to justify the price difference. The ASUS does look like the better buy.
 
Still very disappointing for games.

If you have a 60Hz monitor, how many frames are you going to notice below that threshold? If you have a FreeSync or G-Sync monitor, do you think you'd notice the difference between the two when the lowest frame rates you get are generally above 60 fps? Disappointing it may be, but very disappointing is stretching it to me. Besides, the frame time distributions in the histograms should both follow something of a bell curve; something is wrong with what the Ryzen system is doing.
 
Sounds like you have your ducks in a row. Sorry I didn't dig through your post history. The Sapphire probably doesn't offer enough of an advantage to justify the price difference. The ASUS does look like the better buy.

BUT NOW I'M CONSIDERING THE 1060 6GB TOO

GAGH SOMEONE KILL ME.
 
Übermatik;231698621 said:
BUT NOW I'M CONSIDERING THE 1060 6GB TOO

GAGH SOMEONE KILL ME.

Having choices isn't a bad thing. The Asus looks like the deal of the bunch to me. But CUDA support could become a thing for you based on your anticipated workload.

Also consider that people responding on a forum are giving mere seconds of thought to a decision that should probably take you hours to make.
 

shark sandwich

On release this may have been true, but current numbers suggest they are neck and neck. AMD has "finewine" tech.

Übermatik;231699101 said:
'Cos it is - this 480 is £192:

https://www.amazon.co.uk/dp/B01N9GQY5T/



This is what I suspected... Getting the 1700 is pretty much a lock for me at this point, so...


Wow, I just checked out some more recent benchmarks with the new drivers. The 480 has made some major improvements. I would definitely go with the 480 at that price.

http://www.hardwarecanucks.com/foru.../73945-gtx-1060-vs-rx-480-updated-review.html
 

GodofWine

Member
Übermatik;231698621 said:
BUT NOW I'M CONSIDERING THE 1060 6GB TOO

GAGH SOMEONE KILL ME.

lulz... you are my kindred spirit, I've missed really good CPU/GPU/monitor and RAM deals because I DON'T KNOW WHAT I WANT... had I just pieced this thing together based on the deals I missed, I'd have had a 480 8GB, i5-7500, 16GB DDR4, 23-inch IPS 1080/75 FreeSync monitor, a 1080/60 beast, for like $600... right now, the list of things I've managed to buy is as follows:

1- A mouse
2 - THERE IS NO #2! Not even the freaking keyboard!

Edit - it is also partly because I'm new to this and mobos still confuse me, but at some point I'm gonna buy a CPU/GPU and design around them
 
Wow, I just checked out some more recent benchmarks with the new drivers. The 480 has made some major improvements. I would definitely go with the 480 at that price.

http://www.hardwarecanucks.com/foru.../73945-gtx-1060-vs-rx-480-updated-review.html

They do seem so well matched, so the devil's in the details as they say... Depends how much I value Nvidia's feature set. HMMM.

lulz... you are my kindred spirit, I've missed really good CPU/GPU/monitor and RAM deals because I DON'T KNOW WHAT I WANT... had I just pieced this thing together based on the deals I missed, I'd have had a 480 8GB, i5-7500, 16GB DDR4, 23-inch IPS 1080/75 FreeSync monitor, a 1080/60 beast, for like $600... right now, the list of things I've managed to buy is as follows:

1- A mouse
2 - THERE IS NO #2! Not even the freaking keyboard!

Edit - it is also partly because I'm new to this and mobos still confuse me, but at some point I'm gonna buy a CPU/GPU and design around them

Holy shit haha - this is what I'm scared of! I think I'm just gonna have to take the plunge...
 

Steel

Banned
Wow, I just checked out some more recent benchmarks with the new drivers. The 480 has made some major improvements. I would definitely go with the 480 at that price.

http://www.hardwarecanucks.com/foru.../73945-gtx-1060-vs-rx-480-updated-review.html

It really is amazing how crappy the drivers the 480 released with were.

On release this may have been true, but current numbers suggest they are neck and neck. AMD has "finewine" tech.

It's really not "finewine" tech, it's crappy software support that gets better over time.
 
Übermatik;231700965 said:
They do seem so well matched, so the devil's in the details as they say... Depends how much I value Nvidia's feature set. HMMM.

There really isn't much difference now; AMD has matched Nvidia's feature set with their ReLive update, plus some extra features of their own that are nice additions. Ansel and the multi-monitor distortion correction are the only things off the top of my head that Nvidia still has over AMD, and those are application-specific and fairly niche. So focus on cost and performance, and on whether you prefer G-Sync or FreeSync.
 
If you're comparing the 3GB version, then the VRAM really is too low in my view, but the 6GB version won't lose to the 8GB RX 480 even in future games... in that case I don't see any issue with the VRAM difference.

That said, GTX 1060 6GB vs RX 480 8GB comes down more to personal preference and price... you won't go wrong with either of them... both are solid cards.

If you don't have a preference, then go with whichever you find cheaper imo.

The RX 480's 8GB will be taken advantage of, and the GTX 1060 6GB will lose to it in this respect.

Higher VRAM capacities offer things such as the ability to use higher quality textures without running out of memory and encountering stuttering in VRAM-intensive games.
 

kotodama

Member
Looks like The Tech Report have updated the article with normalized charts now:

[Image: GTA V frame-time histogram, Ryzen, normalized]

[Image: GTA V frame-time histogram, 7700K, normalized]


Source

If I'm reading the X axis right, doesn't this mean that only the frame times beyond 60 fps suffer? And if I'm running a G-Sync or FreeSync system, why should I even really care? Especially if I'm not a twitch-type gamer.

---

Seems like Ryzen really likes fast memory. I guess I can't cheap out on this like I used to. Hope we get some BIOS updates that unlock 4000MHz+.
 

ethomaz

Banned
The RX 480's 8GB will be taken advantage of, and the GTX 1060 6GB will lose to it in this respect.

Higher VRAM capacities offer things such as the ability to use higher quality textures without running out of memory and encountering stuttering in VRAM-intensive games.
For this performance segment... 8GB won't make a difference over 6GB.

If we are talking about the GTX 1080 or Vega... well, that is another story, because those are supposed to be 4K GPUs.
 

theultimo

Member
For this performance segment... 8GB won't make a difference over 6GB.

If we are talking about the GTX 1080 or Vega... well, that is another story, because those are supposed to be 4K GPUs.
I agree with this. The 1060/480 do have processing limitations, granted it's fine at 1080p currently, and I can get new titles to max the 6GB I have. If it was Fury-class performance, the 4GB would be hampered, but at this res 6/8GB isn't a dealbreaker.


And there is a new driver coming for Pascal, hopefully to address the DX12 shortfalls. Haven't heard when, but soon.
 

Sinistral

Member
Übermatik;231697701 said:
Not that I'd expect you to, but you'll find I've detailed my intentions earlier in the thread.

The machine is primarily for creation/productivity. 3D modelling and rendering, VFX, game development and digital art. Gaming comes second to these priorities, and I'll be looking at either 1920x1080 60fps or 2560x1080 60fps - and even then, mostly in less demanding games like DOTA 2.

For this, a Ryzen 1700 and RX 480 seem to be making the most sense to me.

If you're still debating over the 480 vs 1060, then nVidia will serve you better with rendering and VFX. While you are getting a beefy 3D rendering CPU with the 1700, GPU rendering with CUDA and the 1060 might still take the cake where you can use it. Especially with Redshift.

Though I've yet to find any direct comparisons lately, especially with a 6900K-class CPU, going with the 480 will cut you off entirely here.

Nuke and Renderman also have CUDA-driven tools. Houdini and Adobe have OpenCL-driven segments, but again, they'll run fine on the 1060.
 

Thraktor

Member
Looks like The Tech Report have updated the article with normalized charts now:

[Image: GTA V frame-time histogram, Ryzen, normalized]

[Image: GTA V frame-time histogram, 7700K, normalized]


Source

Thanks for posting this. Ignoring the 7700K results for a second, it's worth noting that both of the games they tested exhibited a bimodal distribution while running on Ryzen (i.e. there are two "peaks" in the graph):

[Image: GTA V frame-time histogram, Ryzen, normalized]

[Image: Crysis 3 frame-time histogram, 1800X, normalized]


As a general rule, videogame frame times should converge to a log-normal distribution when measured over a sufficiently long test run (although a regular normal distribution will often be a sufficient approximation). Both normal and log-normal distributions are unimodal (i.e. only have one "peak"), so a bimodal distribution means something is most definitely wrong. More specifically, a bimodal distribution is generally an indication that there is a fixed* delay affecting a large proportion of frames. That is, there is some kind of software or hardware issue which is occurring during certain frames and stalling progress for some amount of time. What we're actually seeing in a bimodal histogram is not one distribution, but two separate log-normal distributions layered on top of each other, one where the delay doesn't occur and the other where it does. Analysing the difference between these two distributions can tell us something about the nature of the issue.

With the full frametime data it would be possible to properly separate the two distributions and figure out the precise time of the delay (and variance to that time, if any), along with its frequency, regularity and other useful information. In the absence of full data, though, it's still possible to make some inferences from Tech Report's graphs.

Firstly, in GTA V the delay is somewhere in the range of 1.04ms to 2.08ms. In Crysis 3, it appears to be somewhere from 0.92ms to 1.39ms. These are overlapping ranges, so it's possible that the delay is independent of the game (in which case it should be in the range of 1.04ms to 1.39ms).

Looking at the histograms (the left peak in each is where the delay has occurred), it seems like the delay happens in a very large proportion of frames rendered, possibly even a majority. It's not possible to say whether there's any difference between the two games in this regard with the data we've got.

As to the effect this is having on frame times, if the delay were to be removed, GTA V's 99th percentile frame times should be expected to improve from 14.2ms to somewhere around 12.7ms, and average FPS from 88 FPS to around 94 FPS. In Crysis 3, we would expect an improvement in 99th percentile frame times from 12.5ms to about 11.4ms and average frame rates to improve from 127 FPS to about 137 FPS.

This is a very large effect, and it's quite possible that it's affecting a very large proportion of (or even all) games running on Ryzen, not to mention other software. From what we're aware of, it would seem like the Win10 task scheduler is the most likely culprit here, either in migrating threads between clusters, forcing multiple threads onto a small number of heavily loaded cores, or something else. There's definitely a big performance improvement to be gained by fixing whatever this issue is, so let's hope MS and AMD get together to sort it out sooner rather than later.


*Strictly speaking the delay doesn't have to be fixed, but rather has a much lower variance than the overall distribution. Given the measurement accuracy here the difference is largely semantic, though.
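To make the "two overlapping distributions" idea concrete, here's a minimal numpy sketch of the model described above. Every number in it is invented for illustration (baseline frame time, stall length, hit rate), not Tech Report's data: it simulates log-normal frame times where a roughly fixed stall hits a fraction of frames, then recovers the stall by fitting a two-component Gaussian mixture in log space with a few iterations of EM.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate frame times: a log-normal baseline around 11 ms, plus a fixed
# ~1.2 ms stall that hits a little over half of all frames. All of these
# parameters are made up for illustration.
n = 100_000
base = rng.lognormal(mean=np.log(11.0), sigma=0.05, size=n)  # ms
true_stall = 1.2                                             # ms
hit = rng.random(n) < 0.55
frame_times = base + true_stall * hit

# Separate the two overlapping distributions: a log-normal is normal after
# taking logs, so fit a two-component Gaussian mixture with a tiny EM loop.
x = np.log(frame_times)
mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])  # rough init
var = np.full(2, x.var())
w = np.full(2, 0.5)
for _ in range(200):
    # E-step: responsibility of each component for each sample
    pdf = (w / np.sqrt(2 * np.pi * var)) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities
    nk = resp.sum(axis=0)
    w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

order = np.argsort(mu)                   # fast component first
means_ms = np.exp(mu + var / 2)[order]   # component means, back in ms
print(f"component means: {means_ms.round(2)} ms, mix weights: {w[order].round(2)}")
print(f"estimated stall: {means_ms[1] - means_ms[0]:.2f} ms (simulated: {true_stall} ms)")
```

With the full frame-time logs, the same kind of fit would also give you the stall's frequency and regularity, which is exactly the extra information the histograms alone can't provide.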
 
For this performance segment... 8GB won't make a difference over 6GB.

If we are talking about the GTX 1080 or Vega... well, that is another story, because those are supposed to be 4K GPUs.

I agree with this. The 1060/480 do have processing limitations, granted it's fine at 1080p currently, and I can get new titles to max the 6GB I have. If it was Fury-class performance, the 4GB would be hampered, but at this res 6/8GB isn't a dealbreaker.


And there is a new driver coming for Pascal, hopefully to address the DX12 shortfalls. Haven't heard when, but soon.

Segment power is irrelevant; you don't need more GPU power to run higher quality textures. You just need an adequate amount of memory, as well as memory bandwidth.
Even an R9 380 benefits from having 4GB of memory, a GPU which is essentially an upgraded Radeon 7970 built on 3rd-generation GCN technology.

There's always the rhetoric that there's no use for more VRAM, and it always turns out to be false. There are many ways to take advantage of VRAM, and texture quality settings are often one of the largest consumers of memory.
 

thuway

Member
Without getting into the minutiae: is it worth buying Ryzen right now or not? I'm mostly looking for 4K 60 FPS. What attracts me to AMD is the potential scalability of their AM4 platform, and not having to buy an entirely new motherboard whenever something changes.
 
Without getting into the minutiae: is it worth buying Ryzen right now or not? I'm mostly looking for 4K 60 FPS. What attracts me to AMD is the potential scalability of their AM4 platform, and not having to buy an entirely new motherboard whenever something changes.

What games are you looking forward to playing, and what other intensive things do you intend to do with your PC? For example, video editing, music production, etc.

There appear to be software issues still to be worked out with Ryzen; it should become a more attractive option once these have been resolved. The time frame for resolving them is uncertain, to my knowledge.
 
Without getting into the minutiae: is it worth buying Ryzen right now or not? I'm mostly looking for 4K 60 FPS. What attracts me to AMD is the potential scalability of their AM4 platform, and not having to buy an entirely new motherboard whenever something changes.

If it's just for gaming I would go Intel.
 
Without getting into the minutiae: is it worth buying Ryzen right now or not? I'm mostly looking for 4K 60 FPS. What attracts me to AMD is the potential scalability of their AM4 platform, and not having to buy an entirely new motherboard whenever something changes.
I'd say wait a month or two in any case, see if AMD and the motherboard manufacturers can get the initial bugs sorted.

In your case, 4K means that the graphics card is vastly more important, and you'll be able to get by with a Ryzen 5 or even a Ryzen 3 (although pairing a 1080 Ti or an RX Vega with a sub-$150 CPU does strike me as pretty silly).

But yeah, you definitely want to spend the big bucks on your graphics card if you want 4k gaming.
 
If you're still debating over the 480 vs 1060, then nVidia will serve you better with rendering and VFX. While you are getting a beefy 3D rendering CPU with the 1700, GPU rendering with CUDA and the 1060 might still take the cake where you can use it. Especially with Redshift.

Though I've yet to find any direct comparisons lately especially with a 6900K class CPU, going with the 480 will cut you off entirely here.

Nuke, Renderman also have CUDA driven tools. Houdini and Adobe have OpenCL driven segments but again, they'll run fine on the 1060.

What do you mean by 'cut you off'?
As for applications, I'll be using Maya, Mental Ray/Arnold, UE4, PS and Ae, mostly... Maya has some CUDA-accelerated tools if I remember rightly, and After Effects makes use of it somewhere, but aside from that, is CUDA gonna be really important for me?
 

ethomaz

Banned
Segment power is irrelevant; you don't need more GPU power to run higher quality textures. You just need an adequate amount of memory, as well as memory bandwidth.
Even an R9 380 benefits from having 4GB of memory, a GPU which is essentially an upgraded Radeon 7970 built on 3rd-generation GCN technology.

There's always the rhetoric that there's no use for more VRAM, and it always turns out to be false. There are many ways to take advantage of VRAM, and texture quality settings are often one of the largest consumers of memory.
Show me some? Show me any RX 480 using over 5GB of VRAM, please.

4GB vs 8GB: http://www.eurogamer.net/articles/digitalfoundry-2016-amd-radeon-rx-480-4gb-vs-8gb-review (the difference is mostly because the 4GB card's RAM runs at slower speeds)
Fallout 4 High Resolution Texture Pack: http://www.hardocp.com/article/2017/02/12/fallout_4_high_resolution_texture_pack_review/5 (around 5GB of VRAM use... these are textures you'd make better use of at 4K, something the 1060/480 won't deliver).
 
Show me some? Show me any RX 480 using over 5GB of VRAM, please.

4GB vs 8GB: http://www.eurogamer.net/articles/digitalfoundry-2016-amd-radeon-rx-480-4gb-vs-8gb-review (the difference is mostly because the 4GB card's RAM runs at slower speeds)
Fallout 4 High Resolution Texture Pack: http://www.hardocp.com/article/2017/02/12/fallout_4_high_resolution_texture_pack_review/5 (around 5GB of VRAM use... those are 4K textures).
It's not the amount, it's the bandwidth. Each VRAM chip increases your memory bandwidth. You never fully load a module, you split the memory between them.
An 8GB card has twice the memory bandwidth that a 4GB card has.
 

Ragnarok

Member
Übermatik;231710631 said:
What do you mean by 'cut you off'?
As for applications, I'll be using Maya, Mental Ray/Arnold, UE4, PS and Ae, mostly... Maya has some CUDA-accelerated tools if I remember rightly, and After Effects makes use of it somewhere, but aside from that, is CUDA gonna be really important for me?


If you're using ANY GPU-based renderer (even if it's just something like V-Ray RT for lookdev), CUDA is your only option. Pretty much any VFX software that utilizes any sort of GPU computation will only use CUDA, and I don't see that changing any time soon.

I would never use an AMD GPU for a VFX workstation. Just don't do it!
 

Sinistral

Member
Übermatik;231710631 said:
What do you mean by 'cut you off'?
As for applications, I'll be using Maya, Mental Ray/Arnold, UE4, PS and Ae, mostly... Maya has some CUDA-accelerated tools if I remember rightly, and After Effects makes use of it somewhere, but aside from that, is CUDA gonna be really important for me?

You can't use CUDA-accelerated features/software on the RX 480, whereas you can still use OpenCL-accelerated features on the GTX 1060.

From your list, Iray from Mental Ray, now owned by nVidia, is strictly CUDA.

CUDA will be as important to you as you make it; it's an option, and having that option is best.

Though I'll hold back my disdain for Autodesk and Mental Ray here, Arnold is a decent renderer that will make lovely use of the R7 1700. SolidAngle is planning GPU support in the future, but that's been a pretty low-key endeavor. Given the industry trends, though, I'm going to guess it will be CUDA-based.


If you're using ANY GPU-based renderer (even if it's just something like V-Ray RT for lookdev), CUDA is your only option. Pretty much any VFX software that utilizes any sort of GPU computation will only use CUDA, and I don't see that changing any time soon.

I would never use an AMD GPU for a VFX workstation. Just don't do it!

FirePros are pretty solid. And AMD OpenCL acceleration is pretty good. When you say VFX, I think you're casting too wide a net. A lot of actual simulations are OpenCL-accelerated, but a lot of compositing and rendering utilizes CUDA. Your point still stands, but it's hyperbole, much like this thread in relation to Ryzen gaming.
 

SRG01

Member
If you're using ANY GPU-based renderer (even if it's just something like V-Ray RT for lookdev), CUDA is your only option. Pretty much any VFX software that utilizes any sort of GPU computation will only use CUDA, and I don't see that changing any time soon.

I would never use an AMD GPU for a VFX workstation. Just don't do it!

100% this. CUDA is pretty much the de facto standard for GPU-accelerated renderers at this point in time. Iray is getting there too.

edit: Mental Ray is owned by nVidia now?! Wow.
 

ethomaz

Banned
It's not the amount, it's the bandwidth. Each VRAM chip increases your memory bandwidth. You never fully load a module, you split the memory between them.
An 8GB card has twice the memory bandwidth that a 4GB card has.
That is totally false.

The 4GB RX 480 only has less bandwidth because AMD chose to use 7GHz GDDR5 to create market segmentation (otherwise everybody would buy the cheaper 4GB card with the same performance as the 8GB).

RX 480 4GB cards use 8x GDDR5 4Gb modules (512MB each) running at 7Gbps (7GHz) = 224GB/s
RX 480 8GB cards use 8x GDDR5 8Gb modules (1GB each) running at 8Gbps (8GHz) = 256GB/s

If you clock the GDDR5 of the RX 480 4GB up to 8GHz it will reach 256GB/s of bandwidth like the 8GB... AMD chose to cap the bandwidth on the 4GB version. Memory bandwidth has nothing to do with the amount of it... an 8GB card can have less bandwidth than a 4GB card because of the RAM speeds.

Edit - Fixed the actual RX 480 8GB bandwidth... 256GB/s
 

Sinistral

Member
100% this. CUDA is pretty much the de facto standard for GPU-accelerated renderers at this point in time. Iray is getting there too.

edit: Mental Ray is owned by nVidia now?! Wow.

Yup, and SolidAngle is owned by Autodesk now, which is why Mental ray is no longer a default part of Maya 2017 and Arnold is.
 
That is totally false.

The 4GB RX 480 only has less bandwidth because AMD chose to use 7GHz GDDR5 to create market segmentation (otherwise everybody would buy the cheaper 4GB card with the same performance as the 8GB).

RX 480 4GB cards use 8x GDDR5 4Gb modules (512MB each) running at 7Gbps (7GHz) = 224GB/s
RX 480 8GB cards use 8x GDDR5 8Gb modules (1GB each) running at 8Gbps (8GHz) = 320GB/s

If you clock the GDDR5 of the RX 480 4GB up to 8GHz it will reach 320GB/s of bandwidth like the 8GB... AMD chose to cap the bandwidth on the 4GB version.

Suppose RX 480 4GB cards with 8x GDDR5 4Gb modules (512MB each) running at 8Gbps (8GHz) = 320GB/s

Memory bandwidth has nothing to do with the amount of it... an 8GB card can have less bandwidth than a 4GB card because of the RAM speeds.
That's completely wrong. The bandwidth of each module is cumulative.

The only reason your math adds up is because you're choosing to count 512MB modules for one calculation and 1GB modules for the other.


Unless you're saying all 4GB 480s use 512MB modules and all 8GB 480s use 1GB modules? If so, news to me.

The primary reason VRAM increased to 8GB was specifically for the bandwidth increase, not actually the storage amount.
 

ethomaz

Banned
That's completely wrong. The bandwidth of each module is cumulative.

The only reason your "math" adds up is because you're choosing to count 512MB modules for one calculation and 1GB modules for the other.


Unless you're saying all 4GB 480s use 512MB modules and all 8GB 480s use 1GB modules? If so, news to me.
It is only wrong because it is not 320GB/s but 256GB/s vs 224GB/s... sorry, my mistake.

The RX 480 4GB uses 4Gb modules at 7GHz... how else do you believe it reaches 224GB/s??? 7 modules would give you a 224-bit bus, which gives 196GB/s at 7GHz... each module is a 32-bit bus... 4 modules give you a 128-bit bus... a 256-bit bus is 8 modules.

8x GDDR5 @ 5Gbps on a 256-bit bus = 160GB/s
8x GDDR5 @ 6Gbps on a 256-bit bus = 192GB/s
8x GDDR5 @ 7Gbps on a 256-bit bus = 224GB/s
8x GDDR5 @ 8Gbps on a 256-bit bus = 256GB/s
8x GDDR5 @ 10Gbps on a 256-bit bus = 320GB/s
8x GDDR5 @ 11Gbps on a 256-bit bus = 352GB/s
8x GDDR5 @ 12Gbps on a 256-bit bus = 384GB/s

There are 256-bit GDDR5 options on the market for 2GB, 4GB or 8GB cards.

The primary reason VRAM increased to 8GB was specifically for the bandwidth increase, not actually the storage amount.
You have no idea what you are talking about...

The bandwidth increase is due to going from 7GHz to 8GHz memory... the amount of memory, whether 4GB, 8GB or even 2GB, has nothing to do with that.
 
It is only wrong because it is not 320GB/s but 256GB/s vs 224GB/s... sorry, my mistake.

The RX 480 4GB uses 4Gb modules at 7GHz... how else do you believe it reaches 224GB/s??? 7 modules would give you a 224-bit bus, which gives 196GB/s at 7GHz... each module is a 32-bit bus... 4 modules give you a 128-bit bus... a 256-bit bus is 8 modules.

8x GDDR5 @ 5Gbps on a 256-bit bus = 160GB/s (you can have 2GB, 4GB or 8GB cards)
8x GDDR5 @ 6Gbps on a 256-bit bus = 192GB/s (you can have 2GB, 4GB or 8GB cards)
8x GDDR5 @ 7Gbps on a 256-bit bus = 224GB/s (you can have 2GB, 4GB or 8GB cards)
8x GDDR5 @ 8Gbps on a 256-bit bus = 256GB/s (you can have 2GB, 4GB or 8GB cards)
8x GDDR5 @ 10Gbps on a 256-bit bus = 320GB/s (you can have 2GB, 4GB or 8GB cards)
8x GDDR5 @ 11Gbps on a 256-bit bus = 352GB/s (you can have 2GB, 4GB or 8GB cards)
8x GDDR5 @ 12Gbps on a 256-bit bus = 384GB/s (you can have 2GB, 4GB or 8GB cards)

There are 256-bit GDDR5 options available on the market.


You have no idea what you are talking about...

The bandwidth increase is due to going from 7GHz to 8GHz memory... the amount of memory, whether 4GB, 8GB or even 2GB, has nothing to do with that.
Each module has a bandwidth. The more modules, the more bandwidth you have. One of the main reasons VRAM sizes increased like they did was to add more modules, and therefore more bandwidth.

I wasn't aware that AMD used half-capacity modules in their 4GB cards compared to the 8GB ones in order to keep the module count, and therefore the bandwidth, high.

The issue is you are doing everything x8. If there were only 4x 1GB modules instead of 8x 512MB modules you'd only have x4.
 
Show me some? Show me any RX 480 using over 5GB of VRAM, please.

4GB vs 8GB: http://www.eurogamer.net/articles/digitalfoundry-2016-amd-radeon-rx-480-4gb-vs-8gb-review (the difference is mostly because the 4GB card's RAM runs at slower speeds)
Fallout 4 High Resolution Texture Pack: http://www.hardocp.com/article/2017/02/12/fallout_4_high_resolution_texture_pack_review/5 (around 5GB of VRAM use... these are textures you'd make better use of at 4K, something the 1060/480 won't deliver).

Okay.

Although even if it says it's using that much VRAM, that doesn't mean it needs all of it. Some of this 'usage' may be caching, and performance may be similar on a card with comparable GPU power and less VRAM.

A good indicator of whether the VRAM is actually needed would be whether you encounter stuttering on a card with comparable GPU power but less VRAM at higher texture quality settings.
For example, looking at the performance of a game on an RX 470/480 4GB card and then comparing it to the 8GB models, potentially clocking the memory up or down if necessary. IIRC some 4GB cards had different memory clocks from the 8GB models.

Source

Source

Rise of the Tomb Raider allegedly requires more than 4GB of memory for the highest texture quality setting; IIRC the game recommends 6GB.

My GTX 970 stutters a lot when using this setting, however it does have the segmented VRAM setup of 3.5GB of memory, so I'm uncertain whether 4GB is adequate for it, outside of the Fury X, which supposedly coped well even with its 4GB of memory judging by the findings over at HardOCP.

The AMD Radeon R9 Fury X kind of backs that statement up since it was able to allocate dynamic VRAM for extra VRAM past its 4GB of dedicated VRAM capacity. We saw up to a 4GB utilization of dynamic VRAM. That allowed the Fury X to keep its 4GB of dedicated VRAM maxed out and then use system RAM for extra storage. In our testing, this did not appear to negatively impact performance. At least we didn't notice anything in terms of choppy framerates or "micro-stutter." The Fury X seems to be using the dynamic VRAM as a cache rather than a direct pool of instant VRAM. This would make sense since it did not cause a performance drain and obviously system RAM is a lot slower than local HBM on the Fury X. If you remember a good while ago that AMD was making claims to this effect, but this is the first time we have actually been able to show results in real world gaming. It is awesome to see some actual validation of these statements a year later.
 

ethomaz

Banned
Each module has a bandwidth. The more modules, the more bandwidth you have. One of the main reasons VRAM sizes increased like they did was to add more modules, and therefore more bandwidth.
GDDR5 specification

Each module has a 32-bit bus:

8x modules = 256-bit
4x modules = 128-bit
6x modules = 192-bit (eureka, we found the GTX 1060)

Each module has a speed: 5, 6, 7, 8, 10, 11 or 12GHz (or Gbps).

Each module has a density: 2Gb, 4Gb or 8Gb (256MB, 512MB or 1GB).

After that it's pure MAGICAL math.

RX 480 4GB = 8x 4Gb modules at 7GHz = 224GB/s
RX 480 8GB = 8x 8Gb modules at 8GHz = 256GB/s

Using the numbers I gave you, try to reach 224GB/s of bandwidth with 7GHz GDDR5 any other way... it is impossible... there is no GDDR5 bus that will support that.

I wasn't aware that AMD used half-capacity modules in their 4GB cards compared to the 8GB ones in order to keep the module count, and therefore the bandwidth, high.
There is no other way to do that lol

It is impossible to reach 224GB/s for a 4GB card any other way with GDDR5 at 7GHz.

IMPOSSIBLE... against the specification... against math... against logic.
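To make that arithmetic easy to check, here's a minimal sketch of the calculation (a hypothetical helper; the module counts and speeds are the ones quoted in this thread). Note that module density never appears in the bandwidth formula; it only sets total capacity:

```python
# Peak GDDR5 bandwidth depends only on bus width and transfer speed.
# Each GDDR5 module contributes a 32-bit slice of the bus; density
# (GB per module) only sets total capacity, never bandwidth.
def gddr5_bandwidth_gbs(modules: int, gbps: float) -> float:
    """(modules * 32-bit bus) * Gbps per pin / 8 bits-per-byte = GB/s."""
    return modules * 32 * gbps / 8

configs = [
    # (name, modules, GB per module, speed in Gbps)
    ("RX 480 4GB", 8, 0.5, 7.0),    # 8x 4Gb (512MB) chips @ 7Gbps
    ("RX 480 8GB", 8, 1.0, 8.0),    # 8x 8Gb (1GB) chips @ 8Gbps
    ("GTX 1060 6GB", 6, 1.0, 8.0),  # 6x 8Gb chips on a 192-bit bus
]
for name, modules, gb_per_module, gbps in configs:
    print(f"{name}: {modules * gb_per_module:.0f}GB total, "
          f"{modules * 32}-bit bus, {gddr5_bandwidth_gbs(modules, gbps):.0f}GB/s")
# RX 480 4GB: 4GB total, 256-bit bus, 224GB/s
# RX 480 8GB: 8GB total, 256-bit bus, 256GB/s
# GTX 1060 6GB: 6GB total, 192-bit bus, 192GB/s
```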
 

Nachtmaer

Member
Each module has a bandwidth. The more modules, the more bandwidth you have.

That's only true when you also increase the width of the memory controller. You can't just throw more memory modules at a chip and expect the bandwidth to go up.

A 256-bit memory controller means 8 modules, because they're 32 bits wide (not counting clamshell). If, let's say, you want to use 16 modules, you'd have to double the width of the memory controller. In that case the bandwidth would double IF you keep the frequency the same. Bandwidth = width of the memory controller * frequency the modules run at.

The only thing using higher density modules does is double the amount of VRAM for a certain number of chips.
 
Ugh, I hate the fact I have to go Nvidia. I almost feel like sticking with the AMD card just to support them.

Is there any discrepancy in having a Ryzen CPU? Not much difference whether I go AMD or Nvidia GPU?

And if I HAVE to go Nvidia for CUDA - I knew this would happen - what's the best value 1060 out there right now?
 
That's only true when you also increase the width of the memory controller. You can't just throw more memory modules at a chip and expect the bandwidth to go up.

A 256-bit memory controller means 8 modules, because they're 32 bits wide (not counting clamshell). If, let's say, you want to use 16 modules, you'd have to double the width of the memory controller. In that case the bandwidth would double IF you keep the frequency the same. Bandwidth = width of the memory controller * frequency the modules run at.

The only thing using higher density modules does is double the amount of VRAM for a certain number of chips.
Like the 290X with its 16 modules?

OK, thanks. I wasn't aware, thank you.
 

ethomaz

Banned
Like the 290X with its 16 modules?

OK, thanks. I wasn't aware, thank you.
There is an easy way to find the bandwidth of any GDDR5 RAM.

Bus width * speed of GDDR5 / 8 = bandwidth

E.g. 256 bits * 8GHz / 8 = 256GB/s

* The division by 8 is because you are doing the math in bits, but bandwidth is conventionally expressed in bytes.

There is no GB amount of memory in the calc... of course, you can work out the number of modules from 256 bits / 32 bits = 8 modules.

Here the specs of all GDDR5 modules sold by Micron: https://www.micron.com/products/dram/gddr
 
There is an easy way to find the bandwidth of any GDDR5 RAM.

Bus width * speed of GDDR5 / 8 = bandwidth

E.g. 256 bits * 8GHz / 8 = 256GB/s

There is no GB amount of memory in the calc... of course, you can work out the number of modules from 256 bits / 32 bits = 8 modules.
I didn't realize that bus width directly affected the number of modules. I thought the 8GB cards had a different bus width, I guess.
 
Übermatik;231713729 said:
Ugh, I hate the fact I have to go Nvidia. I almost feel like sticking with the AMD card just to support them.

Is there any discrepancy in having a Ryzen CPU? Not much difference whether I go AMD or Nvidia GPU?

And if I HAVE to go Nvidia for CUDA - I knew this would happen - what's the best value 1060 out there right now?

You probably won't notice the CUDA.

The 480 is also more advanced in its design when it comes to DX12/Vulkan. I would go with the 480 :)
 

Sinistral

Member
Übermatik;231713729 said:
Ugh, I hate the fact I have to go Nvidia. I almost feel like sticking with the AMD card just to support them.

Is there any discrepancy in having a Ryzen CPU? Not much difference whether I go AMD or Nvidia GPU?

And if I HAVE to go Nvidia for CUDA - I knew this would happen - what's the best value 1060 out there right now?

Lol, honestly, if you want to examine further what you're doing in regards to 3D/VFX, you can make a case for going with AMD, but it'll be quite a corner case. But it sounds like you're starting out? It's best not to restrict your options, then.

I myself am not using any CUDA-accelerated features for my 3D/VFX stuff, and am CPU-bound, which is why this Ryzen CPU is an amazing deal. All my stuff in Houdini is CPU-driven, with quite a few OpenCL nodes.

There is no discrepancy in regards to using the Ryzen CPU and nVidia GPU.
 
Lol, honestly, if you want to examine further what you're doing in regards to 3D/VFX, you can make a case for going with AMD, but it'll be quite a corner case. But it sounds like you're starting out? It's best not to restrict your options, then.

I myself am not using any CUDA-accelerated features for my 3D/VFX stuff, and am CPU-bound, which is why this Ryzen CPU is an amazing deal. All my stuff in Houdini is CPU-driven, with quite a few OpenCL nodes.

There is no discrepancy in regards to using the Ryzen CPU and nVidia GPU.

I am starting out, yeah, which is why I'm thinking taking advantage of the price cut on this 480 is a good idea now, even if I find I need to upgrade in the future.
I'll probably see more of a difference in my work from the Ryzen 1700 than from CUDA...

Hm.

-EDIT- Sorry to shit up the thread somewhat, but this is all relevant to Ryzen in the end, and I do need the advice here! Thanks to everyone that's helping!
 