
AMD Ryzen Thread: Affordable Core Act

Paragon

Member
There were enough launch-day benchmarks showing that it wasn't, so no idea where anyone would get this from. Minimum fps being lower is usually a sign of a CPU bottleneck (unless the game is GPU-limited there, of course), but this can mean a single-thread performance bottleneck as well as a 100%-all-cores load bottleneck, and the former happens far more often in modern games than the latter.

Yes, I agree. The charts above seem to be easier for people to understand than the frametimes by percentile graphs that some sites post - though I still think those are the best way to present this data.
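For anyone unfamiliar with how those percentile charts are built, here's a rough sketch in Python. The frame times and the simple nearest-rank method below are illustrative assumptions, not The Tech Report's exact pipeline:

```python
# Sketch: building a "frame time by percentile" figure from raw frame times.
# Data is made up; real runs have thousands of frames.
frame_times_ms = [15.2, 16.1, 14.9, 33.4, 15.0, 16.8, 15.5, 41.0, 15.3, 15.1]

sorted_times = sorted(frame_times_ms)  # ascending: worst frames land at the end

def percentile_frametime(p):
    """Frame time (ms) at the p-th percentile (0-100), nearest-rank style."""
    idx = min(len(sorted_times) - 1, int(p / 100 * len(sorted_times)))
    return sorted_times[idx]

print(percentile_frametime(50))  # median frame time
print(percentile_frametime(99))  # tail latency - the frames you feel as stutter
```

The 99th-percentile number captures the slow frames that an average fps figure hides, which is why those charts are so useful.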

The way that the performance is split like that on the Ryzen chip does make me wonder how much of that is related to the 2x CCX design, and if that may be improved with OS/EFI updates.
 
Are there any good comparisons out there in the wild of streaming while playing with Ryzen (hopefully the 1700) compared to using the 7700K and quicksync?
 

kotodama

Member
·feist·;231623315 said:
I've had an 8c/16t AMD build since NDA lifted. It's fast, games well and chews up most everything I throw at it.

What's your build and are you reaching 4.1Ghz on air?

Ryzen 1700
Asus B350
16GB G.Skill 3000MHz CL15 (leftover from another build)
EVGA 650W G3
Father-in-law's old 760
Phanteks P400s TG
960 Evo 250GB
Ubuntu 16.04 LTS (not for gaming)


Don't mind the cables. :p

Nice, how's it running?

The way that the performance is split like that on the Ryzen chip does make me wonder how much of that is related to the 2x CCX design, and if that may be improved with OS/EFI updates.

It'd be interesting to run that test again on a Windows 7 system to see if that is SMT related.
 
This again. The PS4 with its 8 cores was released at the end of 2013; have we seen many games that use those cores on PC?
I don't think consoles have anything to do with how PC games use their cores.

Cores don't mean as much as clock speed x IPC (instructions per clock).

A 2500K would still eat the PS4's lunch.
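To put rough numbers on the clock speed × IPC point, a back-of-the-envelope sketch. The IPC figures here are illustrative assumptions for the sake of the arithmetic, not measured values:

```python
# Rough single-thread throughput ~= clocks/sec x instructions/clock.
# IPC numbers below are illustrative assumptions, not benchmarks.
def throughput(ghz, ipc, cores=1):
    return ghz * ipc * cores  # billions of instructions/sec, idealized

jaguar_one_core = throughput(1.6, 1.0)  # PS4 Jaguar core (assumed IPC ~1)
sandy_one_core  = throughput(3.3, 2.0)  # i5-2500K core (assumed IPC ~2)

# One stronger core beats one weak core by a wide margin, which is what
# matters for games bound on a single main thread.
print(sandy_one_core / jaguar_one_core)
```

Even granting Jaguar its eight cores, a game bound on one main thread only ever sees the single-core column.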
 

ethomaz

Banned
So much for the "but Ryzen is smoother" nonsense that AMD fans have been pushing the past few days.
There are enough reviews showing frametimes are better on 7700k since release... I don't think anybody tried to say it was better on Ryzen.

That said... this is one really great way to show frametimes... The Tech Report did a good job.

Cores don't mean as much as clock speed x IPC (instructions per clock).

A 2500K would still eat the PS4's lunch.
An Intel dual core at 4GHz would eat the PS4's CPU.
 

dogen

Member
There are enough reviews showing frametimes are better on 7700k since release... I don't think anybody tried to say it was better on Ryzen.

That said... this is one really great way to show frametimes... The Tech Report did a good job.


An Intel dual core at 4GHz would eat the PS4's CPU.

I have one (4.3 even) and it doesn't.

Maybe it's because the software isn't equivalent though.
 

ethomaz

Banned
I have one (4.3 even) and it doesn't.

Maybe it's because the software isn't equivalent though.
Which dual core? I mean after the i-series started. I have an i3-6100 3.7GHz that, paired with an RX 480, has no issue running Battlefield 1 way better than the PS4.

The Pentium Gxxxx series can match PS4's CPU too (if you put a better GPU like 1060 it will even give you better graphics): https://www.youtube.com/watch?v=JxUPJdcChzE

If it is not too old (say, five years or more), I believe most Intel dual-cores will indeed handle games better than the PS4's CPU. Jaguar cores really are that weak... eight of them at 1.6GHz mean very little.
 
Both Sony and Microsoft picked the Jaguar cores for their home consoles. There wasn't anything better given the constraints of the systems their engineers were designing at the time (price included), or their engineers would have used it instead. The whole package of system and software makes consoles what they are, and consoles are proof that software is designed with hardware in mind to achieve the desired results of the game designers (or, more likely these days, the software engineers behind game engines).

This is why we can't take Ryzen benchmarks at face value with it being so new to the scene. A lot of people are saying AMD should have delayed the launch to further polish the release, but tell me: how many people are going to design for unreleased hardware? It's a chicken-and-egg problem. Historically, we had game developers that designed for hardware that didn't yet exist. Does it run Crysis, anyone? These days they don't bother; the Steam survey says not to. Software doesn't push hardware anymore. It's a sad thing for the hardware enthusiast.

The greatest thing to happen in recent years is Vulkan and DX12, which are taking far longer to impact the scene than I had hoped. They take most of the CPU out of the equation from what is currently happening in games, but there's the hope that developers start to find other uses for the spare cycles they get, such as artificial intelligence or better physics.

I've been much more disappointed in gaming software in recent years than I have gaming hardware. Wouldn't it be something if a game came out and was hammered for not pushing top of the line hardware? It happens, but not like the hammering we see when Ryzen falls 5% behind a 7700k in gaming.
 

dogen

Member
Which dual core? I mean after the i-series started. I have an i3-6100 3.7GHz that, paired with an RX 480, has no issue running Battlefield 1 way better than the PS4.

The Pentium Gxxxx series can match PS4's CPU too (if you put a better GPU like 1060 it will even give you better graphics):

If it is not too old (say, five years or more), I believe most Intel dual-cores will indeed handle games better than the PS4's CPU. Jaguar cores really are that weak... eight of them at 1.6GHz mean very little.

G3258, but I can't use DX12, and Vulkan doesn't work in Doom because I only have a GTX 950.
 

dr_rus

Member
Although it's AM4, Bristol Ridge is based on Excavator, not Zen.

You're right, for some reason I confused them with Raven Ridge.

It came from reports (even from reviewers) of long playing sessions as opposed to 30 second runs.

My BF1 runs at >100fps all the time, but I know exactly what they are talking about. Some stutters on my 5GHz i7 are plain bad.

Why would there be any difference, CPU-wise, between long play sessions and 30-second runs? Even if we assume that longer sessions may catch places in some games where a higher number of weaker cores is preferable to a smaller number of stronger ones, you're still looking at a situation where Ryzen is factually worse in minimum-fps spikes in the places that were used in benchmarks, so it ends in a tie at best.
 
You're right, for some reason I confused them with Raven Ridge.



Why would there be any difference, CPU-wise, between long play sessions and 30-second runs? Even if we assume that longer sessions may catch places in some games where a higher number of weaker cores is preferable to a smaller number of stronger ones, you're still looking at a situation where Ryzen is factually worse in minimum-fps spikes in the places that were used in benchmarks, so it ends in a tie at best.

"AMD is stating that Zen implements algorithm learning models for both instruction prediction and prefetch, which will no doubt be interesting to see if they have found the right balance of prefetch aggression and extra work in prediction."

http://www.anandtech.com/show/10907/amd-gives-more-zen-details-ryzen-34-ghz-nvme-neural-net-prediction-25-mhz-boost-steps

Ryzen may perform better after it's run a piece of software for an extended period of time.

I haven't heard much on the learning ability of Ryzen though in any recent reviews. I suspect reviewers don't generally just let a piece of software run for an extended period before collecting data. It could be that Ryzen's learning ability, if it has one as AMD has claimed, resets with each benchmark run. If that's the case performance while gaming for extended periods could increase to be more than what the quick benchmarks show. I doubt there's much effect personally, but that would be a good argument for not accepting the above 40 second benchmark.
 
Ryzen 1700
Asus B350
16GB Gskill 3000mhz CL15 (leftover from another build)
EVGA 650W G3
Father-in-law's old 760
Phanteks P400s TG
960 Evo 250GB
Ubuntu 16.04 LTS (not for gaming)


Don't mind the cables. :p

I'm envious. Nothing like the excitement of a new PC.

The stock AMD cooler's red light doesn't look bad.
 
Maybe I should have put "as of now" in quotes and bolded it; didn't think I had to say that again. <-<;

So, as of now, Ryzen underwhelms in games that make good use of multiple threads. Better?

Also, I guess you missed the part where I said that AMD had better have fixes ready by the time the 1600X launches so they can change the current first impressions.
You should, yes. Because otherwise you were just sounding pretty ignorant. The fact that the Ryzen chips are still comparable to Intel chips despite dealing with a major artificial bottleneck in Windows scheduling speaks well of them. Trying to ignore this makes you look biased.

Maybe you're too young to remember when Intel Hyper-Threading first came out. Intel had the same issues until Windows scheduling caught up; there were people taking a similar stance to yours back then, just aimed at Intel.
 

kotodama

Member
Would be nice if we could get those beastly 128 PCIe lanes and 16-channel memory from Naples on an X99-like board.

Weren't some people gaming on AMD Opterons back in the day? ASRock might be crazy enough even to make a Naples ITX board, lol.
 

Paragon

Member
Looks like The Tech Report have updated the article with normalized charts now:

[Image: GTA V frame time distribution, Ryzen, normalized]

[Image: GTA V frame time distribution, 7700K, normalized]

Source
 
Would be nice if we could get those beastly 128 PCIe lanes and 16-channel memory from Naples on an X99-like board.

Weren't some people gaming on AMD Opterons back in the day? ASRock might be crazy enough even to make a Naples ITX board, lol.

These server parts are all clocked very low to maximize efficiency and reduce heat. It's pretty clear that AMD designed Ryzen as a server/workstation platform first and foremost, but at the speeds Naples will run, it would be terrible for gaming from a performance/$ perspective. For the same tasks Ryzen is strong at, it'd be amazing, however. A pickup truck vs. Mustang sort of situation.
 
Both Sony and Microsoft picked the Jaguar cores for their home consoles. There wasn't anything better given the constraints of the systems their engineers were designing at the time (price included), or their engineers would have used it instead. The whole package of system and software makes consoles what they are, and consoles are proof that software is designed with hardware in mind to achieve the desired results of the game designers (or, more likely these days, the software engineers behind game engines).

Doesn't change the fact that they're still "gimped PCs". Being equipped with Intel Atoms wouldn't change that significantly.

Totally irrelevant to the discussion here with Ryzen, which falls under the standard power and instruction sizes. The 8 core Jaguars are nothing like the 1700/1800s to begin with. Totally different classes.

Also, if Jaguar and the other low-power architectures are anything like Atom, they can decode fewer instructions per cycle.
 

PFD

Member
When I bought my 5820k almost 2.5 years ago it was the same price as the 1700 is now -- and right now, all that time later, it's more expensive. Of course, some of that has to do with exchange rates, but it's still a bit sad to see that stagnation in CPU pricing continue.

In any case, buying a 5820k for 350€ in late 2014 was a pretty good decision :p

I also lucked out with the exchange rate on my 4790K. I got it for 300 CAD at the time, and now the 7700K starts at 450 CAD (6700K is 400)

I'm thinking of getting a 1080Ti, but I'm dreading the Canadian prices
 
Totally irrelevant to the discussion here with Ryzen, which falls under the standard power and instruction sizes. The 8 core Jaguars are nothing like the 1700/1800s to begin with. Totally different classes.

It's completely relevant. Ryzen is a new architecture from AMD. It's not Bulldozer, nor Phenom, nor Conroe, nor Jaguar.

If you've been on NeoGAF for more than a year or two, you'd know how much is said about launch games on consoles. Look at PS3 launch games vs. end-of-life games: they look a generation apart. Compare early PS2 against late PS2: worlds apart.

I'm not saying Ryzen will absolutely have massive gains with time, but it's fair to say it won't get worse. The current low-hanging-fruit optimizations aren't even on the game developers' plate; they are on AMD and the operating system providers (Microsoft), both of which will move quickly to improve Ryzen's performance here. How much is yet to be seen; I don't expect it to match Intel for raw FPS.

The last major change in Intel's architecture was Conroe, around the same time AMD started to decline significantly. Everything has been built for the Intel platform from that time onward, not for AMD's. Bulldozer could have looked better with software targeted at it, however unlikely it is that such things would have saved the architecture. If Itanium had been successful, it would be a completely different world today, but the software would have had to be redeveloped for the Itanium architecture.

If game developers can make a cruddy weakling Jaguar core do what the PS4 and Xbox One are doing with specific optimizations and hardware targeting, that is pretty sufficient evidence that targeting a platform does make a difference and lends itself as evidence to the current discussion around Ryzen. Nothing out there was built around Ryzen, Ryzen was built around the PC software stack in general. Specific optimizations in software will yield specific gains. Saying it is irrelevant is not accurate.
 

rav

Member
Nothing out there was built around Ryzen, Ryzen was built around the PC software stack in general. Specific optimizations in software will yield specific gains. Saying it is irrelevant is not accurate.
I don't understand this part of what you're saying.
It's still x86_64-compatible, but with more cores, maybe even broaching "many-core" territory.
While yes, this is newer territory, for your argument to be sound you'd have to account for the 10-core i7-6950X, which is in the same territory. My point is that Ryzen has the same target, which should be the same as a Kaby Lake i7 or other recent PC CPUs.

Jaguar for the PS4 and XB1 has entirely different targets (platform targets), and MS and Sony could decide they didn't want the same set of registers and operations on the CPU if they didn't want them. (Jaguar is more of a scalpel than a general-purpose CPU like Ryzen. {but probably* not that far from it})

It's entirely doubtful that there is any low hanging fruit from MS in Windows.

edit: *find your own sources that have leaked, I won't be your source :)
 
The evidence something is wrong is right above you. The double-hump histograms are being interpreted in tech circles, right as I type this, as Windows handling threads poorly on Ryzen. Bulldozer required a scheduler change in Windows after it was released, which helped a bit. It has happened before. Way back in the day, game designers wrote certain critical bits of code in assembly for hardware-specific optimization. Also, if all x86 CPUs (x64 in reality these days) were the same, why are new CPUs designed? The specification just says things have to work; how they work can vary.
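On the double-hump point: a frame-time histogram like The Tech Report's can be sketched in a few lines. The data below is invented purely to illustrate what a bimodal ("two hump") distribution looks like when binned:

```python
from collections import Counter

# Hypothetical frame times (ms). A bimodal mix like this shows up as
# two peaks in the histogram - the "double hump" being discussed.
frame_times_ms = [15.1, 15.4, 15.2, 22.8, 15.3, 23.1, 15.0, 22.9, 15.2, 23.0]

BIN_MS = 2  # histogram bin width in ms
hist = Counter(int(t // BIN_MS) * BIN_MS for t in frame_times_ms)

# Crude text rendering: one '#' per frame in each bin.
for bin_start in sorted(hist):
    print(f"{bin_start:>3}-{bin_start + BIN_MS} ms: {'#' * hist[bin_start]}")
```

Two clearly separated peaks in a CPU-bound run suggest frames taking two distinct paths, which is why people suspect thread scheduling across the CCXs.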

Additionally, look at compilers and open-source projects. You could compile Blender with two different compilers and get two different levels of performance. Same code, same hardware, different performance. Get into Gentoo Linux, set all the compile flags specific to your hardware, and you get some of the best computing performance possible. Think the executable for whichever game you're playing is as slim and fast as possible for your machine? Think again; it's compiled to run on a lot of different types of machines, all x86_64. But gamers are typically ignorant of hardware and what is happening in the industry beyond what they hear from other gamers on sites like this.
 

Renekton

Member
What grinds my gears is that Mass Effect and Nier are coming out, but it may be months before the bugs get fully fixed, or they're shown to be permanent and to need a hardware revision. Vega is also vaporware, with the 1080 Ti lifting its skirt at me.
 

dr_rus

Member
"AMD is stating that Zen implements algorithm learning models for both instruction prediction and prefetch, which will no doubt be interesting to see if they have found the right balance of prefetch aggression and extra work in prediction."

http://www.anandtech.com/show/10907/amd-gives-more-zen-details-ryzen-34-ghz-nvme-neural-net-prediction-25-mhz-boost-steps

Ryzen may perform better after it's run a piece of software for an extended period of time.

I haven't heard much on the learning ability of Ryzen though in any recent reviews. I suspect reviewers don't generally just let a piece of software run for an extended period before collecting data. It could be that Ryzen's learning ability, if it has one as AMD has claimed, resets with each benchmark run. If that's the case performance while gaming for extended periods could increase to be more than what the quick benchmarks show. I doubt there's much effect personally, but that would be a good argument for not accepting the above 40 second benchmark.

Because it's "learning" on the stream of instructions coming into the CPU, not on "a piece of software", and the result of such learning is available immediately, not after "an extended period of time".
 
Because it's "learning" on the stream of instructions coming into the CPU, not on "a piece of software", and the result of such learning is available immediately, not after "an extended period of time".

What do you suppose software is, exactly, in relation to a CPU? Oxymoron much? I made it clear in my post that I have no idea how this "learning algorithm" works. I am fairly certain it isn't instantaneous, because that's not how learning works. Even if it were perfect, it would take a minimum of one cycle of experience to change how the next cycle is processed. AI neural networks have to observe a system of causes and effects for some period of time to form any sort of fuzzy understanding of the system they are observing. They don't do it immediately. Before you reply with "there's no neural network in Ryzen": that is pretty damn obvious. And before you reply that 30 seconds is not a short amount of time to a CPU: that is also pretty damn obvious, but it takes the human brain a freaking long time to figure out simple things. I have no idea what AMD was talking about; I was pointing out there is an argument that you can't take the first thirty seconds of a benchmark and call it a day with a learning system. That argument holds water.

Seeing as "software" is a stream of instructions, and the CPU is unable to observe a software "package" as a whole, wouldn't that mean the CPU needs time to recognize the software's pattern and formulate optimizations for it? From the CPU's perspective there's no telling what the software is about until it's received some number of instructions. A pattern of a certain length would require at least one full cycle of the pattern to occur before it could be positively recognized, no? Learning in general is so inefficient that one cycle is probably far from enough. Of course, like I stated before, I don't think this has any significant impact on Ryzen's FPS benchmarks, but don't discount the argument that a 30-second benchmark does not necessarily provide a good representation of the data.
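The warm-up argument can be illustrated with a toy predictor. This is a classic two-bit saturating counter, not Zen's actual (far more sophisticated, perceptron-style) predictor; it only demonstrates the general point that a history-based predictor mispredicts until it has observed the pattern:

```python
# Toy 2-bit saturating-counter branch predictor: mispredicts at first,
# then adapts once it has seen the branch outcome a few times.
class TwoBitPredictor:
    def __init__(self):
        self.state = 0  # 0-1 predict not-taken, 2-3 predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Saturating counter: nudge toward the observed outcome.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
history = [True] * 10  # a branch that is always taken
hits = 0
for taken in history:
    hits += (p.predict() == taken)
    p.update(taken)

# The first couple of predictions miss while the counter "warms up";
# accuracy is perfect once the pattern has been observed.
print(hits)  # 8 of 10 correct
```

Real predictors warm up in nanoseconds-to-microseconds, so this probably says little about a 30-second benchmark, but the principle that learning needs observation time is sound.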
 

spyshagg

Should not be allowed to breed
Why would there be any difference, CPU-wise, between long play sessions and 30-second runs? Even if we assume that longer sessions may catch places in some games where a higher number of weaker cores is preferable to a smaller number of stronger ones, you're still looking at a situation where Ryzen is factually worse in minimum-fps spikes in the places that were used in benchmarks, so it ends in a tie at best.

Ask that question out loud to yourself and see if it makes sense. Of course there are differences. It's a play-through, not a bench.

Check the Watch Dogs 2 Ryzen YouTube benches, where the reviews were done driving inside the city instead of the default walk around the beach.

I'm not defending Ryzen; I'm defending longer runs = more data.
 
Thought I'd ask here as I didn't get much response in the PC thread, and this is to be coupled with a Ryzen 1700 and a B350 Prime Mobo - here's my post:

Übermatik;231668953 said:
So is the Sapphire 8 GB NITRO+ the best RX 480 out there? [£237.92]

'Cos I just found this on Amazon, at a £100 (35%) discount:


Asus AMD Radeon DUAL-RX480-O8G 8 GB GDDR5 256-bit PCI Express 3 DVI/HDMI/DisplayPort Graphics Card - Black (1325 MHz OC, if I'm not mistaken?)

https://www.amazon.co.uk/dp/B01N9GQY5T/


I just don't know if the 1342 MHz clock on the NITRO card is OC or base, and thus whether it warrants the extra price or means I should snap up that Asus deal whilst it's going.

Help?

Which card is better here? What should my choice be?
 
Übermatik;231688723 said:
Thought I'd ask here as I didn't get much response in the PC thread, and this is to be coupled with a Ryzen 1700 and a B350 Prime Mobo - here's my post:



Which card is better here? What should my choice be?

I'd probably get the Sapphire, but it's not going to perform so much better than the Asus as to justify the roughly 15% price difference.

Vega is 2-3 months away. Imo buying any GPU right now is like buying a CPU a couple of months before Ryzen was released. I have always waited when this close to a release, even though prices never drop as much as I'd like. Used GPUs are also holding their value far more than they should these days, or that's where I'd point you.

Furies were going for around $250 recently and provide much more bang for the buck than 480s. They also run much hotter and don't have quite the latest feature set, nor 8GB of VRAM.

I'd also wait to build a Ryzen system myself, to let any BIOS or Windows kinks get ironed out. If I were buying right now and couldn't wait, I might get the cheaper 480 and plan on selling it to raise funds for a Vega. You'd lose maybe $50 to have a GPU for two or three months until you could see what Vega offers. If I didn't plan on gaming above 1080p 60Hz and had no intention of upgrading soon, I'd get the Sapphire.
 
I'd probably get the Sapphire, but it's not going to perform so much better than the Asus as to justify the roughly 15% price difference.

Vega is 2-3 months away. Imo buying any GPU right now is like buying a CPU a couple of months before Ryzen was released. I have always waited when this close to a release, even though prices never drop as much as I'd like. Used GPUs are also holding their value far more than they should these days, or that's where I'd point you.

Furies were going for around $250 recently and provide much more bang for the buck than 480s. They also run much hotter and don't have quite the latest feature set, nor 8GB of VRAM.

I'd also wait to build a Ryzen system myself, to let any BIOS or Windows kinks get ironed out. If I were buying right now and couldn't wait, I might get the cheaper 480 and plan on selling it to raise funds for a Vega. You'd lose maybe $50 to have a GPU for two or three months until you could see what Vega offers. If I didn't plan on gaming above 1080p 60Hz and had no intention of upgrading soon, I'd get the Sapphire.

But it's only the high-end Vegas coming soon, right? And they're not even detailed yet.
I thought 480 equivalent GPUs in the Vega architecture were hitting much later in the year, if not next.
 

ethomaz

Banned
Übermatik;231694827 said:
But it's only the high-end Vegas coming soon, right? And they're not even detailed yet.
I thought 480 equivalent GPUs in the Vega architecture were hitting much later in the year, if not next.
Yeap... you can buy now... Vega will be a $400+ card for the high end only.

The RX 580 will be a rebranded RX 480... waiting 3-4 months to get the same thing makes no sense at all, even if you get the RX 480 a bit cheaper.

Of the two you listed, the Asus looks to be the better deal.

PS: This wait mentality is an endless loop... buy what is best for you right now, because when Vega gets out you'll see that in 3-4 months another card will be launched.
 
Yeap... you can buy now... Vega will be a $400+ card for the high end only.

The RX 580 will be a rebranded RX 480... waiting 3-4 months to get the same thing makes no sense at all, even if you get the RX 480 a bit cheaper.

Of the two you listed, the Asus looks to be the better deal.

PS: This wait mentality is an endless loop... buy what is best for you right now, because when Vega gets out you'll see that in 3-4 months another card will be launched.

Yeah, as I thought. If I keep waiting and waiting I'll never buy a thing. And with the price drop, the Asus makes a lot of sense.

Now last question... why on earth would I consider buying a 1060 over that 480 card? Less VRAM, higher price... but higher clock and CUDA...?
 

Pagusas

Elden Member
Übermatik;231696845 said:
Yeah, as I thought. If I keep waiting and waiting I'll never buy a thing. And with the price drop, the Asus makes a lot of sense.

Now last question... why on earth would I consider buying a 1060 over that 480 card? Less VRAM, higher price... but higher clock and CUDA...?

Nvidia driver support, G-Sync support, specific game-effect support, Nvidia Shield and broadcasting support, awesome screenshot tools via Nvidia Ansel. Lots of reasons to prefer Nvidia over AMD; whether it's worth the cost premium is another question entirely.

Honestly though, I don't feel the 1060 or 480 are good purchases; 1070 or bust, especially with the price drop.
 
Übermatik;231694827 said:
But it's only the high-end Vegas coming soon, right? And they're not even detailed yet.
I thought 480 equivalent GPUs in the Vega architecture were hitting much later in the year, if not next.

Nobody knows if it's only the high-end Vegas; that's an assumption. Having more data helps you make a better decision. For example, you haven't told us anything about your situation other than that you are putting together a Ryzen 1700 system. You haven't said whether you're only gaming, or whether you play at 1080p 60Hz, 144Hz, 1440p, or might go 4K someday. You haven't told us your financial situation, so we don't know what money's real value to you might be. $50 to me isn't a big deal, but to you it could be.

Consider that Vega is the first real architecture change for AMD on the graphics card side of things since the first Graphics Core Next (GCN) GPUs came out. They are going to a tile based rendering system that has proven so effective for NVidia these last four or so years. If AMD has made any real significant progress, such that their own products eclipse the Polaris GPUs, you could see the RX480 dropped to a much lower tier in their lineup. Whether this will happen I don't know, but waiting some period of time for the information could be worthwhile to you.

Also, if all you plan to do is game, you could wait for the R5 chips to come out and perhaps save enough money to get a more middle-tier Vega. Ask yourself: do you think a 6-core Ryzen with a GTX 1070 would outperform an 8-core Ryzen with an RX 480 for the same price in gaming? I certainly think so. There's probably an 18-month lifespan difference between those systems. If you were to swap the GPU in the future, though, the 8-core system would be the better way to go today.

Even if you were to just get the Asus 480, you'd have quite a solid gaming machine and probably be very happy. My bet is AMD isn't going to outright discontinue the Polaris series, and it may not even get much of a price drop if recent trends continue, but a couple of months of waiting to know would be worth it to me if I want my purchase to last five years.

You didn't get an answer in the other thread because people are hesitant to answer a question without knowing enough data to form a solid opinion.
 
Nvidia driver support, G-Sync support, specific game-effect support, Nvidia Shield and broadcasting support, awesome screenshot tools via Nvidia Ansel. Lots of reasons to prefer Nvidia over AMD; whether it's worth the cost premium is another question entirely.

Honestly though, I don't feel the 1060 or 480 are good purchases; 1070 or bust, especially with the price drop.

What's the difference in drivers, specifically? Just more prolific? And I won't be making use of gsync any time soon. Hairworks, PhysX, Ansel... not particularly bothered.

The 6GB is £350 more expensive and the 1070 is £359.99, out of my range.

Not seeing much sense in that to be honest.
 
Übermatik;231696845 said:
Yeah, as I thought. If I keep waiting and waiting I'll never buy a thing. And with the price drop, the Asus makes a lot of sense.

Now last question... why on earth would I consider buying a 1060 over that 480 card? Less VRAM, higher price... but higher clock and CUDA...?


I don't like to support Nvidia, so I don't buy their stuff. Companies trying to create walled gardens irritate me. Currently the 1060 and 480 are more or less equivalent in real-world performance, so I couldn't recommend one over the other except on price. I will say there are more high-quality G-Sync monitors than Freesync ones, if that sways your decision, but there are more cheap Freesync monitors than G-Sync ones. I wish Nvidia would just support the open VESA adaptive-sync standard. If they did that, quit with the GameWorks nonsense, and supported OpenCL instead of pushing proprietary CUDA, I'd have a much higher opinion of them. I do recommend Nvidia GPUs to others when they make sense, however; my bias applies only to my own personal decision-making.
 
Nobody knows if it's only the high-end Vegas, that's an assumption. Having more data helps you make a better decision. For example, you haven't told us anything about your situation other than that you are putting together a Ryzen 1700 system. You haven't said whether or not you're only gaming, playing at 1080p 60hz, 144hz, 1440p, or might go 4k someday. You haven't told us your financial condition such that we know what money's real value to you might be. $50 to me isn't a big deal, but to you it could be.

Consider that Vega is the first real architecture change for AMD on the graphics card side of things since the first Graphics Core Next (GCN) GPUs came out. They are going to a tile based rendering system that has proven so effective for NVidia these last four or so years. If AMD has made any real significant progress, such that their own products eclipse the Polaris GPUs, you could see the RX480 dropped to a much lower tier in their lineup. Whether this will happen I don't know, but waiting some period of time for the information could be worthwhile to you.

Also, if all you plan to do is game, you could wait for the R5 chips to come out and perhaps save enough money to get a more middle-tier Vega. Ask yourself: do you think a 6-core Ryzen with a GTX 1070 would outperform an 8-core Ryzen with an RX 480 for the same price in gaming? I certainly think so. There's probably an 18-month lifespan difference between those systems. If you were to swap the GPU in the future, though, the 8-core system would be the better way to go today.

Even if you were to just get the Asus 480, you'd have quite a solid gaming machine and probably be very happy. My bet is AMD isn't going to outright discontinue the Polaris series, and it may not even get much of a price drop if recent trends continue, but a couple of months of waiting to know would be worth it to me if I want my purchase to last five years.

You didn't get an answer in the other thread because people are hesitant to answer a question without knowing enough data to form a solid opinion.

Not that I'd expect you to, but you'll find I've detailed my intentions earlier in the thread.

The machine is primarily for creation/productivity. 3D modelling and rendering, VFX, game development and digital art. Gaming comes second to these priorities, and I'll be looking at either 1920x1080 60fps or 2560x1080 60fps - and even then, mostly in less demanding games like DOTA 2.

For this, a Ryzen 1700 and RX 480 seem to be making the most sense to me.
 

ethomaz

Banned
Übermatik;231696845 said:
Yeah, as I thought. If I keep waiting and waiting I'll never buy a thing. And with the price drop, the Asus makes a lot of sense.

Now last question... why on earth would I consider buying a 1060 over that 480 card? Less VRAM, higher price... but higher clock and CUDA...?
If you are comparing the 3GB version, then that really is too little VRAM in my view, but the 6GB version won't lose to the 8GB RX 480 even in future games... in that case I don't see any issue with the different VRAM amounts.

That said, GTX 1060 6GB vs RX 480 8GB comes down more to personal preference and price... you won't go wrong with either of them... both are solid cards.

If you don't have a preference, then go with whichever you find cheaper, imo.
 