
AMD Ryzen Thread: Affordable Core Act

Biostar X370-GTN ITX

https://www.facebook.com/BiostarHK/posts/1069865689792162

Biostar HK added 3 new photos.

See what we got! The first #AMD X370 mini-ITX motherboard is coming your way.
#BIOSTAR #RACINGseries #SFF #RYZEN

BIOSTAR Shows off First Mini-ITX Socket AM4 Motherboard
https://www.techpowerup.com/231205/biostar-shows-off-first-mini-itx-socket-am4-motherboard
BIOSTAR showed off the industry's first socket AM4 motherboard in the mini-ITX form-factor, the Racing X370-GTN, based on AMD's top of the line X370 chipset. The board draws power from a combination of 24-pin ATX and 4-pin CPU power connectors, and supports all models of Ryzen processors, although we're curious how XFR will work with such slim power inputs. The board conditions power for the SoC using a 7-phase VRM.

The socket AM4 chip is wired to two DDR4 DIMM slots, supporting up to 32 GB of dual-channel DDR4-2666 memory, and the PCI-Express 3.0 x16 slot. Since this is an SoC, most of the board's connectivity comes from the processor, too. This includes two of the board's four SATA 6 Gb/s ports, a 32 Gb/s M.2 slot (reverse side, unseen), 2-4 USB 3.0 ports, and the display I/O. The X370 chipset puts out two additional SATA 6 Gb/s ports, and wires out the HD audio (115 dBA SNR CODEC) and a Realtek DragonLAN GbE controller. The company didn't reveal availability details.
Your turn, Asus Impact.
 

K.Jack

Knowledge is power, guard it well
And the Ryzen 5 1600X which will cost $259 with 6 cores at 4 GHz (boost)?

You think it won't be blown the fuck out by the 7700K?

Being $100 cheaper won't matter, when the performance numbers say that it's worth spending the extra $100.
 

belvedere

Junior Butler
I stopped reading when they started saying both do over 100 fps and that's fine for everybody, without realizing that you use the same CPU for years, and next year the difference could be one below 30 fps and the other above 30 fps.

All benches show the same thing: Intel CPUs having better gaming performance while costing way less.

The opposite is true for production software like encoding, where the AMD CPUs have better performance while costing less.

Ryzen is indeed bad as a gaming platform, because it delivers less while costing more, just as Intel is bad as a production platform, delivering less while costing more.

The post is not great... it is just a fan crying over something that won't change, using the wrong metrics and arguments.

If you cherry-pick one exaggerated, rare instance in a particular benchmark, then maybe? This post was in response to the statements that the Ryzen platform is a failure in general, beyond gaming. When it can outperform competing-class chips at half the cost, that statement is unequivocally incorrect. Besides, I don't recall seeing one review echoing that sentiment.

Many reviews say Ryzen, in its present first-generation launch-week state, is in i5 territory in regards to gaming. Linus, Ars and Anandtech (paraphrasing) said the platform is sufficient for gamers whose interests aren't purely based on hitting the FPS ceiling.

So no, at this early stage I can't honestly convince myself that Ryzen is doomed.
 

Steel

Banned
You think it won't be blown the fuck out by the 7700K?

Being $100 cheaper won't matter, when the performance numbers say that it's worth spending the extra $100.

The 7700K will be better, but the R7s are hardly being "blown the fuck out" in gaming, as you put it. Being $100 cheaper will absolutely be a factor, especially since the extra cores make it better at multithreaded non-gaming tasks and multitasking at the same time.
 
Yeah, so maybe we can put Ryzen in the same category.

That's perfectly fine. I think the main takeaway is that the Ryzen chips that were just released are mainly competitors to chips like Broadwell-E. That is to say, good for things like content creation, serviceable for gaming, though other chips like the Kaby Lake i5s and i7s will remain better buys if gaming is the sole concern.
 

tuxfool

Banned
Ryzen is extremely viable for the same kind of things that you would use Broadwell-E for, mainly workloads that don't just involve pure gaming. Even for gaming, it's "good enough", but if your goal was, as he said, "pure gaming rig", it would appear that the 7700k would still be the main recommendation.

Calling it a total failure is a little harsh. What he said wasn't untrue in the sense that it did fail to get the performance crown for gaming, but it's definitely not an "overall" failure. It's a positive development for AMD in the CPU space, and a great first step for their future strategy in the CPU space.

Additionally, I'd say the results so far in terms of power and TDP are very promising. Ryzen should also be good when they release Naples for dense servers and Raven Ridge in laptops assuming those two remain on 14nm LPP.
 

shark sandwich

tenuously links anime, pedophile and incels
·feist·;231452345 said:
This is really strange because X300 is supposed to be the "enthusiast" mini-ITX chipset. Wonder if those motherboards are still a ways away.

Anyway, I agree that Ryzen is somewhat disappointing as a pure gaming chip. As a Broadwell-E competitor though, it's pretty great. I ordered one for myself :)
 
It might be the same scalability issue in heavily communicating threads that is also a factor in other benchmarks.

That's not really true though?
I paid around 350€ for the 5820k in 2014. Now the Ryzen 7 1700 (never mind X or 1800) is 360€.

I agree as to the significance of the WD2 results, but your remark about the 7700k doesn't hold true in frametimes at Computerbase at least:
[ComputerBase frametime percentile chart for Watch Dogs 2]


That said, Intel's 6- and 8-core CPUs show much better scaling.

Hmm this is very interesting, it will be fascinating to see how Ryzen performs in the future.
 
I'mma be real with you.

Yes, Ryzen is a failure for a pure gaming rig, because the $500 flagship loses to a $350 Intel chip.

That is textbook failure.

What about the 1700, which is $20 cheaper than a 7700K, but can be clocked as high as the 1800X in most situations?


And the Ryzen 5 1600X which will cost $259 with 6 cores at 4 GHz (boost)?

I think $259 is a highball number. Where did it come from? If AMD prices the 1600X over a 7600K they are bonkers.

You think it won't be blown the fuck out by the 7700K?

Being $100 cheaper won't matter, when the performance numbers say that it's worth spending the extra $100.

The 1600X will not be "blown the fuck out" by 7700K, like at all. My guess is that it will perform very similarly to the R7s clock for clock in gaming. So it might be 20% max behind the 7700K in some games and will come with all the other benefits of additional cores/threads.
 
Ok yea, you know that was a bad position for me to take. I was talking out of my ass. We may not be hitting the GPU bottlenecks now, but down the line it would certainly be more important for longevity. AMD trying to defend it with "oh, we're targeting higher resolutions" is a cop-out, even if it holds true in the long run.
 

wildfire

Banned
What about the 1700, which is $20 cheaper than a 7700K, but can be clocked as high as the 1800X in most situations?




I think $259 is a highball number. Where did it come from? If AMD prices the 1600X over a 7600K they are bonkers.



The 1600X will not be "blown the fuck out" by 7700K, like at all. My guess is that it will perform very similarly to the R7s clock for clock in gaming. So it might be 20% behind the 7700K in some games and will come with all the other benefits of additional cores/threads.

Some? It will be most, just like the 1700. In fact, Gamers Nexus went over the feedback they had gotten from AMD, and AMD expects to be behind by ~20% by default: ~8% IPC difference with Kaby Lake and ~13% clock speed difference.

https://www.youtube.com/watch?v=TBf0lwikXyU

I'm not going to rewatch to pinpoint where he goes over that, but the entire video is worth listening to if you only skimmed the answers to why Ryzen is having wildly variable gaming performance across reviews.
 
Some? It will be most, just like the 1700. In fact, Gamers Nexus went over the feedback they had gotten from AMD, and AMD expects to be behind by ~20% by default: ~8% IPC difference with Kaby Lake and ~13% clock speed difference.

https://www.youtube.com/watch?v=TBf0lwikXyU

I'm not going to rewatch to pinpoint where he goes over that, but the entire video is worth listening to if you only skimmed the answers to why Ryzen is having wildly variable gaming performance across reviews.

I watched the video and I've looked at benchmarks. Techspot's Ryzen review shows a huge variance, from 10% to 30%. What's interesting is that Tom's Hardware shows a similar spread, but if you look at AotS, it goes from a 30% gap to 20% with SMT disabled. Which validates the video you linked about how some games favor Intel's HT vs AMD's SMT. So yes, at this point disabling SMT on a per-game basis would be a pain in the ass, but going forward games will hopefully support both. So it won't be an issue.
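In the meantime, something like this (an untested sketch, assuming psutil is installed and using a hypothetical game PID) can approximate "SMT off" for a single game by pinning it to one logical processor per core, no BIOS trip required:

```python
# Untested sketch: approximate "SMT off" for one process by pinning it
# to one logical processor per physical core. Assumes Windows enumerates
# SMT siblings adjacently (core n -> logical CPUs 2n and 2n+1), which is
# the usual layout but worth verifying on your own system.
import psutil

def pin_to_physical_cores(pid):
    proc = psutil.Process(pid)
    one_per_core = list(range(0, psutil.cpu_count(logical=True), 2))
    proc.cpu_affinity(one_per_core)  # e.g. [0, 2, 4, 6, 8, 10, 12, 14] on an R7

pin_to_physical_cores(1234)  # 1234 is a hypothetical game PID
```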
 

IC5

Member
There are almost no games which scale to so many threads. Turning off SMT for gaming is a no-brainer.

The only games I play which scale and show benefit from more than 4 cores are Battlefield games, during multiplayer. And I'm talking real cores. I haven't tested hyperthreading on BF.

A lot of games are just fine on two cores. Far Cry 4 requires a minimum of 3-thread awareness from your CPU. My i3-6100 chews through that game, no problem. Albeit it needs hyperthreading activated, since it's a dual core. But that's the point. Even a 3-thread game is fine on two real cores.

12- and 16-thread awareness from your CPU means nothing for gaming right now. Don't feel bad about turning off SMT or hyperthreading to gain raw performance, save power, lower temps, or help squeeze out an overclock.

SMT and hyperthreading are mainly for highly threaded productivity and workplace scenarios.
 
You think it won't be blown the fuck out by the 7700K?

Being $100 cheaper won't matter, when the performance numbers say that it's worth spending the extra $100.

You would need a 1080 and play at 1080p to even see noticeable differences between these CPUs, which is why your hyperbole is ridiculous.
 
Well, the only thing I see is that the 8-core line provides good enough gaming performance, with the possibility that it will improve with mainboard BIOS updates and software optimizations; modern engines with improved scalability will also improve Ryzen's relative performance, and all for a lower price than comparable 8-core chips.
 
AMD's return to a competitive footing versus Intel in desktop PC processors is quite likely the story of the decade in the computing sector. However, after rushed, day-one reviews of the chips detailed a few soft spots in performance -- specifically lower resolution 1080p gaming -- some have run away with headlines and generalized opinions that don't take into consideration the realities of early release, cutting-edge semiconductor products, especially something as complex as a brand new CPU platform architecture.

If there's one thing I've learned in almost two decades of both selling and covering semiconductor platform technologies as a sales engineer and press media, it's that optimization of hardware, firmware and software can make a world of difference after a new product has a few months of maturity under its belt. It's easy to see why many were alarmed about Ryzen's uncharacteristic early shortcomings in gaming. After all, the massive and growing PC gaming industry and its passionate enthusiast following will be the bread and butter market for AMD's hot new 8-core Ryzen processors. However, too few have taken the collective deep breath to view this data point for what it is, currently at least, a "corner case".

As I spoke with many of my colleagues, both analysts and press media, it was clear that universally everyone was seeing the same lower performance metrics for AMD Ryzen chips. When game benchmarks were dialed back to 1920x1080 resolution (1080p), Ryzen was uncharacteristically slower than its Intel counterparts, sometimes by as much as 30%-40%. This shortcoming flies in the face of virtually every other benchmark condition and result we've seen so far with the chips, from heavy-duty workstation number crunching, rendering and content creation, to standard everyday productivity compute tasks that require simple, fast short-burst IO response times. Here, Ryzen goes toe-to-toe with Intel's HEDT and even beats it in spots. And when you dial resolutions back up, Ryzen processors easily keep pace with Intel's core-for-core in gaming. However, in that case, the test condition is essentially a graphics benchmark, since the majority of the processing is being done on the GPU rather than the CPU.

https://www.forbes.com/sites/daveal...-ryzen-in-overly-critical-light/#6a5b3e9013e2

Some common sense from some of the posters spouting hyperbole in here would be beneficial to everyone.
 
You would need a 1080 and play at 1080p to even see noticeable differences between these CPUs, which is why your hyperbole is ridiculous.

It depends on the frame rate you're targeting and the settings you're using; you could always reduce the settings on lower-end GPUs until you're close to the frame-rate target of your desire.

A GTX 980 Ti, as well as future GPUs, will be able to expose CPU limitations in some games, especially GTA V and Battlefield 1.
 

Durante

Member
https://www.forbes.com/sites/daveal...-ryzen-in-overly-critical-light/#6a5b3e9013e2

Some common sense from some of the posters spouting hyperbole in here would be beneficial to everyone.
This shortcoming flies in the face of virtually every other benchmark condition and result we've seen so far with the chips, from heavy-duty workstation number crunching, rendering and content creation, to standard everyday productivity compute tasks that require simple, fast short-burst IO response times.
Then they didn't do enough benchmarking. Look at the Photoshop, SunSpider, WinRAR, computational fluid dynamics, or 3D particle mover application benchmarks at Computerbase. Many workloads which aren't embarrassingly parallel, or which are memory-heavy, display very similar performance patterns to CPU-limited (it feels weird that I have to mention that) game benchmarks. Which also generally aren't embarrassingly parallel workloads outside of rendering, so that makes sense.
 
It depends on the frame rate you're targeting and the settings you're using; you could always reduce the settings on lower-end GPUs until you're close to the frame-rate target of your desire.

A GTX 980 Ti, as well as future GPUs, will be able to expose CPU limitations in some games, especially GTA V and Battlefield 1.

Well here they're benching with a Titan X to expose a performance gap in GTAV.

[GTA V 1080p benchmark chart]


Even then, the difference in average fps (Ryzen's min fps is actually higher) is 91 fps for Ryzen versus 112 fps for the 7700K at 1080p, like I said with a Titan X. Now I don't know about you but I wouldn't be able to tell the difference between 91 fps and 112 fps.

If you sling a 980 Ti in there, the difference is going to be significantly smaller. Maybe 91 fps versus 95 fps. Drop settings in the game low enough and you would end up with 182 fps for the Ryzen versus 190 fps for the 7700K.
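To put those gaps in frame-time terms, a quick back-of-envelope using the numbers above:

```python
# The same fps gap shrinks dramatically in per-frame milliseconds,
# which is closer to what you actually feel.
def frame_ms(fps):
    return 1000.0 / fps

print(frame_ms(91) - frame_ms(112))   # ~2.06 ms/frame gap at the Titan X numbers
print(frame_ms(182) - frame_ms(190))  # ~0.23 ms/frame gap with settings dropped
```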
 
Then they didn't do enough benchmarking. Look at the Photoshop, SunSpider, WinRAR, computational fluid dynamics, or 3D particle mover application benchmarks at Computerbase. Many workloads which aren't embarrassingly parallel, or which are memory-heavy, display very similar performance patterns to CPU-limited (it feels weird that I have to mention that) game benchmarks. Which also generally aren't embarrassingly parallel workloads outside of rendering, so that makes sense.

Anandtech did a more comprehensive battery of tests:
http://www.anandtech.com/show/11170...review-a-deep-dive-on-1800x-1700x-and-1700/18

I've read that the memory issues are the No. 1 priority for AMD to resolve now. Apparently the Windows driver doesn't fully support Ryzen yet.

But simply put, I don't think 5-8% lower IPC and ballpark frequency translate to a 40% deficit in 1080p gaming performance.
 

eva01

Member
My plan is to get the 1700 and try to OC to 3.8-4.0. I'm currently using an i5-4670K @ 4.4GHz with a GTX 980 Ti and a 1440p monitor. At $329 that is amazing value for me, because I stream my gameplay occasionally and have multiple applications running on a second monitor.
 

ezodagrom

Member
https://www.forbes.com/sites/daveal...-ryzen-in-overly-critical-light/#6a5b3e9013e2

Some common sense from some of the posters spouting hyperbole in here would be beneficial to everyone.
Sure, it could end up getting better in the future, or updates and patches could end up making just a minor difference; we'll see what happens.

But this is what we have now: a great CPU for productivity but underwhelming for gaming.

If I were to buy a new CPU now, as a gamer the i7 7700K would be the obvious choice; I'm not gonna buy something based on promises that it'll get better.
 

longdi

Banned
Ryzen IPC is around Broadwell level, so it's puzzling why it's struggling with games.

Probably down to 2 things.

1. Core clock maxes out at 3.9-4GHz when loading 4-8 threads.

2. Its dual-CCX structure needs games/Windows to be smarter in switching threads.

Can these 2 issues be fixed in software?
 
Ryzen IPC is around Broadwell level, so it's puzzling why it's struggling with games.

Probably down to 2 things.

1. Core clock maxes out at 3.9-4GHz when loading 4-8 threads.

2. Its dual-CCX structure needs games/Windows to be smarter in switching threads.

Can these 2 issues be fixed in software?

Apparently switching to Performance Mode in the Windows power plan gives performance gains of 5-15% in games as well. I'm willing to bet some reviewers didn't do that, but how were they to know:

https://www.youtube.com/watch?v=mW1pzcdZxKc&t=537s
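For anyone who wants to flip it from a script rather than digging through Control Panel, a minimal sketch (SCHEME_MIN is Windows' built-in powercfg alias for the High Performance plan):

```python
# Minimal sketch: activate the built-in High Performance power plan.
# SCHEME_MIN is the documented powercfg alias for High Performance
# ("minimum power savings"); run from an elevated prompt if needed.
import subprocess

subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)
```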
 

Colbert

Banned
Don't know if this was already posted here, but the (German) website 3dcenter.org made an overview where they averaged the overall performance ratings from several hardware sites to see how the Ryzen CPUs rank compared to their Intel counterparts. You should also read the article, where they explain their approach and make some remarks about how some of the testing was done (Google Translate will be your friend for non-German speakers).

Link: http://www.3dcenter.org/news/amd-ryzen-launchreviews-die-testresultate-zur-anwendungs-performance-im-ueberblick
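The gist of that kind of aggregation, as a hedged sketch (the scores below are made-up examples, not 3DCenter's actual data): normalize each site's overall result against a common baseline CPU, then average so no single outlet dominates.

```python
# Hedged sketch of cross-review aggregation: each value is one site's
# Ryzen result divided by its result for a baseline CPU. A geometric
# mean keeps one outlier review from skewing the overall rating.
# The scores below are hypothetical illustration, not 3DCenter's data.
from math import prod

def aggregate(relative_scores):
    return prod(relative_scores) ** (1.0 / len(relative_scores))

print(aggregate([0.93, 1.02, 0.97, 0.88]))  # ~0.95x the baseline CPU
```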

 

Nachtmaer

Member
Ryzen IPC is around Broadwell level, so it's puzzling why it's struggling with games.

Probably down to 2 things.

1. Core clock maxes out at 3.9-4GHz when loading 4-8 threads.

2. Its dual-CCX structure needs games/Windows to be smarter in switching threads.

Can these 2 issues be fixed in software?

1. Short answer is no. There are probably multiple reasons why Ryzen doesn't clock as high: GloFo's 14LPP voltage/frequency curve, transistor density, process immaturity, being a relatively big chip with eight cores, etc. In Ryzen's defense, Intel's HEDT chips don't OC as well as their quad cores either without turning into a furnace, but they still have a lot more headroom compared to Ryzen. This might get better over time.

2. Probably to some extent. Windows doesn't seem fully aware of the CCXs and SMT yet. PCGH tested running their 1800X in a 2+2 and a 4+0 configuration and saw differences of about 1-10%. Then there's SMT, which can carry about a 10-15% penalty. We'll have to see whether this can be fixed or mitigated.
 

FireFly

Member
You think it won't be blown the fuck out by the 7700K?

Being $100 cheaper won't matter, when the performance numbers say that it's worth spending the extra $100.
The 7700K is clocked 12.5% faster than the 1800X and according to AMD they are 6.5% behind on IPC, so that gives an overall delta of ~20%. I think that is reflected in the benchmarks. For example, the more favourable Computerbase review (https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/) puts the 7700K at 7% faster overall at 1280x720. The main problem is the inconsistency, not the overall delta.

If they can fix the 35% losses then you are looking at i5 7600K-level gaming performance and massively better application and multithreaded performance for an equivalent price.
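For anyone checking the arithmetic, the two deltas compound (a rough sketch, using the 4.0 vs 4.5 GHz boost clocks):

```python
# Rough check of the ~20% figure: clock and IPC deltas compound.
clock_delta = 4.5 / 4.0  # 7700K boost vs 1800X boost: 12.5% faster
ipc_delta = 1.065        # ~6.5% IPC deficit, per AMD's own numbers
print((clock_delta * ipc_delta - 1) * 100)  # ~19.8% combined delta
```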

I think $259 is a highball number. Where did it come from? If AMD prices the 1600X over a 7600K they are bonkers.
You could be right. It looks like the only stories I can find are from February, and there isn't a price on the latest slide deck.
 

Mr Swine

Banned
It might be the same scalability issue in heavily communicating threads that is also a factor in other benchmarks.

That's not really true though?
I paid around 350€ for the 5820k in 2014. Now the Ryzen 7 1700 (never mind X or 1800) is 360€.

I agree as to the significance of the WD2 results, but your remark about the 7700k doesn't hold true in frametimes at Computerbase at least:
[ComputerBase frametime percentile chart for Watch Dogs 2]


That said, Intel's 6- and 8-core CPUs show much better scaling.

So can you explain what that is for a pleb like me?
 
The 7700K is clocked 12.5% faster than the 1800X and according to AMD they are 6.5% behind on IPC, so that gives an overall delta of ~20%. I think that is reflected in the benchmarks. For example, the more favourable Computerbase review (https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/) puts the 7700K at 7% faster overall at 1280x720. The main problem is the inconsistency, not the overall delta.

If they can fix the 35% losses then you are looking at i5 7600K-level gaming performance and massively better application and multithreaded performance for an equivalent price.


You could be right. It looks like the only stories I can find are from February, and there isn't a price on the latest slide deck.

Yes, I've just realised how close Ryzen is to the 7700K in Computerbase and also in Joker Productions' review on YT. The 7700K is only 4% faster on average in 14 games at 1080p.

[14-game 1080p average fps chart]


https://www.computerbase.de/2017-03.../#abschnitt_benchmarks_mit_fps_und_frametimes

Yet we got people here saying it gets 'blown the fuck out of the water' in gaming?!

As you say, the results are massively varied. Some reviews are very dubious.
 

Paragon

Member
Here are some memory scaling numbers in ARMA from io-tech.fi. Looks like there are gains to be had once memory support improves.
For reference, an i7-2600K at 4.2GHz with DDR3-2133 RAM scores 31.0 FPS in this test, and an i7-7700K with DDR4-3733 RAM scores 56.5 FPS.
http://techreport.com/review/31410/a-bridge-too-far-migrating-from-sandy-to-kaby-lake/2

But ARMA 3 is basically a single-threaded game, so that's a worst-case scenario for Ryzen.

Apparently switching to Performance Mode in the Windows power plan gives performance gains of 5-15% in games as well. I'm willing to bet some reviewers didn't do that, but how were they to know: https://www.youtube.com/watch?v=mW1pzcdZxKc&t=537s

AMD contacted reviewers about this, so they would have known about it.
Disabling power-saving features would also benefit Intel processors though.

I'm still not sure what would be best for testing either:
Do you test the CPU with power-saving features disabled, or do you test it in the normal state which the majority of users are going to have it set to?
With my 2500K, I disabled the power-saving features because they have a noticeable impact on game performance on a CPU this old. However, part of the reason I'd want to upgrade would be the huge strides that have been made in power savings. Even having the power-saving options enabled doesn't save much on this CPU.

But Intel has been working really hard to improve that, starting with Skylake.
With Kaby Lake, they can reach maximum performance from idle in only 15ms, compared to almost 100ms in older CPUs.

So can you explain what that is for a pleb like me?
Frametime percentiles are a better way to characterize game performance.
Measurements are in milliseconds, so lower is better, and so is a flatter line.

What that graph shows is that the 1800X performs better than the 7700K (lower line), and that the 6850K and 6900K perform almost identically except in the worst cases, where the 6900K performs marginally better (its minimum framerate would be higher).
It's a much better look at performance than just the three max/avg/min framerate numbers many places post, if you even get that. A lot of sites only post averages.
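If you have a raw frame-time log, the chart is trivial to reproduce. A quick sketch (the frame times here are made up for illustration):

```python
# Sketch: building frametime percentiles from a per-frame log (ms).
# The frame times below are hypothetical, not from any review.
import numpy as np

frame_times = np.array([8.3, 9.1, 8.7, 16.9, 9.0, 8.8, 25.4, 9.2])

for p in (50, 90, 99):
    print(f"P{p}: {np.percentile(frame_times, p):.1f} ms")
# A flat curve across percentiles means smooth gameplay; spikes at
# P99 are the stutters that an average fps number hides.
```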

EDIT: Looking over the "Frametimes in Percentile" graphs from ComputerBase really shows that games are starting to benefit from having 6 cores now.
While the 7700K is faster, the 6850K still performs very close to it, and there are now some games where the 6850K performs noticeably better than the 7700K.
8 cores don't seem to bring much improvement over 6 cores though, and can sometimes perform considerably worse than either the 7700K or the 6850K.
Really interesting results for Rise of the Tomb Raider there too. It clearly demonstrates how DX12 significantly smooths out the gameplay experience even if overall performance is slightly lower.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Right now it's obvious that for gaming only, the Kaby Lake 7700K is the best choice. No question. It's cheaper than the 1700X and 1800X. However, the Z270 motherboards are quite a bit more expensive than the B350 motherboards, and that affects the overall cost.

When you consider things outside of gaming, Ryzen closes the gap.

Now the good news is that as software and games become more optimized for AMD, the gap will diminish, although it is unlikely to go away.

However, AMD's real concern is that Intel could decide to permanently drop the prices of the i5 and i7 Kaby Lake CPUs. A $279 i7 7700K would be deadly. Whether that will happen remains to be seen.

I think AMD should consider getting the 1600X out sooner rather than later, as that CPU feels like the best potential bang for buck.
 

Irobot82

Member
Anyone know if this means much of anything? An Anandtech forum member finds that Windows is viewing each thread as its own core with full L2 and L3 cache. So is this some kind of error?

Link here with data

They think this is affecting SMT and scheduling.
 

shark sandwich

tenuously links anime, pedophile and incels
Yeah, I think the 1600X paired with sub-$100 B350 motherboards will be a very popular choice for gamers. That is a pretty damn good value, plus AMD tends to stick with the same socket for a long time.

Ryzen isn't exactly the second coming of Athlon 64, but it's still a pretty great value for certain use cases.
 
Anyone know if this means much of anything? An Anandtech forum member finds that Windows is viewing each thread as its own core with full L2 and L3 cache. So is this some kind of error?

Link here with data

They think this is affecting SMT and scheduling.

It means Windows thinks the CPU has more cache resources to work with than it actually does, which obviously will cause problems. I think they answered in the AMA that they have submitted a patch to Microsoft, but it is not live yet.
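You can see the distinction the scheduler is supposed to make with a two-liner (assuming psutil; a sketch of the concept, not how Microsoft's patch works):

```python
# Sketch: what the OS should distinguish. If a scheduler treated all
# logical processors as full cores with their own L2/L3, it would
# happily co-schedule two heavy threads on one physical core.
import psutil

print(psutil.cpu_count(logical=True))   # 16 on an 8-core Ryzen 7 with SMT
print(psutil.cpu_count(logical=False))  # 8 physical cores
```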
 

joshcryer

it's ok, you're all right now
I watched the video and I've looked at benchmarks. Techspot's Ryzen review shows a huge variance, from 10% to 30%. What's interesting is that Tom's Hardware shows a similar spread, but if you look at AotS, it goes from a 30% gap to 20% with SMT disabled. Which validates the video you linked about how some games favor Intel's HT vs AMD's SMT. So yes, at this point disabling SMT on a per-game basis would be a pain in the ass, but going forward games will hopefully support both. So it won't be an issue.

It appears that the Windows scheduler isn't properly allocating the SMT cores (it's seeing each SMT core as a true core), according to this post and these comments:

https://www.reddit.com/r/Amd/comments/5xgths/smt_configuration_error_in_windows_found_to_be/

https://www.reddit.com/r/Amd/commen...orm_in_games/defc6un/?st=izvflz88&sh=e80651af

This should be patched by Windows. Apparently Linux, from the same thread, has much better scheduling, putting the Ryzen 1700 up there with freaking $1k server CPUs: https://www.servethehome.com/amd-ryzen-7-1700-linux-benchmarks-zen-buy/

https://www.servethehome.com/amd-ryzen-7-1700x-linux-benchmarks/

Yeah. I'm sticking with my statement earlier in the thread that Ryzen is my next build. I'll wait until the kinks are worked out, but it's going to be a stellar build for the price. There's no question in my mind.
 

antiloop

Member
Disappointing so far, but it makes sense that almost all games are optimized for Intel platforms, as they have been the platform of choice for gamers for a long time.

I am still tempted to buy one. I don't game much nowadays.
 

NXGamer

Member
Yes, I've just realised how close Ryzen is to the 7700K in Computerbase and also in Joker Productions' review on YT. The 7700K is only 4% faster on average in 14 games at 1080p.

[14-game 1080p average fps chart]


https://www.computerbase.de/2017-03.../#abschnitt_benchmarks_mit_fps_und_frametimes

Yet we got people here saying it gets 'blown the fuck out of the water' in gaming?!

As you say, the results are massively varied. Some reviews are very dubious.
Some of these tests seem... "odd" compared to what I have seen from other sites.

I am working through my tests at the moment and hope many enjoy them, find them useful and, as always, honest; I have bought and built an entire new system for this.

That said, it definitely has some issues, and these come from Windows and firmware.
 

Durante

Member
So can you explain what that is for a pleb like me?
In addition to what Paragon said, frametime percentiles are, in my opinion, the way that game benchmark results should be presented. More than any other objectively and accurately measurable metric, they give you an idea of how smooth a game actually feels to play.
 
My 1700's Wraith Spire fan is surprisingly noisy. Anyone else?

Might contact AMD about it... or is it a standard-sized fan that I can easily replace with something nicer (and leave the heatsink on)?
 

Nachtmaer

Member
My 1700's Wraith Spire fan is surprisingly noisy. Anyone else?

Might contact AMD about it... or is it a standard-sized fan that I can easily replace with something nicer (and leave the heatsink on)?

Can you check what RPM it's running at? If they're using a really conservative fan profile, you might be able to lower the RPM, and thus the noise, without raising temps too much.

I can't really find anything on what size it is, but since they're using a round frame, it probably won't be easy to find a different fan that could fit in those mounting holes. Least hassle would probably be getting a different cooler.
 

shark sandwich

tenuously links anime, pedophile and incels
My 1700's Wraith Spire fan is surprisingly noisy. Anyone else?

Might contact AMD about it... or is it a standard-sized fan that I can easily replace with something nicer (and leave the heatsink on)?
I -think- it's a standard 92mm fan. My guess is that you could replace it with any 92mm fan, although you wouldn't be able to use that fancy shroud.
 
Can you check what RPM it's running at? If they're using a really conservative fan profile, you might be able to lower the RPM, and thus the noise, without raising temps too much.

I can't really find anything on what size it is, but since they're using a round frame, it probably won't be easy to find a different fan that could fit in those mounting holes. Least hassle would probably be getting a different cooler.
About 1000 RPM. It's not terrible, but you can hear a quiet whine (instead of just a general whoosh).

I -think- it's a standard 92mm fan. My guess is that you could replace it with any 92mm fan, although you wouldn't be able to use that fancy shroud.
Thanks, I'll do some looking around. It would be nice not to have to buy an additional cooler, since I don't plan on overclocking this anyway. I think I'm spoiled by the Noctua D15 in my other computer!
 
Anandtech did a more comprehensive battery of tests:
http://www.anandtech.com/show/11170...review-a-deep-dive-on-1800x-1700x-and-1700/18

I've read that the memory issues are the No. 1 priority for AMD to resolve now. Apparently the Windows driver doesn't fully support Ryzen yet.

But simply put, I don't think 5-8% lower IPC and ballpark frequency translate to a 40% deficit in 1080p gaming performance.

Well, there's a reason Intel supports up to quad-channel DDR4 on HEDT. Strangling 6- and 8-core CPUs with dual-channel DDR4 memory bandwidth was never a good idea, and AMD should have known this. Skylake and Kaby Lake are heavily memory-bandwidth constrained even with just 4 cores; it has to be dramatically bottlenecking Ryzen.
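The back-of-envelope numbers (theoretical peak: channels x 8 bytes x transfer rate):

```python
# Theoretical peak DDR4 bandwidth: channels x 64-bit bus x transfer rate.
def ddr4_bandwidth_gbs(channels, mt_per_s):
    return channels * 8 * mt_per_s / 1000  # 8 bytes per channel per transfer

print(ddr4_bandwidth_gbs(2, 2666))  # ~42.7 GB/s dual-channel (Ryzen, Kaby Lake)
print(ddr4_bandwidth_gbs(4, 2666))  # ~85.3 GB/s quad-channel (Intel HEDT)
```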
 

wildfire

Banned
Well, there's a reason Intel supports up to quad-channel DDR4 on HEDT. Strangling 6- and 8-core CPUs with dual-channel DDR4 memory bandwidth was never a good idea, and AMD should have known this.

This is overblown. The working set being accessed at any given moment will mostly hit cache or be quickly placed into cache.
There are probably more reasons than this, but all the results point to quad-channel bandwidth meaning nothing for gaming.
 