
AMD Ryzen Thread: Affordable Core Act

ethomaz

Banned
If you refer to Joker's review, you should maybe turn on the sound. He explains why the workload is at 99% in his 1080p tests. VSYNC off means GPU load will always show 99%. He discusses it further in his follow-up videos.

It is incredibly frustrating to see people trash reviews without any idea of what's going on.
This excuse makes no sense at all.

VSYNC off doesn't make the GPU always go to 99% lol

If that happens it is because the bottleneck was the GPU and the other components weren't being used to their full potential.

And that is not the ideal scenario to compare CPU results in a game... it is basically useless for gauging CPU performance.
 

Profanity

Member
Where do the differences then come from? And why test at 1080p at all (that's what seems to be wanted by most), if any modern game is apparently still pretty hard GPU-limited at 1080p, even with the fastest GPUs?

Also, isn't this then a test of which games are CPU-bound at all? Games that are really CPU-limited would show a drop in GPU usage.

Well as Durante said above, there are still a few games that are CPU-driven at 1080p, such as Watch Dogs 2 and Total War: Warhammer. A good CPU test should be conducted at 720p and with a very powerful GPU to ensure that the GPU usage is kept as low as possible, so that the CPU is stressed.

Ideally in a normal gaming situation the majority of the time you'll be GPU bound, but in the cases where that 99% falls you want the CPU to act as a kind of 'safety net' to prevent the FPS or frametimes from dropping/varying too hard when the workload is transferred onto it.
 

Micael

Member
Where do the differences then come from? And why test at 1080p at all (that's what seems to be wanted by most), if any modern game is apparently still pretty hard GPU-limited at 1080p, even with the fastest GPUs?

Also, isn't this then a test of which games are CPU-bound at all? Games that are really CPU-limited would show a drop in GPU usage.

When you lower the resolution, the time the CPU takes per frame remains roughly the same (exceptions exist, but let's ignore them). However, you can now reach much higher frame rates because the load on the GPU diminishes, which means the time the GPU needs to spend on each frame is smaller. So the contribution of the CPU to the overall frame is higher, and a 1080p test therefore exercises the CPU more than a 4K test does.

Basically, if your game reaches say 60 FPS at 4K (let's ignore VSYNC), each frame has to be calculated in 16.67ms. If we assume the CPU takes 3ms per frame, the GPU is taking 13.67ms per frame, or roughly 82% of the frame time. Now if we lower the resolution and the game reaches 184 FPS, the CPU is still taking 3ms, but each frame now has to be calculated in 5.44ms, so the GPU is taking 2.44ms, or roughly 45% of the total frame time. By extension the CPU's contribution is now much bigger, and as a result the test reflects the CPU's contribution far more.
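For anyone who wants to play with the numbers, here's a quick Python sketch of that arithmetic. It uses the same simplified assumption as the example above - that CPU time and GPU time just add up to the frame budget - which real engines don't strictly follow:

def gpu_share(fps, cpu_ms):
    frame_ms = 1000.0 / fps      # total frame budget in milliseconds
    gpu_ms = frame_ms - cpu_ms   # simplified model: whatever the CPU doesn't use, the GPU does
    return gpu_ms, gpu_ms / frame_ms

print(gpu_share(60, 3.0))    # -> roughly (13.67, 0.82): the GPU dominates the frame at 4K
print(gpu_share(184, 3.0))   # -> roughly (2.44, 0.45): the CPU's share grows at the lower resolution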
 

Livanh

Member
There are plenty of games that, when you are using a strong enough GPU, will be bottlenecked by memory speed/latency and the CPU. The Witcher 3 in particular is used a lot in tests because of how well it scales with the CPU, particularly in Novigrad. Novigrad is all about the CPU and how well it utilizes memory bandwidth.

That's exactly what he did though.
The attacks on his tests still seem rather odd. He tested a wide array of games at 1080p.
This may not be the best test of theoretical CPU speed, but it seems like a CPU comparison with games is not really feasible anyway. But for gaming this is pretty much the only thing that's relevant, or am I missing something here?
 

Datschge

Member
I was looking at Tech Report's frametime analysis, which is showing completely different results. I can't recall a CPU launch with such variable performance shown.
Which is why, beyond the misleading GPU-bound benchmarks, even the well-done and genuinely interesting gaming benches may actually say more about the state of the motherboard's BIOS or other missing optimizations than about the chip's performance itself.

I hope we'll see some thorough analysis again once BIOS updates, as well as OS improvements through settings, drivers and updates, settle down. Maybe that's the reason the production-oriented R7 was released now and the gaming-oriented R5 is coming later.
 

Livanh

Member
When you lower the resolution, the time the CPU takes per frame remains roughly the same (exceptions exist, but let's ignore them). However, you can now reach much higher frame rates because the load on the GPU diminishes, which means the time the GPU needs to spend on each frame is smaller. So the contribution of the CPU to the overall frame is higher, and a 1080p test therefore exercises the CPU more than a 4K test does.

Basically, if your game reaches say 60 FPS at 4K (let's ignore VSYNC), each frame has to be calculated in 16.67ms. If we assume the CPU takes 3ms per frame, the GPU is taking 13.67ms per frame, or roughly 82% of the frame time. Now if we lower the resolution and the game reaches 184 FPS, the CPU is still taking 3ms, but each frame now has to be calculated in 5.44ms, so the GPU is taking 2.44ms, or roughly 45% of the total frame time. By extension the CPU's contribution is now much bigger, and as a result the test reflects the CPU's contribution far more.

I understand that lowering the resolution shows that other bottlenecks exist at some point, but what's the relevance for any real-world scenario then? And in a pure gaming benchmark, does it then really show CPU dependencies, or just other contributing factors?
 

PFD

Member
I don't get the negativity at all, these things are bringing what used to be incredibly overpriced Intel HEDT core count down to mainstream pricing. Ryzen literally is the Affordable Core Act. Intel spent years locking more than 4 cores behind absurd price premiums just because they could.

Not to mention locking hyperthreading behind a hundred+ dollar upgrade
 

Paragon

Member
If you refer to Joker's review, you should maybe turn on the sound. He explains why the workload is at 99% in his 1080p tests. VSYNC off means GPU load will always show 99%. He discusses it further in his follow-up videos.
It is incredibly frustrating to see people trash reviews without any idea of what's going on.
I did watch the video with the sound on.
His 'explanation' that disabling V-Sync allows the GPU to hit 99% load shows a lack of understanding of the issue.
When you set up a CPU-limited test, the GPU cannot reach 99% usage.

If his tests are set up in a way that allows the GPU to hit 99% load, then performance in the game is being limited by the GPU - it cannot work any harder than that.
Therefore if the test is always at 99% GPU load, it's measuring GPU performance and not CPU performance.
If it's bouncing around between 90-99% for example, then the test tells us nothing useful at all, because that test is GPU-limited in some scenes (99% load), and CPU-limited in others. (<99% load)

A CPU test must be set up so that the GPU cannot ever hit 99% load.
I would try to keep it at least below 90% at all times - ideally much lower than that.
And the easiest way to achieve that is to use the fastest GPU that you have, and run it at a low resolution.
It doesn't matter which GPU you use, as long as you use the same GPU and settings with all CPUs that are being compared, and so long as it never reaches 99% GPU usage when testing with your fastest CPU.

When the GPU is unable to hit 99% usage, it generally means that the CPU is limiting the framerate, instead of the GPU.
That is how things need to be set up when you are trying to benchmark the CPU.
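Just to make the 99% rule concrete - this is only a sketch, not anything a review site actually runs - if you logged GPU utilisation during a run (MSI Afterburner or similar can do this), the sanity check amounts to:

def run_is_cpu_limited(gpu_util_samples, ceiling=0.90):
    # If the GPU never gets close to full load, something else (usually the CPU) is the limit.
    return max(gpu_util_samples) < ceiling

samples = [0.55, 0.61, 0.58, 0.63, 0.60]   # made-up utilisation log, roughly the 60% case below
print(run_is_cpu_limited(samples))         # True -> the run can be read as a CPU test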


-----

Here's an example of CPU-limited results in Deus Ex: Mankind Divided on my PC. In all tests, CPU usage is at 100%.
Now that's actually quite rare - most games won't load up the CPU to 100% usage on all cores, but I chose it for this example because it makes it clear that the CPU is the limiting factor.

In this test I am using low enough settings that even a GTX 960 is not coming close to 99% GPU load.
When I swap the GTX 960 out for a GTX 1070 - though it could be any faster GPU - the only thing which changes is that the GPU load drops from 60% to 33%.
Framerate remains the same despite the faster GPU, since the CPU is the limiting factor.

The faster GPU does allow me to turn the settings all the way up and have essentially no impact on framerate though - it drops from 43.7 to 43.3 FPS.


Now what would happen if I used a faster CPU for these tests?
I don't have a faster CPU here to show actual results, but I can tell you what the results would be like.

In the first two tests, a faster CPU would increase GPU usage, which would allow for higher framerates.
If the CPU is fast enough, GPU usage would increase to 99% usage on both cards.

If we assume that everything scales linearly - it likely does not, but that assumption keeps the example simple:
Since the 960 is at 60% load and 43.7 FPS, we could say that a fast enough CPU would allow the game to run at 72.1 FPS when it hits 99% GPU usage.
With the 1070 at 33% load in the 720p test, a fast enough CPU might allow it to run at 131.1 FPS using the same settings.
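In code form, that back-of-the-envelope projection looks like this - again assuming linear scaling, which as noted above it likely isn't:

def projected_fps(measured_fps, gpu_load, target_load=0.99):
    # Scale the measured framerate up to where the GPU would sit at 99% load.
    return measured_fps * target_load / gpu_load

print(projected_fps(43.7, 0.60))   # ~72.1 FPS for the GTX 960
print(projected_fps(43.7, 0.33))   # ~131.1 FPS for the GTX 1070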

The 1620p test is already GPU-limited though, with the settings tweaked so that it's essentially running at 100% CPU and GPU usage at the same time.
The only thing that a faster CPU would do for this test is drop the CPU usage.
Performance would remain at 43.3 FPS since the GPU is already maxed out.

Now if I was trying to benchmark a group of CPUs, I could not do this at 1620p on a GTX 1070.
All the CPUs in the test would show a 43.3 FPS result, unless they were slower than my 2500K and caused the framerate to dip even lower.
CPU A might be able to run the game at 60 FPS and CPU B might be able to run it at 90 FPS, but you'd never know because the test is GPU-bound and both will show a result of 43.3 FPS.

A CPU comparison would have to be done using the 720p settings - or possibly even lower - to prevent the GPU from ever hitting 99% and affecting the results.
If the settings cannot be reduced any further and the GPU load is still at 99%, then we need to swap out that 1070 for a faster GPU.

They addressed it on Reddit. They simply wanted reviewers to test at all resolutions, not just 1080p, to give a full picture of the performance across resolutions.
The only thing increasing resolution does is increase the likelihood that you will be GPU-bound again, which compresses the results and unfairly narrows the gap between the faster and slower CPUs. (in favor of the slower CPU)
 

Micael

Member
I understand that lowering the resolution shows that other bottlenecks exist at some point, but what's the relevance for any real-world scenario then? And in a pure gaming benchmark, does it then really show CPU dependencies, or just other contributing factors?

The relevance, besides showing that there is an actual difference in the time each processor spends per frame (and giving an idea of how much that might be), is that in the future, as graphics cards become more and more powerful, 1440p and 4K become less and less demanding, so the CPU becomes more important to overall performance.

How much this is relevant is going to depend wildly on the game and the person. For example, if you keep a processor for as long as I have kept my X58 platform, the 1080p test is a whole lot more relevant; if, however, you are trading CPUs every 2 years, the test is a whole lot less relevant. Certain games also use the CPU far more than others.

Now this is of course all very debatable, since one can argue that single-threaded performance, which is what sees the 7700K getting ahead, is going to become less important in the future. But it can also be argued that single-threaded performance continues to be very important, and that those changes don't happen particularly quickly in the gaming industry due to a variety of programming reasons.

In the end though, regardless of what one might believe about the future of CPU usage, providing both 1080p and 4K results gives consumers more data to form a more informed decision, which is especially important since even within games the usage scenario can vary wildly.
 

Marlenus

Member
The relevance, besides showing that there is an actual difference in the time each processor spends per frame (and giving an idea of how much that might be), is that in the future, as graphics cards become more and more powerful, 1440p and 4K become less and less demanding, so the CPU becomes more important to overall performance.

How much this is relevant is going to depend wildly on the game and the person. For example, if you keep a processor for as long as I have kept my X58 platform, the 1080p test is a whole lot more relevant; if, however, you are trading CPUs every 2 years, the test is a whole lot less relevant. Certain games also use the CPU far more than others.

Now this is of course all very debatable, since one can argue that single-threaded performance, which is what sees the 7700K getting ahead, is going to become less important in the future. But it can also be argued that single-threaded performance continues to be very important, and that those changes don't happen particularly quickly in the gaming industry due to a variety of programming reasons.

In the end though, regardless of what one might believe about the future of CPU usage, providing both 1080p and 4K results gives consumers more data to form a more informed decision, which is especially important since even within games the usage scenario can vary wildly.

The problem here is you are being far too reasonable when what I want to see in this thread is fanboy arguments.
 

Durante

Member
I think the vast majority of gamers will probably sit on the same CPU far longer than the same GPU these days.

As such, actually testing CPU performance rather than GPU performance in CPU reviews is not just inherently correct, it's also very relevant in practice.
 
I don't get the negativity at all, these things are bringing what used to be incredibly overpriced Intel HEDT core count down to mainstream pricing. Ryzen literally is the Affordable Core Act. Intel spent years locking more than 4 cores behind absurd price premiums just because they could.

Those days are gone now, you can get the performance of my 5820K for half the price I paid and that's a great thing no matter how you slice it. The only reason I'm not bothering with Ryzen is, well, I already have the 5820K. Let's see what 2nd gen Ryzen can do, it would be fun to be back on AMD for the first time since my original Athlon 64 3700+ circa 2004.

I don't either. It's quite shocking; it's another example of the community making a rod for its own back yet again.

AMD have come from Bulldozer to beating Intel's HEDT parts in some areas. It's an incredible success if you think about it, but some in here are more interested in AMD's 'deceit' with the gaming benchmarks.
 

Paragon

Member
In the end though, regardless of what one might believe about the future of CPU usage, providing both 1080p and 4K results gives consumers more data to form a more informed decision, which is especially important since even within games the usage scenario can vary wildly.
CPU usage generally doesn't change with resolution - it's typically linked to framerate.
So if you reduce the framerate by increasing the resolution, you are reducing the CPU usage.
Therefore testing at high resolutions like 4K does not allow people to make a "more informed decision" - it only serves to prop up weaker CPUs by limiting the advantage of faster ones.
 
It's unlikely they'll OC much further. The issue is the low-power Samsung 14nm process they use. The voltages needed for higher frequencies ramp up a lot going past 3.3 GHz, so the current max clocks for the high-end parts are already a bit inefficient. Maybe they could get some improvements with Zen 2 just from architectural improvements, but I think GloFo 14nm is the biggest bottleneck. AMD might have caught up design-wise, but Intel is still way ahead when it comes to process nodes.
I'm really disappointed in GloFo's 14nm process being used on AMD's chips. The RX 480 doesn't clock well and neither does Ryzen. It sucks that they have contractual obligations. I just wish they were using TSMC.
 

Marlenus

Member
I think the vast majority of gamers will probably sit on the same CPU far longer than the same GPU these days.

As such, actually testing CPU performance rather than GPU performance in CPU reviews is not just inherently correct, it's also very relevant in practice.

Agreed. [H] testing at 640x480 is an extreme example but it shows the discrepancies in performance in a very CPU bound scenario.

This does feel like when the Q6600 was released. I think for the life of the Q6600 it was not as fast as the top dual cores in gaming but it was better in multi threaded tasks.

I think though with the 6c R5 coming in a few months that will be the ideal price/performance/future proof combination. Especially if some of this gaming performance can be enhanced with BIOS, firmware, microcode etc updates.
 
CPU usage generally doesn't change with resolution - it's typically linked to framerate.
So if you reduce the framerate by increasing the resolution, you are reducing the CPU usage.
Therefore testing at high resolutions like 4K does not allow people to make a "more informed decision" - it only serves to prop up weaker CPUs by limiting the advantage of faster ones.

And that's important to showcase in a full review. You don't just show a theoretical worst-case scenario, because that would be unfair too. It's important to get a full picture of the performance profile, and frankly more so from actual realistic use cases than from theoretical scenarios. Those might show potential caveats later on when you upgrade your GPU, but if you test at whatever settings you actually use day to day, you know that's what you're getting. Testing only at 720p and showcasing huge performance differences might make some less informed people think that's how the differences will also look on whatever 4K monitor they just bought.
 
I don't get the negativity at all, these things are bringing what used to be incredibly overpriced Intel HEDT core count down to mainstream pricing. Ryzen literally is the Affordable Core Act. Intel spent years locking more than 4 cores behind absurd price premiums just because they could.

Those days are gone now, you can get the performance of my 5820K for half the price I paid and that's a great thing no matter how you slice it. The only reason I'm not bothering with Ryzen is, well, I already have the 5820K. Let's see what 2nd gen Ryzen can do, it would be fun to be back on AMD for the first time since my original Athlon 64 3700+ circa 2004.

This.

Ultimately, AMD is back in the game and they at least have a viable product in multiple segments going forward.

In any case, it's finally an exciting time in CPU-land again after years of yawn-inducing stagnation. For anyone who's currently using older Intel or especially AMD APUs/CPUs, Ryzen is an intriguing get... particularly for mixed use. As for gaming, I expect we'll see a 10%-20% improvement in performance as BIOS/OS/driver bugs are squashed.

Personally, I can't wait to let the dust settle and pick up a 1600X + ASRock miniITX board to upgrade my primary HTPC (hopefully Vega allows a reasonably powerful low profile/<75W GPU PCB design...although the GTX 1050 Ti isn't too shabby and is a big jump over my 750 Ti, I crave more!). As for my primary and secondary tower rigs, I'm not sure what I'll do so I'll hold off until after I upgrade my HTPC. But it's exciting to at least, finally have the *option* of going AMD again....
 

Csr

Member
Showing that there isn't a CPU bottleneck at a higher resolution is useful information.
There are also users with specific needs and buying habits who would benefit from benchmarks at higher resolutions, for example people who change CPUs often or at the same time as the GPU.

It is however pointless to use only GPU-bound benchmarks if you want to see which CPU performs better in gaming.
 

Micael

Member
I think the vast majority of gamers will probably sit on the same CPU far longer than the same GPU these days.

As such, actually testing CPU performance rather than GPU performance in CPU reviews is not just inherently correct, it's also very relevant in practice.

Indeed, I'm still rocking an X5650, a 7-year-old processor, and that was a cheap upgrade from an i7 920 (over 8 years old). And while I don't see my next processor lasting that long, given that Intel killed the easy crossover to Xeons after the X58 platform, I suspect my next processor will easily last 5 years, so that is at least 2 generations of graphics card upgrades for me, potentially more.

CPU usage generally doesn't change with resolution - it's typically linked to framerate.
So if you reduce the framerate by increasing the resolution, you are reducing the CPU usage.
Therefore testing at high resolutions like 4K does not allow people to make a "more informed decision" - it only serves to prop up weaker CPUs by limiting the advantage of faster ones.

Yes, and I pointed out above that it doesn't change; I even gave some basic math on frame times. However, 4K is still relevant for making a more informed decision, because it reflects how things stand today and in the near future.
It is entirely within reason that someone might be interested in knowing that because they intend to replace their processor every few years, so future gaming performance is largely irrelevant for them.
 
I wouldn't call it "a bit" misleading - it's a full-on attempt to fool consumers with a false picture of performance.
And it completely ignores that today's 1080p GPU performance is tomorrow's 1440p, as GPU power grows with each new generation.

Using their method I could prove that an FX is almost as good a gaming CPU as a Skylake i7 :)

Lol.

Do you know what misleading means?

So you're saying that because AMD asked reviewers to bench Ryzen using settings that 99% of buyers will use at home, that's a 'full on attempt to fool consumers with a false picture of performance'? In fact, you could quite easily argue the other way around: that reviews showing Ryzen gaming performance using a Titan X and setting the res down to 1080p (one major site did this) give a false picture of performance.

This is your argument from a car perspective:

Nissan - here's our new car and how it performs on the road from 0-60mph.
Michelaus - I want to see how it performs at 130mph.
Nissan - But that is beyond the speed limit for cars on the road.
Michelaus - Showing how it performs at road speed from 0-60mph is a full on attempt to fool me with a false picture of performance.
Nissan - That is how most users will be driving our new car.
Michelaus - Liars. Shame on you.

See?
 

Profanity

Member
Lol.

Nissan - here's our new car and how it performs on the road from 0-60mph.
Michelaus - I want to see how it performs at 130mph.
Nissan - But that is beyond the speed limit for cars on the road.
Michelaus - Showing how it performs at road speed from 0-60mph is a full on attempt to fool me with a false picture of performance.
Nissan - That is how most users will be driving our new car.
Michelaus - Liars. Shame on you.

See?

You heard it here first, folks. Running at below 99% GPU usage on current CPUs is in fact illegal.
 

IC5

Member
A CPU comparison would have to be done using the 720p settings - or possibly even lower - to prevent the GPU from ever hitting 99% and affecting the results.
If the settings cannot be reduced any further and the GPU load is still at 99%, then we need to swap out that 1070 for a faster GPU.

The only thing increasing resolution does is increase the likelihood that you will be GPU-bound again, which compresses the results and unfairly narrows the gap between the faster and slower CPUs. (in favor of the slower CPU)

Low-resolution CPU tests:
http://www.hardocp.com/article/2017/03/02/amd_ryzen_1700x_cpu_review/4

Techreport does really nice CPU frametime data. Here's the page with Watchdogs 2 data. Other games can be found on other pages of the article:
http://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/8
 

Durante

Member
That car comparison would only work if you are pretty sure that you'll get 90 MPH roads next year, 150 MPH roads the year after that, 220 MPH roads in the third year, and so on.

Ultimately, this is a question about the purpose of a CPU review.
I'm very much in the camp that a CPU review should do its best to review CPU performance, not something other than that.
Just like you wouldn't review a GPU in a CPU limited scenario.
 

TSM

Member
Lol.

Do you know what misleading means?

So you're saying that because AMD asked reviewers to bench Ryzen using settings that 99% of buyers will use at home, that's a 'full on attempt to fool consumers with a false picture of performance'? In fact, you could quite easily argue the other way around: that reviews showing Ryzen gaming performance using a Titan X and setting the res down to 1080p (one major site did this) give a false picture of performance.

This is your argument from a car perspective:

Nissan - here's our new car and how it performs on the road from 0-60mph.
Michelaus - I want to see how it performs at 130mph.
Nissan - But that is beyond the speed limit for cars on the road.
Michelaus - Showing how it performs at road speed from 0-60mph is a full on attempt to fool me with a false picture of performance.
Nissan - That is how most users will be driving our new car.
Michelaus - Liars. Shame on you.

See?

Well, except 4K would be the 130mph in your example. The average 0-60mph road use would be the 1080p tests. The average user will not be trying to run 4K. You have your analogy backwards.
 
You heard it here first, folks. Running at below 99% GPU usage on current CPUs is in fact illegal.

Here we go.

I'm responding to the poster bandying around the word 'misleading'.

Of course, bench games at 420p with 2x Titan Xs for data; it's important we know these things, as 2x Titan Xs is next year's RX 460.
 

Micael

Member
Lol.

Do you know what misleading means?

So you're saying that because AMD asked reviewers to bench Ryzen using settings that 99% of buyers will use at home, that's a 'full on attempt to fool consumers with a false picture of performance'? In fact, you could quite easily argue the other way around: that reviews showing Ryzen gaming performance using a Titan X and setting the res down to 1080p (one major site did this) give a false picture of performance.

This is your argument from a car perspective:

Nissan - here's our new car and how it performs on the road from 0-60mph.
Michelaus - I want to see how it performs at 130mph.
Nissan - But that is beyond the speed limit for cars on the road.
Michelaus - Showing how it performs at road speed from 0-60mph is a full on attempt to fool me with a false picture of performance.
Nissan - That is how most users will be driving our new car.
Michelaus - Liars. Shame on you.

See?

That is a deeply flawed argument. For starters, there are in fact roads that allow for more than 130mph, so right from the get-go that is wrong. And what if I want to use a Nissan GTR on a track?
Even tires, for example, are tested at their limits, not just within normal use-case scenarios.
Not that it matters: the way cars are used is completely and totally different from the way computers are used, so this example bears no real connection to the use case of PCs, certainly not when the intent is to focus on the 'misleading' part.

Also, going by the example of what's commonly used, the fact is that plenty of people are using 1080p. According to Steam's stats, 43.23% of their users have 1080p as their primary monitor resolution, while 1440p and 4K together account for less than 4%. Now obviously, when you are buying a $500 or even a $300 processor, those numbers are almost certainly going to skew higher, but it still shows that a very high percentage of people might genuinely buy these chips and play at 1080p, making the test relevant regardless of what you think of future usage.
 

JohnnyFootball

GerAlt-Right. Ciriously.
The reviews have convinced me that it is probably best to wait for the 1600X which was my intention all along. THAT is the CPU that should be focused on as a 7700K competitor. And/or see if Intel responds with a permanent competitive price cut.

By then things like BIOS, software, etc should be better optimized.


As I have been saying, the real win isn't so much for AMD as it is for gamers and the industry.

Ryzen now makes 6 and 8 core CPUs affordable, which means that (hopefully) there will be a much bigger push for games to take advantage of extra cores.
 
That car comparison would only work if you are pretty sure that you'll get 90 MPH roads next year, 150 MPH roads the year after that, 220 MPH roads in the third year, and so on.

Ultimately, this is a question about the purpose of a CPU review.
I'm very much in the camp that a CPU review should do its best to review CPU performance, not something other than that.
Just like you wouldn't review a GPU in a CPU limited scenario.

Yes, I know. I'm not saying don't fully test the product in all scenarios if you want, be it a car, CPU or hamster.

But it's the extreme reaction of some, with negativity and faux outrage, that gets me, when this is a very good CPU release.
 

Durante

Member
The reviews have convinced me that it is probably best to wait for the 1600X which was my intention all along. THAT is the CPU that should be focused on as a 7700K competitor.
I think for gaming in terms of price/performance a 1600 (without the X) should be even more interesting. (As long as the current trend of maximum OC frequencies not differing significantly across models holds, which I expect)
 

JohnnyFootball

GerAlt-Right. Ciriously.
Btw, I noticed that most Ryzen motherboards have a DisplayPort output.

Am I missing something? I thought Ryzen didn't have an iGPU.
 
That is a deeply flawed argument. For starters, there are in fact roads that allow for more than 130mph, so right from the get-go that is wrong. And what if I want to use a Nissan GTR on a track?
Even tires, for example, are tested at their limits, not just within normal use-case scenarios.
Not that it matters: the way cars are used is completely and totally different from the way computers are used, so this example bears no real connection to the use case of PCs, certainly not when the intent is to focus on the 'misleading' part.

Also, going by the example of what's commonly used, the fact is that plenty of people are using 1080p. According to Steam's stats, 43.23% of their users have 1080p as their primary monitor resolution, while 1440p and 4K together account for less than 4%. Now obviously, when you are buying a $500 or even a $300 processor, those numbers are almost certainly going to skew higher, but it still shows that a very high percentage of people might genuinely buy these chips and play at 1080p, making the test relevant regardless of what you think of future usage.

Come on. Just stop.

Yes, I know 1080p is the primary resolution. But not paired with a 1080 or a Titan X. Or do you believe otherwise?

The vast majority own cards like 970s, 390s, 1060s, 1070s etc (with a 1080p monitor), which would have the same effect of greatly reducing (in fact, wiping out altogether in most cases) any performance gap between a Ryzen and a 7700K.
 

Livanh

Member
I did watch the video with the sound on.
His 'explanation' that disabling V-Sync allows the GPU to hit 99% load shows a lack of understanding of the issue.
When you set up a CPU-limited test, the GPU cannot reach 99% usage.

But he said 99% should always be hit with VSYNC off if there aren't any other bottlenecks... I think the confusion came down mostly to bad wording. Anyway, I don't really want to defend his knowledge or lack thereof, but rather the point that his test is relevant to gaming in today's games.

Anyway, testing at 1080p or above is precisely what a typical gamer wants to see, since that's the typical real-world usage scenario.

Conclusion: the CPU doesn't really matter in most cases. And gaming benchmarks don't really show what the bottleneck is, even if the GPU doesn't show a high load.

Also, since even a 1080 is still the limiting factor at 1080p, I think it's quite useless to draw conclusions from low-res tests about future-proofing, since cards are so far from not being the limiting factor at 1440p or 4K.

Beyond that, even at lower resolutions in 'proper' tests, other parts of the system seem to play a much bigger role (RAM speed, motherboard brand), but since data on that is so limited and varies so much, it's too early to say much about those factors.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Btw, it has been talked about a little, but not much:

The cost of Ryzen motherboards vs Kaby Lake motherboards. They have similar features, but a B350 board can be had for under $100, which is probably what most people should consider unless they want to go for a Crossfire/SLI setup.

A B350 motherboard for $80. Has an M.2 slot and tons of other goodies.
https://www.newegg.com/Product/Product.aspx?Item=13-144-019
 

Parsnip

Member
Yeah, I'm definitely waiting for the R5 line (probably the 1600); seeing how that shakes out will determine what I buy for my ITX build.
 

JohnnyFootball

GerAlt-Right. Ciriously.

Yes and anybody not overwhelmed by the ridiculous hype knew that it was unlikely to be faster due to the significant speed advantage Kaby Lake possessed. Few games scale beyond 4 cores. Not to mention that the 1800X and 1700X were not directly competing with the 7700K. Those were aimed at Intel's Broadwell-E 6 and 8 core CPUs.

The 1600X and 1500X are considered the Kaby Lake competitors.
 

dr_rus

Member
CB.de did a comparison of Zen with several previous CPU generations from AMD: Phenom II X6 und AMD Ryzen 7 im Vergleich


Also, when talking about being GPU-limited and CPU-limited, it is always worth remembering that even at extreme resolutions there is no such thing as being completely GPU-limited. For example, a faster CPU will perform the level loading quicker even if the following gameplay is 100% GPU-limited - which is very rarely the case.

[image: 4K frametime graph]


Even in this 4K frametime graph the game may be CPU-limited in some parts (this usually happens at the lower frametimes, since the more frames you have to show per second, the higher the CPU load usually is) - for example, the small plateau between marks 10 and 15 can easily be CPU-limited even at 4K resolution. This won't affect average fps much, but it will result in a more powerful processor being a tad faster.
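A very rough way to look for those candidate stretches in a frametime trace - purely illustrative, with made-up numbers, and not how any review site actually analyses its data - is to flag frames that sit near the trace's minimum frametime:

def flag_possible_cpu_limit(frametimes_ms, tolerance=1.10):
    # Frames within ~10% of the fastest frame form a "floor" plateau; if the GPU still had
    # headroom there, those stretches are candidates for being CPU-limited.
    floor = min(frametimes_ms)
    return [t <= floor * tolerance for t in frametimes_ms]

trace = [22.0, 21.5, 16.8, 16.9, 16.7, 16.8, 20.5, 23.0]   # invented 4K trace with a plateau
print(flag_possible_cpu_limit(trace))                      # only the 16.x ms plateau gets flagged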
 

Micael

Member
Come on. Just stop.

Yes, I know 1080p is the primary resolution. But not paired with a 1080 or a Titan X. Or do you believe otherwise?

The vast majority own cards like 970s, 390s, 1060s, 1070s etc (with a 1080p monitor), which would have the same effect of greatly reducing (in fact, wiping out altogether in most cases) any performance gap between a Ryzen and a 7700K.

Sure, but I also don't believe most people with an 1800X will have a Titan X, and of course today's Titan X is the 1170 or 1270 of tomorrow; in fact the Titan X might already be getting surpassed for gaming, and it hasn't even been a year.

This is why I believe it is important to give as much information as possible to consumers, because it shows how much the CPU brings to the table in a variety of scenarios. Do I think the 1800X is ever going to be as outpaced in the real world vs a 7700K as a 1080p test shows? No, but I also don't expect it to be as meaningless in the future as the 4K test suggests. Since reviewers can't make reviews based on future hardware and games, though, it is vital to give a wide range of information so the consumer can decide what is important to them.
 

gypsygib

Member
Just wanted to comment on how much I love the title.

If there were GAF awards for best thread title of the year, I'd nominate this one so far.
 

ethomaz

Banned
Lol.

Do you know what misleading means?

So you're saying that because AMD asked reviewers to bench Ryzen using settings that 99% of buyers will use at home, that's a 'full on attempt to fool consumers with a false picture of performance'? In fact, you could quite easily argue the other way around: that reviews showing Ryzen gaming performance using a Titan X and setting the res down to 1080p (one major site did this) give a false picture of performance.

This is your argument from a car perspective:

Nissan - here's our new car and how it performs on the road from 0-60mph.
Michelaus - I want to see how it performs at 130mph.
Nissan - But that is beyond the speed limit for cars on the road.
Michelaus - Showing how it performs at road speed from 0-60mph is a full on attempt to fool me with a false picture of performance.
Nissan - That is how most users will be driving our new car.
Michelaus - Liars. Shame on you.

See?
Your example helps his point... not yours.

Showing CPU tests at 0-60mph (no GPU bottleneck = lower resolution) will give you the real picture of the car even if it can reach over 130mph (GPU-bound = 4K).

You are testing the CPU after all... not the GPU.
 
So I have waited for the Ryzen benches to build a new mobo/RAM/CPU combo for gaming.

Currently running a Sandy Bridge 2600K and I think its time has finally come. I think I will go 7700K, but I'll wait a little longer.
 
So I have waited for the Ryzen benches to build a new mobo/RAM/CPU combo for gaming.

Currently running a Sandy Bridge 2600K and I think its time has finally come. I think I will go 7700K, but I'll wait a little longer.

I, for one, am waiting for Intel's destructive (4-core, lol) response.

A 7700K variant on the new socket.
 
Yes and anybody not overwhelmed by the ridiculous hype knew that it was unlikely to be faster due to the significant speed advantage Kaby Lake possessed. Few games scale beyond 4 cores. Not to mention that the 1800X and 1700X were not directly competing with the 7700K. Those were aimed at Intel's Broadwell-E 6 and 8 core CPUs.

The 1600X and 1500X are considered the Kaby Lake competitors.
The 1600X and 1500X are just bad-yield 1800X chips with cores disabled. They don't run faster clocks than an 1800X either, so you'll save money but you shouldn't expect greater performance.

If AMD can improve performance via BIOS, driver and OS updates, that would be great for Ryzen and gaming, but if not, expect the same levels of performance or less from their R5 chips vs Kaby Lake.
 
Any reviews for overclocking with B350 boards yet? A lot of the go-to reviewers are still pending but maybe some more obscure ones. I'm hoping to discourage myself from spending an extra $70-85 when I don't need SLI.
 