·feist· said:
MSI and ASRock appear to be continuing their own XMP memory compatibility improvements, apart from the updates passed on by AMD. Both ASRock's X370 Taichi and X370 Fatality Pro have received ~3-4 XMP compatibility updates since launch.
For what it's worth, the Crosshair VI I was waiting on finally arrived on Sunday, after the rest of the system had been sitting around all of last week, and I managed to get some of my own testing done.
Perhaps I've just been lucky, but I haven't had any difficulty running fast memory on this board when using the latest UEFI BIOS (1002).
Now, I haven't done extensive stability testing yet - only a few runs of AVX2 workloads, letting AIDA64 run for 20-30 minutes, and then several hours of gaming - but I've not experienced any instability or other issues with the 1700X at 4GHz on 1.35V with 3600MT/s RAM, and it's been running for a couple of days now.
The memory is Corsair's 3600C18 Vengeance LPX kit as it was the cheapest "probably B-Die" kit that I could get here. (apparently all Corsair 3600 kits are B-Die)
Even the 32x memory multiplier, which everyone has been reporting as unstable (refusing to POST), just seems to work at the standard 1.35V.
The only issue I have encountered so far is that it would boot with a 100MHz BCLK and the 32x multiplier for 3200MT/s, or a 122.6MHz BCLK and the 29.33x multiplier for 3597MT/s, but not the 122.6MHz BCLK and the 26.66x multiplier for 3269MT/s, which doesn't make a lot of sense.
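For anyone wanting to check those numbers, I'm just treating the effective speed as BCLK x memory multiplier; the fractional ratios (29.33 being roughly 88/3 and 26.66 roughly 80/3) are my assumption of the usual third-step dividers, so a quick sketch of the maths looks like this:

[code]
# Rough sanity check of the BCLK x memory-multiplier combinations above.
# Convention used here: effective speed (MT/s) = BCLK (MHz) x memory multiplier.
# The exact fractional ratios are an assumption (29.33 ~= 88/3, 26.66 ~= 80/3).

combos = [
    (100.0, 32.0),     # boots   -> ~3200 MT/s
    (122.6, 88 / 3),   # boots   -> ~3596 MT/s (the "29.33x" setting)
    (122.6, 80 / 3),   # no POST -> ~3269 MT/s (the "26.66x" setting)
]

for bclk, ratio in combos:
    print(f"{bclk:.1f} MHz x {ratio:.2f} = {bclk * ratio:.0f} MT/s")
[/code]

So the combination that refuses to POST is actually the slower of the two BCLK-overclocked settings, which is why it seems backwards to me.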
With everything going so smoothly, I'm hopeful about running the four-DIMM ECC kit I'm currently waiting on at 2666MT/s, especially once the memory update is out.
That said, I've not seen much difference in games from increased memory speeds - nothing like people seem to be reporting.
Game performance has been pretty mixed though, which I suppose is to be expected at this point.
With Deus Ex: Mankind Divided, I can take it down from 8c/16t to 3c/6t and only lose a couple of FPS - though framerate consistency goes out of the window at that point.
Going from 2666MT/s to 3600MT/s only gains 4 FPS too, raising it from 57 FPS to 61 FPS in the area I normally use to test that game.
However, that was in DX12. Going to DX11 raises performance from ~60 FPS to ~70 FPS, and reducing the core count in DX11 starts to greatly affect FPS.
GPU utilization still only peaked at 89% though - so there's roughly 10% performance being left on the table, and that's only with a 1070. A faster GPU would be worse off.
I was really hoping that a faster CPU with a lot of cores would help Dishonored 2 more than it did, since it's based on id Tech.
Stuttering is still just as present as it was on my i5-2500K system. I guess it's just that engine.
Just like in Deus Ex, going from 2666MT/s to 3600MT/s only increased the framerate in Dishonored 2 by ~4 FPS.
It seems like it's mainly older or less demanding games that benefit from the faster RAM.
I don't have Fallout 4 to test though, and that seems to be the main recent game where people report memory speed making a difference.
I loaded up Skyrim SE as it's running on the same engine, but the game was just completely locked to 60 FPS the entire time, no matter the memory speed.
It also seemed to show that the scheduler is handling things properly, as that game appeared to be running only on the 8 physical cores and not the SMT threads.
Even though the main thread was hitting 95%+ load, it never dropped a frame.
I only have an early-game save in the special edition though, and have not loaded it up with mods so maybe that's not representative of how demanding the game might get.
Some games seem to absolutely love having lots of available threads, and take full advantage of the CPU though - even ones you might not expect.
ABZÛ on ultra settings absolutely destroyed my i5-2500K, and I played through that from start to finish (80 minutes) on the R7-1700X last night.
I think I only noticed it drop a frame twice in the whole game - and only from 60 to 59 FPS.
Performance was really consistent here, and total CPU usage peaked at about 70%. Highest core was 77%.
I'd be really curious to see how an i7-7700K compares in this area.
After Dishonored 2 I wanted to test something else running on the older id Tech 5, and The Evil Within seemed to be running very well: load spread out across all threads, my 1070 sitting at 99% utilization at all times (V-Sync off), and the game running close to 120 FPS most of the time at 1080p.
That's really what I was hoping to see from Dishonored 2, but whatever they did to make id Tech into the Void Engine seems to have regressed its performance.
TEW originally shipped in quite a poor state too, but the patch they released to fix that really seems to have done a good job.
I've not seen any games where limiting the game to a single CCX benefited performance.
I think there is performance to be gained if the developer optimizes their code for it, but having threads jump across CCXes doesn't seem to be an issue.
Even games that place their load on one main core, with very little work on the rest of the CPU, always seem to perform best when given access to all the cores.
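For anyone wanting to try the single-CCX thing themselves, limiting a game to one CCX just means restricting its CPU affinity. Below is a rough Python/psutil sketch of one way to script it (not necessarily what I used - the process name is a placeholder, and the assumption that logical CPUs 0-7 are CCX0, i.e. cores 0-3 plus their SMT siblings, is the usual 1700X layout, so check your own topology first):

[code]
# Rough sketch: pin an already-running game to the first CCX of a 1700X.
# Assumes the common layout where logical CPUs 0-7 are the four cores of
# CCX0 plus their SMT siblings - verify your own topology before relying on it.
import psutil

GAME_EXE = "Game.exe"        # placeholder - substitute the game's actual process name
CCX0_CPUS = list(range(8))   # first eight logical processors

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(CCX0_CPUS)             # restrict scheduling to CCX0
        print(proc.pid, "->", proc.cpu_affinity())
[/code]

It's the same idea as setting affinity in Task Manager, just scriptable so you can flip it back and forth between benchmark runs.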
As a pure gaming CPU, it may not be the best choice right now, but it has been a very smooth experience in most games I've tested so far, even if they aren't fully utilizing the CPU.
At least while I still have a 1070, I think I'm satisfied enough with how it performs in games that I won't want a separate gaming system right now.
I don't know how it will hold up with higher-end cards though, since I'm already seeing the 1070 under-utilized in some games.