
Intel Core i9-12900K Alder Lake CPU "Destroys" AMD Ryzen 9 5950X In Single-Core & Multi-Threaded Benchmark Leak

Thebonehead

Gold Member
Early adoption is hard for sure.
Also might want to double-check that the adapter plate actually applies the correct pressure, else you are likely gonna have a chip running very, very hot.
The Alder Lake IHS is shorter than previous generations.

[Image: Intel Alder Lake CPU LGA 1700 mounting pressure distribution comparison with AIO coolers]


[Image: Intel LGA 1700 Alder Lake CPU socket mounting specs]
Good point. It's a minefield.
 

DonkeyPunchJr

World’s Biggest Weeb
I wanna see 12th-gen DDR4 vs DDR5 memory benchmarks. Everyone is only rambling about the high latency. What about DDR5 running effectively quad-channel on mainstream boards? [2 sub-channels per module]

Crucial had 64 GB for 350 yesterday; now there's only a 16 GB [2x8] kit left. https://uk.crucial.com/catalog/memory/ddr5
At least for gaming with a discrete GPU, memory bandwidth hasn’t been a bottleneck for a long time. I’m not expecting much of a difference between DDR4 and DDR5 at the same latency. Still curious to see what the comparison looks like though.
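For rough context on what those sub-channels mean for raw numbers, here's a quick back-of-the-envelope sketch (assuming standard JEDEC widths: one 64-bit channel per DDR4 module, two 32-bit sub-channels per DDR5 module; real-world efficiency will be lower):

```python
# Theoretical peak per-module bandwidth: transfers/s * bytes per transfer.
# DDR4: one 64-bit channel per DIMM. DDR5: two independent 32-bit
# sub-channels per DIMM, so the total bus width per module is unchanged.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for one channel/sub-channel."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

ddr4_3200 = peak_bandwidth_gbs(3200, 64)      # one 64-bit channel
ddr5_4800 = 2 * peak_bandwidth_gbs(4800, 32)  # two 32-bit sub-channels

print(f"DDR4-3200: {ddr4_3200:.1f} GB/s per module")  # 25.6 GB/s
print(f"DDR5-4800: {ddr5_4800:.1f} GB/s per module")  # 38.4 GB/s
```

So the win is more bandwidth and more independent channels for the scheduler to spread accesses over, not lower latency.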
 

mitchman

Gold Member
Latency is very much dependent on the memory's clock frequency. 40ns on DDR4 at 3200 MT/s vs 50ns DDR5 at 4800 MT/s is very different (numbers made up for illustration only). The actual latency for the DDR5 in this example ends up lower because the memory runs so much faster. This has been the case for every memory generational upgrade.
 

DonkeyPunchJr

World’s Biggest Weeb
Latency is very much dependent on the memory's clock frequency. 40ns on DDR4 at 3200 MT/s vs 50ns DDR5 at 4800 MT/s is very different (numbers made up for illustration only). The actual latency for the DDR5 in this example ends up lower because the memory runs so much faster. This has been the case for every memory generational upgrade.
No this isn’t true. 40 nanoseconds is 40 nanoseconds. That is a measure of time.

you are probably confused because latency is usually listed in # of clock cycles and not nanoseconds. So e.g. DDR5 6400 CL32 would have the same latency as DDR4 3200 CL16.

so yeah, from what I’ve seen so far the latency is slightly worse for DDR5
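To make that conversion concrete, here's a small sketch (the CL40 for JEDEC DDR5-4800 is a common launch-kit spec, used here as an assumption):

```python
# Converting CAS latency (clock cycles) to actual time (nanoseconds).
# The memory clock runs at half the transfer rate (DDR = double data
# rate), so one clock cycle is 2000 / MT/s nanoseconds.

def cas_latency_ns(transfer_rate_mts: int, cas_cycles: int) -> float:
    cycle_time_ns = 2000 / transfer_rate_mts  # e.g. DDR5-4800: ~0.417 ns
    return cas_cycles * cycle_time_ns

print(cas_latency_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(6400, 32))  # DDR5-6400 CL32 -> 10.0 ns
print(cas_latency_ns(4800, 40))  # typical JEDEC DDR5-4800 CL40 -> ~16.7 ns
```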
 
Hm, Linus is calling them sub-channels. The new Windows 11 scheduler will have to be flawless. :messenger_sad_relieved: You have 12th-gen with multiple different core types (P + E) and new memory with 2 sub-channels per module instead of the usual single channel. Very excited for the reviews on the 4th and, more importantly, the DDR4 vs DDR5 comparison. Could it be that only some apps will take advantage of the new memory and CPU architecture at launch?

 

mitchman

Gold Member
No this isn’t true. 40 nanoseconds is 40 nanoseconds. That is a measure of time.

you are probably confused because latency is usually listed in # of clock cycles and not nanoseconds. So e.g. DDR5 6400 CL32 would have the same latency as DDR4 3200 CL16.

so yeah, from what I’ve seen so far the latency is slightly worse for DDR5
Would you believe an actual memory manufacturer then? https://www.crucial.com/articles/about-memory/difference-between-speed-and-latency

You are probably confused into believing latency is a fixed value at all speeds, which isn't the case, as demonstrated in the article above.
 
So where can we see the average clock cycle time for the current JEDEC DDR5-4800 standard? Why not disclose it on the product sale page for all to see?
 

DonkeyPunchJr

World’s Biggest Weeb
Would you believe an actual memory manufacturer then? https://www.crucial.com/articles/about-memory/difference-between-speed-and-latency

You are probably confused into believing latency is a fixed value at all speeds, which isn't the case, as demonstrated in the article above.
That article says exactly the same thing I wrote in my post.

And it says the way to compare latency is in nanoseconds, because that takes into account both CAS latency (which is listed in clock cycles) and the clock speed.

You said that you can't compare nanoseconds because 40ns isn't the same on memory of different clock speeds. That's incorrect, because nanoseconds already take clock speed into account.
 

mitchman

Gold Member
I wonder what would happen if AMD also brute-forced turbo boosting to never end, and didn't care about power consumption. Would they still be slightly behind Intel's new chips?
 

Chiggs

Gold Member
I wonder what would happen if AMD also brute-forced turbo boosting to never end, and didn't care about power consumption. Would they still be slightly behind Intel's new chips?

Or used DDR5, for that matter.

I think we both know the answer. :messenger_sunglasses:
 

winjer

Gold Member
I wonder what would happen if AMD also brute-forced turbo boosting to never end, and didn't care about power consumption. Would they still be slightly behind Intel's new chips?

You can do that with PBO.
But even with that, Zen3 would still be behind in a lot of applications.

We'll have to wait a few months for Zen3+ for AMD to give its answer.
Meanwhile, AMD is probably going to lower its prices to remain competitive.
 

mitchman

Gold Member
You can do that with PBO.
But even with that, Zen3 would still be behind in a lot of applications.

We'll have to wait a few months for Zen3+ for AMD to give its answer.
Meanwhile, AMD is probably going to lower its prices to remain competitive.
PBO disables boosting and just overclocks all cores, from what I could see on my 5950X when I tested it. So lower single-core performance, but higher multi-core performance. Now if they kept the boost and made it all-core, power consumption be damned, it would be comparable.
 

Dirk Benedict

Gold Member
PBO disables boosting and just overclocks all cores, from what I could see on my 5950X when I tested it. So lower single-core performance, but higher multi-core performance. Now if they kept the boost and made it all-core, power consumption be damned, it would be comparable.

Interesting. I have been wondering if I should turn PBO on or not. I'm still undecided. On one hand, it's a beast of a CPU. It's chewed through everything. Games in 4K while streaming to Twitch in 1080p/60fps, for example.
Probably something I'd do in the future with a hybrid cooling solution, but with the lower single-core... probably just best I keep it where I have it. Stable 4.8 GHz on a Chromax, 3 fans. :messenger_relieved:
 

hlm666

Member
PBO disables boosting and just overclocks all cores, from what I could see on my 5950X when I tested it. So lower single-core performance, but higher multi-core performance. Now if they kept the boost and made it all-core, power consumption be damned, it would be comparable.
It shouldn't be disabling boosting. When I was trying to overclock my 5950X with PBO, the boost did get worse, because more voltage was being thrown into the CPU, so it got hotter and then thermally throttled. It's hard to get a good OC with this CPU without some really good cooling or without spending a lot of time with the PBO curve optimiser. After many hours playing with lowering the voltage on each core and setting the voltage on the two main cores a bit higher, I got a small overclock (Cinebench score went from 25k to 28.5k); single core will boost to 5 GHz and the temp doesn't go above 80°C on AVX loads (ambient was 30°C at the time the tests were done).

If you are interested in playing with it this may get you started.
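If anyone wants to sanity-check whether single-core boost really drops once PBO is on, here's a minimal Linux-only sketch that polls the kernel's cpufreq sysfs interface (on Windows, HWiNFO or Ryzen Master shows the same data):

```python
# Watch per-core clocks while running a single-threaded load, once with
# PBO off and once with it on, to compare peak boost. Ctrl+C to stop.
import glob
import time

def current_clocks_mhz() -> list[float]:
    """Read every core's current frequency from cpufreq sysfs (kHz -> MHz)."""
    clocks = []
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"):
        with open(path) as f:
            clocks.append(int(f.read()) / 1000)
    return clocks

while True:
    clocks = current_clocks_mhz()
    print(f"max core: {max(clocks):.0f} MHz, avg: {sum(clocks) / len(clocks):.0f} MHz")
    time.sleep(1)
```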
 

mitchman

Gold Member
It shouldn't be disabling boosting. When I was trying to overclock my 5950X with PBO, the boost did get worse, because more voltage was being thrown into the CPU, so it got hotter and then thermally throttled. It's hard to get a good OC with this CPU without some really good cooling or without spending a lot of time with the PBO curve optimiser. After many hours playing with lowering the voltage on each core and setting the voltage on the two main cores a bit higher, I got a small overclock (Cinebench score went from 25k to 28.5k); single core will boost to 5 GHz and the temp doesn't go above 80°C on AVX loads (ambient was 30°C at the time the tests were done).

If you are interested in playing with it this may get you started.
Thanks, but I don't think I can really be bothered. It was nice when I used it for heavy compile tasks taking an hour, but now I just run it stock, and those compile tasks are distributed with goma to a Threadripper cluster.
 

b0bbyJ03

Member
Early adoption is hard for sure.
Also might want to double-check that the adapter plate actually applies the correct pressure, else you are likely gonna have a chip running very, very hot.
The Alder Lake IHS is shorter than previous generations.

[Image: Intel Alder Lake CPU LGA 1700 mounting pressure distribution comparison with AIO coolers]


[Image: Intel LGA 1700 Alder Lake CPU socket mounting specs]
Came here to ask about this. Anyone set up their chips yet? I've got everything in EXCEPT the adapter kit for my AIO. Wondering if I should wait till it comes in (it's on back order) or just use the current mounting hardware, which will work but, judging from these pics, won't be tight enough.
 
This is a noob question, but why did it take Intel so long to get the 10nm FinFET process working? Did they solve their own foundry manufacturing issues? Do they have their own foundry, or are they relying on TSMC? And how are they going to manufacture their 10nm+, 10nm+++, 10nm++++++++ processes from now on?
 

Saucy Papi

Member
Not going to lie, I'm strongly contemplating doing a new build next year with the i5-12400F just to see how well it performs. If I'm happy with the performance, I may just do "budget" builds going forward.
 