Intel Alder Lake (12000 Series)

I agree, and it's pretty similar to the view from DF when they reviewed the 3700X (24:15).


Outside of very high fps requirements like in e-sports, you're not that concerned with CPU performance except for some very rare drops (<0.01%) here and there. He calls them hotspots in the video.

These can be highly irritating, especially when you're playing a single player game and have to pass through the same areas again and again. A very good example is Swan's Pond in Fallout 4 where the fps dropped precipitously and you really needed a good CPU to keep it up.

In a typical review, even one done at 360p, you wouldn't see this issue, since the benchmark run usually wouldn't stop and explore that area but would only brush past it, if it touched it at all. But when actually playing the game, you'd tear your hair out.

I remember that FrameChasers guy during the 30-series announcement; he was quick to point out that the 2080 Ti overclocked rather well and to try a Doom comparison. Glad to see he's putting out this kind of content.

That's why I like to see benchmarks that target very specific, brutal areas, like Nakatomi in Warzone. Or just benchmark at 720p to let the CPUs hit their max so you can actually see the real differences between them. That kind of 720p testing is illuminating for problem areas that may not otherwise show up. Oh, and testing the platforms with memory that has been min/maxed to death is ideal.
 
To me it's like, WTF, why do they always benchmark Handbrake etc.? How many people actually use this software in their jobs? They should benchmark compiling; there are millions of people around the world who spend hours per week waiting on compiles to finish.
 
To me it's like, WTF, why do they always benchmark Handbrake etc.? How many people actually use this software in their jobs? They should benchmark compiling; there are millions of people around the world who spend hours per week waiting on compiles to finish.
I use Handbrake all the time to transcode my smartphone videos into a codec that is watchable on my TV, and to transcode older videos from H.264 (or older codecs) into H.265 to save storage.
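For batch converting a folder of old clips, a minimal sketch using HandBrake's command-line front end could look something like this (the folder names and the x265 quality value are just placeholders, and it assumes HandBrakeCLI is installed and on the PATH):

Code:
from pathlib import Path
import subprocess

SRC = Path("old_videos")      # hypothetical folder of H.264 (or older) clips
DST = Path("hevc_videos")     # output folder for the H.265 versions
DST.mkdir(exist_ok=True)

for clip in SRC.glob("*.mp4"):
    out = DST / (clip.stem + ".mkv")
    # -e x265 selects the software H.265 encoder, -q sets constant quality.
    subprocess.run(
        ["HandBrakeCLI", "-i", str(clip), "-o", str(out), "-e", "x265", "-q", "24"],
        check=True,
    )
    print(f"Transcoded {clip.name} -> {out.name}")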
 
I use Handbrake a lot as well, in addition to some CPU rendering. This CPU would be a poor choice for me.
 
Sure, I've used Handbrake as well, but what I mean is that very few people use it full-time, 40 hours a week, the way millions of coders around the world do.
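And if a reviewer did want a repeatable compile benchmark, a minimal sketch could be as simple as timing a clean rebuild of some fixed codebase (the source path is a placeholder, and it assumes a project that builds with plain make):

Code:
import multiprocessing
import subprocess
import time

SRC_DIR = "/path/to/some/project"     # hypothetical source tree to build
JOBS = multiprocessing.cpu_count()    # use every core, like a real dev machine would

# Clean first so every run does the same amount of work.
subprocess.run(["make", "-C", SRC_DIR, "clean"], check=True)

start = time.perf_counter()
subprocess.run(["make", "-C", SRC_DIR, f"-j{JOBS}"], check=True)
elapsed = time.perf_counter() - start

print(f"Full rebuild with -j{JOBS}: {elapsed:.1f} s")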
 
I imagine Handbrake testing is a popular attraction for the readership. But I think it would be more interesting to see generic FFmpeg tests, since it can be used on its own and is integrated into many projects like Handbrake. And of course they should test some pro video software.

Do some NVENC and QuickSync comparisons against the new hotness CPU cores while they're at it.
 
My next gaming system will most likely be equipped with an Intel CPU; very impressive what they have achieved after years of trailing behind AMD and Apple.

https://twitter.com/AhsanQu89750398/status/1456982596198576129

Intel transitioning to 7 nm or even 5 nm is going to be very interesting. Thanks, competition (AMD) :)

That might be overdramatizing things.

In terms of strictly efficiency for sustained MT loads, AMD only really pulled ahead once they transitioned to Zen 2 and TSMC's 7nm node. In terms of the overall capabilities of the desktop products, it wasn't until Zen 3 in late 2020 that they were ahead across the board. With Zen 2 it was a push depending on what you were looking for, and for gaming Intel was still comfortably ahead overall (with a few outlier titles to the contrary).

Intel's problem was that their fab progress stalled. Because they had for the longest time tied uarch to process, this effectively meant they kept the same design from 2015 to 2020 (and there's some argument that Skylake slipped from 2014 due to process issues, making it even older). But ADL is releasing into a window of opportunity: they've brought their design up to date against their competitor's year-old design, and they also have (roughly) process parity for now.

It's really more of a battle between Intel and TSMC going forward. If TSMC can better provide 5nm and future nodes to its clients (e.g. AMD) and Intel faces challenges on the foundry side, it'll again be tricky navigating forward.
 
Intel's problem is their P cores: while they perform OK and continue Intel's tradition of clocking to 11 if you can cool them, looked at holistically they aren't great (compared to the E cores, which look pretty good).

50% more decode, 100% more ROB, 33% more load/store ports, 50% more L1 cache, ~125% more L2 cache, one more execution port, ~100% more die area (incl. L1D/L1I), and all that nets ~5-10% more IPC than Zen 3. If Apple keeps Appling and AMD continues the path of improved perf-per-watt at iso-process plus 15-20% IPC a gen, at some point Intel's P cores are going to run out of road. To me nothing shows this gap more than branch prediction: even in the K8 glory days AMD was worse at branch prediction, and now they've been better for multiple generations.

In the consumer space it looks like Intel will get some breathing room before Zen 4 shows up, but Sapphire Rapids will have no such luxury.

I will be interested to see how large-core-count Gracemont vs Zen 4D (dense) plays out; no idea which one will be "better".

Do have to give credit to Intel in that, despite what I think of the Golden Cove core, they still make it work as a product, and the i5/i7 look like better options than the 5600/5800 as consumer CPUs!
 
Even with price cuts, the 12600K is cheaper here ($369 CAD) than the Ryzen 5800X ($499 CAD, down from $619 CAD). I know the motherboards for the 12 series are expensive, but I wouldn't be surprised if the Intel build came out cheaper with DDR4.
 
Interesting. Would be cool to see a site really dig into that. Equalize power and benchmark across multiple applications.

That same image shows the processor cluster at 45 W, and that's without RAM.

As has been said before, Cinebench is really not that great of a benchmark for M1, so it's hard to conclude anything from a single benchmark.

But competition is good!
 
Cinebench is really not that great of a benchmark for M1

As some have mentioned (LTT too, I think), you choose the benchmarks that fit your product and workload. It's not just that Cinebench 'isn't that good' on M1; I have seen other comparisons where the M1 Max lost to Windows-based hardware. One of them was exporting videos (Windows laptop hardware).
 
Even with price cuts, the 12600K is cheaper here ($369 CAD) than the Ryzen 5800X ($499 CAD, down from $619 CAD). I know the motherboards for the 12 series are expensive, but I wouldn't be surprised if the Intel build came out cheaper with DDR4.
Not yet. A fine B550 board is $80 ($60 low end), while Z690 boards are over $200 USD.
 
Do some NVENC and QuickSync comparisons against the new hotness CPU cores while they're at it


And also the compatibility.

AFAIK Intel QSV is the most widely supported, NVENC comes a close 2nd, while AMD AMF is a very, very distant 3rd.

And also the quality. AFAIK NVENC on the RTX 3000 GPUs currently has the best quality (comparable to the CPU encoder's fast preset, IIRC).
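A rough way to put numbers on that would be to push the same clip through each encoder exposed by ffmpeg and compare encode times (and then quality with something like VMAF); a minimal sketch, assuming an ffmpeg build with NVENC and QSV support and a placeholder test clip:

Code:
import subprocess
import time

INPUT = "test_clip.mp4"  # placeholder clip, identical for every encoder tested

# Encoder names as exposed by ffmpeg; availability depends on how ffmpeg was
# built and on the installed GPU/driver.
ENCODERS = {
    "cpu_x264": ["-c:v", "libx264", "-preset", "medium"],
    "nvenc": ["-c:v", "h264_nvenc"],
    "quicksync": ["-c:v", "h264_qsv"],
}

for name, args in ENCODERS.items():
    start = time.perf_counter()
    # -an drops audio so only video encode speed is measured.
    subprocess.run(
        ["ffmpeg", "-y", "-i", INPUT, *args, "-an", f"{name}.mp4"],
        check=True,
    )
    print(f"{name}: {time.perf_counter() - start:.1f} s")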
 
Sure, I've used Handbrake as well, but what I mean is that very few people use it full-time, 40 hours a week, the way millions of coders around the world do.
IIRC most PC gamers play something like 8 hours a week on their PCs, yet there are always a lot of video game benchmarks in CPU reviews.

I do get that you're frustrated by the lack of tests on popular compilers, but so are my colleagues who keep their PCs working 24/7 on finite element method simulations and would greatly benefit from knowing which architecture is best, or my other colleagues working on multibody simulation software, or the others who run their custom academic code in Matlab/Simulink.

Point being: reviewers can't really satisfy everyone. At least for GPUs there's SPECViewPerf but even that is far from an ideal pool of results.
 
Point being: reviewers can't really satisfy everyone. At least for GPUs there's SPECViewPerf but even that is far from an ideal pool of results.
SPEC CPU is supposed to be representative of several CPU-bound workloads. Of course it's lacking in some areas (for instance, code footprints are too small, there's no JIT, etc.), but it's quite good for getting an idea of CPU performance.
 
As some have mentioned (LTT too, I think), you choose the benchmarks that fit your product and workload. It's not just that Cinebench 'isn't that good' on M1; I have seen other comparisons where the M1 Max lost to Windows-based hardware. One of them was exporting videos (Windows laptop hardware).

It appears Cinebench uses Intel Embree and I can see that it is currently not optimised properly for Apple M1 (or aarch64 for that matter).

The current implementation uses an SSE-to-NEON translation layer instead of native Arm SIMD, it takes the older SSE codepath instead of AVX2, and it appears it isn't optimised for the four 128-bit SIMD units present in the M1.

It is clear from this git pull request that the Open Source Apple support team is testing new code paths.

The latest update from 16 hours ago states:

Code:
Developer-Ecosystem-Engineering
Have some changes ready, working on reviews and rebasing, will update the PR when completed.

So we are looking at heavily Intel-optimised x86 ray tracing kernels made and maintained by ... well, Intel.

I would honestly disregard Cinebench as a legitimate benchmark for now when comparing across different architectures like aarch64 and x86.

It is clear they are not doing the same computational workload to render the scene.
 