Intel Alder Lake (12000 Series)

It appears Cinebench uses Intel Embree and I can see that it is currently not optimised properly for Apple M1 (or aarch64 for that matter).
I had no clue that was the case, so I did some investigating (it doesn't mean I didn't trust what you wrote; I just had to know more) and I found this: https://www-heise-de.translate.goog...sl=auto&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=nui
Maxon confirmed to us that the ARM versions of Cinema 4D (R23) and Cinebench R23 use an ARM port from Embree 3.11.
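
For context on why x86-tuned kernels lose something in translation: the aarch64 port of Embree is commonly described as mapping SSE intrinsics onto NEON via an sse2neon-style layer. Here's a minimal sketch of that idea; the wrapper names are hypothetical, not Embree's actual code:

```cpp
// Sketch: translating SSE operations to NEON on aarch64.
// Illustrative wrappers only -- the real port reportedly uses an
// sse2neon-style translation layer, not these exact functions.
#include <arm_neon.h>

// Stand-in for the x86 __m128 type: four packed floats.
using f32x4 = float32x4_t;

// x86 code calls _mm_add_ps(a, b); on aarch64 this maps 1:1
// onto the equivalent NEON instruction, so it's essentially free.
static inline f32x4 mm_add_ps(f32x4 a, f32x4 b) {
    return vaddq_f32(a, b);
}

// Not everything maps 1:1. _mm_movemask_ps (pack the sign bit of
// each lane into an int) has no single NEON equivalent, so the
// emulation costs several instructions -- one source of the
// "not optimised properly" overhead discussed above.
static inline int mm_movemask_ps(f32x4 a) {
    uint32x4_t sign = vshrq_n_u32(vreinterpretq_u32_f32(a), 31);
    static const int32_t shifts[4] = {0, 1, 2, 3};
    uint32x4_t shifted = vshlq_u32(sign, vld1q_s32(shifts));
    return vaddvq_u32(shifted);  // horizontal add of the 4 lanes
}
```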

It is clear from this Git pull request that Apple's open-source support team is testing new code paths.
They claim an 8% speed improvement (which might not fully translate into an 8% score increase in Cinebench, depending on how much of the runtime is spent in Embree code). That's worth almost a year of hardware improvements.
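
The hedged claim in parentheses can be made precise with Amdahl's law. A quick sketch; the fractions below are made-up illustrations, not measurements:

```cpp
#include <cstdio>

int main() {
    const double kernel_speedup = 1.08;  // the claimed 8% inside Embree itself
    // Hypothetical fractions of total render time spent in Embree code.
    const double fractions[] = {0.50, 0.75, 0.90, 1.00};
    for (double f : fractions) {
        // Amdahl's law: overall = 1 / ((1 - f) + f / s)
        double overall = 1.0 / ((1.0 - f) + f / kernel_speedup);
        std::printf("f = %.2f -> overall speedup = %.1f%%\n",
                    f, (overall - 1.0) * 100.0);
    }
    return 0;
}
// f = 0.50 -> ~3.8%, f = 0.75 -> ~5.9%, f = 0.90 -> ~7.1%, f = 1.00 -> 8.0%
```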

So we are looking at heavily optimised x86 ray-tracing kernels made and maintained by ... well, Intel.
That's unavoidable and I wouldn't blame Intel for that. Apple and other Arm houses still have a lot of work to do to port and tune software.

I would honestly disregard Cinebench as a legitimate benchmark for now when comparing across different architectures like aarch64 and x86.

It is clear they are not doing the same computational workload to render the scene.
I definitely agree. Such comparisons should at least mention it.
 
I had no clue that was the case, so I did some investigating (it doesn't mean I didn't trust what you wrote; I just had to know more) and I found this: https://www-heise-de.translate.goog...sl=auto&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=nui

They claim an 8% speed improvement (which might not fully translate into an 8% score increase in Cinebench, depending on how much of the runtime is spent in Embree code). That's worth almost a year of hardware improvements.

That's unavoidable and I wouldn't blame Intel for that. Apple and other Arm houses still have a lot of work to do to port and tune software.

I definitely agree. Such comparisons should at least mention it.

Trust but verify :yes:

It's always good to do some due diligence and research.

I agree that it is to be expected, especially when you diverge from the norm. Nothing is optimised out of the gate unless specifically made for the platform.
 
Found this
[image: compile-time benchmark chart]


So it's a lot quicker than Intel's previous best machine for compilation, but even with DDR5 it's an extra 90 seconds of waiting between compiles compared to a 5950X.
 
Some fun observations from my build.

i5 12600K
Asus Prime Z690M Plus D4
32GB DDR4 4000
Coolermaster Geminii S from the dark ages (it's this one but with a 1155 bracket)
GTX 1080
X-Fi Ti
Win 11

I am running mostly default BIOS settings: enabled XMP for the RAM, tuned fan speeds, disabled SATA, enabled secure boot and wake-on-LAN. Power limits come unlocked by default, but I think the turbo ratios are stock... It has that shady multi-core enhancement setting enabled. Lots of new mysterious settings. There was a BIOS update with "improved performance" in the release notes compared to the launch BIOS.

It's a cool and quiet system until I play around with Prime95. ;) With a 10-thread blend test it will hit 150W (HWiNFO) and eventually do some thermal throttling, but with a 2- or 4-thread test it thermally throttles much sooner even though it is using less total power, even with AVX2 disabled. The P cores apparently become extreme hotspots, and the old cooler isn't even close to up to the task.
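
That pattern fits a power-density explanation: throttling tracks watts per square millimetre in the active cores, not total package watts. A toy illustration; every number here is an assumed ballpark for the sake of the arithmetic, not a measurement:

```cpp
#include <cstdio>

int main() {
    // Assumed, illustrative numbers -- not measured values.
    const double blend_10t_watts = 150.0;               // spread over 6 P + 4 E cores
    const double blend_10t_area  = 6 * 7.0 + 4 * 1.8;   // rough mm^2 guesses per core
    const double small_2t_watts  = 55.0;                // concentrated in ~2 P cores
    const double small_2t_area   = 2 * 7.0;

    std::printf("10-thread blend: ~%.1f W/mm^2\n", blend_10t_watts / blend_10t_area);
    std::printf(" 2-thread test:  ~%.1f W/mm^2\n", small_2t_watts / small_2t_area);
    // Even at far lower total power, the 2-thread case can show a higher
    // W/mm^2 in the active cores, so those cores throttle sooner.
    return 0;
}
```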

I also tried undervolting via core offset. It has been Prime95-stable so far at -0.025 V, but -0.050 V locks up quickly at boot.
 
C'mon Intel, release this in my country and push AMD to lower their prices.

The ~50% price increase from Ryzen 5 3600 to 5600 is ridiculous.
 
Meanwhile, the CPU-mined crypto Raptoreum is poised to tighten supply and push prices up to astronomical levels.

Is there any real substance to Raptoreum, or is it another altcoin that could go to zero overnight? I really have no idea; I had never heard of it until yesterday.


One thing I can't figure out is exactly how dependent this crypto is on L3. The news pieces that popped up yesterday seem to suggest that Zen 3 V-Cache chips will be profitability monsters for this crypto because of how much L3 they'll have.
But if L3 were that much of a bottleneck for mining it, the R9 5950X should perform about the same as the R9 5900X, considering they share the same 64MB of L3. Yet the 5950X, with 33% more cores, gets a 20% higher hashrate.
Sure, there are diminishing returns with the additional cores in this case, but I also don't see how tripling the L3 to 192MB on the 16-core version is going to do wonders for the hashrate.
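
One way to put a number on that diminishing return: fit the two data points to a toy power-law model, hashrate ∝ cores^α. A crude sketch; the 16-vs-12-core and 20% figures come from the comparison above, and the model itself is just an assumption:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // 5950X has 33% more cores than the 5900X (16 vs 12, same 64MB L3)
    // but reportedly only ~20% more hashrate.
    const double core_ratio = 16.0 / 12.0;   // 1.33x
    const double hash_ratio = 1.20;          // 1.20x

    // If hashrate ~ cores^alpha, then alpha = log(hash) / log(cores).
    double alpha = std::log(hash_ratio) / std::log(core_ratio);
    std::printf("scaling exponent alpha = %.2f\n", alpha);  // ~0.63

    // alpha well below 1 means cores alone aren't the limit, but the fact
    // that it's well above 0 means L3 isn't the whole story either: a purely
    // L3-bound workload would give ~0% gain at the same 64MB of cache.
    return 0;
}
```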
 
For some reason, reviews aren't covering Supreme Commander: Forged Alliance anymore. Boggles the mind, really. ;) So here we have the 12600K with a +2 sim-speed advantage over a Ryzen 3900X in a big 3-AI-vs-2-human game of Forged Alliance Forever, and I saw it maintain a +1 to +3 advantage throughout the game. Simulation speed is the main bottleneck in big games, and it all runs in one thread. My previous 7600K @ 4.9 GHz was somewhere near parity with the 3900X in our games.

I never saw the E cores being used, but the game/Windows was moving threads around on the other cores.
[screenshot: FAF.jpg]
 
Is Supreme Commander still such a CPU hog? I remember that game being an absolute monster for any CPU back in the day. Fun game as well, especially on LAN.
 
Is Supreme Commander still such a CPU hog? I remember that game being an absolute monster for any CPU back in the day. Fun game as well, especially on LAN.
It's mind-blowing that we used to play this on single-core CPUs. But I also remember it running at -8 sim speed in smaller-scale games back then. It used to turn into a slideshow in multiplayer games.

Today the scope and scale of games are bigger because of new maps and mods, so it's more demanding than it was back then, and with many units and/or AIs in play it will take as much CPU as you can give it. Modern AIs are actually vastly more interesting and yet less demanding than the stock game AI (which was somewhat broken/unfinished), but an AI is still much more demanding than a human player.

Players with notebooks can be problematic because of throttling and power limits. Inadequate cooling can cause games to slow down considerably later in the match as CPU load increases.

As with its predecessor Total Annihilation, it's a shame the source code hasn't been released and probably never will be.
 
It's mind-blowing that we used to play this on single-core CPUs. But I also remember it running at -8 sim speed in smaller-scale games back then. It used to turn into a slideshow in multiplayer games.

Today the scope and scale of games are bigger because of new maps and mods, so it's more demanding than it was back then, and with many units and/or AIs in play it will take as much CPU as you can give it. Modern AIs are actually vastly more interesting and yet less demanding than the stock game AI (which was somewhat broken/unfinished), but an AI is still much more demanding than a human player.

Players with notebooks can be problematic because of throttling and power limits. Inadequate cooling can cause games to slow down considerably later in the match as CPU load increases.

As with its predecessor Total Annihilation, it's a shame the source code hasn't been released and probably never will be.

I had a QX6600 OC'd heavily and it struggled in this game when things got busy with many units on LANs. Never tried this on a single core :p
Supreme Commander should be used in today's CPU reviews; it's another Crysis for benches. Both launched in 2007 lol.
 
I had a QX6600 OC'd heavily and it struggled in this game when things got busy with many units on LANs. Never tried this on a single core :p
Supreme Commander should be used in today's CPU reviews; it's another Crysis for benches. Both launched in 2007 lol.
I initially played Supreme Commander (the original, not FA) on a Pentium M 770 notebook and an Opteron 165 desktop. It ran terribly. I discovered that disabling the game's sound via the command line helped a bit lol. Later I had a Q6600 and ran it at around 3.2 GHz. It was a good CPU for the game, for the time.

But this 12600K is the biggest jump since Sandy Bridge, I think. More than 4 cores does nothing for the game, but more per-core performance does a lot. This CPU screams, and it is noticeably faster than the 4.9 GHz 7600K at everything, really.

I swapped in the Scythe Mugen 5 Rev. B that I had on my other PC (an 8600K) and am now running a 5.0 GHz all-core turbo with a 0.01 V undervolt. Thermals seem manageable outside of Prime95.
 
I've been playing around with forcing Gear 1 (1:1) memory mode at higher speeds. I only recently learned about the Gear 2 mode, which helps with memory compatibility at high speeds but has a large impact on memory performance. I've been able to run up to DDR4-3700 at 1:1 so far, and it's considerably faster in the 7z benchmark than Gear 2 @ 4000. More tweaking and experimenting to do!
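
For anyone who hasn't met the gears yet: Gear 1 runs the memory controller 1:1 with the DRAM clock, while Gear 2 runs it at half the DRAM clock, trading latency for stability at high transfer rates. A quick sketch of the resulting controller clocks:

```cpp
#include <cstdio>

// Memory controller (IMC) clock for a given DDR4 transfer rate and gear.
// "DDR4-3700" means 3700 MT/s -> 1850 MHz DRAM clock (double data rate).
static double imc_mhz(int mt_per_s, int gear) {
    return (mt_per_s / 2.0) / gear;
}

int main() {
    std::printf("DDR4-3700 Gear 1: IMC at %.0f MHz\n", imc_mhz(3700, 1));  // 1850
    std::printf("DDR4-4000 Gear 2: IMC at %.0f MHz\n", imc_mhz(4000, 2));  // 1000
    // Despite the higher transfer rate, Gear 2 halves the controller clock,
    // which is why 3700 @ 1:1 can beat 4000 @ Gear 2 in latency-sensitive
    // loads like the 7z benchmark.
    return 0;
}
```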

Edit: I've decided to leave it at the default XMP settings in the end. The tweaking ends up with fairly similar results.
 