Intel i9-7900X CPUs

What would non-programmers do with all that power?
Big core counts are nice for some infrequent asset tasks like cooking data, (re)converting textures, and (re)converting distance fields (sculpted high-poly mesh -> SDF conversion is pretty slow).
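As a rough illustration of why that last one likes cores: a brute-force distance-field bake tests every voxel against every element of the mesh, so the work is O(voxels × elements) and embarrassingly parallel across grid slabs. The sketch below is deliberately simplified (unsigned distance against the vertex cloud only, all names made up), just to show the shape of the workload:

```cpp
// Deliberately simplified distance-field bake: unsigned distance from each
// voxel center to the nearest mesh vertex (a real bake would use triangles
// and compute a sign). All names are made up for illustration.
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <thread>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist2(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// res^3 voxels over the unit cube; verts = the sculpted mesh's vertices.
std::vector<float> bake_distance_field(const std::vector<Vec3>& verts, int res)
{
    std::vector<float> field(res * res * res);
    auto worker = [&](int z0, int z1) {
        for (int z = z0; z < z1; ++z)
            for (int y = 0; y < res; ++y)
                for (int x = 0; x < res; ++x) {
                    Vec3 p = { (x + 0.5f) / res, (y + 0.5f) / res, (z + 0.5f) / res };
                    float best = FLT_MAX;
                    for (const Vec3& v : verts)            // the slow O(elements) part
                        best = std::min(best, dist2(p, v));
                    field[(z * res + y) * res + x] = std::sqrt(best);
                }
    };
    // One slab of the grid per hardware thread -- this is where extra cores pay off.
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back(worker, (int)(res * t / n), (int)(res * (t + 1) / n));
    for (auto& th : pool) th.join();
    return field;
}
```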

A high-clocked 6-core CPU with a brand new architecture would have been the perfect upgrade, since it was supposed to fit the same socket (Wikipedia still says so): 50% more cores + small IPC gains + small clock gains over the 6700K. But if you need to buy a new motherboard and potentially a new cooler (we have fancy water coolers), the upgrade becomes much more expensive and complex. I hope this rumor is false, but if it is true, we might as well be future-proof and get 12-core Threadrippers instead. I have heard that many AAA devs using UE4 have 12- or 18-core Xeons (single or dual socket). With a consumer CPU, you hit some productivity bottlenecks.
 
I had some more fun with the AVX2 / AVX512 / GPU fractal zoomer.
- AVX512 is slightly faster while consuming less power, thanks to loop unrolling and using 32 registers instead of 16 (see the sketch after this list).
Now 61.5 frames per second at 4K for the initial display (on 8 cores at 4 GHz with AVX512; max power is now 198 W instead of 208 W; there is only one fastest version now).
- AVX2 is now 10% faster (I'm curious how that runs on Zen), about half the speed of AVX512.
- More sane zooming, around the mouse pointer instead of the center of the screen.
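Not the zoomer's actual code (I haven't seen it), but a minimal sketch of the kind of AVX512 inner loop such a Mandelbrot zoomer might use, iterating 8 double-precision points per 512-bit register; compile with -mavx512f, everything here is made up for illustration:

```cpp
#include <immintrin.h>

// Iterate 8 Mandelbrot points at once; cr/ci hold the real/imaginary parts of
// 8 pixels, the return value holds per-lane iteration counts before escape.
static __m512i mandel8_avx512(__m512d cr, __m512d ci, int max_iter)
{
    __m512d zr = _mm512_setzero_pd();
    __m512d zi = _mm512_setzero_pd();
    __m512i iter = _mm512_setzero_si512();
    const __m512d four = _mm512_set1_pd(4.0);
    const __m512i one  = _mm512_set1_epi64(1);

    for (int i = 0; i < max_iter; ++i) {
        __m512d zr2 = _mm512_mul_pd(zr, zr);
        __m512d zi2 = _mm512_mul_pd(zi, zi);
        // Lanes still inside the bailout radius (|z|^2 <= 4).
        __mmask8 alive = _mm512_cmp_pd_mask(_mm512_add_pd(zr2, zi2), four, _CMP_LE_OQ);
        if (!alive)
            break;                                   // all 8 points escaped
        // z = z^2 + c, counting iterations only for the lanes still alive.
        __m512d zrzi = _mm512_mul_pd(zr, zi);
        zi = _mm512_add_pd(_mm512_add_pd(zrzi, zrzi), ci);
        zr = _mm512_add_pd(_mm512_sub_pd(zr2, zi2), cr);
        iter = _mm512_mask_add_epi64(iter, alive, iter, one);
    }
    return iter;
}
```

The same loop with AVX2 only gets 4 doubles per register and 16 architectural registers instead of 32, which lines up with the roughly 2x gap reported above.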
 

I can't run the exe for whatever reason; 1.8 works fine though. I'm getting about 2200 fps the moment it starts.
 
What CPU and GPU? Can you still switch to GPU (G key)?

Ryzen 1700 + 1080 Ti, it starts in GPU mode and this is what I'm getting:

[Screenshot attached: desktop08.06.2017-16.luluw.png]
 
I meant GPU mode in the 2.2 version. Anyway, there must be something wrong in CPU mode with the dynamic texture updating (used to copy the CPU buffer to the GPU). No idea what. I tested on 4 different systems (but no Ryzen).
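For reference, the usual dynamic-texture upload pattern, assuming D3D11 (I don't know which API the zoomer actually uses; the function and buffer names below are made up), looks like this:

```cpp
#include <cstdint>
#include <cstring>
#include <d3d11.h>

// Copy a CPU-rendered fractal buffer into a dynamic texture each frame:
// Map with WRITE_DISCARD, copy row by row honoring RowPitch, then Unmap.
// The texture must have been created with D3D11_USAGE_DYNAMIC and
// D3D11_CPU_ACCESS_WRITE.
void UploadCpuBuffer(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex,
                     const uint32_t* pixels, UINT width, UINT height)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(ctx->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return;

    auto* dst = static_cast<uint8_t*>(mapped.pData);
    for (UINT y = 0; y < height; ++y) {
        // RowPitch can be larger than width * 4, so copy one row at a time.
        std::memcpy(dst + y * mapped.RowPitch, pixels + y * width,
                    width * sizeof(uint32_t));
    }
    ctx->Unmap(tex, 0);
}
```

Ignoring the driver-returned RowPitch is one classic way this path works on some systems and produces garbage on others.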
 

I can't even run the 2.2 exe :D
 
Did you link the wrong article? That one is from June 2nd and deals with the LCC-SKX, while the most recent round of news about die shots was about the MCC version of Skylake-X. I'm asking because the single fact that SKX uses two PCBs on top of each other has already been mentioned/shown in this very thread, and on its own it wouldn't add much uniqueness to the link.
 

With a 140 W limit set in the UEFI, and the 7900X behaving accordingly, for the initial view of the zoomer I'm getting:
35.x fps in AVX2 mode (208 W at the wall, 140 W CPU, 3.6 GHz)
61.x fps in AVX512 mode (211 W at the wall, 140 W CPU, 3.1-3.2 GHz)
27.x fps in GPU mode (Titan X-P, 220 W at the wall, 7 W CPU, 1.05 GHz)

Maybe I can try TR later.

Edit: corrected the wall wattages; they were higher before because the GFX card did not leave its high-power mode.

Edit 2: No-go on the Threadripper 1950X; it does not even start (v2.3).
 
Maybe Intel could start enabling AVX512 on their desktop chips as well then. All this artificial market segmentation really accomplishes is killing adoption and creating additional headaches for everyone.
 
What do you mean? AFAIK AVX512 is only integrated in their Skylake-SP series, not in Kaby Lake, Kaby Lake-X or even Coffee Lake. In this case, again AFAIK, there's no artificial market segmentation, as an exception to the rule.
 
AVX512 is in Skylake (and successors, I assume) from what I understand; there's supposedly a Skylake Xeon where it is enabled. Anyway, even if it isn't in any of the desktop-class chips at all, that's still market segmentation, which hurts adoption with little benefit to anyone except maybe Intel, who may earn a few extra bucks here and there from the odd person who needs AVX512 badly enough to shell out for the more expensive chips. There can't be very many of those people, though, compared to the entire x86 market.
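To make the headache concrete: anything that wants the fast path has to probe for it at runtime and carry fallbacks. A minimal sketch using GCC/Clang builtins (the render_* functions are placeholders, not from any real project):

```cpp
#include <cstdio>

// Placeholder kernels standing in for real AVX-512 / AVX2 / scalar code paths.
static void render_avx512() { std::puts("using the AVX-512 kernel"); }
static void render_avx2()   { std::puts("using the AVX2 kernel"); }
static void render_scalar() { std::puts("using the scalar fallback"); }

// Pick the widest ISA the running CPU actually supports.
void render_best_available()
{
    if (__builtin_cpu_supports("avx512f"))
        render_avx512();
    else if (__builtin_cpu_supports("avx2"))
        render_avx2();
    else
        render_scalar();
}

int main() { render_best_available(); }
```

Every SKU split adds another branch like this, and most software simply never bothers to write it, which is the adoption problem.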
 
Hm, interesting, but I haven't heard of this. Skylake is a 2-year-old mainstream chip; Skylake-SP/X is 2 months old, and their Xeon brethren are not much older. I find it quite logical that in the meantime you could add broader vector units to a design and also add to the L2 cache.

Edit: You could look at the die shots of both chips to determine whether or not they have the same vector units. A quick Google search only yielded some of the known rumor-mongering sites, and people not understanding that the current Xeon Phi is not at all Skylake-based, as sources for "Skylake has AVX512". But maybe I googled too quickly.
 