LuxMark v3.0beta2

Wow. The power use of vector units is... impressive. My Haswell ULT does the benchmark at 1.1GHz on CPU cores because it runs into the 15W power limit. In many typical applications, it has no trouble staying at 2.4GHz with high loads on both cores.

Can image errors appear because I've had some other application in the foreground, resized the window, etc?
How can I find results for a specific GPU on luxmark.info?
 
Scores: 16609, 5871, 1674 (LuxMark v3.1 beta 1, GTX 980 Ti)

I messed around with Lux in Blender a couple of years ago on a GTX 285. In the first test, the image looks quite clean after just a few seconds on this card. I'll have to grab Blender again and have a play.
 
Wow. The power use of vector units is... impressive. My Haswell ULT does the benchmark at 1.1GHz on CPU cores because it runs into the 15W power limit. In many typical applications, it has no trouble staying at 2.4GHz with high loads on both cores.
Scratch that. It seems I had accidentally used CPU+GPU mode :D
 
Too much compute compared to what can be fed into and out of the CUs. Scaling compared to R9 390X is far less than ideal.
 
Too much compute compared to what can be fed into and out of the CUs. Scaling compared to R9 390X is far less than ideal.

It is a very strange result: they have scaled the number of ALUs but not some other kinds of resources (cache size, etc.). It looks like a hint of some architectural problem (or maybe a problem in the OpenCL driver... not exactly an uncommon event).

Honestly, the current results don't match the expected performance at all.
 
It is a very strange result: they have scaled the number of ALUs but not some other kinds of resources (cache size, etc.). It looks like a hint of some architectural problem (or maybe a problem in the OpenCL driver... not exactly an uncommon event).

Honestly, the current results don't match the expected performance at all.

Agreed, but I am not the only one seeing this behaviour; my colleague at golem.de also gets this mediocre scaling. Might it be memory related?
 
So the current suspects are memory and the compiler. Are there no tweaks that can be done on the application side, @Dade?

Maybe, but the very first requirement is to understand what is different from a 4870/5870/7970/290X (i.e. the AMD GPUs I have tested directly, all of which have shown the expected increase in performance with the increased number of cores). To get a nearly zero increase in performance despite a noticeably higher core count, something must be going seriously wrong (on the software or hardware side).

Running LuxMark under CodeXL could shed some light on the problem.
 
LuxMark v3.1 has been released: http://www.luxrender.net/forum/viewtopic.php?f=34&t=12359

What is new in v3.1 ?

- The new LuxRender v1.5 render engine. Among other features, it includes some OpenCL optimizations suggested by NVIDIA to the LuxRender project. Because of the general score improvements in v3.1, it is not fair to compare LuxMark v3.0 results with LuxMark v3.1 results;

- OpenCL "overclocking" (OpenCL C compiler options: -cl-fast-relaxed-math -cl-mad-enable -cl-no-signed-zeros);

- a new "OpenCL Compiler Options" menu that lets the user enable/disable individual compiler options. By default, the following options are enabled: "-cl-fast-relaxed-math -cl-mad-enable -cl-no-signed-zeros".
"-cl-strict-aliasing" is not enabled by default because the Intel compiler is broken and does not support this standard option;

- a new command line --ext-info option (http://www.luxrender.net/forum/viewtopic.php?f=8&t=12278#p115645);

- a fix for OpenCL device with weird names (http://www.luxrender.net/forum/viewtopic.php?f=34&t=11585&start=50#p115646);
 
The GPU is overclocked by 8% and the memory by 12%.

[Screenshot: lux.jpg]
 