LuxMark v3.0beta2

I've already posted my benchmark result on luxmark.info
On the simple scene, CPU+GPU is actually slower than GPU-only on Kaveri.
Looking at the numbers, I think I need to buy a dGPU.

Btw, I've done almost all the benchmarks, except the CPU-only benchmark on the medium scene. I did run the CPU-only OpenCL benchmark on the complex scene and it failed (a score of less than 200, and it failed the image validation).

Using an A10-7850K only.

Edit: rant a bit....
Just checked the scores. The top-end Kaveri (overclocked, too) is beaten by a Phenom II X6. Unless games (or other applications) start to utilize the GPU for compute, all that GPU silicon will be wasted if I buy a dGPU, and I'll be left with a crappy CPU. AMD either needs to add more cores, drop this CMT approach, or start calling their 4-core Kaveri a 2-module Kaveri. It certainly doesn't perform like a true 4-core CPU.
 

Are the clocks sustained during the benchmark? I can tell you my 4930K takes a hit at 100% usage on 12 threads.

That said, I have started using LuxRender (the SLG OpenCL path) for Blender and 3ds Max... a good alternative (and free).

I was hoping AutoCAD could be updated (complex renders need a serious CPU farm).
 
I believe so. I didn't pay attention to that, but when using other programs to stability-test my OC (stuff like Prime95), my PC managed to sustain its clock speed without throttling. I've used OCCT, which can push the CPU and GPU at the same time, and it was still stable, so I believe running CPU+GPU for 2 minutes should be doable without throttling.
Doesn't recent AutoCAD include Mental Ray? Isn't that fast enough? I know it isn't the fastest, but it's much better than whatever their old renderer was. Btw, does LuxRender depend on GPU VRAM, or can it use system RAM? I don't think my scenes would fit into GPU VRAM (my work PC has 2GB of VRAM), because rendering a scene usually uses more than 4GB of RAM (it was very painful in the 32-bit days).
 
It's still the CPU that gets used; it's quite fast for small renders (a bit like Cinema 4D). The worst is when you do a video. I have to say it's so complementary with 3ds Max that you can work with the same files in both, and do the materials, lights, texture painting and rendering in Max (you just need to keep in mind that you'll be working with meshes there, not solids like in AutoCAD).

I dream of the day we get one piece of software that contains both 3ds Max and AutoCAD.

It's a good question for Lux, I don't know; it should be the VRAM since you use the GPU, but I didn't get the feeling it goes really high. To be honest, I discovered Lux and Blender at the same time.

I've used AutoCAD since 1991, from the DOS versions (8, 10 and 12) up to 2015, so learning Blender and 3ds Max was a bit of a pain at the start. lol.
 
Hi Dade, thanks for the heads up on the new version.

I don't know if you've had a chance to do much testing against NVIDIA parts, but I'm finding that kernel compilation is behaving very oddly. LuxBall works fine, but the other two tests aren't having their kernels compiled correctly. 9 times out of 10 kernel compilation basically hangs; LuxMark is still alive, but even 20 minutes later it's still maxing out a CPU core working on it, and is up to 20GB of memory usage. I've been able to get the Hotel scene to compile once (with a kernel exactly 8MB in size), and I'm convinced that was a fluke.

This is occurring on both a GTX 980 running NVIDIA's latest drivers and a GTX Titan in a separate machine running a slightly older driver build.
 
Btw, does LuxRender depend on GPU VRAM, or can it use system RAM? I don't think my scenes would fit into GPU VRAM (my work PC has 2GB of VRAM), because rendering a scene usually uses more than 4GB of RAM (it was very painful in the 32-bit days).

LuxRender has 3 rendering modes:

- CPU-only (aka C++ mode in LuxMark): it uses only the CPU and system RAM.

- Hybrid CPU/OpenCL: it uses the GPU only for ray/triangle intersection, so only the geometry has to be stored in GPU RAM (i.e. no materials, no texture maps, etc.). This rendering mode is not exposed in LuxMark; it was the very first kind of GPU acceleration introduced in LuxRender. It offers only a modest speedup and may even be removed in the future because a full OpenCL implementation is a LOT faster.

- Full OpenCL: it uses any available OpenCL device (i.e. CPU, GPU, etc.) and requires a complete copy of the scene (geometry, materials, lights, texture maps, etc.) to be stored in each device's memory. It is the main benchmark mode used in LuxMark.

Side note: the data are stored in GPU RAM in a quite compact format; you can store 7-8 million triangles in about 1GB of GPU RAM. Texture maps are usually the main problem when working with GPU renderers, but if your model is coming from AutoCAD, you probably have far more triangles than texture maps.
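As a rough back-of-the-envelope check (the bytes-per-triangle figure below is just derived from the 7-8 million triangles per GB mentioned above, not LuxRender's real memory layout, and the model size is made up):

Code:
/* Quick estimate of the GPU RAM needed for the geometry of a scene,
 * based on the "7-8 million triangles in about 1GB" rule of thumb. */
#include <stdio.h>

int main(void) {
    const double bytesPerTriangle = 1e9 / 7.5e6; /* ~133 bytes/triangle (midpoint of 7-8M per GB) */
    const double triangles = 4e6;                /* hypothetical AutoCAD model */

    printf("~%.0f bytes/triangle -> %.2f GB of GPU RAM for %.0fM triangles\n",
           bytesPerTriangle, triangles * bytesPerTriangle / 1e9, triangles / 1e6);
    printf("Full OpenCL mode stores a complete copy of the scene on each device,\n"
           "so leave headroom for texture maps, materials and the film buffer.\n");
    return 0;
}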
 
I don't know if you've had a chance to do much testing against NVIDIA parts, but I'm finding that kernel compilation is behaving very oddly. LuxBall works fine, but the other two tests aren't having their kernels compiled correctly. 9 times out of 10 kernel compilation basically hangs; LuxMark is still alive, but even 20 minutes later it's still maxing out a CPU core working on it, and is up to 20GB of memory usage. I've been able to get the Hotel scene to compile once (with a kernel exactly 8MB in size), and I'm convinced that was a fluke.

This is occurring on both a GTX 980 running NVIDIA's latest drivers and a GTX Titan in a separate machine running a slightly older driver build.

Unfortunately, I don't have an NVIDIA GPU to test with, but other LuxRender developers and users are using NVIDIA GPUs (GTX 560, GTX 780, GTX 970 to name a few) and they have not reported any problems. Everything seems to work fine for Fellix with his NVIDIA GPU too (https://forum.beyond3d.com/threads/luxmark-v3-0beta2.56400/#post-1818393)

Are you sure you don't have any other OpenCL device aside from the GTX 980 or Titan installed in your PC? I remember a user with a problem similar to yours; he then discovered that the source of the problem was the GPU embedded in his Haswell CPU. Intel GPUs are usually too limited to run something as complex as the medium/complex LuxMark scenes.
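Something like this will list every OpenCL platform and device the drivers expose (just a generic sketch using the standard OpenCL 1.x host API, not LuxMark code; the array sizes are arbitrary). Link it with -lOpenCL:

Code:
/* List every OpenCL platform/device visible to applications, to spot an
 * unexpected device (e.g. an Intel iGPU) sitting next to the GTX 980/Titan. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);
    if (numPlatforms > 8) numPlatforms = 8;

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        char platformName[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(platformName), platformName, NULL);
        printf("Platform %u: %s\n", p, platformName);

        cl_device_id devices[16];
        cl_uint numDevices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &numDevices);
        if (numDevices > 16) numDevices = 16;

        for (cl_uint d = 0; d < numDevices; ++d) {
            char deviceName[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(deviceName), deviceName, NULL);
            printf("  Device %u: %s\n", d, deviceName);
        }
    }
    return 0;
}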
 
Are you sure you don't have any other OpenCL device aside from the GTX 980 or Titan installed in your PC? I remember a user with a problem similar to yours; he then discovered that the source of the problem was the GPU embedded in his Haswell CPU. Intel GPUs are usually too limited to run something as complex as the medium/complex LuxMark scenes.
This is a clean system with an -E class CPU. So no other GPUs or GPU drivers are present.
 
Now that I have been using LuxRender with Blender for a good 2 months and am starting to better understand how it works, I can confirm it is an incredibly good renderer. It's not just an alternative to Cycles; the quality of the results is way higher than I expected.

I hope that in the future it can be completely integrated into Blender.
 
However, using OpenCL v2.0 means dropping NVIDIA support, and that is something I doubt anyone is ready to do. But we cannot be held back indefinitely by NVIDIA, especially with AMD and Intel well focused on supporting v2.0.
Well, CUDA is the standard in the compute/scientific field (market share speaks big time), so why not make a CUDA version and be relevant to 80% of the community?
 
Well, CUDA is the standard in the compute/scientific field (market share speaks big time).

Developing a cross-platform/vendor benchmark based on a proprietary API doesn't sound like a good idea to me. To be honest, I find it hard to see how a benchmark that can run only on a couple of different GPUs (and nothing else: no CPUs, no other devices, etc.) can be useful at all. I mean, a benchmark is supposed to be used to compare different things; why develop one that can only compare NVIDIA GPUs?

Anyway, there are already Octane Benchmark and Blender Cycles, which can be used as CUDA-only benchmarks.

so why not make a CUDA version and be relevant to 80% of the community?

I have some doubts about that 80% number: most of the compute/scientific community is probably using Intel Xeons or Xeon Phis.

Anyway, I don't think I need to explain why people working on open-source software find the idea of using a proprietary API disturbing.
 
Dade, how accurately does LuxRender simulate light?

The definition of "accurate" really depends on the field of application, and it may or may not be enough, but, for instance, the Advanced Concepts Team of the European Space Agency has recently used LuxRender to explore the concept of new depth sensors: http://www.esa.int/gsp/ACT/bio/projects/SpiderVision.html (http://www.esa.int/gsp/ACT/doc/BIO/AlekeNolte_FinalReport.pdf).

LuxRender is definitely more accurate than a generic rendering package, but may be less accurate than specialized software developed to run simulations for a specific field. Generic rendering packages are usually developed more for artistic rendering than for accuracy.

And the general garbage-in/garbage-out rule still applies: if you want an accurate result, you have to provide accurate information about light sources, materials, etc.
 
I'm wondering: if I used it to perform the double-slit experiment (especially the single-photon-at-a-time version), would I get the correct result?
 
Well, NVIDIA finally brought out support for OpenCL 1.2 with the 350.05 driver, but now I can only run the LuxBall test. The other two scenes get stuck in some kind of endless loop during the kernel compilation step, resulting in a memory leak that quickly fills up the 12GB of RAM in my system. :???:
 
LuxMark v3.1beta1 has been released and it is available here: http://www.luxrender.net/forum/viewtopic.php?f=34&t=12223

This version includes the LuxRender v1.5RC1 render engine (available here: http://www.luxrender.net/forum/viewtopic.php?f=12&t=12227). Among other features, it includes an OpenCL optimization suggested to the LuxRender project by NVIDIA (https://bitbucket.org/luxrender/luxrays/commits/2c88fb60e64f04f7fcf393a31e661793594a33f6). It is a generic optimization also suggested in the Intel OpenCL SDK: https://software.intel.com/sites/la...g_Restrict_Qualifier_for_Kernel_Arguments.htm
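To give an idea of what the optimization looks like (this is only an illustrative kernel, not actual LuxRender code; the names are made up): adding the restrict qualifier to kernel pointer arguments tells the compiler the buffers never alias, so it can keep values in registers instead of conservatively reloading them from global memory.

Code:
/* Illustrative OpenCL kernel: "restrict" on the pointer arguments promises the
 * compiler that samples and frameBuffer never alias, enabling better caching. */
__kernel void AddSamples(
        __global const float4 * restrict samples,
        __global float4 * restrict frameBuffer,
        const unsigned int pixelCount) {
    const unsigned int gid = get_global_id(0);
    if (gid >= pixelCount)
        return;

    /* Without restrict, every write to frameBuffer could (in theory) modify
     * samples, forcing the compiler to re-read samples[gid] each time. */
    frameBuffer[gid] += samples[gid];
}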

For the record, NVIDIA has kindly donated a couple of GTX 980s to the LuxRender project in order to test this and further optimizations.

Thanks to the above optimization, I have measured a 10-20% increase in LuxMark scores on Windows with a GTX 980, and 20-25% on Linux. Your mileage may vary, as the improvement is strictly related to the hardware and driver version used.

Note: while LuxMark v3.0 and v3.1beta1 deliver similar results, it is only fair to compare v3.1beta1 results with v3.1beta1, and v3.0 with v3.0.

Binaries

Windows 64bit: http://www.luxrender.net/release/luxmark/v3.1/luxmark-windows64-v3.1beta1.zip (note: you may have to install the Visual Studio 2013 C++ runtime => https://www.microsoft.com/en-US/download/details.aspx?id=40784)
MacOS 64bit: http://www.luxrender.net/release/luxmark/v3.1/luxmark-macos64-v3.1beta1.zip
Linux 64bit: http://www.luxrender.net/release/luxmark/v3.1/luxmark-linux64-v3.1beta1.tar.bz2
 