LuxMark v3.0beta2

Discussion in 'Tools and Software' started by Dade, Jan 12, 2015.

  1. Dade

    Newcomer

    Joined:
    Dec 20, 2009
    Messages:
    206
    Likes Received:
    20
    Introduction

    LuxMark is a cross-platform OpenCL benchmark tool and has become, over the past years, one of the most used (if not the most used) OpenCL benchmarks. It is intended as a promotional tool for LuxRender and it is now based on LuxCore, the LuxRender v2.x C++ and Python API, available under the Apache License 2.0 and freely usable in open source and commercial applications.

    OpenCL render engine

    A brand new micro-kernel-based OpenCL path tracer is used as the rendering mode for the benchmark.

    C++ render engine

    This release includes the return of a benchmarking mode that does not require OpenCL (i.e. a render engine written only in C++, as in LuxMark v1.x). The C++ ray intersection code uses the state-of-the-art Intel Embree library.

    Stress mode

    Aside from the benchmarking modes, a stress mode is also available to check the reliability of the hardware under heavy load.

    Benchmark Result Validation

    LuxMark now validates the rendered image, using the same technology as pdiff, in order to check whether the benchmark result is valid or something has gone wrong. It also validates the scene sources used (i.e. a hash of the scene files). While it will still be possible to submit fake results to the LuxMark result database, this makes the task harder.
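    The scene-file check above can be pictured as a plain content hash. A minimal sketch in Python (the function name and the choice of SHA-256 are illustrative assumptions, not LuxMark's actual scheme):

```python
import hashlib

def scene_hash(paths):
    """Hypothetical sketch: hash the concatenated contents of the scene
    files, so that editing any of them changes the resulting digest."""
    h = hashlib.sha256()
    for path in sorted(paths):  # fixed order -> stable digest
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()
```

    A submitted result would then carry such a digest, letting the server reject results whose scene files do not match the pristine ones.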

    LuxVR

    LuxVR is included as demo too and replaces the old "Interactive" LuxMark mode.

    A brand new web site

    There is now a brand new web site dedicated to LuxMark results: http://www.luxmark.info. It includes many new features compared to the old results database.

    Benchmark Scenes

    Three brand new scenes are included. The simple benchmark scene is the usual "LuxBall HDR" (217K triangles):

    [IMG]

    The medium scene is the "Neumann TLM-102 Special Edition (with EA-4 shock mount)" (1,769K triangles), designed by Vlad "SATtva" Miller (http://vladmiller.info/blog/index.php?comment=308):

    [IMG]

    The complex scene is the "Hotel Lobby" (4,973K triangles), designed by Peter "Piita" Sandbacka:

    [IMG]

    Binaries

    Windows 64bit: http://www.luxrender.net/release/luxmark/v3.0/luxmark-windows64-v3.0beta2.zip (note: you may have to install VisualStudio 2013 C++ runtime: https://www.microsoft.com/en-US/download/details.aspx?id=40784)
    MacOS 64bit: http://www.luxrender.net/release/luxmark/v3.0/luxmark-macos64-v3.0beta2.zip
    Linux 64bit:

    Some notes on compiling LuxMark:

    - the sources are available here: https://bitbucket.org/luxrender/luxmark (tag: luxmark_v3.0beta2)

    - LuxMark can be compiled exactly like LuxRender; it has exactly the same dependencies (i.e. LuxCore, LuxRays, etc.)

    - it requires the LuxRays "for_v1.5" branch to be compiled (tag: luxmark_v3.0beta2)

    - the complete scenes directory is available here: https://bitbucket.org/luxrender/luxmark/downloads
     
    BRiT and Lightman like this.
  2. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,490
    Likes Received:
    400
    Location:
    Varna, Bulgaria
    [IMG]

    Doesn't run for me.

    GeForce GTX 580, with the 344.48 driver.
     
  3. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    No problem here..

    2x 7970

    [IMG]
     
    #3 lanek, Jan 13, 2015
    Last edited: Jan 13, 2015
  4. Dade

    Newcomer

    Joined:
    Dec 20, 2009
    Messages:
    206
    Likes Received:
    20
    Ah, sorry, I forgot NVIDIA supports only OpenCL v1.1 and cannot run executables compiled with newer OpenCL SDKs. I have recompiled the binary with v1.1; can you download the .zip again and check whether it works now?
     
  5. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,490
    Likes Received:
    400
    Location:
    Varna, Bulgaria
    It works now. Thanks!
     
  6. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Tested with both GPUs and the CPU enabled:
    - 2x HD 7970 @ 1120MHz/1500MHz
    - i7-4930K @ 4300MHz

    Simple scene benchmark:
    [IMG]

    Medium scene benchmark:
    [IMG]

    Complex scene benchmark:
    [IMG]
     
    #6 lanek, Jan 13, 2015
    Last edited: Jan 13, 2015
  7. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,804
    Likes Received:
    475
    Location:
    Torquay, UK
    Single R9 290X OC (1030/1250) - 15251
    Very nice job Dade!
     
  8. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    You're getting a fair amount of differing pixels, over 16% in the final test... What happens if you roll back to stock clocks?
     
  9. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    At least on the small test, they stay the same (0.09%)... I don't know exactly how their validation works anyway.
     
  10. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    @lanek Maybe the small test either doesn't push the hardware enough, or your chips don't have time to heat up fully. I know that when I experimented a bit with OCing my R290X (it's factory OCd, wanted to see if I could push it more), 3DMark didn't start getting wonky unless I ran the demo first before the benchmark to let the card heat up. Run the full suite again to make sure... :)
     
  11. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I have checked the complex one (hotel) at stock speed, and I get more pixel errors, lol (17.4%)... That said, my whole system is under H2O and the GPUs don't go over 40°C under stress conditions. The error could come from anything, I don't know exactly. My 7970s are starting to age; I will try to run some tests if I have time.
     
    #11 lanek, Jan 14, 2015
    Last edited: Jan 14, 2015
  12. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    Wow. Well, maybe the pixel error thing isn't entirely reliable then! :p Anyway, watercooled. Pretty hardcore!
     
  13. Dade

    Newcomer

    Joined:
    Dec 20, 2009
    Messages:
    206
    Likes Received:
    20
    LuxMark includes a reference (i.e. noise-free) image of the rendering, used for a comparison with your result. The benchmark runs for only 2 minutes (i.e. not enough to achieve a totally noise-free rendering on pretty much any hardware available today), so your image is expected to differ from the reference image (i.e. the slower your hardware, the more noise you will have, and the larger the difference will be).

    LuxMark uses a perceptually based algorithm (http://pdiff.sourceforge.net) to compare your rendering with the reference image. The result will be rejected only if the number of "perceptually" different pixels is larger than a threshold. Indeed, this threshold has to be quite high in order to still accept results from very slow devices (aka CPUs ;)).

    Don't worry: as I said, it is perfectly normal to have some amount of pixels differing from the reference image.
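    The acceptance rule described above can be sketched as a threshold on the fraction of differing pixels. A deliberately simplified stand-in (pdiff itself uses a perceptual model of human vision; the function name, tolerance and threshold below are made-up illustration values):

```python
def result_ok(rendered, reference, tol=8, max_diff_fraction=0.20):
    """Accept a result if the share of pixels differing from the
    reference by more than `tol` (on a 0-255 scale) stays below
    `max_diff_fraction`. Real pdiff compares perceptually instead."""
    assert len(rendered) == len(reference)
    differing = sum(1 for a, b in zip(rendered, reference) if abs(a - b) > tol)
    return differing / len(rendered) <= max_diff_fraction
```

    Noisier renders from slower hardware land further from the reference, which is why the threshold has to stay generous.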
     
    Lightman and Grall like this.
  14. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Ok, thank you Dade.
     
  15. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Not many NVIDIA scores so far; I'm hoping to see whether Maxwell performs well on it or not.
     
  16. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Hi Dade,

    It seems LuxMark is caching the kernel binaries. Where is the cache stored, so I can wipe it for testing?

    Thanks!

    P.S. I found c:\Users\<profile name>\Local\Temp\kernel_cache\LUXCORE_1.5dev, is that it? Is there a control for this feature?
     
    #16 OpenGL guy, Jan 16, 2015
    Last edited: Jan 16, 2015
  17. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Congratulations on the new version, Dade.

    Doesn't that leave some performance on the table? Or is OCL v1.1 equally fast in practice?

    By the way, do the new, larger scenes exhibit different behavior compared to LuxBall? For example, do they benefit more from GPU acceleration?
     
  18. Dade

    Newcomer

    Joined:
    Dec 20, 2009
    Messages:
    206
    Likes Received:
    20
    Yup, it should be (i.e. it is placed wherever the Boost library thinks your OS's temporary directory is).

    Yes, there are 3 types of cache: PERSISTENT (the default setting), VOLATILE (the kernel is compiled only once per run and then kept in RAM) and NONE (cache disabled). You can use VOLATILE or NONE for testing.

    You have to edit the 3 .cfg files (scenes/luxball/render.cfg, scenes/mic/render.cfg, scenes/hotel/render.cfg) under the scenes directory and add the following line:

    opencl.kernelcache = NONE

    This will change the scene hash and you will be unable to submit results to http://www.luxmark.info, but I assume that doesn't matter for testing.
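    The edit described above is easy to script; a small sketch, assuming it is run from the LuxMark directory with the three render.cfg paths exactly as listed:

```python
import os

# The three benchmark scene configs named above.
CFG_FILES = [
    "scenes/luxball/render.cfg",
    "scenes/mic/render.cfg",
    "scenes/hotel/render.cfg",
]

def disable_kernel_cache(root="."):
    """Append 'opencl.kernelcache = NONE' to each scene config.
    Note: this changes the scene hash, so results can no longer be
    submitted to luxmark.info."""
    for rel in CFG_FILES:
        path = os.path.join(root, rel)
        with open(path, "a") as f:
            f.write("\nopencl.kernelcache = NONE\n")
```

    Running it once per fresh checkout is enough; delete the appended line to restore the original hash.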
     
  19. Dade

    Newcomer

    Joined:
    Dec 20, 2009
    Messages:
    206
    Likes Received:
    20
    Not really, the differences between v1.1 and v1.2 are really microscopic.

    Yes.

    OpenCL v2.0 is a totally different topic: partly for performance (in some cases), but mostly for how much easier it makes writing applications. Working with v1.x is like programming in assembler in the old days: the amount of micro-management you have to do, and the amount of code you have to write for even the simplest operation, is a bit insane.

    However, using OpenCL v2.0 means dropping NVIDIA support, and that is something I doubt anyone is ready to do. But we cannot be held back indefinitely by NVIDIA, especially with AMD and Intel well focused on supporting v2.0.

    On the contrary, GPU renderers can easily be 5-10 times faster than CPU renderers when rendering simple scenes, while the GPU is "only" 2-4 times faster with more complex scenes. This is due to thread divergence, among other factors. You can compare C++ vs. GPU results in the LuxBall and Hotel benchmarks to see this pattern at work.
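    The thread-divergence effect mentioned above can be illustrated with a toy SIMD model (names and costs are entirely made up; real GPU scheduling is far more involved). Lanes of a warp that take different branches execute them serially, so a divergent warp pays for every distinct branch any lane takes:

```python
def warp_cost(lane_branches, branch_cost):
    """Toy model: a SIMD warp pays the cost of every distinct branch
    taken by any of its lanes, because divergent branches serialize."""
    return sum(branch_cost[b] for b in set(lane_branches))

# Coherent warp (simple scene): every lane shades the same material.
coherent = warp_cost(["diffuse"] * 32, {"diffuse": 1.0})

# Divergent warp (complex scene): mixed materials inside one warp.
divergent = warp_cost(["diffuse", "glass", "metal", "diffuse"] * 8,
                      {"diffuse": 1.0, "glass": 3.0, "metal": 2.0})
```

    In this toy model the divergent warp costs several times the coherent one, mirroring why the GPU advantage shrinks on complex scenes.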

    This is one of the reasons holding back the adoption of GPUs in the VFX field (most high-end productions are still rendered on CPU render farms). The other factor is the huge complexity (i.e. cost) of software development for GPUs.
     
    Alexko, Jawed and Lightman like this.
  20. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Have you considered releasing two versions of LuxMark?

    Very informative post, thank you.
     