DX11 / AVX2 volume rendering

Discussion in 'Rendering Technology and APIs' started by Voxilla, Feb 1, 2014.

  1. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    That could work if you only want to show part of a huge volume.
    For showing more of a large volume, mip mapping or quadrilinear filtering is best anyway.

    I'm beginning to have visions of vast voxel worlds rendered this way, with all kinds of unseen volumetric effects applied to walls ...
    I'm wondering if there are artists and tools that could create the volumes needed.
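    Not from the thread, just to make "quadrilinear" concrete: it means trilinear filtering in two adjacent mip levels, blended by the fractional level of detail. A minimal CPU sketch, with hypothetical names and layout:

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <vector>

    // One mip level of a scalar volume, sampled with trilinear filtering.
    struct MipLevel {
        int nx, ny, nz;
        std::vector<float> v;  // nx*ny*nz densities
        float at(int x, int y, int z) const {
            x = std::min(std::max(x, 0), nx - 1);  // clamp addressing
            y = std::min(std::max(y, 0), ny - 1);
            z = std::min(std::max(z, 0), nz - 1);
            return v[(z * ny + y) * nx + x];
        }
        float trilinear(float x, float y, float z) const {
            int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
            float fx = x - x0, fy = y - y0, fz = z - z0;
            auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
            float c00 = lerp(at(x0, y0,     z0),     at(x0 + 1, y0,     z0),     fx);
            float c10 = lerp(at(x0, y0 + 1, z0),     at(x0 + 1, y0 + 1, z0),     fx);
            float c01 = lerp(at(x0, y0,     z0 + 1), at(x0 + 1, y0,     z0 + 1), fx);
            float c11 = lerp(at(x0, y0 + 1, z0 + 1), at(x0 + 1, y0 + 1, z0 + 1), fx);
            return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
        }
    };

    // Quadrilinear: trilinear in two adjacent mip levels, lerped by fractional LOD.
    // Coordinates are given in level-0 voxels; each mip level halves the resolution.
    float sampleQuadrilinear(const std::vector<MipLevel>& mips,
                             float x, float y, float z, float lod) {
        lod = std::min(std::max(lod, 0.0f), (float)mips.size() - 1.0f);
        int l0 = (int)lod;
        int l1 = std::min(l0 + 1, (int)mips.size() - 1);
        float f = lod - l0;
        float s0 = mips[l0].trilinear(x / (1 << l0), y / (1 << l0), z / (1 << l0));
        float s1 = mips[l1].trilinear(x / (1 << l1), y / (1 << l1), z / (1 << l1));
        return s0 + (s1 - s0) * f;
    }
    ```

    On GPUs this is effectively what sampling a mipmapped 3D texture with a fractional LOD does in hardware.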
     
  2. Anteru

    Newcomer

    Joined:
    Jul 4, 2004
    Messages:
    114
    Likes Received:
    3
    Shameless plug, but there is a 3D voxel modelling tool from a company I co-founded: VOTA (see https://volumerics.com/en-us/vota.) Not sure if that is exactly what you had in mind, but to my knowledge at least, it's the only really scalable voxel editor there is (going up to volumes with 2048³ resolution and more.)

    Beyond that, it's mostly an exercise in streaming/hiding/level-of-detail. We're on it :)
     
  3. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505

    Indeed, that's more or less what I was thinking of.
    You store RGBA per voxel, I assume?
    To apply some of the transfer function effects, the alpha would need to correspond to the density of the material or the distance from the exterior.
    For some volumes, 8 bits per voxel could be enough, using the transfer function as a palette.
    Volumes would also need to repeat, so they can be stitched together seamlessly.
    Sculpting data would be better stored in a separate mask, or even generated dynamically while rendering.

    I can imagine a game world built that way; at the least it could look and behave very differently from flat, statically textured walls.
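    To illustrate the 8-bit-plus-palette idea (my sketch, not from the thread; names are hypothetical): one density byte per voxel, a 256-entry transfer-function lookup, and wrap addressing so copies of the volume tile seamlessly:

    ```cpp
    #include <array>
    #include <cstdint>
    #include <vector>

    struct RGBA { float r, g, b, a; };

    // Hypothetical tiled volume: one byte of density per voxel; colors come from
    // a 256-entry transfer-function palette.
    struct PalettedVolume {
        int n;                           // cubic n*n*n volume
        std::vector<uint8_t> density;    // 8 bits per voxel
        std::array<RGBA, 256> palette;   // transfer function as a lookup table

        RGBA sample(int x, int y, int z) const {
            // Wrap (instead of clamp) so adjacent tiles stitch without seams.
            auto wrap = [this](int i) { return ((i % n) + n) % n; };
            x = wrap(x); y = wrap(y); z = wrap(z);
            return palette[density[(z * n + y) * n + x]];
        }
    };
    ```

    Swapping the palette then re-colors the whole volume without touching the voxel data, which is what makes the transfer-function-as-palette idea cheap.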
     
  4. jlippo

    Veteran

    Joined:
    Oct 7, 2004
    Messages:
    1,744
    Likes Received:
    1,090
    Location:
    Finland
  5. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    Who does?
     
  6. Osamar

    Newcomer

    Joined:
    Sep 19, 2006
    Messages:
    231
    Likes Received:
    43
    Location:
    40,00ºN - 00,00ºE
    The author of the blog.
     
  7. Priyadarshi

    Newcomer

    Joined:
    Sep 22, 2012
    Messages:
    57
    Likes Received:
    0
    Location:
    USA
    I did similar stuff using OpenCL a few years ago. Here is a video. I also used the same method to model smoke and clouds with an isotropic light-scattering model. It looks realistic, but I only got around 12 fps on a 7970. The biggest problem is memory bandwidth, and current rasterization-optimized texture caches don't really help in ray tracing.
     
  8. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    The early PS3 game Warhawk had fairly realistic volumetric clouds generated in realtime on the Cell processor. How were they done? Drawn as translucent voxels perhaps, and then uploaded to RSX as a texture...? The clouds had a peculiar tendency to look fuzzy at times and pixelated at others.
     
  9. Priyadarshi

    Newcomer

    Joined:
    Sep 22, 2012
    Messages:
    57
    Likes Received:
    0
    Location:
    USA
    Just saw a video of Warhawk, and the clouds are most likely billboards. So everything is pre-computed, but there could be multiple layers of them to give a more volumetric look.

    My implementation computed all the lighting in real time, though, and that takes up most of the performance: each sample from inside the cloud/smoke shoots a shadow ray to determine how much light is received at that point. This could be precomputed.

    I just saw this demo explaining how they implemented the volumetric explosion in the UE4 Infiltrator demo. It uses particles to sample the volume instead of ray tracing, which also makes it highly view-dependent.
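    The per-sample shadow ray described above could be sketched as follows; this is an illustrative single-scattering transmittance term using Beer-Lambert absorption, with the density field and all names as placeholders:

    ```cpp
    #include <cmath>
    #include <functional>

    // Density field abstracted as a callable; a real renderer would sample a 3D texture.
    using Field = std::function<float(float, float, float)>;

    // Transmittance toward the light from point (x,y,z): march a shadow ray and
    // accumulate optical depth, then apply Beer-Lambert. (lx,ly,lz) is assumed
    // to be a normalized direction toward the light.
    float shadowTransmittance(const Field& density,
                              float x, float y, float z,
                              float lx, float ly, float lz,
                              float step, int steps) {
        float opticalDepth = 0.0f;
        for (int i = 1; i <= steps; ++i) {
            float t = i * step;
            opticalDepth += density(x + lx * t, y + ly * t, z + lz * t) * step;
        }
        return std::exp(-opticalDepth);
    }
    ```

    The primary ray march multiplies each sample's emission by this factor; the cost is the nested loop, which is why precomputing it (or caching it in a light volume) helps so much.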
     
  10. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    No, they were definitely not billboards; that's obvious when playing the game. Also, if they were billboards, it would mean the devs were blatantly lying when they claimed in interviews and articles that the clouds were realtime-generated volumetric renderings.
     
  11. Novum

    Regular

    Joined:
    Jun 28, 2006
    Messages:
    335
    Likes Received:
    8
    Location:
    Germany
    Could be that they are rendered on Cell to a texture and then that texture gets rendered as a billboard on RSX.
     
  12. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    The UE4 demo renders a time series of volumes, e.g. 128×128×10 voxels × 64 time steps.
    It's rendered so you are looking down the z axis, where the volume is low resolution with only 10 layers.
    The explosion is precomputed.
    A few years back I did something similar with 200×200×200 volumes, with the whole fire/fluid simulation running on the GPU.
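    A precomputed time series like that can be played back by blending the two nearest volume frames at a fractional time; a tiny sketch (layout and names hypothetical, one scalar per voxel):

    ```cpp
    #include <vector>

    // Hypothetical flipbook volume: each entry of `frames` is one precomputed
    // volume (flattened to a voxel array). Sampling at a fractional time blends
    // the two nearest frames so playback looks smooth between the 64 steps.
    float sampleFlipbook(const std::vector<std::vector<float>>& frames,
                         int voxelIndex, float time /* in [0, frames-1] */) {
        int f0 = (int)time;
        int f1 = (f0 + 1 < (int)frames.size()) ? f0 + 1 : f0;
        float blend = time - f0;
        float a = frames[f0][voxelIndex];
        float b = frames[f1][voxelIndex];
        return a + (b - a) * blend;
    }
    ```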
     
    #33 Voxilla, Feb 18, 2014
    Last edited by a moderator: Feb 18, 2014
  13. Priyadarshi

    Newcomer

    Joined:
    Sep 22, 2012
    Messages:
    57
    Likes Received:
    0
    Location:
    USA
    My bad. In the video I posted they seemed view-dependent, but that doesn't look like the case in general. Ray-casting is just one way to do volumetric rendering, though; you could also render view-aligned, alpha-textured slices out of your 3D volume. These are called volumetric billboards. You can achieve nice results with a bit of tweaking.
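    Compositing such view-aligned slices is just repeated back-to-front "over" blending; a small sketch of the blend itself (not from the post, straight alpha assumed):

    ```cpp
    #include <vector>

    struct Slice { float r, g, b, a; };  // straight (non-premultiplied) alpha

    // Composite slices back to front with the "over" operator, as when a volume
    // is rendered as a stack of alpha-textured billboards facing the camera.
    Slice compositeBackToFront(const std::vector<Slice>& backToFront) {
        Slice out{0, 0, 0, 0};
        for (const Slice& s : backToFront) {
            out.r = s.r * s.a + out.r * (1.0f - s.a);
            out.g = s.g * s.a + out.g * (1.0f - s.a);
            out.b = s.b * s.a + out.b * (1.0f - s.a);
            out.a = s.a + out.a * (1.0f - s.a);
        }
        return out;
    }
    ```

    This is the same accumulation a ray caster performs per sample, just with far fewer, pre-filtered layers, which is where the view dependence comes from.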
     
  14. Priyadarshi

    Newcomer

    Joined:
    Sep 22, 2012
    Messages:
    57
    Likes Received:
    0
    Location:
    USA
    That transfer function is awesome! Are you using the Lattice-Boltzmann method for fluid simulation? And ray-casting I suppose?
     
  15. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    It uses semi-Lagrangian advection with second-order MacCormack, as described here.
    It is indeed raycasting; the rendering is based on maximum intensity projection.
    As I also posted the source, a lot of people have been using it.
    Nvidia made nice improvements here.
    Recently I saw a video somewhere of a flame-throwing dragon demo by NV. Maybe somebody can point to the location?

    This type of rendering and simulation can look very realistic, but it needs massive amounts of memory bandwidth (and texture filtering). With stacked DRAM at ~TB/s, as in Volta, these effects will become more viable.
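    For illustration, here is the second-order MacCormack idea in one dimension, on a periodic domain with linearly interpolated semi-Lagrangian steps (a sketch of the scheme, not the posted source):

    ```cpp
    #include <cmath>
    #include <vector>

    // 1D periodic semi-Lagrangian advection with linear interpolation:
    // trace each sample point back along the velocity and interpolate there.
    std::vector<float> semiLagrangian(const std::vector<float>& phi, float u, float dt) {
        int n = (int)phi.size();
        std::vector<float> out(n);
        for (int i = 0; i < n; ++i) {
            float x = i - u * dt;             // backtrace
            float base = std::floor(x);
            float f = x - base;
            int i0 = ((int)base % n + n) % n; // periodic wrap
            int i1 = (i0 + 1) % n;
            out[i] = phi[i0] * (1.0f - f) + phi[i1] * f;
        }
        return out;
    }

    // Second-order MacCormack: advect forward, advect the result backward, and
    // use the difference from the original field as an error estimate.
    std::vector<float> macCormack(const std::vector<float>& phi, float u, float dt) {
        std::vector<float> fwd = semiLagrangian(phi, u, dt);
        std::vector<float> back = semiLagrangian(fwd, -u, dt);
        std::vector<float> out(phi.size());
        for (size_t i = 0; i < phi.size(); ++i)
            out[i] = fwd[i] + 0.5f * (phi[i] - back[i]);
        return out;
    }
    ```

    Production solvers typically also clamp the corrected value to the min/max of the samples used in the interpolation, to avoid the correction introducing new extrema.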
     
    #36 Voxilla, Feb 19, 2014
    Last edited by a moderator: Feb 19, 2014
  16. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    For a change, I decided to try what can be done with volume rendering on a smartphone.
    As I have no idea whether volume textures are supported on mobile SoCs, I first made a CPU rendering version using multiple cores and NEON SIMD on ARM.
    The result is quite usable, the volumes can be large (~1 GB), and it should work on any SoC.
    For a video see: http://volumize.be/videos.html
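    The scalar inner loop of such a CPU ray caster, here doing maximum intensity projection as in the earlier posts, might look like the sketch below; a NEON (or AVX2) version would step several rays per instruction. Layout and names are hypothetical:

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Scalar MIP ray march over an 8-bit volume stored x-major. Samples outside
    // the volume are skipped; the result is the maximum density along the ray.
    uint8_t mipAlongRay(const std::vector<uint8_t>& vol, int nx, int ny, int nz,
                        float ox, float oy, float oz,  // ray origin (in voxels)
                        float dx, float dy, float dz,  // step per sample
                        int samples) {
        uint8_t best = 0;
        for (int i = 0; i < samples; ++i) {
            int x = (int)(ox + dx * i);
            int y = (int)(oy + dy * i);
            int z = (int)(oz + dz * i);
            if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) continue;
            best = std::max(best, vol[(z * ny + y) * nx + x]);
        }
        return best;
    }
    ```

    The max over samples has no loop-carried dependency besides the running maximum, which is why it vectorizes well across rays or samples on NEON.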
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.