DX11 / AVX2 volume rendering

Could you combine this with virtual texturing/megatexture to increase the GPU cube size...?

If you only want to show part of a huge volume, that could work.
For showing more of a large volume, mip mapping or quadrilinear filtering is better anyway (rough sketch below).
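By quadrilinear I mean taking a trilinear sample from each of the two mip levels that bracket the LOD and blending them by the fractional part. A minimal CPU sketch, with a made-up data layout and helper names (nothing here is from the actual renderer):

Code:
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct MipLevel { int nx, ny, nz; std::vector<float> density; };   // one mip, tightly packed

static float fetch(const MipLevel& m, int x, int y, int z)
{
    x = std::clamp(x, 0, m.nx - 1);
    y = std::clamp(y, 0, m.ny - 1);
    z = std::clamp(z, 0, m.nz - 1);
    return m.density[(std::size_t)z * m.ny * m.nx + (std::size_t)y * m.nx + x];
}

// Trilinear sample at normalized coordinates u,v,w in [0,1].
static float trilinear(const MipLevel& m, float u, float v, float w)
{
    float x = u * m.nx - 0.5f, y = v * m.ny - 0.5f, z = w * m.nz - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    float c00 = fetch(m, x0, y0,     z0)     * (1 - fx) + fetch(m, x0 + 1, y0,     z0)     * fx;
    float c10 = fetch(m, x0, y0 + 1, z0)     * (1 - fx) + fetch(m, x0 + 1, y0 + 1, z0)     * fx;
    float c01 = fetch(m, x0, y0,     z0 + 1) * (1 - fx) + fetch(m, x0 + 1, y0,     z0 + 1) * fx;
    float c11 = fetch(m, x0, y0 + 1, z0 + 1) * (1 - fx) + fetch(m, x0 + 1, y0 + 1, z0 + 1) * fx;
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}

// Quadrilinear: blend the trilinear results of the two mips bracketing 'lod'.
// Assumes at least one mip level is present.
float quadrilinear(const std::vector<MipLevel>& mips, float lod, float u, float v, float w)
{
    lod = std::clamp(lod, 0.0f, (float)mips.size() - 1.0f);
    int m0 = (int)lod;
    int m1 = std::min(m0 + 1, (int)mips.size() - 1);
    float t = lod - (float)m0;
    float s0 = trilinear(mips[m0], u, v, w);   // finer level
    float s1 = trilinear(mips[m1], u, v, w);   // coarser level
    return s0 + t * (s1 - s0);
}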

I'm beginning to have visions of vast voxel worlds rendered this way, with all kinds of previously unseen volumetric effects applied to walls...
I'm wondering if there are artists and tools that could create the volumes needed.
 
Shameless plug, but there is a 3D voxel modelling tool from a company I co-founded: VOTA (see https://volumerics.com/en-us/vota). Not sure if that is exactly what you had in mind, but to my knowledge at least, it's the only really scalable voxel editor there is (going up to volumes with 2048³ resolution and more).

Beyond that resolution, it's mostly an exercise in streaming/hiding/level-of-detail. We're on it :)
 


Indeed that's more or less what I was thinking about.
You store RGBA per voxel, I assume?
To apply some of the transfer function effects, the alpha would need to correspond to the density of the material or the distance from the exterior.
For some volumes, 8 bits per voxel could be enough, using the transfer function as a palette (rough sketch at the end of this post).
Volumes would also need to repeat so they can be stitched together seamlessly.
Sculpting data would be better stored in a separate mask, or even generated dynamically while rendering.

I can imagine a game world built that way; at the very least it could look and behave very differently from flat, statically textured walls.
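To make the palette idea concrete, a rough sketch: each voxel stores a single 8-bit density, and a 256-entry RGBA table does the classification at sample time. The names and the ramp below are made up for illustration, they are not from VOTA or any particular renderer.

Code:
#include <array>
#include <cstdint>

struct Rgba { float r, g, b, a; };

// 256-entry transfer function: 8-bit density -> color and opacity.
using TransferFunction = std::array<Rgba, 256>;

// Classify one voxel sample; the alpha channel of the table is where the
// "density of material / distance from the exterior" effects would live.
inline Rgba classify(const TransferFunction& tf, std::uint8_t density)
{
    return tf[density];
}

// Example palette: a simple ramp mapping low densities to transparent blue
// and high densities to opaque white.
TransferFunction makeRamp()
{
    TransferFunction tf{};
    for (int i = 0; i < 256; ++i) {
        float t = i / 255.0f;
        tf[i] = { t, t, 1.0f, t * t };   // squared alpha keeps thin regions faint
    }
    return tf;
}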
 
Did similar stuff using OpenCL a few years ago. Here is a video. I also used the same method to model smoke and clouds using an isotropic light scattering model. It looks realistic but I only got around 12 fps on a 7970. The biggest problem is memory bandwidth, and current rasterization-optimized texture caches don't really help with ray tracing.

 
The early PS3 game Warhawk had fairly realistic volumetric clouds generated in real time on the Cell processor. How were they done? Drawn as translucent voxels perhaps, and then uploaded to the RSX as a texture? The clouds had a peculiar tendency to look fuzzy at times and pixellated at other times.
 
Just saw a video of Warhawk, and the clouds are most likely billboards. So everything is pre-computed, but there could be multiple layers of them to give a more volumetric look.

My implementation computed all the lighting in real time, though, which takes up most of the performance. Each sample inside the cloud/smoke shoots a shadow ray to determine how much light is received at that point. This can be precomputed.
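To illustrate the idea, a small self-contained sketch of such a shadow ray: march from the sample point toward the light, accumulate optical depth, and attenuate with Beer-Lambert. The toy density field and names are only for the example, not my actual OpenCL kernels.

Code:
#include <cmath>

struct V3 { float x, y, z; };

// Toy density field so the sketch stands alone: a soft sphere of smoke.
static float sampleDensity(const V3& p)
{
    float d2 = p.x * p.x + p.y * p.y + p.z * p.z;
    return d2 < 1.0f ? 1.0f - d2 : 0.0f;
}

// Fraction of light reaching point p from the (unit) direction toLight,
// using Beer-Lambert attenuation of the accumulated optical depth.
float shadowTransmittance(V3 p, V3 toLight, float stepLen, int numSteps, float extinction)
{
    float opticalDepth = 0.0f;
    for (int i = 0; i < numSteps; ++i) {
        p = { p.x + toLight.x * stepLen,
              p.y + toLight.y * stepLen,
              p.z + toLight.z * stepLen };                 // march toward the light
        opticalDepth += sampleDensity(p) * extinction * stepLen;
    }
    return std::exp(-opticalDepth);                        // transmitted fraction of light
}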

I just saw this demo explaining how they implemented the volumetric explosion in the UE4 Infiltrator demo. It uses particles to sample the volume instead of ray tracing, which also makes it highly view-dependent.
 
No, they were definitely not billboards; that's obvious when playing the game. Also, if they were billboards it would mean the devs were blatantly lying when they claimed in interviews and articles that they were real-time generated volumetric renderings.
 
Could be that they are rendered on Cell to a texture and then that texture gets rendered as a billboard on RSX.
 
The UE4 demo renders a time series of volumes, something like 128x128x10 in size by 64 time steps.
It's rendered so that you are looking down the z axis, where the volume is low resolution with only 10 layers.
The explosion is precomputed.
A few years back I did something similar with 200x200x200 volumes, with the whole fire/fluid simulation running on the GPU.
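Sampling such a precomputed sequence is straightforward: keep a flipbook of small grids, one per time step, and blend between the two frames around the current time. A rough sketch; sizes and names are only illustrative, not taken from the UE4 demo:

Code:
#include <algorithm>
#include <cstddef>
#include <vector>

struct VolumeFrame { int nx, ny, nz; std::vector<float> density; };   // one time step

static float voxel(const VolumeFrame& f, int x, int y, int z)
{
    x = std::clamp(x, 0, f.nx - 1);
    y = std::clamp(y, 0, f.ny - 1);
    z = std::clamp(z, 0, f.nz - 1);
    return f.density[(std::size_t)z * f.ny * f.nx + (std::size_t)y * f.nx + x];
}

// Blend the two stored time steps bracketing 'time' so playback of, say,
// 64 precomputed frames still looks smooth.
float sampleAnimated(const std::vector<VolumeFrame>& frames, float time, float fps,
                     int x, int y, int z)
{
    float ft = std::clamp(time * fps, 0.0f, (float)frames.size() - 1.0f);
    int f0 = (int)ft;
    int f1 = std::min(f0 + 1, (int)frames.size() - 1);
    float t = ft - (float)f0;
    float s0 = voxel(frames[f0], x, y, z);
    float s1 = voxel(frames[f1], x, y, z);
    return s0 + t * (s1 - s0);
}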
 

My bad. In the video I posted they seemed view-dependent, but that doesn't look like the case in general. Ray-casting is just one way to do volumetric rendering, though; you could also render view-dependent alpha-textured slices out of your 3D volume. These are called volumetric billboards. You can achieve nice results with a bit of tweaking.
 
That transfer function is awesome! Are you using the Lattice-Boltzmann method for fluid simulation? And ray-casting I suppose?

It uses semi-Lagrangian advection with a second-order MacCormack correction, as described here.
It is indeed raycasting; the rendering is based on maximum intensity projection (sketch at the end of this post).
Since I also posted the source, a lot of people have been using it.
Nvidia made nice improvements here.
Recently I saw a video somewhere of a flame-throwing dragon demo by NV. Maybe somebody can point to the location?

This type of rendering and simulation can look very realistic, but it needs massive amounts of memory bandwidth (and texture filtering). With stacked DRAM delivering ~TB/s, as in Volta, these effects will become more viable.
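For reference, maximum intensity projection boils down to keeping the brightest sample along the ray and classifying that single value afterwards. A minimal self-contained sketch; the toy density field is just a placeholder for the simulated volume:

Code:
#include <algorithm>

struct V3 { float x, y, z; };

// Placeholder density field; in the renderer this is the simulated volume.
static float density(const V3& p)
{
    float d2 = p.x * p.x + p.y * p.y + p.z * p.z;
    return d2 < 1.0f ? 1.0f - d2 : 0.0f;
}

// March the ray and keep only the single brightest sample (MIP);
// the result is then mapped through the transfer function.
float mipRaycast(V3 origin, V3 dir, float stepLen, int numSteps)
{
    float best = 0.0f;
    for (int i = 0; i < numSteps; ++i) {
        V3 p { origin.x + dir.x * stepLen * (float)i,
               origin.y + dir.y * stepLen * (float)i,
               origin.z + dir.z * stepLen * (float)i };
        best = std::max(best, density(p));
    }
    return best;
}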
 
For a change, I decided to try what can be done with volume rendering on a smartphone.
As I have no idea whether volume textures are supported on SoCs, I first made a CPU rendering version using multi-core and NEON SIMD for ARM processors.
The result is quite usable, the volumes can be large (~1 GB), and this should work on any SoC.
For a video see: http://volumize.be/videos.html
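Roughly, the approach is SIMD across pixels and threads across scanlines. A hedged sketch of that structure is below; it is illustrative only (the helper names are made up, and the actual renderer is not this code):

Code:
#include <arm_neon.h>
#include <algorithm>
#include <thread>
#include <vector>

// Process four ray samples at once: running maximum for MIP-style rendering.
static inline float32x4_t mipStep4(float32x4_t current, const float samples[4])
{
    return vmaxq_f32(current, vld1q_f32(samples));
}

// Spread image rows over all cores; each thread renders an interleaved set of rows.
void renderMultiCore(int width, int height, void (*renderRow)(int y, int width))
{
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back([=] {
            for (int y = (int)t; y < height; y += (int)n)   // interleaved rows per thread
                renderRow(y, width);
        });
    for (auto& th : pool) th.join();
}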
 