What is 'ray marching'?

In my experience, raymarching volumes to achieve very good results has always been more expensive than raytracing itself.
That's only because you're not using ray tracing to its full potential. Ray tracing is perfect: it can calculate images exactly as they should be constructed from light*. The problem is that that level of quality is too damned expensive! A real foggy cloud contains billions of tiny particles. Pure ray tracing would trace through this cloud computing intersections with particles and reflections, and determining light blocked by other particles, in an insanely complex interaction that's impossible to calculate in human lifetimes.

So instead we use approximations for volumetrics. One option is to sample at discrete points along the ray; at each point you could cast secondary rays for things like subsurface scattering. Another is to take the bounding volume (the length of ray between the entry and exit surfaces of the volume) and compute an amount of reflected and transmitted light, calculated from the probability of hitting a particle within the cloud. Importantly, that approach still uses exact surface tests (either a surface is hit, or, when it's not, you try the next step). The other option is to use volumetric samples, but then you're using a different algorithm to the 'perfect' ray tracing algorithm. It would appear that in CGI parlance, 'ray marching' is used to describe such a volumetric sampling process.
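To make the discrete-sampling option concrete, here's a minimal sketch of marching a ray through a cloud while accumulating Beer-Lambert transmittance. Everything in it (the density() falloff, the function names, the step count, the extinction constant) is an illustrative assumption, not taken from any particular renderer:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Placeholder cloud density; a real renderer would sample noise or a 3D texture.
float density(Vec3 p) { return 0.5f * std::exp(-(p.x * p.x + p.y * p.y + p.z * p.z)); }

// Walk from the entry point toward the exit in 'steps' samples, accumulating
// Beer-Lambert transmittance: T *= exp(-sigma * density * dt).
float marchTransmittance(Vec3 entry, Vec3 dir, float pathLength, int steps, float sigma) {
    float dt = pathLength / steps;
    float T = 1.0f;                                      // light still unblocked
    for (int i = 0; i < steps; ++i) {
        Vec3 p = add(entry, mul(dir, (i + 0.5f) * dt));  // midpoint of step i
        T *= std::exp(-sigma * density(p) * dt);
    }
    return T;                                            // ~0 opaque, ~1 clear air
}
```

At each of those sample points you could also fire the secondary rays mentioned above; the transmittance just tells you how much of that light survives the trip.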

To answer LB's original question more directly: ray tracing (ray casting, on a single iteration) calculates the surface hit by a ray and yields the surface, its normal, and any other details in the surface description. Ray marching performs an iterative search along a ray in discrete steps until it determines it is inside an object, and as such it is both lower resolution in terms of depth and doesn't yield surface details (although you could approximate them). I've read that ray marching is a better fit for GPUs, but I don't know for sure. And PowerVR includes a ray-tracing unit for Total Awesomeness!

Ray Tracing
Shoot a laser in a direction. Does it hit anything? Yes - that's your surface.

Ray Marching
Get a 30 cm ruler and point it in a direction. Is the end of that 30 cm ruler penetrating something? No - advance it another 30 cm. Keep repeating until the end of the ruler penetrates a surface - there's your object.

Obviously ray marching is too dangerous to use for rendering humans. :yep2:
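
Joking aside, the two analogies map directly onto code. Here's a minimal sketch of both against the same scene, a single sphere; the names (traceSphere, marchSphere) and the "ruler length" are illustrative assumptions:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Ray tracing ("shoot a laser"): solve |o + t*d - c|^2 = r^2 exactly.
// Assumes d is normalized.
std::optional<float> traceSphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = add(o, mul(c, -1.0f));
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return std::nullopt;     // the laser misses
    return -b - std::sqrt(disc);              // exact distance to the nearest hit
}

// Ray marching ("advance the ruler"): step a fixed length until inside.
std::optional<float> marchSphere(Vec3 o, Vec3 d, Vec3 c, float r,
                                 float stepLen, int maxSteps) {
    for (int i = 1; i <= maxSteps; ++i) {
        float t = i * stepLen;
        Vec3 pc = add(add(o, mul(d, t)), mul(c, -1.0f));
        if (dot(pc, pc) < r * r)              // the ruler's end penetrates
            return t;                         // only accurate to +/- stepLen
    }
    return std::nullopt;
}
```

The marched answer is only accurate to the step length, which is the "lower resolution in terms of depth" mentioned above.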

* You'd need to add refraction and diffraction on top of the traditional straight-line rays to truly nail it, and have a suitably solvable representation of your scene data for the ray intersections, which we have with triangle-based geometry. In the ideal case, you'd represent your data as atoms and trace light from atom to atom...
 
That's like saying a car is faster than a plane because your car trip from Spain to France took less time than the flight from New York to Hong Kong.

I'm not saying that at all.

Different algorithm names do indeed carry different inferred meanings from industry to industry, but they all have an actual "pure" definition, and ray tracing, although not commonly applied to volume rendering, is actually capable of doing that too. So given the same workload (volume rendering), which algorithm will perform better depends on the details of the implementation of each and the data structure being processed.

I never once tried to go into the details of any particular algorithm. I was only covering the basics of the definitions, without going over what data structures are used, how intersection tests are computed, or how LoTF implements their algorithm. I feel like I made a statement and the game-dev gurus are quick to tell me how wrong my definition is (from their perspective). So... I'll step back from the topic.
 
I'm not saying that at all.

I feel like that is exactly what you did. You compared the render time of two different scenes. Putting different algorithms through different workloads is not the best way to compare them.
This is not really a game-dev vs. CGI thing either. Neither ray tracing nor ray marching gets used very often in real-time rendering. People just wanna be clear about the "ray marching is uber ray tracing" statement, which is not completely correct.
Whatever the industry, all algorithms have a defined conceptual definition that is independent of application and implementation.
Sure, most rendering software and CGI studio workflows might use ray marching for volume rendering and ray tracing for surfaces, and in everyday usage "ray marching" is shorthand for volume rendering while "ray tracing" is used to convey rendering of solids and their reflections. Regardless of that, both terms have well-defined conceptual definitions, which can be used by a variety of disciplines, from real-time/offline rendering to computer science, mathematics, physics simulation and so on.
Even if most rendering software doesn't support the use of ray tracing for volume rendering, it is theoretically possible. So to say that ray marching is always faster, you'd have to implement a renderer that can solve a volume through both techniques and test the two with the same data set to make a fair comparison. The results would still vary depending on the data, the way the algorithms were programmed, and the hardware characteristics. Also, ray tracing would be the only one capable of mathematically perfect results (ignoring numeric imprecision in the implementation), while marching would approximate it with varying precision depending on the size of each step (which could become fine enough to equal ray-traced results for a discrete bit depth; see the small numeric sketch below). As such, there is no such thing as "X is faster than Y" in such a loose and broad sense.
To sum up: we are being extremely nit-picky and annoying.
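
Since we're being nit-picky anyway, here is that step-size sketch, under a completely made-up setup: marching along +x toward a unit sphere centered at x = 3, where the exact, ray-traced hit is at t = 2:

```cpp
#include <cstdio>

int main() {
    const float hitExact = 2.0f;                 // the analytic answer
    for (float step = 0.5f; step >= 0.015625f; step *= 0.5f) {
        float t = 0.0f;
        // Advance until the sample point is inside the sphere: (t - 3)^2 < 1.
        while ((t - 3.0f) * (t - 3.0f) >= 1.0f) t += step;
        std::printf("step %-9g marched hit %-9g error %g\n",
                    step, t, t - hitExact);
    }
    // Halving the step halves the worst-case error, i.e. fine enough steps
    // approach the ray-traced result, at the cost of more iterations.
}
```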
 
Ray marching is used for a different purpose than ray casts. A ray cast solves the collision point analytically. This is nice for solid objects. However, this is not enough for realistic rendering, since the ray passes through gas/liquid (a mix of air, water, CO2). In order to accumulate the right amount of incoming light, we need the integral of incoming light at every point in the ray's path. It is only possible to calculate this analytically in very simple cases (constant fog density, etc.). So we do this integration numerically at N sample points. Offline renderers call this ray marching.
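
As a sketch of that distinction (all names and constants below are illustrative assumptions): constant-density fog has the closed-form transmittance exp(-sigma * L), while a varying density has to be integrated numerically at N sample points:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical height fog: density falls off exponentially with height y.
float fogDensity(float y) { return std::exp(-y); }

int main() {
    const float sigma = 0.8f, length = 5.0f;

    // Constant density (rho = 1): the integral collapses to sigma * rho * L.
    float analyticT = std::exp(-sigma * 1.0f * length);

    // Varying density: approximate the integral with N midpoint samples.
    const int N = 64;
    float dt = length / N, opticalDepth = 0.0f;
    for (int i = 0; i < N; ++i) {
        float y = (i + 0.5f) * dt;           // say the ray goes straight up
        opticalDepth += sigma * fogDensity(y) * dt;
    }
    float marchedT = std::exp(-opticalDepth);

    std::printf("constant fog T = %f, marched height-fog T = %f\n",
                analyticT, marchedT);
}
```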

You could use ray marching to detect collisions with solid surfaces, but in most cases that would be slower than the analytical solution (since the ray/triangle collision math is simple). If your data structure is something other than triangles (for example voxels or a distance field), then ray marching is a fast way to calculate the collision points to solid geometry (however, you would do hierarchical ray marching instead of linear to reduce the running time down to the logarithm of the distance).
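
One well-known distance-field variant of this is sphere tracing, where each step advances by the distance-field value itself, so empty space is crossed in a few large steps instead of many fixed ones. A minimal sketch with an illustrative SDF; this is sphere tracing specifically, not necessarily the exact hierarchical scheme described above:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Signed distance to a unit sphere at the origin; any SDF scene works here.
float sceneSdf(Vec3 p) { return len(p) - 1.0f; }

// Each iteration is guaranteed safe: nothing is closer than sceneSdf(p),
// so we can jump exactly that far along the ray (d assumed normalized).
std::optional<float> sphereTrace(Vec3 o, Vec3 d, float maxDist, int maxSteps) {
    float t = 0.0f;
    for (int i = 0; i < maxSteps && t < maxDist; ++i) {
        float dist = sceneSdf(add(o, mul(d, t)));
        if (dist < 1e-4f) return t;   // close enough to the surface: a hit
        t += dist;
    }
    return std::nullopt;              // ran out of steps or left the scene
}
```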
 
Man, I don't know where you heard about those things. Fortunately it had to be sebbbi (among some others) who came to the rescue.
 