Signed Distance Field rendering - pros and cons (as used in PS4 title Dreams) *spawn

What exactly is the data representation of these surfaces/volumes? I browsed a paper and it suggested CSG primitives were being assembled as building blocks to make more complex shapes. That would make sense with regard to efficiency, and is the basis, as I understand it, of these epic procedural demos. Having used Real3D on Amiga many, many years ago, which used CSG and booleans to model and render, I understand that this approach can be versatile. Displacement maps are also possible, so I suppose computed displacement on top of CSG bounding volumes could add detail.

From a modelling perspective, I suppose MM would need to provide an arbitrary volume and map that to CSGs on the fly. Unless I'm completely mistaken about what's going on here!
 
You can generate a (non-signed) distance field volume from a point cloud by first calculating a Voronoi diagram from the point cloud. This tells you which point is the closest one for each distance field voxel (calculating the distance to this point is trivial). Signed distance fields do not exist for point clouds (as points are infinitely small, you cannot be inside one). If you want to generate an SDF (signed distance field) by assuming that the points have some radius, the algorithm becomes quite a bit more complex. This is actually the same problem as calculating the intersection between multiple distance fields. Carving out multiple small fields from a big one doesn't generate an optimal distance field (the ray doesn't skip all the empty space in one step), and the same is true for the negative side (inside-the-object distance) of the union operation.

Perfect quality for the negative (inside) side is not that important. It is usually only used for gradient calculation at the surface (for surface normals). As long as the negative side is correct very close to the surface, the rendering performance and quality will not suffer. If you don't have any negative values (a non-signed distance field), you need to bias the gradient query a little bit off the surface to avoid artifacts.
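To make the above concrete, here is a minimal CPU-side sketch of the unsigned case (my own illustration, not from any Dreams material): a kD-tree nearest-neighbour query answers the same "which point owns this voxel" question as the Voronoi diagram, and the gradient query is the one you would bias off the surface. The function names, grid resolution and the numpy/scipy choice are all assumptions.

```python
# Minimal sketch, assuming a numpy/scipy environment; all names here are made up.
import numpy as np
from scipy.spatial import cKDTree

def unsigned_distance_field(points, resolution=64, lo=0.0, hi=1.0):
    """Distance from every voxel center to the closest point of the cloud.
    The kD-tree query plays the role of the Voronoi lookup described above."""
    tree = cKDTree(points)
    axis = np.linspace(lo, hi, resolution)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    centers = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    distances, _ = tree.query(centers)            # nearest point per voxel
    return distances.reshape(resolution, resolution, resolution)

def field_normal(field, x, y, z):
    """Central-difference gradient; with an unsigned field, sample a voxel or so
    off the surface (as the post says) to avoid artifacts at the zero crossing."""
    grad = np.array([
        field[x + 1, y, z] - field[x - 1, y, z],
        field[x, y + 1, z] - field[x, y - 1, z],
        field[x, y, z + 1] - field[x, y, z - 1],
    ])
    return grad / (np.linalg.norm(grad) + 1e-8)
```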

Distance fields can be used to accelerate ray tracing (all kinds of rays: primary rays, secondary rays, shadow rays, etc.), cone tracing, sphere tracing, collision detection, area queries, etc. You need a surprisingly low-resolution distance field to get pretty good results (for anything other than primary and shadow rays of course - primary rays and shadow rays need quite a bit of detail). You can combine low-resolution (conservative) distance fields with triangles in ray tracing. Triangle tests are only performed when the ray is close enough to the surface, and only against those triangles that are nearby. The same thing works with voxels and practically any kind of geometry (procedural, fractals, subdivision, etc.). The algorithmic complexity of calculating a distance field from different kinds of geometry differs a lot.
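For readers unfamiliar with the sphere-tracing part mentioned above, a hedged sketch of the core loop (names, step counts and epsilons are illustrative, not from the post): the ray always advances by the sampled distance, which is exactly why empty space gets skipped in large steps and why a conservative, low-resolution field is still useful.

```python
# A minimal sphere-tracing sketch; constants are arbitrary.
import numpy as np

def sphere_trace(origin, direction, sdf, max_steps=128, hit_eps=1e-3, max_dist=100.0):
    """March a ray against a distance function; returns the hit distance or None.
    Each step advances by the sampled distance, so empty space is skipped safely."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)   # distance to the nearest surface
        if d < hit_eps:                   # close enough to count as a hit
            return t
        t += d                            # a sphere of radius d around the ray is empty
        if t > max_dist:
            break
    return None
```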

Of course if you just use pure distance fields (and have your own software for authoring them), you never need to generate them. However, in this case you need a very good way to compress the field, as a pure distance field renderer (for primary and shadow rays) needs lots of detail density (and the field needs n^3 storage, as it is a 3d volume texture). Using mathematical distance functions sidesteps the storage cost completely, but it has its own limitations (for both the content look and the scene complexity). Several 4K demos have used mathematical distance field functions successfully, but the scenes in these demos have been quite small (and contained lots of repetition and other tricks).
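As an illustration of the "mathematical distance functions" route (in the spirit of the 4K demos mentioned, not taken from any of them), here is a tiny scene built from primitives, a union and a repetition operator; it could be fed straight into a sphere-tracing loop like the sketch above. All names are my own.

```python
# Illustrative only: a scene described purely by mathematical distance functions.
import numpy as np

def sdf_sphere(p, radius):
    return np.linalg.norm(p) - radius

def sdf_plane_y(p, height):
    return p[1] - height

def op_union(d1, d2):
    return min(d1, d2)

def op_repeat_xz(p, cell):
    """Wrap the sample point into one cell so a single primitive tiles the XZ plane;
    this is the kind of cheap repetition trick the 4K demos lean on."""
    q = p.copy()
    q[0] = ((q[0] + 0.5 * cell) % cell) - 0.5 * cell
    q[2] = ((q[2] + 0.5 * cell) % cell) - 0.5 * cell
    return q

def scene(p):
    spheres = sdf_sphere(op_repeat_xz(p, 4.0), 1.0)   # infinite grid of unit spheres
    ground = sdf_plane_y(p, -1.0)                     # flat floor at y = -1
    return op_union(spheres, ground)
```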
 
When it is based on points, is this somehow related to that Unlimited Detail voxel thing?
Most of these point/voxel data representations are somewhat similar. The Unlimited Detail guys have never said what kind of acceleration structures they use, so it is impossible to compare their technique to the other alternatives. They are, however, purely CPU based. From the (limited number of) forum posts made by the UD guys, I understood that they don't even use AVX to data-parallelize their rendering algorithm. They are most likely using some algorithm that is highly serial, and would not be easy to port to the GPU (as a compute shader).
 
It's interesting to note that the use of distance fields is not uncommon for approximating GI and other effects. Off the top of my head, Splinter Cell Conviction, Infamous 2, Sim City 4 and The Last of Us all used some sort of ambient occlusion fields. Some were capsules, some 3D textures, and other approximations. Currently UE4 uses distance fields not only to compute AO, but also distant shadows in their Kite demo.
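For context, the widely used distance-field soft shadow estimate (not necessarily UE4's exact implementation; names and constants here are my own) tracks the closest approach of the shadow ray to any surface while marching, which is what gives the cheap penumbra:

```python
# Sketch of the common distance-field soft shadow estimate; constants are arbitrary.
import numpy as np

def soft_shadow(point, light_dir, sdf, k=8.0, t_min=0.02, t_max=20.0):
    """March from the shaded point towards the light; the closest approach to any
    surface, scaled by distance travelled, gives a penumbra factor in [0, 1]."""
    shadow = 1.0
    t = t_min
    while t < t_max:
        d = sdf(point + t * light_dir)
        if d < 1e-4:
            return 0.0                     # the ray actually hits something: full shadow
        shadow = min(shadow, k * d / t)    # the farther the occluder, the softer the penumbra
        t += d
    return shadow

# Example: a point directly below a floating unit sphere is fully shadowed.
occluder = lambda p: np.linalg.norm(p - np.array([0.0, 2.0, 0.0])) - 1.0
print(soft_shadow(np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), occluder))  # 0.0
```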

Here is a demo of soft-shadows computed entirely with distance fields in UE4:


Surprisingly detailed, I wonder how much memory it eats up. The catch here is that the rest of the scene is rendered traditionally, with rasterized triangle meshes, lit with a deferred renderer. Dreams though, as it seems, is gonna be the first example of a commercial game where even the primary rays are computed through distance fields, which is apparently only viable given the more loose and (appropriately) dreamy art style they are going for.
 
Dreams though, as it seems, is gonna be the first example of a commercial game where even the primary rays are computed through distance fields, which is apparently only viable given the more loose and (appropriately) dreamy art style they are going for.

Nope, that is incorrect based on the information they have given and on what can be seen in the material released. They are NOT tracing primary rays to render the distance fields; they are creating dense point splats from the distance field data. It's the point splats that are rendered using a compute shader. They probably do some AO/lighting stuff with the aid of the distance field, although it would be interesting to see whether things like disc-tree based AO techniques might be more efficient.

I imagine their pipeline is something like:

1) Procedural primitives to aid user creation (possibly a mix of CPU and GPU stuff)
2) Procedural primitives get evaluated into sparse distance field volume chunks - maybe 16-bit distance values, on the GPU using sparse volume texture support; storage is probably transient/cached
3) Some interesting compute shaders run to turn the distance field chunks into a point/disc based representation - could be adaptive or a uniform sphere-tree LOD'd representation; uniform gridding would allow certain optimisations, and as it's highly dense... (a rough sketch of steps 2-3 follows after this list)
4) If the points are being skinned for a character, they either adaptively fill gaps by tessellating in new points, or their point fields are perhaps made really dense to cover the eventuality given the view parameters, or something more fun
5) The cached point based representation is rendered by binning the points into screen tiles in a compute shader, sorting per tile (possibly using some tricks if it's a gridded point based representation), composited front to back using a custom HiZ buffer for occlusion
6) Whether they are using more academically inspired filtering techniques like filtered elliptical point splats, EWA etc. I am not sure, but I guess they have come up with something faster that looks good enough, eg: http://www.iquilezles.org/www/articles/pclouds/pclouds.htm
7) Lots of tricks involving compositing cutoffs, noise, dithering, and screen space filters to fill unwanted gaps and reduce particle count


If that's how it works the devil is in all the details and making it run fast enough which is a pretty amazing feat.
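As a rough, CPU-side toy of what steps 2-3 might look like (purely my own guess, not MM's pipeline; the chunk size, thresholds and numpy choice are assumptions): evaluate the distance function into a dense chunk, then emit a point with a gradient-derived normal for every voxel that straddles the surface.

```python
# Toy sketch only; names, chunk size and thresholds are made up.
import numpy as np

def evaluate_chunk(sdf, origin, voxel_size, n=32):
    """Sample a distance function at the centers of an n^3 voxel chunk."""
    idx = (np.arange(n) + 0.5) * voxel_size
    gx, gy, gz = np.meshgrid(idx, idx, idx, indexing="ij")
    centers = np.asarray(origin) + np.stack([gx, gy, gz], axis=-1)
    field = np.apply_along_axis(sdf, -1, centers)
    return centers, field

def extract_surface_points(centers, field, voxel_size):
    """Emit a point (plus a gradient-derived normal) for each voxel near the surface."""
    mask = np.abs(field) < 0.5 * voxel_size          # voxels straddling the zero level set
    points = centers[mask]
    gx, gy, gz = np.gradient(field)                  # field gradient -> outward normal
    normals = np.stack([gx, gy, gz], axis=-1)[mask]
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
    return points, normals
```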
 
Surprisingly detailed, I wonder how much memory it eats up. The catch here is that the rest of the scene is rendered traditionally, with rasterized triangle meshes, lit with a deferred renderer.

Even low resolution distance fields are very good at keeping details.

You are going to see a lot of 'mixed' approaches over the next few years, which is why it's quite exciting - as honestly no one is quite sure which direction things will swing. UE4 is a fantastic case in point as it's really a last generation 'nextgen' renderer that is struggling to pick the right feature set going forward in these changing times (which isn't meant as a criticism!). I'd conjecture that what UE5 picks as its feature set will be most interesting, as the gap between UE3 and UE5 should be a generational shift.

Matt/SMASH has done some interesting experiments on the path of the 'mixed' approach (https://directtovideo.wordpress.com/). Plus there is the PowerVR raytracing stuff they want people to mix with traditional rendering methods.

When the GPUs from the end of this year are commonplace, I expect it might be commercially feasible to release some sort of restricted game type that actually uses distance field tracing for primary hits (Timothy Lottes seems to be making the most progress in that regard at the moment).
 
The Teddy Bear vs Zombies scene seems surprisingly sharp. Perhaps not everything is or needs to be hidden in stylistic choices?
 
They say this in the article:

Now about the mysterious gameplay: There's a reason why Dreams' visual design shifts between the solid and the gauzy -- an effect Healey likens to an impressionist painting -- and that's because progression through the game will mirror that of actual dreams. Healey says that players "can go from experience to experience in a very dream-like way." It's an effect he hopes will spur the community to experiment quickly with the create tools and stumble into new modes of play.
 
Yeah, we don't know how the distance fields actually become pixels on screen. The fuzzier stuff definitely looks a lot like it's using some sort of particle system, while the sharper objects such as the zombies look more ray-tracey, but from what I gather the more keen-eyed here did spot some holes and blotchiness even there, so your suggestion of using a compute renderer to render point clouds seems very reasonable.
What I meant though, was that this might be the first example where the primary representation of the assets is the distance fields themselves. In Unreal 4 and the other examples I gave, what you see on screen is the poly-mesh; distance fields are secondary representations to accelerate lighting effects. Here you might see particles, but those were generated based on a distance field; there are no triangles to speak of. But all of this is very speculative.

UE4 is a fantastic case in point as it's really a last generation 'nextgen' renderer that is struggling to pick the right feature set going forward in these changing times (which isn't meant as a criticism!).

Well, that's sort of what I felt UE3 was like though. They refused to hop on the deferred lighting/rendering boat throughout the whole last gen up until the DX11 revision, despite everyone else eventually doing so - except for most 60fps games, a target very few UE3 games hit anyway. Most Unreal 3 solutions felt very past-gen-ish. No surprise UE4 is struggling to pick modern choices; they've been sort of avoiding doing that for the last decade to some extent.
 
The make or break for the game seems to rest on how intuitive their player content creation tools are for something that could be so open ended. If it's a step towards "I can't draw or design but can imagine"...
What I meant though, was that this might be the first example where the primary representation of the assets is the distance fields themselves.

Well, that's sort of what I felt UE3 was like though. No surprise UE4 is struggling to pick modern choices; they've been sort of avoiding doing that for the last decade to some extent.


They have actually said they are using point rendering via Twitter, plus you can really see the splotchy points on the zombies if you look close enough. I think you are right in that it's definitely the first commercial game example of something using distance fields for the main primitive. I assume they are probably brick based, as it would be more obviously GPU efficient given the sparse texture support; plus, if they were using something like ADFs, someone might run afoul of that horrible part of the US legal system.

To be fair to UE4, they have to make choices that fit commercial realities (which have been skewed by mobile), and I would say they are a good modern engine, just not a truly next-gen engine - as the next gen is still out there waiting to be defined, a much more radical risk than it was in the past. The way they had to drop VoxelGI support right at the start is an example (although, given how well The Tomorrow Children has managed plus new nVidia GPU support, I am sure it's coming back soon). With the amount of TFLOPS the new PC graphics cards have, it could go in lots of different directions really - which is fun, and stylistically the market as it stands can support a few different crazy directions until the best compromises emerge.
 
Looks like sprites to me.
Yes, that smoke looks like (properly lit) sprites. Their particle renderer could be using a similar technique to this (page 19): http://www.slideshare.net/mobile/De...ndering-using-direct-compute-by-gareth-thomas

This technique beats the ROP (rasterizer) based particle rendering by a large factor when you have lots of overdraw. This particular presentation doesn't talk about local light sources. I can assure you that this technique handles huge light counts very well. You can use similar light culling algorithm as you would use with tiled deferred lighting (or even combine the two passes together).

I am wondering whether they render the (SDF/point) geometry to a g-buffer and then do lighting and post processing traditionally (with compute shaders of course, as everyone should be doing), or if they do the lighting (and maybe some post effects) as part of the geometry rendering pass. A pure SDF raytracer has zero overdraw, so g-buffering doesn't bring big performance gains. A point renderer however could get some gains (depending on the exact technique used).
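A rough CPU mock of the tiled, compute-style particle idea referenced above (my own sketch, not Gareth Thomas' code or anything from Dreams; the tile size and data layout are assumptions): bin projected particles into screen tiles, sort each tile's list by depth, and composite front to back so a tile can stop early once it is opaque.

```python
# CPU mock of tile-binned particle compositing; tile size and structures are made up.
import numpy as np

TILE = 32  # tile size in pixels

def bin_particles(particles, width, height):
    """particles: iterable of (x, y, depth, (r, g, b, a)) already projected to screen."""
    tiles_x, tiles_y = width // TILE, height // TILE
    bins = [[] for _ in range(tiles_x * tiles_y)]
    for x, y, depth, rgba in particles:
        tx, ty = int(x) // TILE, int(y) // TILE
        if 0 <= tx < tiles_x and 0 <= ty < tiles_y:
            bins[ty * tiles_x + tx].append((depth, rgba))
    return bins

def composite_tile(bucket):
    """Front-to-back 'under' blending of one tile's particle list; on the GPU one
    thread group would own the tile and stop as soon as it is effectively opaque."""
    bucket.sort(key=lambda p: p[0])                  # nearest particles first
    color, alpha = np.zeros(3), 0.0
    for depth, (r, g, b, a) in bucket:
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                             # early out: rest of the list is hidden
            break
    return color, alpha
```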
 
Yes, that smoke looks like (properly lit) sprites.

That's great: it probably means explosions will look like they have volume. It's not true volumetric smoke, but it will probably improve the look of explosions in future games.
 
http://advances.realtimerendering.com/s2015/index.html

Soon we will have a better idea of what they do in Dreams, once the slides are available.

Abstract:

Over the last 4 years, MediaMolecule has been hard at work to evolve its brand of ‘creative gaming’. Dreams has a unique rendering engine that runs almost entirely on the PS4’s compute unit (no triangles!); it builds on scenes described through Operationally Transformed CSG trees, which are evaluated on-the-fly to high resolution signed distance fields, from which we generate dense multi-resolution point clouds. In this talk we will cover our process of exploring new techniques, and the interesting failures that resulted. The hope is that they provide inspiration to the audience to pursue unusual techniques for real-time image formation. We will chart a series of different algorithms we wrote to try to render ‘Dreams’, even as its look and art direction evolved. The talk will also cover the renderer we finally settled on, motivated as much by aesthetic choices as technical ones, and discuss some of the current choices we are still exploring for lighting, anti-aliasing and optimization.
 
They actually go over a number of ways of using the data which'll save others the R&D. Some of the results looked fabulous but weren't suitable for their specific UGC based game. The final solution uses 2D splats based on a point cloud evaluation.

Also, three years experimenting! It's all very well hoping other devs will try alternative rendering techniques, but few can afford the luxury of three+ years experimenting without a working product to sell!
 