Signed Distance Field rendering - pros and cons (as used in PS4 title Dreams) *spawn

Discussion in 'Rendering Technology and APIs' started by chris1515, Jun 16, 2015.

  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,104
    Likes Received:
    16,896
    Location:
    Under my bridge
    What exactly is the data representation of these surfaces/volumes? I browsed a paper and it was suggesting CSGs as building blocks were being assembled to make more complex shapes. That would make sense regards efficiency and is the basis, as I understand it, of these epic procedural demos. Having used Real3D on Amiga many, many years ago that used CSG and booleans to model and render, I understand that this can be versatile. Displacement maps are also possible, so I suppose computed displacement on top of CSG bounding volumes could add detail.

    From a modelling perspective, I suppose MM would need to provide an arbitrary volume and map that to CSGs on the fly. Unless I'm completely mistaken on what's going on here!
     
    Lightman likes this.
  2. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    You can generate a (non-signed) distance field volume from a point cloud by first calculating a Voronoi diagram from the point cloud. This tells you which point is the closest one for each distance field voxel (calculating the distance to this point is trivial). Signed distance fields do not exist for point clouds (as points are infinitely small, you cannot be inside one). If you want to generate an SDF (signed distance field) by assuming that the points have some radius, the algorithm becomes quite a bit more complex. This is actually the same problem as calculating the intersection between multiple distance fields. Carving out multiple small fields from a big one doesn't generate an optimal distance field (the ray doesn't skip all the empty space in one step); the same is true for the negative side (the inside-the-object distance) of the union operation. Perfect quality on the negative (inside) side is not that important: it is usually only used for gradient calculation at the surface (for surface normals). As long as the negative side is correct very close to the surface, rendering performance and quality will not suffer. If you don't have any negative values (a non-signed distance field), you need to bias the gradient query a little bit off the surface to avoid artifacts.
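    A minimal sketch of the min()-union and the gradient query described above (a generic illustration, not Dreams' code; all names here are made up):

```python
import math

def sphere_sdf(p, center, radius):
    # Exact signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def union_sdf(p):
    # min() of two exact fields is exact outside both shapes, but on the
    # negative (inside) side it only lower-bounds the true distance --
    # which is exactly why a union does not produce an optimal interior.
    return min(sphere_sdf(p, (0.0, 0.0, 0.0), 1.0),
               sphere_sdf(p, (1.5, 0.0, 0.0), 1.0))

def gradient(sdf, p, eps=1e-3):
    # Central-difference gradient, the usual way to get surface normals.
    # For an unsigned field you would evaluate this slightly off the
    # surface (e.g. at p - eps * ray_dir) to avoid the kink at distance 0.
    g = []
    for ax in range(3):
        lo, hi = list(p), list(p)
        lo[ax] -= eps
        hi[ax] += eps
        g.append((sdf(tuple(hi)) - sdf(tuple(lo))) / (2.0 * eps))
    return tuple(g)
```

    At a surface point of the left sphere, e.g. (-1, 0, 0), the union evaluates to 0 and the gradient points along the outward normal (-1, 0, 0).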

    Distance fields can be used to accelerate ray tracing (all kinds of rays: primary rays, secondary rays, shadow rays, etc.), cone tracing, sphere tracing, collision detection, area queries, etc. You need a surprisingly low-resolution distance field to get pretty good results (for anything other than primary and shadow rays, of course; primary rays and shadow rays need quite a bit of detail). You can combine low-resolution (conservative) distance fields with triangles in ray tracing. Triangle tests are only performed when the ray is close enough to the surface, and only against those triangles that are nearby. The same thing works with voxels and practically any kind of geometry (procedural, fractals, subdivision, etc.). The algorithmic complexity of calculating a distance field from different kinds of geometry differs a lot.
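    A minimal sphere tracing loop showing the empty-space skipping described above (a generic sketch; `unit_sphere_sdf` is a stand-in for a real field):

```python
import math

def unit_sphere_sdf(p):
    # Distance to a unit sphere at the origin (stand-in for a real field).
    return math.hypot(*p) - 1.0

def sphere_trace(sdf, origin, direction, max_dist=100.0, eps=1e-4,
                 max_steps=128):
    # March along the ray. sdf(p) is the radius of a sphere around p that
    # is guaranteed empty, so stepping by it skips that much space without
    # ever overshooting a surface. In a hybrid tracer, this is where you
    # would switch to exact triangle (or voxel) tests once the distance
    # falls below some threshold.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t          # hit: close enough to the surface
        t += dist             # safe skip through guaranteed-empty space
        if t > max_dist:
            break
    return None               # miss
```

    A ray from (0, 0, -3) aimed down +z hits the unit sphere at t = 2 in just two steps, because the very first field lookup reports two whole units of empty space.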

    Of course if you just use pure distance fields (and have your own software for authoring them), you never need to generate them. However, in this case you need a very good way to compress the field, as a pure distance field renderer (for primary and shadow rays) needs lots of detail density (and the field needs n^3 storage, as it is a 3D volume texture). Using mathematical distance functions sidesteps the storage cost completely, but it has its own limitations (for both the content look and the scene complexity). Several 4K demos have used mathematical distance field functions successfully, but the scenes in these demos have been quite small (and contained lots of repetition and other tricks).
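    The repetition trick those 4K demos lean on can be shown with a single analytic function (a generic sketch, not taken from any particular demo): folding space with mod() turns one sphere into an infinite grid of spheres at zero storage cost.

```python
import math

def repeated_spheres_sdf(p, cell=4.0, radius=1.0):
    # Fold all of space into one cell centred on the origin, then evaluate
    # a single analytic sphere there. Every cell of the infinite grid now
    # contains an identical sphere, with no volume texture stored at all.
    q = tuple(((c + 0.5 * cell) % cell) - 0.5 * cell for c in p)
    return math.hypot(*q) - radius
```

    The field is -1 at every sphere centre (0, 0, 0), (4, 0, 0), (8, 0, 0), ..., and exactly 1 at the midpoint between two neighbours.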
     
  3. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    Most of these point/voxel data representations are somewhat similar. The Unlimited Detail guys have never said what kind of acceleration structures they use, so it is impossible to compare their technique to the other alternatives. They are, however, purely CPU based. From the (limited number of) forum posts made by the UD guys, I understood that they don't even use AVX to data-parallelize their rendering algorithm. They are most likely using some algorithm that is highly serial and would not be easy to port to the GPU (as a compute shader).
     
  4. milk

    milk Like Verified
    Veteran

    Joined:
    Jun 6, 2012
    Messages:
    3,977
    Likes Received:
    4,101
    It's interesting to note that the use of distance fields is not uncommon for approximating GI and other effects. Off the top of my head, Splinter Cell Conviction, Infamous 2, Sim City 4 and The Last of Us all used some sort of ambient occlusion fields. Some were capsules, some 3D textures, and other approximations. Currently UE4 uses distance fields not only to compute AO, but also distant shadows in their Kite demo.

    Here is a demo of soft-shadows computed entirely with distance fields in UE4:



    Surprisingly detailed; I wonder how much memory it eats up. The catch here is that the rest of the scene is rendered traditionally, with rasterized triangle meshes, lit with a deferred renderer. Dreams, though, it seems, is going to be the first example of a commercial game where even the primary rays are computed through distance fields, which is apparently only viable given the looser and (appropriately) dreamy art style they are going for.
     
    homerdog, Lightman and TheAlSpark like this.
  5. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    33
    Likes Received:
    37
    Location:
    Hong Kong
    Nope, that is incorrect from the information they have given and from what can be seen in the material released. They are NOT tracing primary rays to render the distance fields; they are creating dense point splats from the distance field data. It's the point splats that are rendered using a compute shader. They probably do some AO/lighting stuff with the aid of the distance field, although it would be interesting to see whether things like disc-tree based AO techniques might be more efficient.

    I imagine their pipeline is something like:

    1) Procedural primitives to aid user creation (possibly a mix of CPU and GPU stuff)
    2) Procedural primitives get evaluated into sparse distance field volume chunks - maybe 16-bit distance values - on the GPU using sparse volume texture support; storage is probably transient/cached
    3) Some interesting compute shaders run to turn the distance field chunks into a point/disc based representation - could be adaptive or uniform sphere-tree LOD'd; a uniform grid would allow certain optimisations, and as it's highly dense...
    4) If the points are being skinned for a character, they either adaptively fill gaps by tessellating in new points, or their point fields are perhaps made really dense to cover the eventuality given the view parameters, or something more fun
    5) The cached point based representation is rendered by binning the points into screen tiles in a compute shader, sorting per tile (possibly using some tricks if it's a gridded point based representation), then compositing front to back using a custom HiZ buffer for occlusion
    6) Whether they are using more academically inspired filtering techniques like filtered elliptical point splats, EWA, etc. I am not sure, but I guess they have come up with something faster that looks good enough, e.g.: http://www.iquilezles.org/www/articles/pclouds/pclouds.htm
    7) Lots of tricks involving compositing cutoffs, noise, dithering, and screen space filters to fill unwanted gaps and reduce particle count


    If that's how it works, the devil is in all the details, and making it run fast enough is a pretty amazing feat.
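    If the pipeline really works like the above, step 3 might be sketched like this (purely hypothetical, on the CPU; `extract_surface_points` and `_grad` are invented names): keep the voxels the surface passes through, then snap each candidate onto the surface along the field gradient.

```python
import math

def _grad(sdf, p, eps=1e-4):
    # Central-difference gradient of the distance field at p.
    g = []
    for ax in range(3):
        lo, hi = list(p), list(p)
        lo[ax] -= eps
        hi[ax] += eps
        g.append((sdf(tuple(hi)) - sdf(tuple(lo))) / (2.0 * eps))
    return g

def extract_surface_points(sdf, grid_min, grid_max, n):
    # Walk a uniform n^3 grid over the chunk; keep voxel centres whose
    # |distance| is under half the voxel diagonal (so the voxel may
    # straddle the surface), then project each candidate onto the surface:
    #   q = p - sdf(p) * normalize(grad sdf(p))
    step = (grid_max - grid_min) / n
    half_diag = 0.5 * step * math.sqrt(3.0)
    points = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                p = tuple(grid_min + (a + 0.5) * step for a in (i, j, k))
                d = sdf(p)
                if abs(d) > half_diag:
                    continue  # voxel nowhere near the surface
                g = _grad(sdf, p)
                norm = math.sqrt(sum(x * x for x in g)) or 1.0
                points.append(tuple(c - d * x / norm
                                    for c, x in zip(p, g)))
    return points
```

    Run against an analytic unit-sphere field over a [-1.5, 1.5]^3 chunk, every emitted point lands on the sphere's surface, which is the property a splat renderer would need from this stage.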
     
    xz321zx, homerdog, Lightman and 2 others like this.
  6. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    33
    Likes Received:
    37
    Location:
    Hong Kong
    Even low-resolution distance fields are very good at preserving detail.

    You are going to see a lot of 'mixed' approaches over the next few years, which is why it's quite exciting - honestly, no one is quite sure which direction things will swing. UE4 is a fantastic case in point, as it's really a last-generation 'nextgen' renderer that is struggling to pick the right feature set going forward in these changing times (which isn't meant as a criticism!). I'd conjecture that what UE5 picks as its feature set will be most interesting, as the gap between UE3 and UE5 should be a generational shift.

    Matt/SMASH has done some interesting experiments on the path of the 'mixed' approach (https://directtovideo.wordpress.com/). Plus there is the PowerVR raytracing stuff they want people to mix with traditional rendering methods.

    When the GPUs from the end of this year are commonplace, I expect it might be commercially feasible to release some sort of restricted game type that actually uses distance field tracing for primary hits (Timothy Lottes seems to be making the most progress in that regard at the moment).
     
    xz321zx and chris1515 like this.
  7. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,762
    Likes Received:
    2,639
    Location:
    Maastricht, The Netherlands
    The Teddy Bear vs Zombies scene seems surprisingly sharp. Perhaps not everything is or needs to be hidden in stylistic choices?
     
    chris1515 likes this.
  8. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    #28 chris1515, Jun 19, 2015
    Last edited: Jun 19, 2015
  9. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    they say this


    From the article

     
    #29 chris1515, Jun 19, 2015
    Last edited: Jun 19, 2015
  10. milk

    milk Like Verified
    Veteran

    Joined:
    Jun 6, 2012
    Messages:
    3,977
    Likes Received:
    4,101
    Yeah, we don't know how the distance fields actually become pixels on screen. The fuzzier stuff definitely looks a lot like it's using some sort of particle system, while the sharper objects such as the zombies look more ray-tracey, but from what I gather the more keen-eyed here did spot some holes and blotchiness even there, so your suggestion of using a compute renderer to render point clouds seems very reasonable.
    What I meant, though, was that this might be the first example where the primary representation of the assets is the distance fields themselves. In Unreal 4 and the other examples I gave, what you see on screen is the poly mesh; distance fields are secondary representations to accelerate lighting effects. Here you might see particles, but those were generated from a distance field; there are no triangles to speak of. But all of this is very speculative.

    Well, that's sort of what I felt UE3 was like, though. They refused to hop on the deferred lighting/rendering boat throughout the whole last gen up until the DX11 revision, despite everyone else eventually doing so - except for most 60fps games, a target very few UE3 games hit anyway. Most Unreal 3 solutions felt very past-gen-ish. No surprise UE4 is struggling to pick modern choices; they've been sort of avoiding doing that for the last decade to some extent.
     
  11. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    33
    Likes Received:
    37
    Location:
    Hong Kong
    The make-or-break for the game seems to rest on how intuitive their player content creation tools are for something that could be so open ended. If it's a step towards 'I can't draw or design, but I can imagine'...

    They have actually said via Twitter that they are using point rendering, plus you can really see the splotchy points on the zombies if you look close enough. I think you are right that it's definitely the first commercial game example of something using distance fields for the main primitive. I assume they are probably brick based, as that would be more obviously GPU efficient given the sparse texture support; plus, if they were using something like ADFs, someone might run afoul of that horrible part of the US legal system.

    To be fair to UE4, they have to make choices that fit commercial realities (which have been skewed by mobile), and I would say they are a good modern engine, just not a truly nextgen engine - as the nextgen is still out there waiting to be defined, a much more radical risk than it was in the past. The way they had to drop VoxelGI support right at the start is an example (although given how well The Tomorrow Children has managed, plus new Nvidia GPU support, I am sure it's coming back soon). With the amount of TFLOPS the new PC graphics cards have, it could go in lots of different directions really - which is fun, and stylistically the market as it stands can support a few different crazy directions until the best compromises emerge.
     
    chris1515 likes this.
  12. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    Arwin and Lightman like this.
  13. L. Scofield

    Veteran

    Joined:
    Mar 28, 2007
    Messages:
    2,559
    Likes Received:
    323
  14. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    It is sprites, just a good use of the fluffy rendering and lighting.
     
    #34 chris1515, Jun 21, 2015
    Last edited: Jun 21, 2015
  15. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    Yes, that smoke looks like (properly lit) sprites. Their particle renderer could be using a similar technique to this (page 19): http://www.slideshare.net/mobile/De...ndering-using-direct-compute-by-gareth-thomas

    This technique beats ROP (rasterizer) based particle rendering by a large factor when you have lots of overdraw. This particular presentation doesn't talk about local light sources, but I can assure you that this technique handles huge light counts very well. You can use a similar light culling algorithm as you would with tiled deferred lighting (or even combine the two passes together).

    I am wondering whether they render the (SDF/point) geometry to a g-buffer and then do lighting and post processing traditionally (with compute shaders of course, as everyone should be doing), or whether they do the lighting (and maybe some post effects) as part of the geometry rendering pass. A pure SDF raytracer has zero overdraw, so g-buffering doesn't bring big performance gains. A point renderer, however, could get some gains (depending on the exact technique used).
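    For reference, a toy CPU analogue of that tile-based particle compositing (my own simplification with invented names and one-pixel splats, not the actual compute shader from the presentation):

```python
def composite_tiles(particles, tile=32):
    # particles: list of (x, y, depth, color, alpha) screen-space splats,
    # each covering one pixel to keep the sketch tiny; color is a scalar.
    # Bin splats into screen tiles (one compute workgroup per tile on a
    # GPU), sort each tile front to back, then alpha-composite under the
    # accumulated transmittance instead of blending through the ROPs.
    tiles = {}
    for p in particles:
        key = (int(p[0]) // tile, int(p[1]) // tile)
        tiles.setdefault(key, []).append(p)
    framebuffer = {}  # pixel -> (accumulated color, transmittance)
    for bucket in tiles.values():
        bucket.sort(key=lambda p: p[2])  # front to back within the tile
        for x, y, depth, color, alpha in bucket:
            px = (int(x), int(y))
            c, t = framebuffer.get(px, (0.0, 1.0))
            framebuffer[px] = (c + t * alpha * color, t * (1.0 - alpha))
    return framebuffer
```

    Two half-transparent splats on the same pixel, submitted back to front, still composite correctly because the per-tile sort restores front-to-back order before blending.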
     
    #35 sebbbi, Jun 21, 2015
    Last edited: Jun 21, 2015
    fellix, Lightman and chris1515 like this.
  16. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    That is great: it probably means explosions that look like they have volume. It is not true volumetric smoke, but it will probably improve the look of explosions in future games.
     
  17. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    They use signed distance fields and voxels too, it seems. This is the Twitter account of Simon Brown of MM:

    https://mobile.twitter.com/sjb3d/

    And they were testing distance field with VR

    https://mobile.twitter.com/sjb3d/status/595699127108898817

    Another tweet said that they used some work done by Timothy Lottes (from NVScene) to manage 1080p at 60 fps. Very curious to see the SIGGRAPH presentation.
     
  18. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
    http://advances.realtimerendering.com/s2015/index.html

    Soon we will have a better idea of what they do in Dreams, once the slides are available.

     
  19. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,104
    Likes Received:
    16,896
    Location:
    Under my bridge
    They actually go over a number of ways of using the data which'll save others the R&D. Some of the results looked fabulous but weren't suitable for their specific UGC based game. The final solution uses 2D splats based on a point cloud evaluation.

    Also, three years experimenting! It's all very well hoping other devs will try alternative rendering techniques, but few can afford the luxury of three+ years experimenting without a working product to sell!
     
    London Geezer likes this.