Signed Distance Field rendering - pros and cons (as used in PS4 title Dreams) *spawn

Discussion in 'Rendering Technology and APIs' started by chris1515, Jun 16, 2015.

  1. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,532
    Likes Received:
    3,377
    Location:
    Barcelona Spain
    https://twitter.com/mmalex/status/610689056645758976

    Media Molecule's Dreams uses 100% compute-based software rendering, no rasterizer.

    Apparently a distance-field-based software renderer; they will discuss it at the next SIGGRAPH.
     
    #1 chris1515, Jun 16, 2015
    Last edited: Jun 16, 2015
  2. Globalisateur

    Globalisateur Globby
    Veteran Regular Subscriber

    Joined:
    Nov 6, 2013
    Messages:
    3,415
    Likes Received:
    2,120
    Location:
    France
    chris1515 likes this.
  3. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,532
    Likes Received:
    3,377
    Location:
    Barcelona Spain
    Like The Tomorrow Children, there is something special about the rendering of this game. It goes even further...
     
  4. Billy Idol

    Legend Veteran

    Joined:
    Mar 17, 2009
    Messages:
    5,950
    Likes Received:
    779
    Location:
    Europe
    Is this the reason why it looked so good, like an animated CGI movie? Cool!

    Bring it on!
     
  5. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,532
    Likes Received:
    3,377
    Location:
    Barcelona Spain
  6. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,532
    Likes Received:
    3,377
    Location:
    Barcelona Spain
  7. London-boy

    London-boy Shifty's daddy
    Legend Subscriber

    Joined:
    Apr 13, 2002
    Messages:
    21,910
    Likes Received:
    6,050
    That trailer looks head and shoulders above everything else that's been shown this generation. But what is the game about??
     
    kalelovil, DSoup and Shortbread like this.
  8. Rurouni

    Veteran

    Joined:
    Sep 30, 2008
    Messages:
    970
    Likes Received:
    261
    Making movies... err... dreams!
     
  9. Karamazov

    Veteran Regular

    Joined:
    Sep 20, 2005
    Messages:
    2,852
    Likes Received:
    2,279
    Location:
    France
  10. zed

    zed
    Veteran

    Joined:
    Dec 16, 2005
    Messages:
    4,701
    Likes Received:
    825
    I wouldn't be surprised if it's similar to a method I did more than 10 years ago, first on the CPU and then a couple of years later on a GeForce 3. It was too slow at the time for anything fullscreen, but hardware has come a long way since the days of the GF3.
     
  11. Zane

    Newcomer

    Joined:
    Feb 14, 2015
    Messages:
    20
    Likes Received:
    17
    A giant creation tool. From Engadget: "Dreams, as the new title is called, takes a unique approach to gameplay, letting PlayStation 4 users create, explore and "remix" each other's dreams."

    They said, on stage, you can make games and plays. Anything you want, it seems.
     
  12. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    42,999
    Likes Received:
    15,174
    Location:
    Under my bridge
    There are significant issues there regarding modelling and animation. The renderer looks amazing, but what are the creation tools going to be like? And what limits will the system impose on the creations?
     
  13. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,532
    Likes Received:
    3,377
    Location:
    Barcelona Spain
    Animation is fully procedural, no keyframes; they talked about it on stage. There will be limitations, but they use signed distance field raytracing for rendering, and it is ideal for sculpting.

    http://www.cescg.org/CESCG-2010/papers/PragueCVUT-Jamriska-Ondrej.pdf

    A great PDF about the technique.
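
    The core idea, if you don't want to read the whole paper, is sphere tracing: you march a ray through the field, stepping each time by the distance value, because nothing can be closer than that. A little toy sketch of my own (one analytic sphere over a ground plane, nothing to do with Dreams' actual code):

# Minimal sphere-tracing sketch against an analytic signed distance field.
# The scene (one sphere over a ground plane) and all constants are only
# illustrative; this is not Media Molecule's renderer.
import math

def sdf_sphere(p, centre, radius):
    return math.dist(p, centre) - radius

def sdf_plane(p, height):
    return p[1] - height

def scene_sdf(p):
    # Union of primitives = minimum of their distances.
    return min(sdf_sphere(p, (0.0, 1.0, 5.0), 1.0), sdf_plane(p, 0.0))

def sphere_trace(origin, direction, max_steps=128, epsilon=1e-3, max_dist=100.0):
    """March along the ray, stepping by the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene_sdf(p)
        if d < epsilon:      # close enough: treat it as a hit
            return t
        t += d               # safe step: no surface can be closer than d
        if t > max_dist:
            break
    return None              # the ray escaped the scene

# Example: a ray fired straight down +Z hits the sphere at t ~= 4.0.
print(sphere_trace((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))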

     
    #13 chris1515, Jun 17, 2015
    Last edited: Jun 17, 2015
  14. Newguy

    Regular Newcomer

    Joined:
    Nov 10, 2014
    Messages:
    257
    Likes Received:
    113
  15. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,356
    Likes Received:
    3,180
    The renderer is cool, but I think I have to burst your bubble on the "volumetric smoke" thing. That smoke trail seemed to be no more than a bunch of opaque blobby objects being spawned every frame and growing in size. You could do that with polygons; it's not real fluid simulation, and it only looks OK for a highly stylized game like this.
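
    To be clear about what I mean, faking a trail like that takes nothing more than something along these lines (a toy sketch of the general trick, not anything from Dreams itself): spawn a blob at the emitter every frame and let each one grow and eventually die.

# Toy "fake smoke" sketch: one opaque blob spawned per frame behind a moving
# emitter, each blob growing over time. Constants are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Blob:
    position: tuple
    radius: float = 0.1   # starts small
    age: float = 0.0

@dataclass
class SmokeTrail:
    growth_rate: float = 0.5          # radius units per second
    lifetime: float = 2.0             # seconds before a blob is retired
    blobs: list = field(default_factory=list)

    def update(self, emitter_position, dt):
        self.blobs.append(Blob(position=emitter_position))  # one new blob per frame
        for blob in self.blobs:
            blob.age += dt
            blob.radius += self.growth_rate * dt            # blobs swell as they age
        self.blobs = [b for b in self.blobs if b.age < self.lifetime]

# Example: emitter moving along +X at 60 fps for half a second.
trail = SmokeTrail()
for frame in range(30):
    trail.update((frame / 60.0, 0.0, 0.0), dt=1.0 / 60.0)
print(len(trail.blobs), "blobs; oldest radius =", round(trail.blobs[0].radius, 3))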
     
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    42,999
    Likes Received:
    15,174
    Location:
    Under my bridge
    Everything is blobby particles, basically. Also we're not talking async compute here, but compute in general. Probably needs a spawn...
    Moved discussion to a real forum, where SDF doesn't stand for Sony Defence Force.
     
    chris1515 likes this.
  17. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    17
    Likes Received:
    14
    Location:
    Hong Kong
    The most interesting and beautiful thing to happen for some time for sure.

    It's not distance field raytracing, although it probably uses some tricks casting rays against the distance fields for lighting. It's using compute shaders rather than the rasteriser for, I'd guess, something similar to tiled particle rendering - see a recent-ish AMD demo, for example, where their compute-shader particle rendering was faster than the rasteriser.
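
    Roughly what I mean by splatting points in compute rather than through the rasteriser, as a CPU-side toy (the projection, buffer sizes and depth test are all just illustrative, not their implementation):

# Rough sketch of software point splatting with a per-pixel depth test - the
# kind of work a compute shader would do instead of the fixed-function
# rasteriser. Written on the CPU in Python purely for clarity.
import numpy as np

WIDTH, HEIGHT = 320, 240
FOCAL = 200.0   # simple pinhole projection, illustrative value

def splat_points(points, colours):
    """points: (N, 3) camera-space positions; colours: (N, 3) RGB in [0, 1]."""
    depth = np.full((HEIGHT, WIDTH), np.inf, dtype=np.float32)
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)
    for (x, y, z), colour in zip(points, colours):
        if z <= 0.0:
            continue                              # behind the camera
        px = int(WIDTH / 2 + FOCAL * x / z)       # project to pixel coordinates
        py = int(HEIGHT / 2 - FOCAL * y / z)
        if 0 <= px < WIDTH and 0 <= py < HEIGHT and z < depth[py, px]:
            depth[py, px] = z                     # nearest point wins the pixel
            frame[py, px] = colour
    return frame, depth

# Example: two points land on the same pixel; the nearer (green) one survives.
pts = np.array([[0.0, 0.0, 5.0], [0.001, 0.0, 2.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
frame, depth = splat_points(pts, cols)
print(frame[120, 160], depth[120, 160])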

    Whether, for now, it can only be made efficient enough on consoles rather than on PC, given the low-level access and the consoles' characteristics, is an interesting question.

    It probably also takes some influence from Iq's old point cloud demos and transparency viewpoint sorting, but probably renders front to back for occlusion reasons. I don't think they are using full-on EWA splatting, as at certain angles you can clearly see the type of artefacts visible in Iq's old point cloud stuff (1.8 million points, 1024^3 volumes, 46 fps, 800x600 screen resolution on a Radeon 9800 in 2005...).

    I'd guess it's likely using something like a custom z-buffer hierarchy for occlusion culling, highly tuned tile sizes, and various optimisations for early kills given the dense, small-point nature of the cloud. Also, if you look closely at the screenshots from the media pack on their website, there is definitely a lot of dithering and noise being added to help it run fast enough, plus the usual post effects.

    You can see in the close-up and far-away parts of the screenshots where you get a pointillist, impressionistic effect. Whether this is a performance concession that happens to work well with the aesthetic they have chosen to mask it with, or purely a stylistic choice, is an important question given this style of rendering. It may be that they can only push just enough points for the requirements of their specific game. But at least last year's demo had a clay-looking scene with lots of houses and walls that seemed not to have this potential issue.

    How it handles truly transparent/refractive stuff would be interesting to see.

    For the SDF representations, I suspect they must use some sparse brick boundary representation to save on memory. Definitely lots of tricky details in making that all work, I'm sure.
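
    By sparse bricks I mean something roughly like this (a made-up toy layout, only to show the idea of keeping dense distance samples just around the surface):

# Sketch of a sparse "brick" store for a signed distance field: only bricks
# that the surface passes through keep dense samples; the rest are dropped.
# Brick size, spacing and the test field are all invented for illustration.
import numpy as np

BRICK = 8      # 8x8x8 samples per brick
VOXEL = 0.1    # world-space spacing between samples

def build_sparse_bricks(grid_bricks=8):
    bricks = {}                                  # (bx, by, bz) -> 8^3 float array
    half = grid_bricks * BRICK * VOXEL / 2.0
    local = np.arange(BRICK) * VOXEL             # sample offsets inside a brick
    for bx in range(grid_bricks):
        for by in range(grid_bricks):
            for bz in range(grid_bricks):
                origin = np.array([bx, by, bz]) * BRICK * VOXEL - half
                x, y, z = np.meshgrid(origin[0] + local,
                                      origin[1] + local,
                                      origin[2] + local, indexing="ij")
                data = np.sqrt(x * x + y * y + z * z) - 2.0   # sphere of radius 2
                if data.min() <= 0.0 <= data.max():           # surface crosses brick
                    bricks[(bx, by, bz)] = data.astype(np.float32)
    return bricks

bricks = build_sparse_bricks()
print(f"{len(bricks)} of {8 ** 3} bricks kept")   # far fewer than the full grid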

    In his tweets Alex Evans stated they rebuild the points when the SDF changes (he also says it will be discussed at SIGGRAPH). It's possible they have a sphere-tree-based LOD representation of the points and use an LOD metric similar to 'far voxels'. What's nice about this is what it means for dynamic and skinned things, as opposed to rigid voxels. It's hard to tell from the skinned zombies, for example, how they deal with surface stretching - whether they subdivide points somehow to fill gaps, or rely on the cloud being dense enough given deformation limits plus post-process passes to fill holes - but you can certainly see the points the zombies are made of when they are close to the screen.
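
    Pure speculation on my part, but LOD selection over a sphere tree would look roughly like this (the tree, the 'far voxels'-style projected-size metric and all thresholds are just my illustration):

# Speculative sketch of LOD selection over a sphere hierarchy: descend until a
# node's projected screen size falls under a pixel budget, then emit its points.
# The tree, metric and thresholds are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class SphereNode:
    centre: tuple
    radius: float
    points: list                          # representative points at this LOD
    children: list = field(default_factory=list)

def projected_size(node, camera_pos, focal_px):
    """Approximate on-screen diameter of the node's bounding sphere, in pixels."""
    dist = max(1e-6, sum((c - e) ** 2 for c, e in zip(node.centre, camera_pos)) ** 0.5)
    return 2.0 * node.radius * focal_px / dist

def select_lod(node, camera_pos, focal_px=800.0, budget_px=4.0, out=None):
    if out is None:
        out = []
    if not node.children or projected_size(node, camera_pos, focal_px) <= budget_px:
        out.extend(node.points)           # coarse enough (or a leaf): use this level
    else:
        for child in node.children:       # too big on screen: refine further
            select_lod(child, camera_pos, focal_px, budget_px, out)
    return out

# Example: a coarse root with two finer children.
leaves = [SphereNode((0, 0, 10), 0.5, points=[f"fine-{i}"]) for i in range(2)]
root = SphereNode((0, 0, 10), 1.0, points=["coarse"], children=leaves)
print(select_lod(root, camera_pos=(0, 0, -500)))   # far away -> ['coarse']
print(select_lod(root, camera_pos=(0, 0, 8)))      # up close -> the fine points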

    In another of Alex's tweets, he mentions they had to disable displacement mapping due to a bug! So later showings should be even more interesting... the ultimate demoscene particle system!? I guess the displacements will be applied along the SDF normals, taken from the gradient direction, etc.

    The commercial viability of the actual game is another interesting point - not everyone is a decent creator, and most people are just lazy - but given their past history I think they deserve the benefit of the doubt to see what they have up their sleeve in that regard.
     
    xz321zx, Arwin, TheAlSpark and 3 others like this.
  18. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    42,999
    Likes Received:
    15,174
    Location:
    Under my bridge
    So, if I'm understanding this right, the data model for the SDF is point clouds, and where data resolution starts to get too low we start to see the individual points resolve themselves rather than a continuous surface? Is it data efficient simply because the number of points is considerably less than the number of triangles in a comparable triangle mesh?
     
    chris1515 likes this.
  19. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    17
    Likes Received:
    14
    Location:
    Hong Kong
    It would be more correct to say that the rendering method for their SDF model data appears to be point based - they extract/'tessellate' a point-based representation from the SDF. When triangles go below a certain size they can become less efficient in a few different ways, not just for rendering (for example, think about point-based physics). Eventually, as your triangles get so small for the minute micro-details you want, there is possibly less value in a space-spanning representation when the limit converges to what is effectively a single point of data you are perceptually trying to represent (one point versus three indexed vertices).
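
    As a toy version of that extraction step (my own illustration, not their pipeline): walk the SDF samples, and wherever a sample sits within half a voxel of the surface, emit a point pushed onto the zero level set along the field's gradient.

# Toy point extraction from a discretely sampled SDF: emit one point per
# near-surface sample, projected onto the zero level set along the gradient.
# The field (a unit sphere), spacing and threshold are purely illustrative.
import numpy as np

VOXEL = 0.1

def extract_points(resolution=32):
    axis = (np.arange(resolution) - resolution / 2) * VOXEL
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    d = np.linalg.norm(grid, axis=-1) - 1.0        # SDF of a unit sphere
    near = np.abs(d) < 0.5 * VOXEL                 # samples close to the surface
    points = grid[near]
    # For a sphere the gradient is just the normalised position; push each
    # near-surface sample back onto the surface along it.
    normals = points / np.linalg.norm(points, axis=1, keepdims=True)
    return points - d[near][:, None] * normals

points = extract_points()
print(points.shape[0], "surface points, mean radius =",
      round(float(np.linalg.norm(points, axis=1).mean()), 4))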

    Different compromises on the type of connectivity information your representation has can also be a boon. For example they can more easily mix fluid simulations with character skinning and other bizarre cool stuff now.

    Another interesting aspect could be depth/parallax effects in VR. And if you think about trying to be physically accurate to the real world there are plenty of small gaps between atoms too I guess!
     
  20. Billy Idol

    Legend Veteran

    Joined:
    Mar 17, 2009
    Messages:
    5,950
    Likes Received:
    779
    Location:
    Europe
    If it is based on points, is this somehow related to that Unlimited Detail voxel thing?
     