Signed Distance Field rendering - pros and cons (as used in PS4 title Dreams) *spawn

Distance field based software renderer, apparently; they will discuss it at the next SIGGRAPH.
I wouldn't be surprised if it's similar to a method I did more than 10 years ago, first on the CPU and then a couple of years later on a GeForce 3. It was too slow at the time for anything fullscreen, but hardware has come a long way since the days of the GF3.
 
That trailer looks head and shoulders above everything else that's been shown this generation. But what is the game about??

A giant creation tool. From Engadget: "Dreams, as the new title is called, takes a unique approach to gameplay, letting PlayStation 4 users create, explore and "remix" each other's dreams."

They said, on stage, you can make games and plays. Anything you want, it seems.
 
There are significant issues there regarding modelling and animation. The renderer looks amazing, but what are the creation tools going to be like? And what limits will the system impose on creations?
 

Animation is fully procedural - no keyframes; they talked about it on stage. There will be limitations, but they use signed distance field raytracing for rendering, and it is ideal for sculpting.
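
For anyone who hasn't met the technique: sphere tracing is the standard way to cast rays against a signed distance field - you step along the ray by the distance the field reports, which is guaranteed not to overshoot the nearest surface. A minimal C++ sketch of the textbook idea (not anything from Dreams; the unit-sphere scene is just my example):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Example scene: signed distance to a unit sphere at the origin.
static float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// Sphere tracing: advance along the ray by the reported distance, which
// can never step through the nearest surface; stop when close enough.
static bool sphereTrace(Vec3 origin, Vec3 dir, float maxT, float* hitT) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; ++i) {
        float d = sceneSDF(add(origin, scale(dir, t)));
        if (d < 1e-3f) { *hitT = t; return true; }  // close enough to the surface
        t += d;                                     // safe step size
    }
    return false;
}

int main() {
    float t = 0.0f;
    bool hit = sphereTrace({0.0f, 0.0f, -3.0f}, {0.0f, 0.0f, 1.0f}, 100.0f, &t);
    std::printf("hit=%d t=%.3f\n", hit, t);  // expect a hit at t ~= 2
    return 0;
}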

http://www.cescg.org/CESCG-2010/papers/PragueCVUT-Jamriska-Ondrej.pdf

A great PDF about the technique:

Distance field is a versatile surface representation. Many applications need to represent dynamic surface that changes its shape over time in complex and unpredictable ways. For example, in solid modeling application, user may want to sculpt the shape of a surface by combination of adding material to existing model and carving holes into it. Another example is simulation of splashing water, where the interface between water and air needs to be tracked while it splits apart and merges together. In these circumstances, distance field representation is often used, due to its ability to handle operations that deform the surface and change its topology.
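
To make the "adding material and carving holes" part concrete: on distance fields those edits are just per-sample min/max operations. A tiny C++ illustration of the standard CSG operators (my own sketch of the textbook operations, not code from the paper or from Dreams):

#include <algorithm>
#include <cmath>
#include <cstdio>

// Distance to a sphere of radius r centred at (cx, cy, cz).
static float sdSphere(float x, float y, float z,
                      float cx, float cy, float cz, float r) {
    float dx = x - cx, dy = y - cy, dz = z - cz;
    return std::sqrt(dx*dx + dy*dy + dz*dz) - r;
}

// Standard CSG operators on signed distances.
static float opUnion(float a, float b)    { return std::min(a, b); }   // add material
static float opSubtract(float a, float b) { return std::max(a, -b); }  // carve b out of a

int main() {
    // A blob made by adding one sphere to another, then carving a hole.
    float x = 0.5f, y = 0.0f, z = 0.0f;  // sample point
    float base   = sdSphere(x, y, z, 0.0f, 0.0f, 0.0f, 1.0f);
    float added  = opUnion(base, sdSphere(x, y, z, 1.0f, 0.0f, 0.0f, 0.5f));
    float carved = opSubtract(added, sdSphere(x, y, z, 0.5f, 0.0f, 0.0f, 0.3f));
    std::printf("base=%.3f added=%.3f carved=%.3f\n", base, added, carved);
    return 0;
}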
 
The renderer is cool, but I think I have to burst your bubble on the "volumetric smoke" thing. That smoke trail seemed like no more than a bunch of opaque blobby objects being spawned every frame and growing in size. You could do that with polygons; it's not real fluid simulation, and it only looks OK for a highly stylized game like this.
 
Everything is blobby particles, basically. Also we're not talking async compute here, but compute in general. Probably needs a spawn...
Moved discussion to a real forum, where SDF doesn't stand for Sony Defence Force.
 
The most interesting and beautiful thing to happen for some time for sure.

It's not distance field raytracing - although it probably uses some tricks casting rays through the distance fields for lighting. It's using compute shaders rather than the rasteriser for, I'd guess, something similar to tiled particle rendering - see a recent-ish AMD demo, for example, where their compute shader particle rendering was faster than the rasteriser.
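
For reference, "tiled particle rendering" here means binning particles into screen tiles so that each compute workgroup only has to walk a short local list. A rough CPU-side sketch of just the binning step, with made-up particle and tile sizes (purely illustrative, not how Dreams or the AMD demo actually does it):

#include <algorithm>
#include <cstdio>
#include <vector>

// Assign each particle to every screen tile its bounding circle touches,
// so a later per-tile pass only looks at the particles that can affect it.
struct Particle { float x, y, radius; };  // screen-space position and size

static const int kScreenW = 1280, kScreenH = 720, kTile = 16;

int main() {
    const int tilesX = (kScreenW + kTile - 1) / kTile;
    const int tilesY = (kScreenH + kTile - 1) / kTile;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    std::vector<Particle> particles = {{100, 100, 20}, {640, 360, 5}, {1279, 719, 3}};

    for (int i = 0; i < (int)particles.size(); ++i) {
        const Particle& p = particles[i];
        int x0 = std::max(0, (int)((p.x - p.radius) / kTile));
        int x1 = std::min(tilesX - 1, (int)((p.x + p.radius) / kTile));
        int y0 = std::max(0, (int)((p.y - p.radius) / kTile));
        int y1 = std::min(tilesY - 1, (int)((p.y + p.radius) / kTile));
        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }

    // A real renderer would now run one compute workgroup per tile, walking
    // its bin front to back and blending into tile memory.
    std::printf("tile(6,6) holds %zu particles\n", bins[6 * tilesX + 6].size());
    return 0;
}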

Whether it can only be made efficient enough on consoles rather than PC for now, given the low-level access and the consoles' characteristics, is an interesting question.

It probably also combines some influence from Iq's old point cloud demos and transparency viewpoint sorting, but probably renders front to back for occlusion reasons. I don't think they are using full-on EWA splatting, as at certain angles you can clearly see the same type of artefacts as in Iq's old point cloud stuff (1.8 million points, 1024^3 volumes, 46 fps at 800x600 screen resolution on a Radeon 9800 in 2005...).
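
As a toy illustration of front-to-back point rendering with occlusion, here is a minimal single-pixel splat loop with a depth test in C++ (real splatting, EWA or otherwise, rasterises a whole elliptical footprint per point; this only shows the core z-test idea, with invented values):

#include <cmath>
#include <cstdio>
#include <vector>

struct Splat { float sx, sy, depth; unsigned char r, g, b; };  // screen-space point

int main() {
    const int W = 8, H = 8;
    std::vector<float> zbuf(W * H, 1e9f);          // start with "infinitely far"
    std::vector<unsigned char> color(W * H * 3, 0);

    std::vector<Splat> splats = {
        {3.2f, 3.7f, 2.0f, 255, 0, 0},  // nearer red point
        {3.4f, 3.6f, 5.0f, 0, 255, 0},  // farther green point, same pixel
    };

    for (const Splat& s : splats) {
        int x = (int)std::floor(s.sx), y = (int)std::floor(s.sy);
        if (x < 0 || x >= W || y < 0 || y >= H) continue;
        int i = y * W + x;
        if (s.depth >= zbuf[i]) continue;  // behind what's already there: occluded
        zbuf[i] = s.depth;                 // nearer point wins the pixel
        color[i*3] = s.r; color[i*3+1] = s.g; color[i*3+2] = s.b;
    }

    int i = 3 * W + 3;
    std::printf("pixel(3,3) = %u,%u,%u depth %.1f\n",
                color[i*3], color[i*3+1], color[i*3+2], zbuf[i]);
    return 0;
}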

I'd guess it's likely using something like a custom z-buffer hierarchy for occlusion culling, highly tuned tile sizes, and various optimisations for early kills given the dense, small-point nature of the cloud. Also, if you look closely at the screenshots from the media pack on their website, there is definitely a lot of dithering and noise being added to help move it fast enough, plus the usual post effects.
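
For what a z-buffer hierarchy buys you: a coarse level storing the farthest depth per block lets you reject whole clusters of points with a handful of reads. A two-level toy version in C++, assuming nothing about their actual tile sizes or data layout:

#include <algorithm>
#include <cstdio>
#include <vector>

// Level 0 is the full-resolution depth buffer; level 1 stores, for each
// 2x2 block, the FARTHEST depth in that block, so a conservative test
// against level 1 can reject clusters that lie entirely behind it.
struct HiZ {
    int w, h;
    std::vector<float> level0;  // nearest-surface depth per pixel
    std::vector<float> level1;  // max depth per 2x2 block

    void build() {
        level1.assign((w / 2) * (h / 2), 0.0f);
        for (int y = 0; y < h; y += 2)
            for (int x = 0; x < w; x += 2) {
                float m = std::max(std::max(level0[y*w + x],       level0[y*w + x + 1]),
                                   std::max(level0[(y+1)*w + x],   level0[(y+1)*w + x + 1]));
                level1[(y/2)*(w/2) + (x/2)] = m;
            }
    }

    // A cluster covering pixels [x0,x1]x[y0,y1] whose nearest depth is minZ
    // is occluded if it lies behind the farthest depth stored over that area.
    bool occluded(int x0, int y0, int x1, int y1, float minZ) const {
        for (int ty = y0 / 2; ty <= y1 / 2; ++ty)
            for (int tx = x0 / 2; tx <= x1 / 2; ++tx)
                if (minZ <= level1[ty*(w/2) + tx]) return false;  // possibly visible
        return true;
    }
};

int main() {
    HiZ hiz{4, 4, std::vector<float>(16, 0.5f), {}};  // a wall at depth 0.5
    hiz.build();
    std::printf("behind wall: %d, in front: %d\n",
                (int)hiz.occluded(0, 0, 3, 3, 0.9f), (int)hiz.occluded(0, 0, 3, 3, 0.1f));
    return 0;
}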

You can see a pointillist, impressionistic effect in the close-up and far-away parts of the screenshots. Whether this is a performance concession that happens to work well with the aesthetic they've chosen to mask it with, or purely a stylistic choice, is an important question given this style of rendering. It may be that they can only push just enough points for the requirements of their specific game. But in last year's demo they had a clay-looking scene with lots of houses and walls that did not seem to have this potential issue.

How it handles truly transparent/refractive stuff would be interesting to see.

For the SDF representation I suspect they must use some sparse brick boundary representation to save on memory. There are definitely lots of tricky details in making that all work, I'm sure.
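
A sparse brick layout is roughly: keep small fixed-size blocks of distance samples only where the field is near zero, and let everything far from the surface collapse to an implicit inside/outside value. A toy C++ sketch of that idea (the 8^3 brick size, hash-map storage and thresholds are my own assumptions, not Dreams' format):

#include <array>
#include <cmath>
#include <cstdio>
#include <unordered_map>

static const int kBrick = 8;  // bricks are kBrick^3 distance samples

struct BrickKey {
    int x, y, z;
    bool operator==(const BrickKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct BrickKeyHash {
    size_t operator()(const BrickKey& k) const {
        return ((size_t)k.x * 73856093u) ^ ((size_t)k.y * 19349663u) ^ ((size_t)k.z * 83492791u);
    }
};

struct SparseSDF {
    float voxelSize = 0.1f;
    std::unordered_map<BrickKey, std::array<float, kBrick*kBrick*kBrick>, BrickKeyHash> bricks;

    // Store a distance sample; allocate the brick lazily, and only near the surface.
    void set(int vx, int vy, int vz, float d) {
        if (std::fabs(d) > kBrick * voxelSize) return;  // far from the surface: don't store
        BrickKey key{vx / kBrick, vy / kBrick, vz / kBrick};
        auto& brick = bricks[key];  // zero-initialised on first touch
        int lx = vx % kBrick, ly = vy % kBrick, lz = vz % kBrick;
        brick[(lz * kBrick + ly) * kBrick + lx] = d;
    }
};

int main() {
    SparseSDF sdf;
    sdf.set(3, 3, 3, 0.05f);        // near the surface: allocates a brick
    sdf.set(100, 100, 100, 9.0f);   // far away: ignored
    std::printf("bricks allocated: %zu\n", sdf.bricks.size());  // 1
    return 0;
}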

In his tweets Alex Evans stated that they rebuild the points when the SDF changes (he also says it will be discussed at SIGGRAPH). It's possible they have a sphere-tree-based LOD representation of the points, and you could use a LOD metric similar to 'far voxels'. What's nice about this is what it means for dynamic and skinned things, as opposed to rigid voxels. It's hard to tell from the skinned zombies, for example, how they deal with surface stretching - whether they subdivide points somehow to fill gaps, or rely on the cloud being dense enough given deformation limits plus post-process passes to fill holes - but you can certainly see the points the zombies are made of when they are close to the screen.
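
One plausible way to "rebuild the points when the SDF changes" is to resample the field on a grid, keep samples near the zero crossing, and slide each one onto the surface along the gradient. A small C++ sketch of that general approach - only a guess at the idea, not their pipeline:

#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

struct Point { float x, y, z; };

static std::vector<Point> extractPoints(const std::function<float(float, float, float)>& sdf,
                                        float lo, float hi, float step) {
    std::vector<Point> pts;
    for (float z = lo; z <= hi; z += step)
        for (float y = lo; y <= hi; y += step)
            for (float x = lo; x <= hi; x += step) {
                float d = sdf(x, y, z);
                if (std::fabs(d) > step) continue;  // not near the surface
                // Central-difference gradient ~ surface normal direction.
                float e = 0.5f * step;
                float nx = sdf(x + e, y, z) - sdf(x - e, y, z);
                float ny = sdf(x, y + e, z) - sdf(x, y - e, z);
                float nz = sdf(x, y, z + e) - sdf(x, y, z - e);
                float len = std::sqrt(nx*nx + ny*ny + nz*nz);
                if (len < 1e-6f) continue;
                // Project the sample onto the surface: p - d * n.
                pts.push_back({x - d * nx / len, y - d * ny / len, z - d * nz / len});
            }
    return pts;
}

int main() {
    auto sphere = [](float x, float y, float z) { return std::sqrt(x*x + y*y + z*z) - 1.0f; };
    std::vector<Point> pts = extractPoints(sphere, -1.5f, 1.5f, 0.1f);
    std::printf("extracted %zu surface points\n", pts.size());
    return 0;
}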

In another of his tweets Alex mentions they had to disable displacement mapping due to a bug! So later showings should be even more interesting... the ultimate demoscene particle system!? I guess the displacements will be applied along the SDF normals derived from the gradient direction, etc.

The commercial viability of their actual game is another interesting point - not everyone is a decent creator, and most people are just lazy - but I think they deserve the benefit of the doubt for now to see what they have up their sleeve in that regard, given their history.
 
So, if I'm understanding this right, the data model for the SDF is a point cloud, and where the data resolution starts to get too low, we start to see the individual points resolve themselves rather than a continuous surface? Is it data-efficient simply because the number of points is considerably less than the number of triangles in a comparable triangle mesh?
 
It would be more correct to say that the rendering method for their SDF model data appears to be point based - they extract/'tessellate' a point-based representation from the SDF. When triangles go below a certain size they can become less efficient in a few different ways - not just for rendering (for example, think about point-based physics). Eventually, as your triangles get so small for the minute micro-details you want, there is possibly less value in a space-spanning representation when the limit converges to what is effectively a single point of data you are perceptually trying to represent (one point versus three indexed vertices).
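
A back-of-the-envelope comparison of the "one point versus three indexed vertices" argument, with assumed sizes that are my own, not Dreams':

#include <cstdio>

// Assumed formats: a splat point packs position + normal + colour into 16 bytes;
// an indexed triangle costs three 32-bit indices plus its amortized share of
// shared vertices (~0.5 vertices per triangle in a typical closed mesh).
int main() {
    const double bytesPerPoint    = 16.0;                    // packed pos + normal + colour
    const double bytesPerVertex   = 32.0;                    // pos + normal + uv + colour
    const double bytesPerTriangle = 3 * 4.0                  // index buffer
                                  + 0.5 * bytesPerVertex;    // amortized vertex share
    const double primitives = 1e6;                           // a million micro-primitives
    std::printf("points: %.1f MB, triangles: %.1f MB\n",
                primitives * bytesPerPoint / 1e6,
                primitives * bytesPerTriangle / 1e6);
    return 0;
}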

Different compromises on the type of connectivity information your representation has can also be a boon. For example they can more easily mix fluid simulations with character skinning and other bizarre cool stuff now.

Another interesting aspect could be depth/parallax effects in VR. And if you think about trying to be physically accurate to the real world there are plenty of small gaps between atoms too I guess!
 