Claybook [XO, PS4, PC, NX]

btw, a question about your rendering pipeline with regard to XBO. With deferred techniques, ESRAM became an issue when scaling to higher resolutions; was footprint a similar issue with your SDF + ray-traced method?
Our raytracer has zero overdraw and doesn't touch any geometry that isn't visible. The last time I measured, our primary raytrace pass (which includes large-scale AO rays) loaded only 8 MB of data from main memory (99.7% L1/L2 cache hit ratio) (*). Bandwidth isn't a bottleneck at all for ray tracing passes. ESRAM is mostly needed for post processing and lighting.

(*) This is a scene where the camera sees the whole volume = 1 GB of data. 8 MB of 1 GB = 0.78% of volume data accessed = great. With longer view distances, the number would only get better. We trace pixel wide cones instead of infinitely thin rays. Cones stop when details become smaller than one pixel. This results in automatic LOD without any popping or visible quality degradation (as only details smaller than one pixel can be lost).
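
For readers unfamiliar with the technique, here's a minimal sketch of pixel-wide cone tracing against an SDF, assuming a hypothetical `sampleSDF()` lookup (illustrative only, not Claybook's actual code):

```cpp
// Minimal sketch of the pixel-wide cone trace described above.
struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

float sampleSDF(Vec3 p);  // hypothetical: trilinear sample of the SDF volume

// Sphere tracing with a cone-radius stop condition. tanHalfAngle is sized so
// that the cone footprint at distance t matches one pixel on screen.
float coneTrace(Vec3 origin, Vec3 dir, float tanHalfAngle, float tMax)
{
    float t = 0.0f;
    for (int i = 0; i < 256 && t < tMax; ++i)
    {
        float dist = sampleSDF(add(origin, mul(dir, t)));
        float coneRadius = t * tanHalfAngle;   // pixel footprint at depth t
        // Stop at a hit OR when remaining detail is thinner than one pixel:
        // this is the automatic LOD (no popping) described above.
        if (dist < coneRadius)
            return t;
        t += dist;                             // classic sphere-tracing step
    }
    return tMax;                               // treated as a miss
}
```
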
@sebbbi Are you going to be at E3?
Still open. I'll let you know a bit later.
 
How does your SDF shadow mapping compare to UE4's own solution with regard to performance and quality? Have you experimented with secondary rays for bounced lighting?
 
How does your SDF shadow mapping compare to UE4's own solution with regard to performance and quality?
Our SDF shadows are directly ray traced. One ray (cone) per visible pixel. There's no need to generate shadow maps for SDF. Ray traced SDF shadows have very high quality, with a correctly widening penumbra based on occluder distance. Most shadow mapping implementations fake soft shadows by uniformly blurring the shadow sampling result (a PCF kernel with multiple taps). This produces uniformly blurred shadows instead of physically plausible soft shadows that are perfectly sharp near the contact point, with the penumbra widening (softening) as the occluder distance increases. Also, SDF shadows don't have any of the usual shadow mapping problems, such as lack of self shadowing (precision), blockiness, peter panning, surface acne and popping (cascade & LOD changes). Shadow mapping scales pretty badly to higher resolutions (memory and cost increase quadratically). There are techniques to combat this problem, such as virtual/sparse shadow mapping or conservative rasterization based ray trace hybrids, but these techniques are pretty complex and still can't produce a properly widening penumbra.
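
The widening penumbra falls out of the SDF trace almost for free. A minimal sketch in the style of Inigo Quilez's well-known soft shadow trick (hypothetical `sampleSDF`; not necessarily Claybook's exact formulation):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

float sampleSDF(Vec3 p);  // hypothetical SDF lookup

// k controls how fast the penumbra widens: the result is sharp near contact
// (small t) and gets softer as the occluder distance grows, automatically.
float softShadow(Vec3 surfacePos, Vec3 lightDir, float tMin, float tMax, float k)
{
    float shadow = 1.0f;
    for (float t = tMin; t < tMax; )
    {
        float d = sampleSDF(add(surfacePos, mul(lightDir, t)));
        if (d < 1e-3f)
            return 0.0f;                  // hit an occluder: fully shadowed
        // d/t approximates the angular size of the closest miss so far;
        // distant occluders shrink it, widening the penumbra.
        shadow = std::fmin(shadow, k * d / t);
        t += d;
    }
    return shadow;                        // 0 = umbra, 1 = fully lit
}
```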

Our SDF shadows aren't yet perfectly optimized. Currently shadows take roughly 2 milliseconds of frame time (on the slowest console). This cost is comparable to shadow mapping. But I expect the cost to drop significantly once I exploit shadow ray coherence better. Currently all shadow rays are completely individual; there are no spatial grouping optimizations for faster empty space skipping. I'll most likely add a coarse cylinder sweep step to eliminate most of the per-pixel empty space skipping.
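
One plausible reading of the coarse cylinder sweep idea (purely speculative sketch, not the shipped optimization): march one thick ray per pixel tile toward the light, and let each per-pixel ray start where the sweep stopped:

```cpp
struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

float sampleSDF(Vec3 p);  // hypothetical SDF lookup

// Sweep a cylinder wide enough to cover a whole pixel tile toward the light.
float tileSweep(Vec3 tileCenter, Vec3 lightDir, float tileRadius, float tMax)
{
    float t = 0.0f;
    while (t < tMax)
    {
        float d = sampleSDF(add(tileCenter, mul(lightDir, t)));
        // Stop as soon as the empty sphere no longer covers the whole tile;
        // from here on the individual rays have to take over.
        if (d <= tileRadius)
            break;
        t += d - tileRadius;   // step that is safe for every ray in the tile
    }
    return t;                  // shared starting distance for per-pixel rays
}
```
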
Have you experimented with secondary rays for bounced lighting?
I had a reflection prototype. SDF ray tracing is perfect for mirror reflections. There's no need for common rasterization hacks such as cubemaps or extra planar reflection passes. Diffuse reflections and highly glossy specular reflections are harder, because you'd need to cast more rays. The default solution for this is sampling data along a cone ("voxel cone tracing"). This technique needs a volume texture that contains directional information about the lighting (with a prefiltered mip chain). This technique is a bit too expensive for a 60 fps console game. Pre-existing SDF volumetric geometry helps (vs polygons that need to be converted to voxels at runtime), but my educated guess is that it's still too slow.

But I will certainly do a prototype at some point (possibly after this project) and post some screenshots :)
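
For reference, a textbook sketch of the voxel cone tracing technique mentioned above (Crassin et al. style; `sampleRadianceVolume` is a hypothetical prefiltered RGBA radiance volume lookup):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

Vec4 sampleRadianceVolume(Vec3 p, float mip);  // RGBA, prefiltered mip chain

// Step along the cone, sampling at a mip level whose texel size matches the
// cone diameter, compositing front to back until the cone is occluded.
Vec4 coneTraceGI(Vec3 origin, Vec3 dir, float tanHalfAngle,
                 float voxelSize, float maxDist)
{
    Vec4 acc = { 0, 0, 0, 0 };
    float t = voxelSize;                   // skip self-intersection
    while (t < maxDist && acc.a < 0.99f)
    {
        float diameter = 2.0f * t * tanHalfAngle;
        float mip = std::log2(std::fmax(diameter / voxelSize, 1.0f));
        Vec4 s = sampleRadianceVolume(add(origin, mul(dir, t)), mip);
        float w = (1.0f - acc.a) * s.a;    // front-to-back alpha compositing
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w;
        t += std::fmax(diameter * 0.5f, voxelSize);  // step with cone width
    }
    return acc;                            // accumulated bounce radiance
}
```
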
BTW, is it safe to assume that you already have a Scorpio devkit?
I can't yet comment on that :)
 
Our SDF shadows are directly ray traced. One ray (cone) per visible pixel. There's no need to generate shadow maps for SDF.

I meant: how do your SDF ray-traced shadows compare to UE4's own ray traced distance field shadows? They support those as an additional shadow type, using pre-generated SDF proxies for solid geometry.

https://docs.unrealengine.com/lates...ngAndShadows/RayTracedDistanceFieldShadowing/

EDIT: I guess that with UE4's renderer code being completely available to devs, you can even compare the implementation details of both solutions directly, rather than just the results.
 
It seems you're unable to talk about Scorpio one way or another at the moment.

What about the PS4 Pro?
Can you give any details as to what you are able to do with it, or plan to in the future?
Do you believe you'll be able to do much with RPM (Rapid Packed Math), or just get register pressure relief, if FP16 is used at all?
You may not be checkerboarding (or will you?), but are you able to use the ID buffer for something else?

Looks like a really interesting engine and game.
 
I have a question about the game, if you can answer. Is the multiplayer mode online only, or local too? If local multiplayer is available, do you keep 60 fps in that mode?
 
Going off of his Twitter, where he said there is very little overhead in doing a second eye for VR with ray tracing, split screen would also not be that costly. When ray tracing volumes, there are no verts to transform and project to camera space; every pixel is a ray, no matter the direction. Although, as mentioned in his last post regarding shadows, ray tracing can be optimized to take advantage of ray coherence, in a, say, 4-player split screen at 1080p there is plenty of coherence in each screen quarter... The game world also seems to be contained within a restricted area, so physics is probably simulated globally and not just within a certain radius from the camera, so there wouldn't be much extra physics overhead for local multiplayer either.
Not that any of this is enough to speculate whether local multiplayer will be there, but from a technical POV it's certainly easier to implement here than in most other games (ignoring all the nasty details I'm probably not anticipating or don't know enough about hahaha)
 
Going off of his Twitter, where he said there is very little overhead in doing a second eye for VR with ray tracing, split screen would also not be that costly. When ray tracing volumes, there are no verts to transform and project to camera space; every pixel is a ray, no matter the direction. Although, as mentioned in his last post regarding shadows, ray tracing can be optimized to take advantage of ray coherence, in a, say, 4-player split screen at 1080p there is plenty of coherence in each screen quarter...
Yes. Ray tracing cost is practically the same in split screen. VR (split RT for 2 eyes) should behave the same. Coherence is slightly worse with 4 views, but not by much. However, currently the characters are rendered as SDF -> surface particles -> triangulation -> indirect draw. The background (outside the play area) is also triangle based. These are rendered to the g-buffer and shadow cascades. Unreal Engine doesn't support a Crytek-style static background shadow map (Ryse used a single 8k*8k map for the whole background area, rendered at level load time, and many other games have adopted similar tech since). Because of this, the background room is rendered to the shadow cascades every frame, and in the case of split screen, it is rendered once per view. There are many ways to solve this issue, but all require further modifications to UE. Not a problem by any means, but we only have 0.5 graphics programmers (half of me; the other half is doing physics simulation). I need to pick my battles carefully.
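
A sketch of why split screen is nearly free on the ray tracing side (Camera, Ray and makeCameraRay are illustrative placeholders, not engine API):

```cpp
struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, dir; };
struct Camera { /* per-player view parameters */ };

Ray makeCameraRay(const Camera& cam, int localX, int localY);  // hypothetical

// Every pixel maps to exactly one ray no matter which view owns it, so four
// viewports just means four camera transforms, not four geometry passes.
Ray generateRay(const Camera views[4], int px, int py, int width, int height)
{
    // 2x2 split screen: find which quadrant this pixel belongs to...
    int vx = (px * 2) / width;
    int vy = (py * 2) / height;
    const Camera& cam = views[vy * 2 + vx];
    // ...and trace exactly one ray for it; total ray count equals one view.
    return makeCameraRay(cam, px - vx * (width / 2), py - vy * (height / 2));
}
```
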
The game world also seems to be contained within a restricted area, so physics is probably simulated globally and not just within a certain radius from the camera, so there wouldn't be much extra physics overhead for local multiplayer either.
Not that any of this is enough to speculate whether local multiplayer will be there, but from a technical POV it's certainly easier to implement here than in most other games (ignoring all the nasty details I'm probably not anticipating or don't know enough about hahaha)
Yes, the physics solver is global. There are plenty of other physics objects and fluids in the world; it wouldn't work if the simulation were only active around the player. We have already announced local MP. The goal is to support up to 4 players.

Online MP would be fantastic, but synchronizing a hundred thousand fluid particles over a network connection would be problematic :)
 
Online MP would be fantastic, but synchronizing a hundred thousand fluid particles over a network connection would be problematic :)
Time to have a chat with Microsoft and the Crackdown 3 team and see if they can help you out with that :D As they are doing something similar for that game's MP (and using UE4... but, I guess, with a new cloud-enhanced version of Havok).
 
The background (outside the play area) is also triangle based. These are rendered to the g-buffer and shadow cascades. The background room is rendered to the shadow cascades every frame. There are many ways to solve this issue, but all require further modifications to UE.

All the more reason to look into Unreal Engine 4's native SDF ray-traced shadows. Maybe a lot of the modifications you think you might need are already in place in the standard version of the engine.


From their public documentation:
These GPU timings were done on a Radeon 7870 at 1920x1080 in a full game scene.

Directional light with distance 10k, 3 cascades
  • Cascaded shadow maps 3.1ms
  • Distance Field shadows 2.3ms (25% faster)
Directional light with distance 30k, 6 cascades
  • Cascaded shadow maps 4.9ms
  • Distance Field shadows 2.8ms (43% faster)
...
Online MP would be fantastic, but synchronizing hundred thousand fluid particles over network connection would be problematic :)

Are your physics still deterministic so that rewinding can work properly? If so, is that still not enough to guarantee synchronization across clients? Also, <XBOX POWER OF THE CLOUD JOKE>
 
I was under the assumption he just wrote out 4D vectors and kept them all in memory in case a rewind is required.
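
If that speculation is right, the rewind store could be as simple as a ring buffer of per-frame states (hypothetical sketch; ParticleState is a placeholder for whatever data actually gets stored):

```cpp
#include <cstddef>
#include <vector>

// Placeholder for one frame's worth of simulation state.
struct ParticleState { std::vector<float> data; /* e.g. x,y,z,w per particle */ };

class RewindBuffer {
public:
    explicit RewindBuffer(std::size_t maxFrames) : states(maxFrames), head(0) {}

    void push(const ParticleState& s)   // record the newest frame
    {
        states[head] = s;
        head = (head + 1) % states.size();
    }

    // Fetch the state from framesBack frames ago (0 = most recent frame).
    const ParticleState& rewind(std::size_t framesBack) const
    {
        std::size_t idx = (head + states.size() - 1 - framesBack) % states.size();
        return states[idx];
    }

private:
    std::vector<ParticleState> states;
    std::size_t head;
};
```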
 
I can't imagine anything UE does being better than what other developers can do. I have heard many horror stories about UE.
 
Only more reason to look into Unreal Engine 4 native SDF ray-tracing shadows. Maybe a lot of the modifications you think you might need to do are already in place in the standard version of the engine.
I know that UE also has an SDF-based ray-traced shadow implementation. Their system is a bit different from ours. They use local (per-mesh) distance fields and bin them to froxels. Each ray step goes through a list of distance fields and samples all overlapping ones, taking the minimum distance. This is obviously pretty slow (compared to a single trilinear SDF volume sample), so they also have camera-centered low-res SDF volume cascades to speed things up. These cascades are updated when the camera moves. The ray tracing step uses this "global volume" to skip through empty space quickly. Unreal's distance fields are 16 bit (we use 8 bit = half the memory). Unreal Engine 4.16 (preview) adds support for 8-bit SDFs. Their SDF system's memory consumption was pretty high previously; UE 4.16 will practically halve it, making it much more useful for a wider range of games. The change list also includes: "Runtime cost has been reduced between 30-50%". This sounds like a nice improvement. We definitely need to evaluate the option of using UE's SDF shadow tech for the surrounding polygon scene. It could scale better to split screen than cascaded shadow maps.

Unreal Engine 4.16 (Preview) change list:
https://forums.unrealengine.com/showthread.php?142071-Unreal-Engine-4-16-Preview
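
As a rough illustration of the per-object vs. single-volume difference described above (function names are illustrative, not UE4's actual API):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float sampleObjectSDF(int objectId, Vec3 p);   // hypothetical per-mesh SDF
float sampleGlobalVolume(Vec3 p);              // hypothetical global volume

// UE4-style step: min over every distance field overlapping this position
// (the object list would come from the froxel binning described above).
float sceneDistancePerObject(Vec3 p, const int* objects, int numObjects)
{
    float d = 1e30f;
    for (int i = 0; i < numObjects; ++i)
        d = std::fmin(d, sampleObjectSDF(objects[i], p));
    return d;
}

// Claybook-style step: the whole scene is one SDF volume, so each ray step
// is a single trilinear sample instead of a loop.
float sceneDistanceGlobal(Vec3 p)
{
    return sampleGlobalVolume(p);
}
```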
 
I can't imagine anything UE does is better than what other developers can do. I have heard many horror stories about UE.
UE isn't that bad. I have been surprised by how easy UE has been to customize. We have been replacing physics/collision with our own system and replacing part of the rendering pipeline. It was a pretty simple process once I learned their code base properly. Their tech code is pretty clean.

It seems that some people don't understand that you still need tech programmers even if you choose to license a stock engine (Unreal or Unity). Every game has its own needs, so you obviously need programmer effort to customize the tech. This is especially true if you are targeting consoles and aiming for 60 fps. If you don't know how to use profiling tools to find the tech bottlenecks caused by your game code or content, there's nobody else who can do that for you. External tech is a great way to get things rolling quickly, but no external tech is a perfect fit for your game. These technologies need to be generic enough to cater to lots of scenarios (game/movie production, mobiles/consoles/handhelds/PC/VR, 30/60/90 fps, baked/dynamic lighting, etc.). You need a bit of effort to customize the tech to your own needs. This is a given. No technology will ever fix this, in-house or licensed. Every new game needs its own tech customizations.
 
UE 4.13 (https://www.unrealengine.com/en-US/blog/unreal-engine-4-13-released) introduced shadow map caching for local lights (point and spot), but the caching doesn't support the sun light yet. It would be nice if they introduced sun shadow map caching for distant geometry. The lighting shader would sample this shadow map for geometry that's further away than the last shadow map cascade. It would obviously need a different projection matrix than the cascades, and a higher resolution (8k*8k at 16bpp is pretty good).
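
A sketch of the suggested fallback (function names are illustrative, not UE4's API): pick a cascade by view depth as usual, and beyond the last cascade sample a cached static sun shadow map rendered once with its own projection:

```cpp
struct Vec3 { float x, y, z; };

float sampleCascade(int cascade, Vec3 worldPos);     // hypothetical
float sampleCachedStaticShadow(Vec3 worldPos);       // hypothetical 8k map

float sunShadow(Vec3 worldPos, float viewDepth,
                const float* cascadeSplits, int numCascades)
{
    // Pick the first cascade whose far split covers this depth.
    for (int c = 0; c < numCascades; ++c)
        if (viewDepth < cascadeSplits[c])
            return sampleCascade(c, worldPos);
    // Distant geometry: use the cached static map instead of re-rendering
    // more cascades every frame.
    return sampleCachedStaticShadow(worldPos);
}
```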
 