BTW, a question about your rendering pipeline with regard to the XBO: with deferred techniques ESRAM became an issue when scaling to higher resolutions. Was footprint such an issue with your SDF + ray traced method?

Our raytracer has zero overdraw and doesn't touch any geometry that isn't visible. The last time I measured, our primary raytrace pass (which includes large scale AO rays) loaded only 8 MB worth of main memory (99.7% L1/L2 cache hit ratio) (*). Bandwidth isn't a bottleneck at all for the ray tracing passes. ESRAM is mostly needed for post processing and lighting.
@sebbbi Are you going to be at E3?

Still open. I'll let you know a bit later.
How does your SDF shadow mapping compare to UE4's own solution with regards to performance and quality?

Our SDF shadows are directly ray traced. One ray (cone) per visible pixel. There's no need to generate shadow maps for SDF. Ray traced SDF shadows have very high quality, with a correctly widening penumbra based on occluder distance. Most shadow mapping implementations fake soft shadows by uniformly blurring the shadow sampling result (a PCF kernel with multiple taps). This produces uniformly blurred shadows instead of physically plausible soft shadows that are perfectly sharp near the contact point, with the penumbra widening (softening) as the occluder distance increases. SDF shadows also avoid the usual shadow mapping problems, such as lack of self shadowing (precision), blockiness, peter panning, surface acne and popping (cascade & LOD changes). Shadow mapping scales pretty badly to higher resolutions (memory and cost increase quadratically). There are techniques to combat this, such as virtual/sparse shadow maps or conservative rasterization based ray trace hybrids, but they are pretty complex and still can't produce a properly widening penumbra.
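For illustration, here is a minimal sketch of the classic cone-traced SDF shadow technique being described (the standard formulation, not Claybook's actual code; the toy sampleSDF(), the epsilons and the cone parameter k are all illustrative):

```cpp
// Minimal sketch of cone-traced SDF soft shadows.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Toy scene: a unit sphere at the origin (stands in for a trilinear volume fetch).
static float sampleSDF(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March one shadow cone from a surface point toward the light.
// Larger k = narrower cone = sharper shadows.
static float softShadow(Vec3 origin, Vec3 lightDir, float maxDist, float k) {
    float shadow = 1.0f;
    float t = 0.02f;                          // offset to avoid self-intersection
    while (t < maxDist) {
        float d = sampleSDF(origin + lightDir * t);
        if (d < 0.001f) return 0.0f;          // hit an occluder: full umbra
        // d/t approximates the angular distance to the nearest occluder as seen
        // from the origin, so the penumbra widens with occluder distance.
        shadow = std::min(shadow, k * d / t);
        t += d;                               // sphere tracing: step the safe distance
    }
    return shadow;                            // 0 = umbra, 1 = fully lit
}
```

Near the contact point t is small, so even a small d gives a large d/t and a sharp shadow; far from the occluder the same d gives a small ratio and a soft penumbra, which is exactly the contact-hardening behavior described above.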
Have you experimented with secondary rays for bounced lighting?

I had a reflection prototype. SDF ray tracing is perfect for mirror reflections. There's no need for common rasterization hacks such as cubemaps or extra planar reflection passes. Diffuse reflections and highly glossy specular reflections are harder, because you'd need to cast more rays. The default solution for this is sampling data along a cone ("voxel cone tracing"). This technique needs a volume texture that contains directional information about the lighting (with a prefiltered mip chain), and it is a bit too steep for a 60 fps console game. Pre-existing SDF volumetric geometry helps (vs polygons that need to be converted to voxels at runtime), but my educated guess is that it's still too slow.
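A minimal sketch of what a mirror reflection pass over an SDF looks like (the generic sphere-tracing technique, not the prototype's code; all names and thresholds are illustrative):

```cpp
// One extra ray per pixel, traced through the same distance field as the primary ray.
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

// Toy scene stand-in for the SDF volume texture (unit sphere at the origin).
static float sampleSDF(Vec3 p) { return std::sqrt(dot(p, p)) - 1.0f; }

static Vec3 reflect(Vec3 i, Vec3 n) { return i - n * (2.0f * dot(i, n)); }

// Central-difference normal from the distance field.
static Vec3 sdfNormal(Vec3 p) {
    const float e = 0.001f;
    return normalize({ sampleSDF({p.x + e, p.y, p.z}) - sampleSDF({p.x - e, p.y, p.z}),
                       sampleSDF({p.x, p.y + e, p.z}) - sampleSDF({p.x, p.y - e, p.z}),
                       sampleSDF({p.x, p.y, p.z + e}) - sampleSDF({p.x, p.y, p.z - e}) });
}

// Sphere trace a ray; on a hit, return the hit point.
static bool traceRay(Vec3 o, Vec3 d, float maxDist, Vec3& hit) {
    for (float t = 0.02f; t < maxDist; ) {
        Vec3 p = o + d * t;
        float dist = sampleSDF(p);
        if (dist < 0.001f) { hit = p; return true; }
        t += dist;
    }
    return false;
}

// At a visible point p with view direction v, a single secondary ray yields a
// perfect mirror, with no cubemaps or extra planar reflection passes needed.
static bool mirrorHit(Vec3 p, Vec3 v, Vec3& hit) {
    Vec3 n = sdfNormal(p);
    return traceRay(p + n * 0.01f, reflect(v, n), 100.0f, hit);
}
```

Diffuse and glossy reflections are harder precisely because this one reflected ray would have to become many rays (or a prefiltered cone) per pixel.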
BTW, is it safe to assume that you already have a Scorpio devkit?

I can't yet comment on that.
Going off of his twitter, where he said there is very little overhead for doing a second eye for VR with ray tracing, split screen would also not be that costly. When ray tracing volumes, there are no verts to transform and project to camera space; every pixel is a ray, no matter the direction. Although, as mentioned in his last post regarding his shadows, ray tracing can be optimized to take advantage of ray coherence, in, say, a 4 player split screen at 1080p there is plenty of coherence in each screen quarter...

Yes. Ray tracing cost is practically the same in split screen. VR (split RT for 2 eyes) should behave the same. Coherence is slightly worse with 4 views, but not much. However, currently the characters are rendered as SDF -> surface particles -> triangulation -> indirect draw. The background (outside the play area) is also triangle based. These are rendered to the g-buffer and shadow cascades. Unreal Engine doesn't support a Crytek-style static background shadow map (Ryse used one 8k*8k map for the whole background area, rendered at level load time, and many other games have adopted similar tech since). Because of this, the background room is rendered to the shadow cascades every frame, and in the case of splitscreen it is rendered once per view. There are many ways to solve this issue, but all require further modifications to UE. Not a problem by any means, but we only have 0.5 graphics programmers (half of me; the other half is doing physics simulation). Need to pick my battles carefully.
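To illustrate why extra views are nearly free for the ray traced part, here is a sketch of per-pixel primary ray generation for a four-way split (my illustrative layout and field names, not the game's code): every pixel builds one ray from whichever view owns it, so cost tracks total pixel count, not view count.

```cpp
// Sketch: per-pixel primary ray generation for 4-player split screen.
// No per-view geometry submission; each pixel derives its ray from the camera
// of the quadrant it belongs to.
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return v * (1.0f / l); }

struct View { Vec3 pos, forward, right, up; float tanHalfFov, aspect; };
struct Ray  { Vec3 origin, dir; };

// Map a full-resolution pixel (e.g. 1920x1080) to its 960x540 quadrant's camera.
static Ray primaryRay(int px, int py, const View views[4], int width, int height) {
    int halfW = width / 2, halfH = height / 2;
    const View& v = views[(py / halfH) * 2 + (px / halfW)];
    float u = ((px % halfW) + 0.5f) / halfW * 2.0f - 1.0f;   // -1..1 across quadrant
    float w = ((py % halfH) + 0.5f) / halfH * 2.0f - 1.0f;
    Vec3 dir = v.forward + v.right * (u * v.tanHalfFov * v.aspect)
                         + v.up * (-w * v.tanHalfFov);
    return { v.pos, normalize(dir) };
}
```

Adjacent pixels within a quadrant still march through nearby SDF texels, which is where the remaining ray coherence comes from.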
The game world also seems to be contained within a restricted area, so physics are probably done universally and not just within a certain radius from the camera, so there'd not be much physics overhead for local multiplayer thanks to that too. Not that any of this is enough to speculate whether local multi will be there, but from a technical POV it's certainly easier to implement here than in most other games (ignoring all the nasty details I'm probably not anticipating or don't know enough about hahaha).

Yes, the physics solver is global. There's plenty of other physics objects and fluids in the world. It wouldn't work if the simulation was only active around the player. We have already announced local MP. The goal is to support up to 4 players.
Online MP would be fantastic, but synchronizing a hundred thousand fluid particles over a network connection would be problematic.

Time to have a chat with Microsoft and the Crackdown 3 team and see if they can help you out with that, as they are doing something similar for that game's MP (and using UE4, though I guess with a new cloud-enhanced version of Havok).
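To put the synchronization problem in rough numbers (illustrative figures, assuming uncompressed state): 100,000 particles × 12 bytes for a three-float position × 60 updates per second is about 72 MB/s, i.e. roughly 576 Mbit/s per client, before velocities, compression or delta encoding are even considered.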
...These GPU timings were done on a Radeon 7870 at 1920x1080 in a full game scene.
Directional light with distance 10k, 3 cascades:
- Cascaded shadow maps: 3.1 ms
- Distance Field shadows: 2.3 ms (25% faster)

Directional light with distance 30k, 6 cascades:
- Cascaded shadow maps: 4.9 ms
- Distance Field shadows: 2.8 ms (43% faster)
Only more reason to look into Unreal Engine 4's native SDF ray-tracing shadows. Maybe a lot of the modifications you think you might need to do are already in place in the standard version of the engine.

I know that UE also has an SDF-based ray-traced shadow implementation. Their system is a bit different from ours. They use local (per mesh) distance fields and bin them to froxels. Each ray step goes through a list of distance fields and samples all overlapping ones; the minimum distance is taken. This is obviously pretty slow (compared to a single trilinear SDF volume sample), so they also have camera-centered low-res SDF volume cascades to speed things up. These cascades are updated when the camera moves, and the ray tracing step uses this "global volume" to skip through empty space quickly. Unreal's distance fields are 16 bit (we use 8 bit = half the memory). Unreal Engine 4.16 (preview) adds support for 8-bit SDFs. Their SDF system memory consumption was pretty high previously, and UE 4.16 will practically halve it, making it useful for a much wider range of games. The change list also includes: "Runtime cost has been reduced between 30-50%". This sounds like a nice improvement. We definitely need to evaluate the option of using UE's SDF shadow tech for the surrounding polygon scene. It could definitely scale better to splitscreen than cascaded shadow maps.
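As a rough sketch of the scheme described above (illustrative types, toy analytic "fetches" and thresholds; not Epic's actual code), each ray step near geometry takes the minimum over the local distance fields binned to the current cell, while the coarse camera-centered global volume skips empty space with a single cheap sample:

```cpp
// Sketch of an Unreal-style scene distance query: per-mesh ("local") distance
// fields binned per cell, plus a coarse global volume for empty-space skipping.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Toy stand-in for a per-mesh SDF volume: an analytic sphere.
struct LocalSDF { Vec3 center; float radius; };

static float sampleLocal(const LocalSDF& f, Vec3 p) {
    float dx = p.x - f.center.x, dy = p.y - f.center.y, dz = p.z - f.center.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - f.radius;
}

// Toy stand-in for the low-res camera-centered cascade: a conservative bound
// (a real implementation fetches a coarse volume texture here).
static float sampleGlobal(Vec3 p) {
    float d = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return d - 8.0f;   // pretend all geometry fits inside a radius-8 region
}

static const float kCoarseThreshold = 1.0f;  // world units; tuned per cascade

// Distance used by one ray step at p. 'binned' holds indices of the local
// fields overlapping the froxel/cell containing p.
static float sceneDistance(Vec3 p, const LocalSDF* fields,
                           const int* binned, int count) {
    float coarse = sampleGlobal(p);
    if (count == 0 || coarse > kCoarseThreshold)
        return coarse;                        // far from geometry: 1 cheap sample
    float d = sampleLocal(fields[binned[0]], p);
    for (int i = 1; i < count; ++i)           // N volume samples instead of 1:
        d = std::min(d, sampleLocal(fields[binned[i]], p));
    return d;
}
```

The inner loop is the cost being contrasted with Claybook's single trilinear sample; the coarse-volume early-out is what UE's camera-centered cascades buy.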
I can't imagine anything UE does is better than what other developers can do. I have heard many horror stories about UE.

UE isn't that bad. I have been surprised how easy UE has been to customize. We have been replacing physics/collision with our own system and replacing part of the rendering pipeline. It was a pretty simple process once I learned their code base properly. Their tech code is pretty clean.