Isn't that depending on the place you want to have proper exposure? The Lumen version seems to want the interior properly exposed, while the non-Lumen version wants the exterior properly exposed.

Looks natural to me? It's very bright outside the open wall.
I think part of that answer may be something a little lost in translation; the answer is very focused on mathematical functions and curves to define SDFs. "Signed distance field" means, well, what it sounds like: a field where at any given point you can sample a distance from a surface, and the distance is a positive number if you're outside the surface and a negative number if you're inside it. It's quite easy to construct these with pure math: you can get the signed distance from the surface of a sphere by subtracting the sphere's radius from the distance between your point and the sphere's center. Of course, more complex surfaces get into some pretty intense math. With reasonable effort you can compute the SDFs of a bunch of simple shapes (cubes, spheres, etc.) in real time, which is how a lot of cool VFX on Shadertoy work.

BTW, I've requested acceleration of signed distance fields in the DX12 Discord. This was someone's response:
Tracing SDFs is actually pretty fast in most cases. The limitations with SDFs are generally more around memory and precompute: they are quite expensive/slow to precompute, and they take a lot of space (they are effectively volumetric textures for everything). So, as with the various voxel techniques, they can scale pretty poorly with world size, and there is no reasonable way to handle deformation at all since they cost so much to compute. There are of course ways to mitigate the various disadvantages, but I think if you were to look at some sort of acceleration, the biggest help would be on the *construction* side. That said, they are fairly fundamentally expensive from a computational perspective, so it's hard to guess how much HW acceleration would even help.

@Andrew Lauritzen would that be a reasonable way to accelerate SW-Lumen with ray accelerators / RT cores in UE5?
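A minimal sketch of both ideas above, in Python rather than shader code: the analytic sphere SDF (distance to center minus radius) and the classic sphere-tracing loop that steps along a ray by the sampled distance. This is just an illustration of the technique, not Lumen's actual implementation.

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance to a sphere: positive outside, negative inside."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, center)))
    return dist - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along a (normalized) ray; the SDF tells us how far we can
    safely step without passing through any surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe step: no surface is closer than d
        if t > max_dist:
            break
    return None               # miss

# Ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
hit = sphere_trace((0, 0, 0), (0, 0, 1), lambda p: sphere_sdf(p, (0, 0, 5), 1.0))
# hit == 4.0 (the near surface of the sphere)
```

Note how cheap each step is (one distance sample); the expensive part, as the response says, is building the field in the first place.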
I wonder if the 120fps mode still works

It does, but I believe it disables most of the new stuff (Nanite, Lumen, VSM; unsure on TSR).
Local tone mapping is also basically a necessity once you start doing GI with a mix of interiors and exteriors.

They do that iris-adjustment thing: if you step into a dark room you'll see exposure increase, and then everything outside the windows will look a little overexposed. Not the biggest fan of that approach, because you can see the transition a little too much. But overall I think everything looks pretty nice and natural.
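The iris-adjustment behavior described above can be sketched as frame-rate-independent exponential smoothing of exposure toward a metered target. The `speed` constant and the EV numbers here are made up for illustration; this is not Fortnite's actual implementation.

```python
import math

def adapt_exposure(current_ev, target_ev, dt, speed=1.5):
    """Exponentially blend the current exposure toward the target.

    speed (1/seconds) is a hypothetical adaptation rate; larger values
    mimic a faster "iris adjustment".
    """
    # Standard frame-rate-independent exponential smoothing.
    alpha = 1.0 - math.exp(-speed * dt)
    return current_ev + (target_ev - current_ev) * alpha

# Stepping from a bright exterior (EV 15) into a dark room (EV 5):
ev = 15.0
for _ in range(60):            # one second at 60 fps
    ev = adapt_exposure(ev, 5.0, dt=1.0 / 60.0)
# ev has moved most of the way toward 5 but hasn't snapped there,
# which is the window where the exterior briefly looks overexposed.
```

The visible transition complained about above is exactly this lag: the exposure chases the target over several frames instead of jumping.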
Yeah, I was watching some side-by-side spectating with the new stuff on vs. off, and IMO the biggest obvious differences are the interiors, where Lumen makes a huge difference and makes things look so much less flat. [snip]
With Lumen
Someone mentioned it earlier already, but just to reiterate: you may need to adjust your preconceptions about Fortnite content now, to be honest. Especially stuff like the trees, which are likely the most complex assets shipped in a 60fps game to date. Yes, the game is intentionally stylized, but in terms of the underlying rendering tech and complexity it's certainly one of the most demanding: [snip ... in freaking Fortnite!]
(Aside: I believe the presentation from Brian I linked earlier in the thread touches on SDFs and voxels a little bit too.)
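To make the memory/precompute point concrete, here is a toy bake of an SDF into a volume grid. The resolutions and voxel format are my own numbers, not anything from UE5: the point is just that storage and bake work grow cubically with resolution, which is why world-size scaling and per-frame rebuilds for deformation hurt.

```python
import math

def sphere_sdf(x, y, z, cx=0.0, cy=0.0, cz=0.0, r=1.0):
    """Analytic signed distance to a sphere."""
    return math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) - r

def bake_sdf_volume(n, half_extent=2.0):
    """Sample an SDF into an n*n*n grid, one half-precision value per voxel.

    Returns the flat sample list and the storage cost in bytes.
    """
    step = 2 * half_extent / (n - 1)
    samples = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -half_extent + i * step
                y = -half_extent + j * step
                z = -half_extent + k * step
                samples.append(sphere_sdf(x, y, z))
    return samples, len(samples) * 2  # 2 bytes per voxel at half precision

# Doubling resolution costs 8x the memory (and 8x the bake work):
_, bytes_32 = bake_sdf_volume(32)   # 65,536 bytes
_, bytes_64 = bake_sdf_volume(64)   # 524,288 bytes
```

And that is for a single small object; a whole world's worth of these volumes is where the "effectively volumetric textures for everything" cost comes from.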
Media Molecule goes into detail in their big presentation on the rendering: they're point splatting *onto* SDFs, rather than raymarching them.

There's some interesting work on compressing SDFs with neural nets (https://arxiv.org/pdf/1901.05103.pdf), as well as some mesh-to-SDF conversion in real time, though only for proxy meshes and still .Xms per mesh: https://github.com/Unity-Technologies/com.unity.demoteam.mesh-to-sdf
It's interesting to see what can be done. I wish Media Molecule would explain what they finally settled on for Dreams; they mentioned maybe switching from point splatting to SDFs at one point? And foliage, of all things, just animates arbitrarily there.
There is the question of surfaces for SDFs: they get bad as you go high-res. People messing with NeRFs have tried SDFs for the basic shape representation, with point splatting for surface detail, with some success.
Some HW-RT on/off comparisons.
The second one is wrongly named Nanite/Lumen on/off; it's just a typo. In reality it's an HW-RT on/off comparison.
You really begin to see just how wrong real-time graphics and lighting have been since the very beginning. I've told many people before: once your eyes get used to seeing RTGI and AO, you'll look back and wonder how you ever thought game graphics were realistic before that point.