Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

minecraft looks great and it's a literal block.
Minecraft never did, nor will it ever, win any best-visuals award, or even a nomination.
That was never the game's appeal, and it's frankly a bit odd that this needs to be said about Minecraft.

I concede that you may personally like its visuals very much, or that some people may appreciate some user-made blocky constructions as works of art, but it's not like you see Lego constructions winning sculpture awards every day.
 
They are fundamentally incompatible with the way NVIDIA does raytracing for the moment.

As I've said before, I wonder if NVIDIA would have spent more time on adaptive LOD if Epic had delivered on Sweeney's promise of a compute-based rendering engine a little sooner. Now Nanite comes at the perfect time to show the problems with RTX.

Already addressed above, but still: it's OK to be wrong like this, but maybe write something along the lines of "I think" instead of writing as if it's a fact.

Minecraft never did, nor will it ever, win any best-visuals award, or even a nomination.
That was never the game's appeal, and it's frankly a bit odd that this needs to be said about Minecraft.

The Digital Foundry Minecraft ray-traced video says otherwise. Not only was DF mightily impressed, the general public was blown away as well.

Very nice looking for sure.

 
Already addressed above, but still: it's OK to be wrong like this, but maybe write something along the lines of "I think" instead of writing as if it's a fact.



The Digital Foundry Minecraft ray-traced video says otherwise. Not only was DF mightily impressed, the general public was blown away as well.

Very nice looking for sure.

The 7 min mark is incredible. lol, it's like a real world... made of square blocks. I love it. I guess, yes, it's not going to win any art awards, but it looks correct to me.
 
Geometry simplification and merging have been done for decades for RT lighting baking, etc., so that is a well-studied problem.

Now, if you want the Tomb Raider shadow quality, what do you cast against? Do you insert the geometry clusters as selected into the BVH each frame? For stuff not directly visible it matters less, but for contact shadows you really don't want a mismatch.
 
Already addressed above, but still: it's OK to be wrong like this, but maybe write something along the lines of "I think" instead of writing as if it's a fact.



The Digital Foundry Minecraft ray-traced video says otherwise. Not only was DF mightily impressed, the general public was blown away as well.

Very nice looking for sure.

There are UE4 demos from years ago that look more real than Minecraft RTX.
 
I agree in the abstract, but I don't think this is so simple. A cheaper voxel- or surfel-based GI system can roughly approximate good GI, whereas no matter how many rays you cast, you're not going to be able to reflect or shadow detail that exists only in a normal map. Lumen and the excellent shadow maps paired with Nanite make a solid case that we need more geo to really render interesting scenes.

And no matter how many polys you add, it will still look totally fake with incorrect lighting.
Whereas with physically accurate light, your issue will more likely be that the game looks like it's made of very real toys. They are both important, but Uncharted with 100x the geo density would look less real than if it had more accurate lighting.

Not saying you need it to be absolutely perfect.
 
And no matter how many polys you add, it will still look totally fake with incorrect lighting.
Whereas with physically accurate light, your issue will more likely be that the game looks like it's made of very real toys. They are both important, but Uncharted with 100x the geo density would look less real than if it had more accurate lighting.

Not saying you need it to be absolutely perfect.

But traditional lighting methods will also evolve this generation, so giving Uncharted 100x the geo and updating the traditional lighting could potentially look better than RT lighting with old geo.

The lighting in Metro Exodus RT Edition really highlights the low geometry, and it looks awful in places.
 
Now, if you want the Tomb Raider shadow quality, what do you cast against? Do you insert the geometry clusters as selected into the BVH each frame? For stuff not directly visible it matters less, but for contact shadows you really don't want a mismatch.

My bet is that pretty soon most engines will converge on using a heavily simplified representation of the scene for RT, and deal with the mismatch by screen-space tracing a bit at the ray start, plus other artifact-mitigation strategies... In that sense, simply merging and simplifying models may work OK. But that was never the promise of RT. It's a sloppy hack we will have to live with for a while, and it will have its own set of visual artifacts and workarounds. Ideally, HW RT would work more easily with LOD hierarchies out of the box. We'll have to wait for a future gen to see that.
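The mitigation described here (screen-space tracing near the ray start, then handing off to the simplified proxy scene) can be sketched as a 1D toy. All names are hypothetical and this is not any engine's actual API; scenes are just sets of occupied integer cells:

```python
# Hypothetical 1D toy of hybrid tracing: the first `ss_steps` samples
# test against the detailed "screen-space" data, the remaining samples
# against the simplified proxy scene.
def hybrid_trace(detailed_cells, proxy_cells, start, step, ss_steps, max_steps):
    x = start
    for i in range(max_steps):
        x += step
        # Near the ray origin we can trust the full-detail data;
        # farther along, only the simplified proxy is available.
        scene = detailed_cells if i < ss_steps else proxy_cells
        if x in scene:
            return x  # hit position
    return None  # miss
```

The mismatch artifacts mentioned above correspond to detail that exists in one representation but not the other, so a hit found near the origin can vanish once the ray leaves the screen-space segment.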
 
Now, if you want the Tomb Raider shadow quality, what do you cast against?
Against proxies: 10% was enough in my test scenes for proper HW RT self-shadowing with double-sided geometry enabled, without noticeable artifacts. Second option: you can move the rays' starting points away from the shadow caster so that there are no false self-intersections due to the geometry mismatch; the same trick is used to get rid of shadow-map acne artifacts. Third option: use a hybrid, unfiltered shadow maps / screen-space shadows for self-shadowing and RT for the rest.
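The second option is the same bias trick familiar from shadow mapping. A minimal sketch (the function name and the default bias value are illustrative assumptions, not any renderer's API):

```python
# Nudge the shadow-ray origin along the surface normal so the ray
# cannot falsely self-intersect the mismatched proxy geometry.
# Real renderers often scale the bias with distance or slope.
def offset_ray_origin(origin, normal, bias=0.01):
    return tuple(o + bias * n for o, n in zip(origin, normal))

# Shadow ray for a point on an upward-facing surface:
# offset_ray_origin((0.0, 0.0, 0.0), (0.0, 1.0, 0.0)) -> (0.0, 0.01, 0.0)
```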
 
To make complex geometry look good, you need good lighting, as what drives that geometry is materials. Light is more important than geo fidelity or texture res, by far.

I mostly agree in that to really appreciate highly detailed complex geometry you want to have good lighting.

But the flip side is that really good light quality exposes the inadequacy of geometry in all existing released games.

Having only one or the other is inadequate.

Regards,
SB
 
Against proxies: 10% was enough in my test scenes for proper HW RT self-shadowing with double-sided geometry enabled, without noticeable artifacts. Second option: you can move the rays' starting points away from the shadow caster so that there are no false self-intersections due to the geometry mismatch; the same trick is used to get rid of shadow-map acne artifacts. Third option: use a hybrid, unfiltered shadow maps / screen-space shadows for self-shadowing and RT for the rest.

Speaking of proxies, I've wondered how hard it would be to set up proxies for skinned geometry, since Lumen currently only does screen-space GI for them. It seems like a PS2-era model, or even just some textured cylinders, would be enough to simulate bounce lighting for characters, and would alleviate the issue of slight camera-angle changes making light disappear, like here.

 
Speaking of proxies, I've wondered how hard it would be to set up proxies for skinned geometry, since Lumen currently only does screen-space GI for them
I guess there are many variants here as well: for highly diffuse materials or GI, it makes sense to either use the lowest possible LOD or even decompose models into capsules/cylinders so that there is no skinning (I guess the second variant is also applicable to SW tracing, but it will look goofy in many cases).
If we want self-reflections on smooth surfaces, the LODs should match the raster-pass LODs; otherwise we can bias rays by a safe distance and use less detailed LODs.
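For the capsule/cylinder decomposition, a coarse diffuse-occlusion query doesn't need an exact ray-capsule intersection; marching the ray against the capsule's distance field (distance from a point to the axis segment, compared with the radius) is enough. A hypothetical sketch, with illustrative names and step counts:

```python
import math

# A capsule is a segment (a, b) plus a radius; a point is inside it
# when its distance to the segment is <= radius.
def point_segment_dist(p, a, b):
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0.0 else sum(ap[i] * ab[i] for i in range(3)) / denom
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def capsule_occludes(origin, direction, a, b, radius, t_max, steps=64):
    # Crude fixed-step march along the ray: fine for soft diffuse
    # occlusion, too coarse for mirror-like reflections.
    for i in range(steps):
        t = t_max * (i + 0.5) / steps
        p = tuple(origin[j] + t * direction[j] for j in range(3))
        if point_segment_dist(p, a, b) <= radius:
            return True
    return False
```

This is exactly why the capsule variant looks goofy in reflections: the query only answers "blocked or not", and the shape it tests against is nothing like the character's silhouette.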

Speaking of proxies, I've wondered how hard it would be to set up proxies for skinned geometry, since Lumen currently only does screen-space GI for them.
Yep, unfortunately skinned geometry is not the only limitation; it does not handle concave or complexly shaped geometry that can't be captured with cards, so there is no GI for the internal parts of trees, crevices, etc. without HW GI. Here is an example: https://imgsli.com/ODEwODE
 
there is no GI for the internal parts of trees, crevices, etc. without HW GI. Here is an example: https://imgsli.com/ODEwODE
This is a significant difference, hardware is such a step up in quality over software in this example.

To make complex geometry look good, you need good lighting, as what drives that geometry is materials. Light is more important than geo fidelity or texture res, by far.
We had this experience on PC a decade ago through heavy tessellation: the Unigine Heaven demo had an insane amount of geometry, but without good lighting it didn't look earth-shattering at all.

 
This has been mentioned before, but some years ago, so it's worth repeating. ;)

Lighting (+shadowing) is what makes a scene look real or not. Materials and polygons (or alternative geometries) are needed to represent an authentic object.

As such, a low poly model with poor textures in a perfect lighting engine will look like a very real cardboard model. A high poly model with rubbish lighting and texturing will look like a smooth computer creation. For example, an attempt to render a candlelit watermelon.

With perfect GI and a perfect texture on a low poly model, the watermelon will look like an origami construct maybe of watermelon-printed paper.
With perfect geometry of a bazillion triangles and a perfect material in a poor engine, the watermelon will look like very detailed yet obviously not real computer graphics.
With perfect GI and perfect geometry with low res textures, the watermelon will look like a real watermelon-shaped object that's clearly not a real watermelon.

Now it turns out the preference between believable lighting and detail defining 'good graphics' is subjective. Some people will take cardboard people in a real world thanks to perfect GI over beautifully modelled and shaded people in unconvincing lighting, while others prefer the opposite. Hence, as ever, the whole use of the term 'good' isn't particularly constructive for discussion. ;) It's better to say that to move from obvious CG to photorealism, you need a realistic lighting engine. A realistic lighting engine on its own can't create 'good' graphics (a Cornell Box with a glass sphere isn't going to win many people's 'best ever graphics' award), and it's quite possible to have 'good' graphics without stellar lighting through complex geometry, especially in artistic renderers where the beauty lies in the intricacy of the materials and/or textures and/or sprites and/or particle effects, etc.

Oh, and we can add animation into the mix too, and have fabulously rendered photorealistic objects, particularly people and animals, that look rubbish because they move poorly.

But mostly, throwing up a video showing what you consider fabulous graphics as proof simply ain't gonna work. I can look at Minecraft RTX and think, "that's gorgeous," while someone else won't see what the fuss is about. Trying to convince that person lighting makes the game by showing them a world of boring blocks will always fail and there's no argument that'll convince. Let's not get stuck in "what is the best ever flavour of icecream?" type arguments on B3D.
 
decompose models into capsules/cylinders so that there is no skinning (I guess the second variant is also applicable to SW tracing, but it will look goofy in many cases).

Yeah, that's what I'm focused on: stopping the SW GI from totally ignoring off-screen GI contributions from skinned objects, by using unskinned geometry as a proxy. It just seems like a good optimization for increasing accuracy without eating the cost of going for hardware RT (I'm only assuming hardware RT in Lumen has a heavy cost, though). The proxy geometry could be as simple as some textured capsules or as detailed as an action figure.



What kind of goofiness do you foresee? My first concern with this idea is that the proxy could cause issues if the skinned mesh itself traces against it, but if you can set it up so only other objects trace against the proxy mesh you could bypass that. I suppose there's also the issue of mismatching reflections showing extremely out of place geo.
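The "only other objects trace against the proxy" setup is essentially what per-instance visibility masks give you (DXR exposes this as an 8-bit InstanceMask that is ANDed against each ray's inclusion mask). A toy sketch of the idea, with illustrative names:

```python
# An instance is tested by a ray only if its mask overlaps the
# ray's mask, mirroring DXR's InstanceMask mechanism.
PROXY_BIT = 0b01   # set on the character's unskinned proxy
WORLD_BIT = 0b10   # set on ordinary scene geometry

instances = [
    ("character_proxy", PROXY_BIT),
    ("wall",            WORLD_BIT),
]

def visible_instances(ray_mask):
    return [name for name, mask in instances if mask & ray_mask]

# Rays cast from the character itself exclude its own proxy:
# visible_instances(WORLD_BIT)             -> ["wall"]
# Rays from other surfaces see the proxy for bounce lighting:
# visible_instances(PROXY_BIT | WORLD_BIT) -> ["character_proxy", "wall"]
```

Under this scheme the self-tracing concern disappears by construction, while reflective surfaces could likewise drop the proxy bit from their ray masks to avoid out-of-place reflections.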

Yep, unfortunately skinned geometry is not the only limitation; it does not handle concave or complexly shaped geometry that can't be captured with cards, so there is no GI for the internal parts of trees, crevices, etc. without HW GI. Here is an example: https://imgsli.com/ODEwODE

Yeah, I've seen the other limitations. I'm hoping Unreal at least includes a quick way to manually break a mesh into smaller pieces, so that Lumen can create better cards, to alleviate this a bit and just make the workflow better in general.
 
I suppose there's also the issue of mismatching reflections showing extremely out of place geo.
Exactly, that was the case in The Last of Us 2: highly out-of-place reflections of capsules, where you can actually distinguish the individual capsules even on dull materials, which destroys immersion.
I really had more negative impressions than positive from the diffuse capsule reflections during my walkthrough of the game. I guess lower LODs with HW RT should look better, since they should fix the discontinuity between different body parts in reflections.
 
Exactly, that was the case in The Last of Us 2: highly out-of-place reflections of capsules, where you can actually distinguish the individual capsules even on dull materials, which destroys immersion.
I really had more negative impressions than positive from the diffuse capsule reflections during my walkthrough of the game. I guess lower LODs with HW RT should look better, since they should fix the discontinuity between different body parts in reflections.

Also, right now SW Lumen reflections are already pretty disconnected, and you don't really want to look too closely at mirrors, so I think a semi-detailed proxy mesh would not stand out surrounded by the other low-poly, untextured stuff. So if that's the only concern, you would still be getting better GI with no real drawback, since if you can see the character in the reflection you will already be seeing a mess.

I could be wrong and it may look terrible. Hopefully it's possible to just exclude the proxy mesh / fall back to screen space when tracing against very smooth, reflective surfaces, resulting in better diffuse GI with no effect on reflections.
 