Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Big emissive surfaces could be area lights based on their SIGGRAPH area light presentation:
View attachment 8823
We are definitely off-topic for this thread, but the area lights in the screenshot above have arbitrary shapes, unlike the pre-defined shapes shown in that presentation. Notice how in the maxed-out rasterised shot I posted, the big panel of lights makes no diffuse or specular contribution to the image outside of screen space.

If you would like, I can post some screenshots later of emissive surfaces with completely different shapes or designs that are not typical area-light forms (big bulb, strip, rectangle).
 
The only snag at the moment, judging from what I'm hearing from a number of devs, is that Lumen quality is not consistent enough yet for projects with diverse environments, particularly indoor scenes. You can get pretty great results with outdoor scene lighting without many issues (as Epic's demos show), but as soon as you go indoors, or overlap indoor and outdoor spaces, you start running into light leaking, noise, or the surface cache going black. Some rooms are fine, some rooms are noisy, some rooms have completely black indirect lighting. Part of that is down to Lumen's fidelity being limited so it stays performant on lower-end hardware, but part of it is art asset setup (convex vs. concave areas in a mesh). I think that latter bit is causing issues at the moment. I hope Epic gets rid of that art-asset limitation so devs embrace real-time lighting more often.
Is getting rid of that limitation really possible while still keeping the merits of Lumen viable on consoles? Hope so.
 
Already discussed. You could have at least searched for "unrecord" before posting...
Thanks, didn't realize that's the game's title. In retrospect, I should have asked Bing Chat for the title via desktop Edge,
because I got banned from Bing Chat on mobile.
 
Is getting rid of that limitation really possible while still keeping the merits of Lumen viable on consoles? Hope so.

The art limitations are less about hardware performance and more about the basic design of Lumen. The surface cache is really intended to wrap around an object. It can’t go “inside” an object.

A mesh that’s an individual wall is fine. A mesh that represents a whole house with multiple internal walls isn’t supported. So artists are forced to break up that house into separate walls and floors in order to use Lumen.
 
The art limitations are less about hardware performance and more about the basic design of Lumen. The surface cache is really intended to wrap around an object. It can’t go “inside” an object.

A mesh that’s an individual wall is fine. A mesh that represents a whole house with multiple internal walls isn’t supported. So artists are forced to break up that house into separate walls and floors in order to use Lumen.
Is it something that's actually fixable for the Lumen team? That's my real question. I think software Lumen is good tech, and making it viable in as many circumstances as possible is paramount, since a lot of devs are gonna be using it.
 
I'm sure it's a real growing pain that annoys some people (especially indie teams with zero graphics engineers or tech artists), but it's not an actual problem. If somehow I'm wrong and your game is crippled because you can't cut your meshes up into convex pieces where required for surface-cache lighting, I'm available for hire.
 
Is it something that's actually fixable for the Lumen team? That's my real question. I think software Lumen is good tech, and making it viable in as many circumstances as possible is paramount, since a lot of devs are gonna be using it.

It’s not just a software Lumen issue; it also affects hardware Lumen. The problem is with the surface cache, which is used for both software and hardware RT. Only UE devs can say for sure, but fixing it may require a significant overhaul. It’s a lot easier to cover the outside of an object in cache tiles; Lumen even falls back to a basic six-sided cube map if the object’s silhouette is too complex. Walking the insides of an object and generating cache tiles for internal surfaces is going to be a lot more complicated. Indexing into the surface cache is done by object ID, and there’s currently a limit on the number of cache tiles that can be visited per object. That probably doesn’t work if the number of tiles increases a lot because of internal surfaces. I suspect artists will just have to deal with it.
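To make that indexing constraint concrete, here's a minimal sketch of a per-object tile budget. This is not Lumen's actual code; the type names and the fixed limit are assumptions for illustration only:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical surface-cache layout: tiles grouped per object,
// looked up by object ID. Names and limits are illustrative.
struct CacheTile {
    float AtlasU, AtlasV; // location of this tile in the cache atlas
    // ... cached albedo, direct lighting, etc.
};

constexpr std::size_t kMaxTilesPerObject = 32; // assumed per-object budget

struct ObjectCacheEntry {
    std::vector<CacheTile> Tiles; // one tile per exterior "card" face
};

class SurfaceCache {
public:
    // Exterior-only capture: a handful of outward-facing cards (or a
    // six-face cube-map fallback) stays well under the budget.
    bool AddObject(uint32_t ObjectId, std::vector<CacheTile> Tiles) {
        if (Tiles.size() > kMaxTilesPerObject)
            return false; // a house mesh with interior walls blows the budget
        Entries[ObjectId] = ObjectCacheEntry{std::move(Tiles)};
        return true;
    }

    const ObjectCacheEntry* Lookup(uint32_t ObjectId) const {
        auto It = Entries.find(ObjectId);
        return It != Entries.end() ? &It->second : nullptr;
    }

private:
    std::unordered_map<uint32_t, ObjectCacheEntry> Entries;
};
```

A single exterior wall needs only a few tiles, while a whole house with interior rooms would blow past any fixed per-object budget, which is roughly the failure mode described above.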
 
I wonder how they managed to create such incredibly photo-realistic graphics in Dead Island 2, a cross-generation game, considering they aren't using dynamic GI or ray tracing. This is the good ol'-fashioned way: prebaked lighting. 😧

[Attached: ten Dead Island 2 screenshots]
 
But people need to stop focusing on what it does for gamers and look at what it does for developers. Imagine no longer having to make cube maps, and no longer needing to spend hours upon hours pre-calculating GI; the time and thus cost savings from RT for developers are huge.
To drag us slightly back towards topic, I think a real win here is going to be for non-experts, e.g. Fortnite Creative/UEFN. Those folks are just throwing together assets from various places and need it to look good by default, without having to worry about the intricacies of how renderers work. Nanite, Lumen and VSMs are all shifting the bar in that direction, with less tweaking and content work required to get to a decent result, and that will continue in the future. Certainly lights are still an area where you can't just throw down 1000 shadowed local lights and expect performance to be great in Unreal right now, but we're making progress there.

But this is also why we can't just rely entirely on RT or path tracing or other high-end techniques; the majority of the audience for many of these experiences will likely continue to be on Switch, mobile, Steam Deck, and other relatively low-end platforms for the foreseeable future, and they can't be segmented off. Thus we need solutions that can scale across as large a range of hardware as possible, as I think most of us here would agree that catering entirely to the lowest common denominator of graphics/rendering is not moving us towards the future world/metaverse/whatever that we want. Obviously some experiences will target stylized or lower-end graphics explicitly, but that should not be a limitation of the core tech, content and assumptions underlying it all, IMO.
 
I wonder how they managed to create such incredibly photo-realistic graphics in Dead Island 2, a cross-generation game, considering they aren't using dynamic GI or ray tracing. This is the good ol'-fashioned way: prebaked lighting. 😧

[Attached: the same Dead Island 2 screenshots]
Because there's no dynamic time of day (ToD), they can bake things.
 
To drag us slightly back towards topic, ....
How dare you! 😁

But yeah, the stuff you guys are building to enable essentially anyone to make extremely high-quality content is seriously impressive and praiseworthy. This type of stuff would normally be completely out of reach for the average person. UEFN is just extremely cool, and it's going to be awesome seeing what the next generation of creative kids does with tools like that, with such a robust feature set built right in. UEFN, Roblox, Dreams, Minecraft... how many young aspiring creators will have gotten their start from these types of applications!

I can just imagine 5-10 years from now, with all this AI stuff too...
 
Right, but to speak to the above, triangle RT is absolutely a non-starter in something like AncientGame. You can of course argue that the way they constructed that content is actively hostile to triangle RT, but the same problem existed to a lesser degree in the Lumen in the Land of Nanite demo. Triangle RT is also relatively inefficient for stuff like animated foliage. SDFs will probably still have a place for a while in dynamic aggregate geometry and kit-bashed stuff.

The thing is... we are not able to analyze and compare this ourselves. I think a lot of people have been trying to figure out exactly why LLON performed so starkly better than all the other Nanite showcases while still being the best-looking demo of all time. How would using Lumen RT have affected the performance (which was 1440p/40fps)?

Even the recent Rivian demo is going to be released (without the car), according to Epic devs. So this is in line with all the UE5 and UE4 demos Epic has released.

The only exceptions are the Troll demo and the Chaos demo.

For those two I fully understand. For the Chaos demo, Chaos was nowhere near ready and was being overhauled because of extremely low performance; there was absolutely no way Epic could keep that demo up to date.

For the Troll demo, Epic didn't hold the copyrights, as they partnered with Goodbye Kansas and Deep Forest Films.

But for the Lumen in the Land of Nanite demo, I have absolutely no clue why it's still locked in a vault at Epic HQ, and it's been 3 years!! Almost half the new console generation!
I'm starting to think we will never see that demo again. Yet it's the first thing you see on the Unreal Engine home page and is still being used heavily in marketing promotions.
It makes no sense...
 
The thing is... we are not able to analyze and compare this ourselves
I'm going to put on my (rare!) moderator hat here and say: stop. We've been over this ground several times already, and I have a suspicion you've also bothered other Epic employees on other platforms about this too. Consider this an official warning that it's time to drop this topic.
 
Hmm, so why is this case different? This is just a pure single-bounce specular case that is really just ray-traced reflections, right? It looks like the BRDF itself is different here... is this just because the RT reflection in "psycho" mode is being clamped by a roughness threshold or something?
Not sure I understood the questions, but... the BRDF is the same as everywhere: Lambertian reflectance for diffuse + GGX for the glossy reflections.
The Lambertian term adds the diffuse lighting coming from the area-light billboards, sampled across the billboards' area with RT (in raster they are likely poorly approximated with some analytical area or point lights). The lack of lighting in raster simply reflects the fact that these huge dynamic area lights can't be approximated with probes in a dynamic way, so there are likely no probes in the scene, or the probes were baked without lighting from the area lights, since that lighting would be incorrect in the game anyway due to the light sources being dynamic.
RT Overdrive adds more lighting to the scene because it has more bounces and better importance sampling than the diffuse lighting in the RT Ultra setting, which is clearly visible in the added lighting on the walls and in generally better-lit scenes with artificial light sources.
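For reference, here's a minimal sketch of that shading model: Lambertian diffuse plus a GGX (Trowbridge-Reitz) specular lobe with Smith height-correlated visibility and Schlick Fresnel. The function and variable names are mine, not from any particular engine, and it's scalar (single channel) for brevity:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Evaluate f(l, v) * cos(theta_l) for one light direction.
// n = surface normal, v = view dir, l = light dir (all unit length).
// albedo = diffuse reflectance, f0 = specular reflectance at normal
// incidence, roughness = perceptual roughness in [0, 1].
float EvaluateBRDF(Vec3 n, Vec3 v, Vec3 l,
                   float albedo, float f0, float roughness)
{
    const float kPi = 3.14159265f;
    float nl = std::max(Dot(n, l), 0.0f);
    float nv = std::max(Dot(n, v), 1e-4f);
    if (nl <= 0.0f) return 0.0f; // light is below the surface

    // Half vector between view and light directions.
    Vec3 h = { v.x + l.x, v.y + l.y, v.z + l.z };
    float len = std::sqrt(Dot(h, h));
    h = { h.x / len, h.y / len, h.z / len };
    float nh = std::max(Dot(n, h), 0.0f);
    float vh = std::max(Dot(v, h), 0.0f);

    // GGX (Trowbridge-Reitz) normal distribution.
    float a  = roughness * roughness;
    float a2 = a * a;
    float denom = nh * nh * (a2 - 1.0f) + 1.0f;
    float d = a2 / (kPi * denom * denom);

    // Smith height-correlated visibility (includes the 1/(4 nl nv) factor).
    float gv  = nl * std::sqrt(nv * nv * (1.0f - a2) + a2);
    float gl  = nv * std::sqrt(nl * nl * (1.0f - a2) + a2);
    float vis = 0.5f / std::max(gv + gl, 1e-5f);

    // Schlick Fresnel approximation.
    float f = f0 + (1.0f - f0) * std::pow(1.0f - vh, 5.0f);

    float diffuse  = albedo / kPi;     // Lambertian term
    float specular = d * vis * f;      // GGX term
    return (diffuse + specular) * nl;  // both weighted by cos(theta_l)
}
```

In an RT path tracer this gets evaluated per sample taken across the billboard's area, which is what gives the correct soft diffuse falloff that the raster path can't reproduce without probes.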
 
It has a crazy amount of kitbashing and overlapping geometry. No real game will be built the same way.
The funny part is that there are tools in the UE5 editor that allow merging geometry instances, though I wasn't able to utilize them for HW RT acceleration purposes in the Valley demo :)
Yet merging geometry instances sounds like a task that can be automated, and in fact it has already been accomplished for the distance-field volumes.
I understand that merging 3D volumes should be a far simpler task than merging polygons, but Epic already did the impossible with Nanite, so merging polygonal geometry doesn't sound unrealistic to me :)
 
The funny part is that there are tools in the UE5 editor that allow merging geometry instances, though I wasn't able to utilize them for HW RT acceleration purposes in the Valley demo :)
Yet merging geometry instances sounds like a task that can be automated, and in fact it has already been accomplished for the distance-field volumes.
I understand that merging 3D volumes should be a far simpler task than merging polygons, but Epic already did the impossible with Nanite, so merging polygonal geometry doesn't sound unrealistic to me :)

Epic did just that. They’re merging SDFs to accelerate software tracing, and they’re also merging triangle geometry as a sort of coarse-grained LOD to accelerate hardware RT. No idea if the merging technique would work for objects close to the camera, though.

[Attached: two screenshots]
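As a toy illustration of why instance merging is automatable, here's a minimal sketch (my own simplification, nothing to do with Epic's actual tooling) that bakes per-instance transforms into one combined world-space triangle mesh, which is roughly the precursor step to building a single acceleration structure over kit-bashed geometry:

```cpp
#include <cstdint>
#include <vector>

struct Float3 { float x, y, z; };

// Row-major 3x4 affine transform (rotation/scale + translation).
struct Transform {
    float m[3][4];
    Float3 Apply(Float3 p) const {
        return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                 m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                 m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
    }
};

struct Mesh {
    std::vector<Float3>   Positions;
    std::vector<uint32_t> Indices; // triangle list
};

struct Instance {
    const Mesh* Source;
    Transform   LocalToWorld;
};

// Bake every instance's transform into one world-space mesh. The result
// can feed a single acceleration structure instead of one per instance.
Mesh MergeInstances(const std::vector<Instance>& Instances)
{
    Mesh Merged;
    for (const Instance& Inst : Instances) {
        // Offset indices by the number of vertices already appended.
        uint32_t Base = static_cast<uint32_t>(Merged.Positions.size());
        for (Float3 P : Inst.Source->Positions)
            Merged.Positions.push_back(Inst.LocalToWorld.Apply(P));
        for (uint32_t I : Inst.Source->Indices)
            Merged.Indices.push_back(Base + I);
    }
    return Merged;
}
```

The hard part in practice isn't the concatenation, it's deciding what to simplify and deduplicate in the heavily overlapping result, which is where Nanite-style automated simplification would come in.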
 