"Material/shader-space lighting" and softness in modern rendering

inlimbo

First, these are two different topics I want to kill with one thread, and they're not necessarily related to each other as the title might suggest. Though you can tell me otherwise if that happens to be the case. Second, I want to be clear that when I make these threads I'm working largely from practical knowledge of pre-OpenGL 2.0 fixed-function/software techniques, from when I spent 1998 to 2004 as a hobbyist programmer banging my head against the OpenGL spec on my own and eventually burning out. Or alternately from what I've cobbled together from this forum, Gamasutra, hundreds of interviews, and generally just from being the obsessive fan of games and graphics I am. So forgive me and correct me when I project faulty assumptions based on what I generally know about some of the fundamentals of rendering.

* * *

Anyway, back to the point. I booted up Halo Infinite for the first time in a year, palled around on some of the community-created Forge maps (which are wild and impressive, frankly) and noticed something about some of the lighting, made more obvious in Forge maps because player-placed lights don't seem to cast shadows, presumably for performance reasons. From what I'm observing, Infinite weirdly treats a lot of lights as if they only existed in shader/texture/material space. Rather than casting these lights on the scene, via a deferred lighting pass or otherwise, and letting them illuminate whatever they come in contact with, these lights only really seem to exist when they are reflected in glossy materials in their vicinity. So where there should still be light cast on the ground or similar matte structures in the area, there isn't; it only shows up reflected in the material space of these glossy objects, like some paradoxically dynamic cubemapped reflection. And while matte and glossy objects certainly reflect light differently, light obviously doesn't ignore matte surfaces when a pool of it is illuminating some space.

If I'm right about this technique I could imagine the performance advantage and its appeal as an optimization trick, but it has the effect of making Infinite's lighting feel flatter and less robust, which just adds to this game's visual woes. And if that's the case, part of me wants to plead with the industry to skip this trick. But that's coming from someone who wishes games prioritized lighting before almost any other graphical feature, so yeah. Maybe Dark10x and our own Dictator touched upon something like this in their tech analysis, but I don't remember.

* * *

For the other question, it's pretty simple: why does modern TAA produce such a soft image without additional sharpening? Is it just a consequence of filmic TAA techniques, or is it filmic TAA in concert with PBR materials and other aspects of the average pipeline these days? I can basically grasp why softness/vaseline is endemic to TAA as it does its thing with the previous frame, but I struggle to grasp the degree of softness I see sometimes, a softness that seems to extend beyond just the vaseline-on-lens look into the texture rendering itself. I dunno. I'm out of my depth on this one, but I would appreciate a rough explanation.

Sorry for the wall of text as usual

edit: and if I'm at all right, this obviously isn't true of all of Infinite's lighting, only some of it.
 
light obviously doesn't ignore matte surfaces when a pool of it is illuminating some space.

If I'm right about this technique I could imagine the performance advantage and its appeal as an optimization trick
Hmm, your conclusion doesn't quite make sense. In general, lights are not filtered by material for performance reasons, because such a filter would cause execution divergence and only hurt performance, all for a visual downgrade.
So without seeing any screenshots, I would assume you just got the wrong impression. Likely the lights do affect matte surfaces, but the effect is subtle and not very noticeable. Conversely, a sharp reflection of the same light will be very noticeable, because it's a small but very bright highlight. Additionally, you may be confused by modern-day area lights. Their effect on rough materials can be even more subtle in comparison to point lights.
But not sure about anything ofc. Barely remember the game myself.
why does modern TAA produce such a soft image without additional sharpening?
Two reasons:
1. We try to integrate the whole area a pixel covers, so results will be blurrier than aliased point sampling. (The good blur)
2. When reprojecting the previous frame, we use a filter to sample it, which touches at least 2x2 pixels (or 3x3 for some fancy cubic filter). If we build a history accumulating many such filtered frames, our results become blurrier, because we want to sample a pixel but larger filter regions bleed in. It's like taking a photo and changing its size slightly but randomly in Photoshop 100 times: the result will be blurrier than if you resized only once to the final size. (The bad blur)
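The "bad blur" in point 2 is easy to reproduce outside any engine. Here's a minimal, hypothetical 1-D sketch in Python (my own toy model, not any game's TAA): a hard edge resampled once with a half-pixel offset, versus resampled 100 times with random subpixel offsets, mimicking a history buffer accumulating filtered reprojections.

```python
import random

def bilinear_sample(signal, x):
    # Linear interpolation between the two nearest samples, clamped at the edges.
    i = int(x // 1)
    f = x - i
    i0 = min(max(i, 0), len(signal) - 1)
    i1 = min(max(i + 1, 0), len(signal) - 1)
    return (1 - f) * signal[i0] + f * signal[i1]

def reproject(signal, offset):
    # Resample the whole signal shifted by a subpixel offset (one "reprojection").
    return [bilinear_sample(signal, i + offset) for i in range(len(signal))]

rng = random.Random(0)
frame = [0.0] * 32 + [1.0] * 32   # a hard edge: the sharpest feature a frame can hold

once = reproject(frame, 0.5)      # a single filtered resample
many = list(frame)
for _ in range(100):              # 100 accumulations of filtered history
    many = reproject(many, rng.uniform(-0.5, 0.5))

def edge_width(s):
    # Count samples stuck in the 0.1..0.9 transition band; wider means blurrier.
    return sum(1 for v in s if 0.1 < v < 0.9)

print(edge_width(once), edge_width(many))  # the repeatedly filtered edge is wider
```

Each individual resample is a perfectly reasonable 2-tap filter; it's only the repeated accumulation that smears the edge across many pixels, which is exactly the Photoshop analogy above.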
This blog post should answer your question in detail.
 
Yeah, I wrote this kind of in haste without really testing my hypothesis, and after spending the last few hours with Infinite's Forge mode it's pretty obvious how robust and integrated this game's lighting is, whatever issues I might have with art direction or what have you. And for the record, I think there's some rather good lighting in Infinite on balance, especially on a few of the launch multiplayer maps; I just have a few issues very specific to my taste in art direction. But back to Forge: it's fairly impressive and does give you fairly granular control over how lights work within the engine.

I also did a poor job of explaining what I was seeing. What I saw in someone's Forge map wasn't so much a case of deliberately occluded lighting like I was suggesting, but what seemed to be a case of materials inheriting the genuinely dynamic specular reflection of a nearby dynamic point light, somehow without that light projecting anywhere on the object itself. And I swear I've seen similar shit in single player, but it's been a while. I figured it was a little performance trick akin to the way a game replete with baked lighting might use spherical harmonics to adjust local dynamic character lighting. I figure I'm way off base there, so feel free to correct me. I wouldn't be surprised if I just wasn't seeing what I thought I was seeing and raced here to make a thread. Or if it was a quirk of the Forge map I was trying out. I swear I wasn't just looking at a cubemapped specular and mistaking it for something more dynamic, though, so I'll chalk it up to my likely mistake.

And thanks for that clear and simple answer re: TAA. One question, which is hopefully not already answered in the blog post: is the sharpening that's done to rescue TAA from that blur purely a post process after building said frame, or is it applied in stages during temporal reconstruction? I imagine the latter isn't really possible given what you've described, but the final framebuffer image TAA puts out absent sharpening has always seemed to me to look as if every material in the frame has had its roughness value turned all the way off, and I wonder how PBR materials play a role in that. I want to say that if you turn off the sharpening in Gears 4, for example, it's almost exactly like that - as if you suddenly dialed down all the roughness of every PBR material in the game. And I want to say this is with or without TAA enabled, but my shoddy memory might be another factor here. If it's all in the accumulated blur of the temporal reconstruction, that makes sense to me even if I don't have the technical knowledge to really conceptualize how that might happen. Thanks again for the help.
 
what seemed to be a case of materials inheriting the genuinely dynamic specular reflection of a nearby dynamic point light, somehow without that light projecting anywhere on the object itself.
That's indeed a bad explanation. If we can see a specular reflection of the light, and the material is very smooth, then this reflection already is all the lighting we can expect to see.
Notice that specular reflection and diffuse reflection are the same thing: one reflects the incoming light with the distribution of a narrow lobe (or cone, or even a single ray for a perfect mirror), and the other reflects the same light with a much more uniform distribution. I'll draw an image:
[attached sketch: reflection distributions of a rough surface (sphere-shaped lobe) vs. a smooth mirror (narrow lobe)]
On the top we have a rough material, reflecting all light pretty uniformly. Most of it is reflected along the surface normal, but a distribution with the shape of a sphere does not allow us to tell where the light comes from at all.
On the bottom we have the smooth material of an almost perfect mirror. It reflects all the light at the reflected direction given from light vector and surface normal, and we can tell where the light comes from.

The reason is the microfacets of the material at microscopic scale. On the wall they're bumpy; on the mirror the surface is still smooth, a straight line.
Assuming both materials absorb the same amount of light (none), and there is no self-shadowing from the microfacets, the sphere and the lobe would have the same volume.
Both would reflect the same amount of light (all of it). The only difference is the angular distribution. One is the cosine of surface normal and light vector, the other is zero everywhere but one at the reflection direction.
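That "same amount of light" claim can be checked numerically. A minimal sketch (my own illustration, not anything from a specific engine): integrating the Lambertian cosine distribution over the hemisphere shows that a diffuse surface with albedo 1 reflects exactly all of the incoming energy, just spread out, the same total as the perfect mirror.

```python
import math

# The reflected fraction of a Lambertian BRDF (albedo/pi) is the hemisphere
# integral of (albedo/pi) * cos(theta) * sin(theta) d(theta), times 2*pi for phi.
albedo = 1.0                 # absorbs nothing, as assumed in the post
n = 100000
dtheta = (math.pi / 2) / n
total = 0.0
for k in range(n):
    theta = (k + 0.5) * dtheta   # midpoint rule over [0, pi/2]
    total += (albedo / math.pi) * math.cos(theta) * math.sin(theta) * dtheta
reflected = 2.0 * math.pi * total
print(reflected)  # ~1.0: all incoming energy leaves again, only redistributed
```

The 1/pi in the BRDF is exactly the normalization that makes this integral come out to the albedo, which is the "same volume" condition in picture form.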

So if you come from the days of Phong shading, the separation of specular and diffuse lighting made back then was not backed by physics, but was only an artificial choice to make things easier for us.
Now we follow a theory based on real world physics ('Physically Based Shading') which addresses this problem. But at this point we must forget and unlearn the former misconceptions.
A separation into specular and diffuse terms can be still useful, but then we must combine both terms carefully so their sum preserves energy. In other words, we don't want to reflect more energy outwards than what's coming in.
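As a toy illustration of that energy constraint (my own sketch, not any particular engine's shading code): one common way to combine the two terms safely is to give specular first claim on the energy and scale the diffuse term by whatever is left, so the sum can never exceed 1.

```python
def combine_terms(diffuse_albedo, specular_reflectance):
    # Specular takes its share first; diffuse only gets the remaining energy.
    ks = specular_reflectance
    kd = diffuse_albedo * (1.0 - specular_reflectance)
    return kd, ks

# Even with a perfectly white diffuse albedo the sum stays at or below 1:
kd, ks = combine_terms(1.0, 0.04)   # 0.04 is a typical dielectric reflectance
assert kd + ks <= 1.0
print(kd, ks)
```

A naive sum of independent diffuse and specular terms (as in classic Phong setups) can exceed 1 and reflect more light than arrives, which is exactly the mistake this scaling avoids.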

In reality there exist no perfect mirrors or perfectly diffuse materials, and the reflection distributions we can measure will mostly look like a blend of the sphere and lobe shapes drawn above.
The distribution is also angle dependent. Any rough material becomes shiny at grazing angles (the 'Fresnel' term), which is why modern PBS games often show ugly reflections at the edges of objects. If we use reflection probes, the reflection often lacks accurate occlusion (that would require ray tracing), so we get those wrong reflections, totally off and looking like artificial rim lighting. Older games like Quake did not have this problem.
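That Fresnel behavior is commonly approximated with Schlick's formula. A quick sketch showing how even a dull dielectric (base reflectance around 0.04) turns nearly mirror-like edge-on, which is why those edge reflections become so visible:

```python
def schlick_fresnel(f0, cos_theta):
    # Schlick's approximation: reflectance climbs from f0 at normal incidence
    # toward 1.0 as the view grazes the surface (cos_theta -> 0).
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

head_on = schlick_fresnel(0.04, 1.0)   # looking straight at the surface
grazing = schlick_fresnel(0.04, 0.05)  # nearly edge-on
print(head_on, grazing)  # the edge-on reflectance is over ten times higher
```

So a probe reflection that is barely visible face-on gets multiplied many times over at silhouette edges, and if its occlusion is wrong, that error gets amplified right where it's most noticeable.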

Another aspect of this same problem is that while PBS now gives us realistic materials, we still struggle to calculate the incoming light accurately ('Global Illumination').
Light bounces across the surfaces, or scatters in participating media. That's very expensive to calculate, so we may again use static diffuse probes at low spatial resolution, or some attempt to calculate GI in real time.
Idk what Halo Infinite is doing here, but it has dynamic time of day, so likely some realtime GI, at least for exteriors. It's not very accurate, and if you feel like diffuse surfaces are darker than they should be, then maybe the cause is that the realtime GI can't capture all of the infinite-bounce lighting, and too much energy gets lost. But notice that if so, it has nothing to do with materials, but with lacking global light transport.
That's why I personally distinguish between PBS and PBR (Physically Based Rendering). Everybody talked about PBR even in the PS4 days, but PBR includes accurate global illumination too, so we only start to achieve this now with ray tracing.

That's at least some theory. I'm often puzzled myself why lighting in games is so often wrong even if they bake it offline. Things look too dark, and the use of dynamic AO can't explain what's lacking. Maybe they save time by reducing baked GI to just one or two bounces in some cases.
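How much a truncated bounce count actually loses can be estimated with a geometric series. A back-of-the-envelope sketch (assuming one uniform average albedo for the whole scene, which real scenes obviously don't have):

```python
def captured_fraction(albedo, bounces):
    # Bounce n carries albedo**n of the direct light, so the infinite-bounce
    # sum is 1/(1 - albedo); a k-bounce bake keeps only the first k+1 terms.
    truncated = sum(albedo ** n for n in range(bounces + 1))
    full = 1.0 / (1.0 - albedo)
    return truncated / full

print(captured_fraction(0.5, 1))  # direct + one bounce: 75% of the energy
print(captured_fraction(0.5, 2))  # direct + two bounces: 87.5%
```

With a mid-grey albedo, a one-bounce bake already drops a quarter of the energy, which would plausibly read as "things look too dark"; bright interiors (high albedo) lose even more.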
Idk Infinite's MP, but if those levels are user made, they may have some restrictions to avoid potential performance issues. Similar issues in the SP campaign may have been addressed with manual tweaks not practical for user-generated content.
 

@JoeJ dude, fantastic reply. Really well done, I thought I knew all of this and yet I've also learned something new :)
 
Yeah, cosigned. That's a fantastic post. And I very much get your depiction of the sometimes vast difference between mirror reflection and matte/muddy/imperfect reflection of light and how that can look strange side by side. I can see that just through observation of the real world, even though I lack the physics/optics background that would allow me to understand it mathematically

I come at this more from the perspective of a visual artist who cares about lighting more in terms of cinematography, art direction and production design, and less in terms of physically accurate transport of light. Though I still care about the latter, for the freedom it will give artists once realtime raytraced GI is more and more the standard, and frankly for the fascinating physical properties of optics (if I ever went back to school and studied physics, I'd want to take optics first).

This very specific Halo Infinite case struck me as weird not because the physics weren't adding up, or because of any lack of fidelity to the actual simulation of light or their GI solution (it's one-bounce, sun-only GI, for the record), but because it seemed to be a mild failure of art direction related to what I thought was some performance trick. The very simple reality is the light intensity was probably set too low to really make a visual impression on the matte surfaces surrounding the glossy Forerunner structure. And the backseat art director in me thinks that if you're going to place a point light in this spot, it should be making a stronger impression on everything in its spherical radius. Which is what prompted me to think it wasn't there at all.

Anyway, thanks again. That is such a thorough and, crucially, understandable primer on the basics of light transport.

edit: and when I described the flat effect of said lighting in the OP I was thinking in terms of flattening the art direction and cinematography. As in how the deep shadows and underlit qualities of noir-ish lighting might not be strictly realistic presentation, especially when you consider what you can do with photographic exposure, but adds wonderful depth, contrast and three dimensionality to the physical space represented on screen
 
I come at this more from the perspective of a visual artist who cares about lighting more in terms of cinematography, art direction and production design, and less in terms of physically accurate transport of light. Though I still care about the latter for the freedom it will give artists once realtime raytraced GI is more and more the standard...
Curiously that might start hampering true art. Cinematography has to struggle with unwieldy physical set-ups to get the artistic lighting desired, balancing light placement and intensity. In CG, we can shade any surface any way we want without these hard limits. A unified GI solver starts to impose the same restrictions as those in photography. Pure RT may end up confined to photorealistic titles and other renderers get used for anything that wants to deviate from the physical universe. Although there are tricks that can be done in CG like invisible light sources, selective object lighting, and even negative lights.
 
Curiously that might start hampering true art. Cinematography has to struggle with unwieldy physical set-ups to get the artistic lighting desired, balancing light placement and intensity.
That goes both ways. In reality, you get GI with all its beautiful gradients and depth cues for free.
In games, you can make sure stuff isn't black with some fill lights. You can even create some mood, after watching Hollywood movies long enough to figure out how they do it.
But then you still need to create most of the impression of depth and distance with exhausting post processing in your games, a discomfort only acceptable because it's all we have.

A unified GI solver starts to impose the same restrictions as those in photography.
No, it doesn't. You can still create non-realistic materials if you want, you can still use toon shading and stylized art, and you can use different materials for GI than for visibility to cheat.
You can still do all the things which are not possible in the real world. But it looks better, feels more immersive and dynamic, and gives us those depth cues we need so badly.
 
Curiously that might start hampering true art. Cinematography has to struggle with unwieldy physical set-ups to get the artistic lighting desired, balancing light placement and intensity. In CG, we can shade any surface any way we want without these hard limits. A unified GI solver starts to impose the same restrictions as those in photography. Pure RT may end up confined to photorealistic titles and other renderers get used for anything that wants to deviate from the physical universe. Although there are tricks that can be done in CG like invisible light sources, selective object lighting, and even negative lights.

The real world operates on a single unified GI solver so CG should be able to at least match that. With CG you can do fake lights and also have the freedom to bend the laws of physics as needed.
 
I've once tried negative emission material for my GI solver.
I expected something cool, like the material sucking in light like a black hole \:D/
But then it looked just like spilled shit. <:(
 