What are contact shadows and dynamic radiosity?

shazam

Newcomer
I was looking at the Killzone 2 Develop Conference slides and at the end it said "Still a lot of features planned" and under that it said "ambient occlusion / contact shadows" and "dynamic radiosity".

-What are contact shadows? I've looked everywhere and I really can't get a straight answer from any site. What do they mean by "contact shadows"?

-What do they mean by "dynamic radiosity"? Does it mean real-time radiosity lighting? (If it's real-time, that's amazing!)

Please help me. We can be great friends.
 
Real-time radiosity is what you're thinking, but not how you're thinking it. It won't be a brute-force solution so much as some clever approximation. LBP has a very good lighting system with a 'secondary lighting' effect that looks convincing but isn't a proper radiosity engine.

Contact shadows, I think, refers to the occlusion cast by objects onto the ground, so there's a dark area around where they touch the floor. This is very important to making objects look solid and part of the scene instead of floating around. That's my guess anyway.
 
I was looking at the Killzone 2 Develop Conference slides and at the end it said "Still a lot of features planned" and under that it said "ambient occlusion / contact shadows" and "dynamic radiosity".

-What are contact shadows? I've looked everywhere and I really can't get a straight answer from any site. What do they mean by "contact shadows"?

-What do they mean by "dynamic radiosity"? Does it mean real-time radiosity lighting? (If it's real-time, that's amazing!)

Please help me. We can be great friends.

"Contact shadows" it's just another name for ambient occlusion.

By "dynamic radiosity" I think they refer to method of getting indirect lighting in deferred render: cast couple of random rays and place local lights in the geometry intersection points. This one was used in Stalker, although only in test code.
 
Contact shadows are a lot easier to fake than full AO. Basically a very blurry patch of shadow under the object where it meets the ground; it can be implemented with a simple alpha-mapped texture that moves under the object.
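The classic "blob shadow" version of this is just a dark alpha-mapped quad dropped onto the ground under the object. A minimal sketch of placing one (the constants and the function name here are made up for illustration):

```python
def blob_shadow_params(obj_pos, obj_radius, ground_y=0.0, max_height=2.0):
    """Place a soft 'blob' shadow decal under an object.
    Returns (center, size, alpha): the quad sits just above the ground
    below the object, and spreads out and fades as the object rises."""
    height = max(obj_pos[1] - ground_y, 0.0)
    t = min(height / max_height, 1.0)      # 0 on the ground, 1 at max_height
    alpha = (1.0 - t) * 0.6                # fade out with altitude
    size = obj_radius * (1.5 + t)          # soften/spread as the object lifts off
    center = (obj_pos[0], ground_y + 0.01, obj_pos[2])  # bias to avoid z-fighting
    return center, size, alpha
```

The quad itself would be textured with a radial blur falloff and rendered as a multiplicative decal; the whole thing is a handful of instructions per frame, which is why it was so common in older games.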
 
Yet strangely it's not common where it could make a big difference, at least in what I've seen. RnC would have benefited greatly from contact shadows IMO. As it is, scenery objects just look superimposed rather than grounded.
 
Contact shadows are a lot easier to fake than full AO. Basically a very blurry patch of shadow under the object where it meets the ground; it can be implemented with a simple alpha-mapped texture that moves under the object.

And for the walls? And for half a stair? This is a way too old-school and imperfect approach. My bets would rather be on a distance function on sphere-based primitives.
You could also hierarchically organise these primitives to provide feedback for the whole body, and on any surface. This is probably what the next SC does.
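For the sphere-primitive approach, there's a well-known analytic approximation for the occlusion a single sphere casts on a surface point: a solid-angle falloff term (r²/d²) times a cosine term. A sketch, assuming that approximation (this is not any particular engine's formula):

```python
import math

def sphere_occlusion(p, n, sphere_center, sphere_radius):
    """Approximate ambient occlusion at surface point `p` (normal `n`)
    cast by a single occluder sphere: solid angle (r^2/d^2) scaled by
    how directly the sphere sits above the surface."""
    to_c = [c - q for c, q in zip(sphere_center, p)]
    d2 = sum(c * c for c in to_c)
    d = math.sqrt(d2)
    if d <= sphere_radius:          # point inside the sphere: fully occluded
        return 1.0
    cos_term = max(sum(a * b for a, b in zip(n, to_c)) / d, 0.0)
    return (sphere_radius * sphere_radius / d2) * cos_term

def occlusion_from_spheres(p, n, spheres):
    """Combine several sphere occluders multiplicatively; summing the
    blockers instead would over-darken where spheres overlap."""
    visibility = 1.0
    for center, radius in spheres:
        visibility *= 1.0 - min(sphere_occlusion(p, n, center, radius), 1.0)
    return 1.0 - visibility
```

Approximating a character as a dozen or so spheres bound to the skeleton gives soft, animated contact darkening against any receiving surface, walls and stairs included, which is exactly the advantage over a ground-only blob texture.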
 
I'm only talking about what we've done ;) It was for tanks and trucks, some years ago.
 
My bets would rather be on a distance function on sphere based primitives.
How does that work? Is that a lighting thing, or a texturing thing? The thought popped into my head that you could have a dynamic shader on a surface that lowers the lighting values (diffuse/spec) based on the proximity of another object. So a character model pushed up against the wall would use both scene lighting (for the "normal" shadow) and the shader to give the appearance of occlusion on the wall, or on the character model too, for that matter.

Bear in mind I'm not a programmer in any way, I'm just a CG guy, so in the end, I have no idea what I'm talking about when it comes to real-time coding.. lol. :)

I saw an example a while back of a company that had shown a real-time radiosity engine, I assume by using dynamic scene lighting to duplicate reflected or ambient light. But the lack of shadows or occlusion threw the effect off for me, since I'm used to seeing occlusion alongside radiosity (the two go hand-in-hand in LightWave). In terms of personal preference, I'd rather see the occlusion than accurate reflected lighting, as it helps the object feel more solid and "real".
 
I assume he means a representation of the model made up of a collection of spheres.
IIRC there's a paper on a Microsoft developers website (I can't find it at the moment).
I also 'invented' the technique about 4-5 years ago :rolleyes:
 
From Killzone Forum:

motherh said:
Right guys, I went and talked to Michiel and he sent me some answers:

"Contact shadows are a form of ambient occlusion. These are the sort of shadows you get where object come near each other (hence the "contact" part). Image-based ambient occlusion is a hot topic in the graphics industry off-late. We are researching a number of techniques but we're unsure if we're going to put it into Killzone 2.

We are also researching dynamic radiosity which does indeed mean real-time radiosity lighting. We have got some ideas, but if it works it'll be so late that we can't put it into this game, as we're steaming ahead in full production.

The PlayStation 3 does definitely have the horsepower to do all this though. If it won't be in KZ2 it'll definitely be in other titles from us or other PS3 developers. There's a lot of performance in the PS3 left to be untapped..."

There you have it. Hope this helps, and please note, we are not certain these techniques will be in Killzone 2, so take this as information, not news.



Seb Downie - QA Manager - Guerrilla Games
 
How does that work? Is that a lighting thing, or a texturing thing? The thought popped into my head that you could have a dynamic shader on a surface that lowers the lighting values (diffuse/spec) based on the proximity of another object. So a character model pushed up against the wall would use both scene lighting (for the "normal" shadow) and the shader to give the appearance of occlusion on the wall, or on the character model too, for that matter.
That's pretty much what we do. As mentioned before, we use a few sphere primitives on the objects to define the regions. The center of the sphere is attached to the object by the same bone structure, so it'll move with animation and all, while the radius is basically defining the "softening" range. From the point of view of the shaders, we're really only concerned with the closest sphere.

BTW, in our case, we use it only to darken the ambient/indirect lighting component, not diffuse/spec. It becomes really important when you have characters walking around in areas which are already in shadow, and so the character has no visible shadow inside the larger shadow. When you darken diffuse/spec with it, it looks really awkward.
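Putting the two posts above together, a minimal sketch of that shading decision might look like the following. Assumptions: each occluder is a `(center, radius)` sphere whose center follows a bone, only the nearest sphere matters, and only the ambient term is attenuated. Everything here is a hypothetical illustration, not the actual shader:

```python
import math

def shade_with_contact_ao(ambient, diffuse, spec, surface_p, spheres):
    """Darken only the ambient/indirect term by proximity to the nearest
    occluder sphere, leaving direct diffuse/spec untouched, so characters
    standing inside a larger shadow still get grounding without the
    'awkward' look of darkened direct lighting."""
    # Find the closest sphere to this surface point (0 = at center, 1 = at rim).
    best = None
    for center, radius in spheres:
        d = math.dist(surface_p, center)
        t = min(max(d / radius, 0.0), 1.0)
        if best is None or t < best:
            best = t
    occ = 1.0 if best is None else best   # 1.0 means no darkening
    return [a * occ for a in ambient], diffuse, spec
```

In a real shader the per-pixel "find the closest sphere" search would be bounded (e.g. the few spheres overlapping the pixel's light volume), and the linear falloff would likely be replaced by something smoother.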
 
That's pretty much what we do. As mentioned before, we use a few sphere primitives on the objects to define the regions. The center of the sphere is attached to the object by the same bone structure, so it'll move with animation and all, while the radius is basically defining the "softening" range. From the point of view of the shaders, we're really only concerned with the closest sphere.
Neat, thanks. :) So I wasn't just talking out of my ass.. lol.

ShootMyMonkey said:
BTW, in our case, we use it only to darken the ambient/indirect lighting component, not diffuse/spec. It becomes really important when you have characters walking around in areas which are already in shadow, and so the character has no visible shadow inside the larger shadow. When you darken diffuse/spec with it, it looks really awkward.
I know what you mean about the shadow thing.. so many developers don't bother with that, so you get that odd shadow-in-a-shadow effect. Lately I've actually been quietly applauding developers that get it right. Most casual gamers probably don't care, but I'm a bit of a graphics whore.. hehe.

About the occlusion thing, though, given that there's likely still some ambient lighting even in a shadow, it would still darken slightly when an object is right up against something, as it's blocking even the dim ambient lighting. Or is that what you meant by adjusting the ambient lighting component? The result would probably be the same, I would imagine.. whichever is easiest to make the surface darker.

Bear in mind that when I speak of diffuse/spec, I'm looking at it from the point of view of texturing one of my CG models, and the basic way of making a surface "darker" by reducing its diffuse value (although the practical application is of course more complex, regarding how the surface reacts to different levels of light, etc). I only brought up specularity because of the possibility of still having a specular highlight when the light source that's casting it should be blocked by the object. Although now that I think about it, I haven't really seen that happen in a game. Then again, I haven't really been looking for it, so if it's there it's never enough to draw my eye, and devs haven't been using complex specular shading very long. In CG, we generally don't worry about render time.. so while the principle may be the same as I'm used to, the implementation is very different.



Going back to the radiosity thing, how is that usually implemented? In CG, if we need to maintain that level of lighting (i.e. drawn from a HDR backdrop image), but we need to cut down unrealistic render times, there are some automated methods that can extract a lighting rig from a HDR image. So we cut down render times by using an array of practical lights rather than full-bore radiosity. I remember thinking a while back that games could do something similar by sampling the environment around the object in question and generating a light array around it to simulate the effect of HDR illumination and reflected light. Of course, working that into an existing lighting scenario would likely be a huge pain in the ass.
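The "extract a lighting rig from an HDR image" step can be approximated very crudely by picking the brightest texels of a small latitude-longitude environment map and turning each into a directional light. This is a stand-in for proper median-cut style extraction; the map format and the solid-angle weighting here are assumptions for the sketch:

```python
import math

def lights_from_env_map(env, num_lights=4):
    """Turn a tiny latitude-longitude HDR map (rows x cols of RGB tuples)
    into a few directional lights by picking the brightest texels,
    weighting each by sin(latitude) so texels near the poles (which cover
    less solid angle) don't dominate."""
    rows, cols = len(env), len(env[0])
    samples = []
    for i in range(rows):
        theta = math.pi * (i + 0.5) / rows          # latitude
        sa = math.sin(theta)                        # solid-angle weight
        for j in range(cols):
            r, g, b = env[i][j]
            lum = (0.2126 * r + 0.7152 * g + 0.0722 * b) * sa
            phi = 2 * math.pi * (j + 0.5) / cols    # longitude
            direction = (math.sin(theta) * math.cos(phi),
                         math.cos(theta),
                         math.sin(theta) * math.sin(phi))
            samples.append((lum, direction, (r, g, b)))
    samples.sort(key=lambda s: -s[0])
    return [(d, c) for _, d, c in samples[:num_lights]]
```

A proper median-cut algorithm would partition the map into regions of equal total energy instead of just taking the top texels, but the principle (a handful of practical lights standing in for the full environment) is the same.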

Reminds me of another train of thought I had a while back, somewhat off-topic, regarding reflections. For example, cars in Gran Turismo reflect their environment, but not the other cars. I've always known the "why"... in LW, I just turn on raytracing. Great results, noticeable increase in render time, so obviously not a feasible solution for a real-time engine. Originally, I had thought that the system was generating low-res environmental reflection maps on-the-fly, but that probably wasn't feasible on older hardware. One of the videos on GT5 appears to show a very low-res mesh for the background.. I'm assuming that this mesh is invisible to the gamer, but is simply there to be reflected by the car (bear in mind that I've only recently become aware that there are often plenty of objects in a game environment that aren't rendered, for collisions, etc). But I still wonder whether it would be possible to generate a full reflection map on-the-fly, that would include everything around the car? Sort of a low-res fisheye/chrome-ball image based on the position of the object, that's then applied as an environmental reflection map. I'm sure I'm not the first to think of stuff like this. :)
 
About the occlusion thing, though, given that there's likely still some ambient lighting even in a shadow, it would still darken slightly when an object is right up against something, as it's blocking even the dim ambient lighting. Or is that what you meant by adjusting the ambient lighting component? The result would probably be the same, I would imagine.. whichever is easiest to make the surface darker.
Yeah, that's basically what I meant -- normally ambient/indirect illumination would be ignored by shadows. For this particular one, we also attenuate that lighting component; we darken it and it basically looks like soft AO projected down against the surface. It's not as noticeable when it falls against a surface actually receiving direct light, but then you would get traditional shadows falling in those areas as well anyway.

Bear in mind that when I speak of diffuse/spec, I'm looking at it from the point of view of texturing one of my CG models, and the basic way of making a surface "darker" by reducing its diffuse value (although the practical application is of course more complex, regarding how the surface reacts to different levels of light, etc).
Ah... then we were talking about different diffuse components. Sounds like you were referring to the diffuse albedo, whereas I was talking about diffuse illumination when receiving direct light. Since the ambient lighting is also modulated by the diffuse albedo, darkening the ambient will also darken the appearance of the albedo since you're taking away part of the illumination.

Going back to the radiosity thing, how is that usually implemented? In CG, if we need to maintain that level of lighting (i.e. drawn from a HDR backdrop image), but we need to cut down unrealistic render times, there are some automated methods that can extract a lighting rig from a HDR image. So we cut down render times by using an array of practical lights rather than full-bore radiosity. I remember thinking a while back that games could do something similar by sampling the environment around the object in question and generating a light array around it to simulate the effect of HDR illumination and reflected light.
Well, image-based lighting has been done in realtime for some time, but the thing they seem to be talking about is dynamic, and most likely also localized to some extent (in that they'd ignore stuff way out of range of the visible, and probably also not bother trying to be so explicit for objects in the distance). Even so, any method to try and do dynamic indirect lighting is going to rely on extremely simplified and/or undersampled representations of the scene and the light interactions between elements.

But I still wonder whether it would be possible to generate a full reflection map on-the-fly, that would include everything around the car? Sort of a low-res fisheye/chrome-ball image based on the position of the object, that's then applied as an environmental reflection map.
Number of render passes is a nasty point here. How many objects are going to have reflection maps on them, and then rendering a cubemap for each one means sending everything down n cars * 6 passes + the main render pass itself, and that's not including shadowmap passes and what not.
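The back-of-envelope arithmetic in that point is worth spelling out, since the cost scales linearly with reflective objects (the helper name is just for illustration):

```python
def reflection_pass_count(num_cars, shadow_passes=0):
    """One dynamic cubemap per reflective car costs 6 face renders,
    plus the main pass itself, plus any shadow-map passes."""
    return num_cars * 6 + 1 + shadow_passes
```

So a 16-car grid with per-car cubemaps would mean 97 scene renders per frame before shadows, which is why real games re-render cubemaps at reduced resolution, at reduced rate, or share one cubemap among several cars.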

Doing a fisheye/chrome ball type of projection doesn't work at all, because unlike raytraced renders which effectively give you a perspective projection for every subpixel sample, with hardware rasterization, you only get projection at a per vertex level, and linear interpolation in between. That means that only the vertices will be at the correct positions, and they'll be connected by straight edges and flat polygons rather than edges and surfaces that curve according to the shape of the projection. So in practice, the only form of reflection map that can be used dynamically (if that) is a cubemap since you can use affine cameras on each face and get comparatively less distortion and no "bulging-between-the-vertices." That's one flaw in rasterization that will probably never go away at any point.
 