Some basic but specific rendering questions

Ijnak

OK, I just wanted to ask a few questions regarding rasterization.

1. This one is a little hard to describe. A lot of modern games have an effect where an object has a very specific 'sheen'. Just as an example, in the first level of Call of Duty 4, some of the cargo containers have a shiny effect that changes depending on the angle you view the box from. The actual geometric surfaces are flat, yet they give off a bumpy, somewhat curved look. I was wondering if this is a pre-baked effect displayed similarly to cube mapping, or if it's some real-time effect that I don't understand.

2. In general, why do dynamic objects add more strain than static ones? Is it just the extra processing and memory needed for the extra polygons and the fact that you have to keep track of the object separately, or do dynamic objects add extra costs I don't understand? I'm specifically thinking of that "Lightsmark" demo that was posted on these forums: why would adding, for example, a bunch of floating objects make the demo that much slower?

3. When taking multiple render passes of a scene, does the geometry have to be transformed each time? If a game has an object with 10k polygons but does 4 passes on that object (for specularity, for example), does it actually render 50k polygons for that object in that frame alone?

4. When referring to shadow mapping, and specifically in deferred rendering, I saw someone mention something about "rendering the shadows directly to the textures". Was he just saying that the GPU was "rendering to texture" and making a sort of frame buffer of the shadows (which is how I understand deferred rendering works), or can the shadows be placed on each individual texture in texture memory? Wouldn't the latter take tons of time to modify each texture? Anyway, I think I just misunderstood what he meant.

Thank you for your time.
 
Oops, I noticed a few small clarifications I should make. I don't think I can edit posts yet...

2. Not just 'floating' boxes but moving ones (so they're dynamic).

3. That should say '40k' (4 × 10k = 40k).
 
1. Some kind of normal mapping or bump mapping, most likely, though I haven't played COD4 so I don't know exactly what you're talking about. When you do lighting at each pixel (rather than only at vertices), even though the rendered geometry is flat, there can be lighting variation across it based on texture data (see the first sketch at the end of this post).

2. It's more accurate to say that static objects cause less strain than dynamic ones. This is because anything static can be precomputed, so on each frame you only have to display a result you've already stored. For dynamic objects you tend to have to compute things on the fly, since you don't know all the relationships (e.g. the relative position of the light) until just before you render. So static stuff takes just as much work; it's just that you can do a lot of it "off the clock" (second sketch below).

3. Typically, yes. Stream output in DX10 lets you do the transform just once, at the cost of some memory (third sketch below).
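
For (1), here is a minimal CPU-side sketch of per-pixel lighting with a normal map. It's plain C++ rather than an actual shader, and the "normal map" texels are invented, but it shows why a flat polygon can still shade as if it were bumpy: the normal used for lighting varies per pixel.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    // Hypothetical 4-texel strip of a normal map: each texel perturbs the
    // normal even though the underlying polygon is perfectly flat.
    Vec3 normalMap[4] = {
        {  0.0f, 0.0f, 1.00f }, // unperturbed (straight up)
        {  0.3f, 0.0f, 0.95f }, // tilted right
        { -0.3f, 0.0f, 0.95f }, // tilted left
        {  0.0f, 0.3f, 0.95f }, // tilted forward
    };

    Vec3 lightDir = normalize({ 0.5f, 0.0f, 1.0f });

    // Per-pixel lighting: the diffuse term varies across the flat surface
    // because the normal varies, which is what reads as bumps/curvature.
    for (int texel = 0; texel < 4; ++texel) {
        Vec3 n = normalize(normalMap[texel]);
        float diffuse = std::fmax(0.0f, dot(n, lightDir));
        std::printf("texel %d: diffuse = %.2f\n", texel, diffuse);
    }
    return 0;
}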
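
For (2), a toy illustration of the "off the clock" point (the scene and numbers are made up): the static object's lighting is computed once up front and merely reused, while the dynamic object has to redo the same work every frame because its relationship to the light keeps changing.

#include <cstdio>

// Stand-in for real per-frame lighting work.
static float computeLighting(float objectX, float lightX) {
    float d = objectX - lightX;
    return 1.0f / (1.0f + d * d); // simple distance falloff
}

int main() {
    const float lightX = 0.0f;

    // Static object: its relationship to the light never changes, so the
    // result can be precomputed once (at load time, or offline).
    const float staticX = 2.0f;
    const float staticLighting = computeLighting(staticX, lightX);

    float dynamicX = -3.0f;
    for (int frame = 0; frame < 3; ++frame) {
        dynamicX += 1.0f; // the object moves, invalidating any cached result

        // Dynamic object: must be recomputed every frame.
        float dynamicLighting = computeLighting(dynamicX, lightX);

        std::printf("frame %d: static %.3f (cached), dynamic %.3f (recomputed)\n",
                    frame, staticLighting, dynamicLighting);
    }
    return 0;
}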
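
For (3), this is not real D3D10 code, just a sketch of the trade-off stream output makes (the pass count and vertex data are invented): transform the vertices once into a buffer, pay for the extra memory, and let every later pass read the already-transformed data instead of re-running the transform.

#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

// Stand-in for the vertex transform (world/view/projection and so on).
static Vertex transformVertex(const Vertex& v) {
    return { v.x * 2.0f, v.y * 2.0f, v.z * 2.0f };
}

int main() {
    std::vector<Vertex> mesh = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
    const int passCount = 4;

    // Without stream-out: every pass transforms the geometry again.
    int naiveTransforms = 0;
    for (int pass = 0; pass < passCount; ++pass)
        for (const Vertex& v : mesh) { (void)transformVertex(v); ++naiveTransforms; }

    // With stream-out-style caching: transform once into a buffer (extra
    // memory), then all passes reuse the stored results.
    std::vector<Vertex> transformed;
    transformed.reserve(mesh.size());
    for (const Vertex& v : mesh) transformed.push_back(transformVertex(v));
    int cachedTransforms = static_cast<int>(transformed.size());
    for (int pass = 0; pass < passCount; ++pass)
        for (const Vertex& v : transformed) { (void)v; /* shade using v */ }

    std::printf("vertex transforms: %d naive vs %d cached\n",
                naiveTransforms, cachedTransforms);
    return 0;
}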
 
Thank you very much for your answers.
 
4. Shadow mapping works that way. You render the scene from the light's point of view into a texture and then project this texture onto the scene to decide which pixels the light can actually see.

None of these questions really regards rasterisation. Why are you asking, anyway? Is it just interest? Half-knowledge is seldom useful.
 
There are a couple of things one could take from that question. One is what Zengar mentioned: shadow mapping is effectively a texture projection, and the shadow map is basically a render-to-texture Z-buffer that stores the distance of the first hit from the light; it's then used for comparisons against a projected distance for an arbitrary point on screen. And yes, each shadow map takes up correspondingly more texture memory.
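
A toy version of that comparison (all numbers invented, and collapsed to a tiny 1D "shadow map" for clarity): the map stores the first-hit distance from the light, and a shaded point is in shadow if its own distance from the light is greater than what the map recorded along that direction.

#include <cstdio>

int main() {
    // Result of rendering from the light: for each of 4 directions out of the
    // light, the distance to the first surface hit (the light's Z-buffer).
    float shadowMap[4] = { 5.0f, 3.0f, 7.0f, 4.0f };

    // Points being shaded, already projected into the light's space: which
    // map texel they fall into, and how far they are from the light.
    struct Sample { int texel; float distanceFromLight; };
    Sample points[3] = {
        { 0, 4.5f }, // closer than the stored 5.0 -> lit
        { 1, 6.0f }, // farther than the stored 3.0 -> something blocks it
        { 3, 4.0f }, // roughly equal -> lit (the bias below prevents acne)
    };

    const float bias = 0.05f; // avoids self-shadowing from limited precision

    for (const Sample& p : points) {
        bool inShadow = p.distanceFromLight > shadowMap[p.texel] + bias;
        std::printf("texel %d, distance %.1f: %s\n",
                    p.texel, p.distanceFromLight, inShadow ? "shadowed" : "lit");
    }
    return 0;
}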

Then there's the thing about deferred shadowing, which nAo has mentioned a few times before, where you do your shadow mapping as full-screen pass(es) over the scene and then apply the result simply as a multiplier when actually doing your lighting/main render pass.
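
A sketch of that split (screen size and values invented): one pass fills a screen-sized shadow-factor buffer, and the later lighting pass never touches the shadow map at all; it just multiplies by the stored factor.

#include <cstdio>

int main() {
    const int pixelCount = 4; // a 4-pixel "screen" for brevity

    // Pass 1: deferred shadowing. For each screen pixel, do the shadow-map
    // test(s) and store only the resulting visibility factor (0 = shadowed).
    float shadowMask[pixelCount] = { 1.0f, 1.0f, 0.0f, 0.25f };

    // Pass 2: main lighting. Whatever lighting is computed per pixel is
    // simply multiplied by the mask; the shadow maps are no longer needed.
    float lighting[pixelCount] = { 0.8f, 0.6f, 0.9f, 0.7f };

    for (int i = 0; i < pixelCount; ++i) {
        float finalColor = lighting[i] * shadowMask[i];
        std::printf("pixel %d: %.2f * %.2f = %.2f\n",
                    i, lighting[i], shadowMask[i], finalColor);
    }
    return 0;
}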

Then there's shadowing and lighting in texture space for individual objects, where instead of projecting shadow maps onto the scene, you take the sampled positions of points in world space and project shadow maps (and usually illumination as well) onto that object in the object's own texture space. In doing so, you allow some texture-space operations and filtering to be done decoupled from any scene/geometric complexity. This isn't really done very often, but the prime example of a use case for this would be the many skin shaders where you light and shadow in texture space and blur that "lightmap" to fake the diffusion approximation.
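
Roughly what that looks like, shrunk down to a 1D texture-space "lightmap" with invented values: light and shadow the object in its own UV space, blur that result (the cheap stand-in for diffusion), then use the blurred map when actually shading.

#include <cstdio>

int main() {
    const int texels = 6;

    // Step 1: light/shadow the object in its own texture space.
    // A hard shadow edge falls across the middle of the map.
    float lightMap[texels] = { 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f };

    // Step 2: blur the texture-space lightmap. Because this happens in UV
    // space, the cost is independent of scene or geometric complexity.
    float blurred[texels];
    for (int i = 0; i < texels; ++i) {
        float sum = 0.0f;
        int count = 0;
        for (int k = -1; k <= 1; ++k) {
            int j = i + k;
            if (j >= 0 && j < texels) { sum += lightMap[j]; ++count; }
        }
        blurred[i] = sum / count; // softened edge ~ faked subsurface diffusion
    }

    // Step 3: the final shading pass samples the blurred map like any texture.
    for (int i = 0; i < texels; ++i)
        std::printf("texel %d: hard %.2f -> soft %.2f\n", i, lightMap[i], blurred[i]);
    return 0;
}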
 
1. To me this effect sounds like basic normal mapping; it might be parallax mapping as well. The shiny, view-dependent part is called a specular highlight (see the first sketch at the end of this post).

2. Dynamic objects cause more strain because most graphics engines store objects in an (at least partially) precalculated data structure. Games must render the frame 60 times a second, and the game world might contain hundreds of thousands of objects, so it's critical that the engine gets the list of potentially visible objects very quickly. To boost performance, lots of visibility data is precalculated, and you cannot precalculate visibility data for moving objects (second sketch below). In mostly static scenes, lighting and shadowing can also be almost completely precalculated, which boosts performance a lot (and was a heavily used technique in last-generation games). Current games still precalculate lots of lighting data for static objects, while lighting for moving objects is calculated in real time.

3. The geometry must be transformed again, unless you store the transformed geometry in graphics card memory (render to VB, streamout, etc.). These techniques aren't fully supported by the DirectX 9 API, and because of this most developers haven't utilized them yet. Deferred rendering completely solves the problem of transforming the geometry multiple times: all shading is done in image-space buffers, and the geometry is not used at all in the shading process (third sketch below).

4. Rendering shadows directly to textures sounds like lightmapping to me. Lightmaps are usually rendered during the content production process, and many new engines still use lightmapping. For example, the static terrain shadows cast by the landscape in id Software's megatexture technology are precalculated (and/or drawn) into the huge terrain texture (fourth sketch below).
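
For (1), a small sketch of a Blinn-Phong style specular highlight (the vectors and exponent are arbitrary). The surface and light stay fixed while only the view direction changes, yet the highlight brightness changes, which is exactly the "looks different depending on the angle you view it from" behaviour.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    Vec3 normal   = { 0.0f, 0.0f, 1.0f };            // flat surface facing +Z
    Vec3 lightDir = normalize({ 0.3f, 0.0f, 1.0f });
    const float shininess = 32.0f;

    // Same surface, same light, three different camera directions.
    Vec3 viewDirs[3] = { { 0.3f, 0.0f, 1.0f }, { 0.0f, 0.0f, 1.0f }, { -0.8f, 0.0f, 1.0f } };

    for (int i = 0; i < 3; ++i) {
        Vec3 v = normalize(viewDirs[i]);
        // Blinn-Phong: half-vector between the light and view directions.
        Vec3 h = normalize({ v.x + lightDir.x, v.y + lightDir.y, v.z + lightDir.z });
        float specular = std::pow(std::fmax(0.0f, dot(normal, h)), shininess);
        std::printf("view %d: specular = %.3f\n", i, specular);
    }
    return 0;
}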
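
For (2), a very reduced sketch of why precomputed visibility only helps static things (the cell layout and object positions are made up): static objects can sit in per-cell lists built offline, while anything that moves has to be tested against the view again every frame.

#include <cmath>
#include <cstdio>
#include <vector>

struct DynamicObject { float x; };

// Stand-in for a real frustum/occlusion test done at runtime.
static bool visibleFromCamera(const DynamicObject& o, float cameraX) {
    return std::fabs(o.x - cameraX) < 10.0f;
}

int main() {
    // Precomputed offline: for each camera "cell", the IDs of the static
    // objects that could possibly be seen from it. Built once, never per frame.
    std::vector<std::vector<int>> visibleStaticPerCell = {
        { 0, 1, 4 }, // cell 0
        { 2, 3 },    // cell 1
    };

    std::vector<DynamicObject> dynamicObjects = { { 1.0f }, { 25.0f }, { -4.0f } };

    int cameraCell = 0;
    float cameraX = 0.0f;

    // Per frame, the static list is just a lookup...
    std::printf("static objects visible (precomputed): %zu\n",
                visibleStaticPerCell[cameraCell].size());

    // ...while every dynamic object must be tested again each frame.
    int dynamicVisible = 0;
    for (const DynamicObject& o : dynamicObjects)
        if (visibleFromCamera(o, cameraX)) ++dynamicVisible;
    std::printf("dynamic objects visible (tested this frame): %d\n", dynamicVisible);
    return 0;
}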
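
For (3), a rough sketch of what "all shading is done in image-space buffers" means, using a two-pixel G-buffer with invented contents: once the geometry pass has filled the per-pixel attribute buffers, the lighting loop below never sees a polygon again.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    const int pixels = 2;

    // G-buffer: filled once by the geometry pass (not shown). After that,
    // the mesh data is never needed again for shading.
    Vec3 gbufferNormal[pixels] = { { 0.0f, 0.0f, 1.0f }, { 0.707f, 0.0f, 0.707f } };
    Vec3 gbufferAlbedo[pixels] = { { 0.8f, 0.2f, 0.2f }, { 0.2f, 0.8f, 0.2f } };

    Vec3 lightDir = { 0.0f, 0.0f, 1.0f };

    // Lighting pass: purely image-space, one evaluation per pixel per light,
    // regardless of how many polygons produced those pixels.
    for (int i = 0; i < pixels; ++i) {
        float ndotl = std::fmax(0.0f, dot(gbufferNormal[i], lightDir));
        std::printf("pixel %d: (%.2f, %.2f, %.2f)\n", i,
                    gbufferAlbedo[i].x * ndotl,
                    gbufferAlbedo[i].y * ndotl,
                    gbufferAlbedo[i].z * ndotl);
    }
    return 0;
}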
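
For (4), a bare-bones illustration of the split between bake time and run time (the "bake" here is just a stand-in computation): the expensive shadow/light evaluation happens once while the content is being built, and the game only ever reads the stored texels.

#include <cstdio>

// Stand-in for the expensive offline work (ray casts, radiosity, and so on).
static float bakeLightingForTexel(int texel) {
    return (texel % 3 == 0) ? 0.2f : 1.0f; // pretend every third texel is shadowed
}

int main() {
    const int texels = 6;
    float lightmap[texels];

    // Content-production time: bake the lightmap once and ship it with the level.
    for (int i = 0; i < texels; ++i)
        lightmap[i] = bakeLightingForTexel(i);

    // Run time: shading the static surface is just a texture fetch and a multiply.
    const float albedo = 0.75f;
    for (int i = 0; i < texels; ++i)
        std::printf("texel %d: %.2f * %.2f = %.2f\n",
                    i, albedo, lightmap[i], albedo * lightmap[i]);
    return 0;
}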
 