OK, I just wanted to ask a few questions regarding rasterization.
1. This one is a little hard to describe. A lot of modern games have an effect where an object has a very specific 'sheen'. As an example, in the first level of Call of Duty 4, some of the cargo containers have a shiny effect that changes depending on the angle you view the box from. The actual geometric surfaces are flat, yet they give off a bumpy, somewhat curved look. I was wondering whether this is a pre-baked effect displayed similarly to cube mapping, or some real-time effect that I don't understand.
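To make question 1 more concrete, here's a tiny sketch of what I imagine is going on: a specular term that depends on the view direction, so the same flat surface brightens or dims as the camera moves. This is just standard Blinn-Phong math in Python as a guess on my part, not a claim about what Call of Duty 4 actually does (the bumpy look presumably also needs a normal map perturbing the per-pixel normal, which I've left out):

```python
import math

def blinn_phong_specular(normal, view_dir, light_dir, shininess=32.0):
    """Specular intensity for unit-length normal, view, and light vectors."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    # Half vector between the view and light directions.
    h = normalize(tuple(v + l for v, l in zip(view_dir, light_dir)))
    n_dot_h = max(0.0, sum(n * c for n, c in zip(normal, h)))
    return n_dot_h ** shininess

# Same flat surface (normal straight up), same light, two camera angles:
normal = (0.0, 1.0, 0.0)
light = (0.0, 1.0, 0.0)
head_on = blinn_phong_specular(normal, (0.0, 1.0, 0.0), light)
grazing = blinn_phong_specular(normal, (0.7071, 0.7071, 0.0), light)
# head_on is much brighter than grazing: the sheen changes purely
# because the view direction changed, with no change to the geometry.
```

If that's roughly what's happening, then the view dependence alone would be computed in real time even if the reflection content itself were pre-baked.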
2. In general, why do dynamic objects add more strain than static ones? Is it just the extra processing and memory needed for the extra polygons, plus the fact that you have to keep track of each object separately, or do dynamic objects add costs I don't understand? I'm specifically thinking of that "Lightsmark" demo that was posted on these forums: why would adding, say, a bunch of floating objects make the demo that much slower?
3. When taking multiple render passes of a scene, does the geometry have to be transformed each time? If a game has an object with 10k polygons but does 4 passes on that object (for specularity, for example), does it actually render 40k polygons for that object in that frame alone?
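Just to spell out the arithmetic I have in mind for question 3 (assuming, and this is exactly the assumption I'm asking about, that every pass re-submits the whole mesh through the vertex pipeline):

```python
# Back-of-envelope cost if each pass re-transforms the geometry.
polygons_per_object = 10_000
passes = 4  # e.g. depth, diffuse, specular, shadow -- my guess at a pass list

polygons_processed = polygons_per_object * passes
print(polygons_processed)  # 40000 polygons transformed for one object, one frame
```

So if nothing is cached between passes, the transform cost would scale linearly with the pass count, which is what I'm trying to confirm or rule out.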
4. Regarding shadow mapping, specifically in deferred rendering, I saw someone mention "rendering the shadows directly to the textures". Was he just saying that the GPU was rendering to a texture, building a sort of frame buffer of the shadows (which is how I understand deferred rendering works)? Or can the shadows actually be placed on each individual texture in texture memory? Wouldn't the latter take tons of time, modifying every texture? Anyway, I may have just misunderstood what he meant.
Thank you for your time.