Carmack on low polies on models and other things

Kristof said:
JC said:
Several hardware vendors have poorly targeted their control logic and memory interfaces under the assumption that high texture counts will be used on the bulk of the pixels. While stencil shadow volumes with zero textures are an extreme case, almost every game of note does a lot of single texture passes for blended effects.

Interesting... keep an eye on the PowerVR download pages; we should be able to illustrate the impact of increased stencil usage soon :eek:

I would guess he's talking about trends like 4 texture engines per pipe and generally optimizing for multitexturing at the expense of single or no texturing. If a lot of time is spent rendering shadow volumes then those costly texture engines are sitting idle.
 
If they were indeed demoing with FSAA turned on (even if only 2x) then I am even more impressed with the current performance of the [unoptimized] engine (and beta R300).

Not to mention the clarity of the IQ... simply amazing, and much better than the current SV implementation.

Reverend...

Sorry for the tone; it just seems like people *outside* of Id have no way of telling you how fast it was, or which GF4 they used, except for information you may have gotten from someone at Nvidia. However, their info seems to clearly contradict what JC said about the GF4 part he used... Just looking at the pics, the filtering and FSAA is simply stunning. I don't see any shortcuts in there...

Everyone else....

Sorry, somehow I missed his comments regarding Parhelia...
 
Interesting... keep an eye on the PowerVR download pages; we should be able to illustrate the impact of increased stencil usage soon

Quake III grinds to a halt on my Evil Kyro when stencil buffered shadows are enabled. I'm a little surprised, considering the cool stencil demo that runs like silk.
 
Dang, too late. I would've liked to know if he's still on target for 10x7 on a GF3 at 30fps, and for what quality settings that was forecast. Not very important, but interesting from the POV of whether the engine is progressing as planned.
 
Jerry Cornelius said:
Interesting... keep an eye on the PowerVR download pages; we should be able to illustrate the impact of increased stencil usage soon

Quake III grinds to a halt on my Evil Kyro when stencil buffered shadows are enabled. I'm a little surprised, considering the cool stencil demo that runs like silk.

PC Chen (or was it pascal? Actually, I think it WAS Pascal) and I still have a friendly gentleman's wager as to which card, a Kyro 2 or a GF3, will be faster in Doom3. Don't think I've forgotten!! ;)
 
GPSnoopy said:
Two questions I'd like to ask:
1. Is there any technique other than the stencil buffer used for shadows? From the movies/screenshots it looks like the shadows in some parts of the maps aren't stencil shadows but shadow maps. (Or maybe I just dreamed it.)
Were they dynamic at all? It'd be easy enough to switch lights on and off with multiple 'light maps' (generated from radiosity pre-processing), but moving them around would be a pain. Using shadow maps (as invented by Williams, who also invented MIP mapping) might be doable, but you'd have to spend extra rendering time generating each map from the light's perspective, and I'm not even sure the hardware pixel shaders have the necessary precision to prevent errors (aliasing might also be prevalent).
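Rough sketch of what that per-light shadow-map lookup amounts to (toy CPU-side C++, all names mine, just to show where the extra render and the precision/bias worry come in):

```cpp
#include <vector>

// A toy, CPU-side illustration of the Williams shadow-map test mentioned
// above. "depth" is assumed to have been filled by an extra render of the
// scene from the light's point of view -- that extra render, per light,
// per frame, is the cost referred to in the post.
struct ShadowMap {
    int width = 0, height = 0;
    std::vector<float> depth;                 // depth as seen from the light
    float at(int x, int y) const { return depth[y * width + x]; }
};

// Given a surface point already projected into the light's image space
// (pixel coords lx, ly and depth along the light direction), decide whether
// it is shadowed. The bias term is the precision fudge that leads to the
// aliasing / "shadow acne" concerns.
bool inShadow(const ShadowMap& map, int lx, int ly,
              float depthFromLight, float bias = 0.005f) {
    if (lx < 0 || ly < 0 || lx >= map.width || ly >= map.height)
        return false;                         // outside the map: treat as lit
    return depthFromLight > map.at(lx, ly) + bias;
}
```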
 
Humus said:
Quite obviously they used 2x FSAA, as shown in this zoomed image:

It looks like 4x ordered grid in other places; it's hard to tell, though, with the jpeg artefacts. It is doing the shadows as well, so is it still supersampling?
 
Well, I only see two gradient levels on the black thing on the left: black and mid-grey.
Odd that you found jpeg artefacts in a .png file ;) ... which, btw, is derived from an uncompressed .tga.
 
Thanks :)

I'm sticking with 4x OGAA. There are two single-pixel 1/4 and 3/4 shades in the picture you posted, but here are some more obvious ones:

doom3aa1.jpg

doom3aa3.jpg
doom3aa2.jpg


Even more impressive if it's doing 4xAA :)
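For what it's worth, the shade-counting argument is easy to check: N ordered-grid samples per pixel can only give N+1 distinct edge shades, so seeing 1/4 and 3/4 greys rules out 2x. A throwaway snippet, nothing Doom3-specific:

```cpp
#include <cstdio>

// With N supersamples per pixel, an edge crossing the pixel can cover
// 0..N of them, giving N+1 possible shades. 2x -> {0, 0.5, 1};
// 4x -> {0, 0.25, 0.5, 0.75, 1}, which is what 1/4 and 3/4 greys imply.
int main() {
    const int modes[] = {2, 4};
    for (int samples : modes) {
        std::printf("%dx AA edge shades:", samples);
        for (int covered = 0; covered <= samples; ++covered)
            std::printf(" %.2f", static_cast<float>(covered) / samples);
        std::printf("\n");
    }
    return 0;
}
```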
 
I realize that I might be slightly off-topic here, but I've been trying to collect information on Doom3 technology and realized that some of the stuff from the "old" B3D boards is lost, at least for me. I don't have any links to the old site, so I don't know if it's still online, but I suppose it's not. Could anyone help me with this? I'm particularly interested in what the different passes contain and how each frame is drawn, but every bit of info is welcome :)
 
Why do some people still use the terms 'Dynamic Lighting' or 'Dynamic Shadows' when they talk about Doom 3 graphics? AFAIK Carmack has never described anything as just dynamic; he used the term 'Unified Lighting' instead.
 
Because, as seen in all the Doom 3 demonstrations, lighting and shadows change (hence the dynamic) in realtime. Perhaps it should be called realtime dynamic shadows and lighting?
 
Thanks Cybamerc, after killing a few cookies I've managed to enter. FYI, here's the info I was looking for:

In the interview, Carmack talks about the doom3 rendering techniques:

It's a generalized unified lights/shadows rendering technique.

The first pass lays down the basic physical properties of the object. (Presumably color. And also dp3 bump mapping, I guess.)

The second pass lays down an invisible shadow volume in the stencil buffer.

The third pass paints the light onto the object. (Using projected textures.)

Passes two and three are repeated for each set of non-overlapping lights. (For example, you could design a scene with 1 main light and lots of little lights. If the little lights don't overlap each other, but do overlap the main light, then you could render the whole scene in five passes (1 + 2 * 2).)
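Just to make that pass structure concrete, here's roughly what the loop looks like in OpenGL-flavoured C++. The Light type and the draw* helpers are my own stand-ins, not id's code:

```cpp
#include <GL/gl.h>
#include <vector>

// Hypothetical stand-ins for the real renderer -- names are mine, not id's.
struct Light { /* position, radius, projected light texture, ... */ };

void drawBasePass()                    { /* pass 1: surface properties + depth */ }
void drawShadowVolumes(const Light&)   { /* pass 2: extruded volumes, color off */ }
void drawLitSurfaces(const Light&)     { /* pass 3: additive light contribution */ }

void renderFrame(const std::vector<Light>& lights) {
    // Pass 1: lay down the basic physical properties (and the depth buffer).
    drawBasePass();

    // Passes 2 and 3 repeat for each light (or each set of non-overlapping
    // lights that can share a stencil pass): 1 + 2 * N passes in total.
    for (const Light& light : lights) {
        glClear(GL_STENCIL_BUFFER_BIT);     // each light needs a fresh count
        glEnable(GL_STENCIL_TEST);

        drawShadowVolumes(light);           // fills the stencil, draws nothing visible

        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);        // light contributions add up
        drawLitSurfaces(light);             // only where the stencil says "unshadowed"

        glDisable(GL_BLEND);
        glDisable(GL_STENCIL_TEST);
    }
}
```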

The second pass requires generating the shadow volumes, which can be done either on the CPU or the GPU. Using the CPU is slightly faster, but of course it eats up the whole CPU.
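For the CPU path, the usual trick (and presumably roughly what the engine has to do) is to find each mesh's silhouette edges with respect to the light and extrude them away from it. A rough sketch with a made-up mesh representation:

```cpp
#include <vector>

struct Vec3     { float x, y, z; };
struct Triangle { int v[3]; Vec3 faceNormal; Vec3 centroid; };
struct Edge     { int v0, v1; int tri0, tri1; };  // endpoints + the two triangles sharing the edge

// True if the triangle faces the (point) light.
static bool facesLight(const Triangle& t, const Vec3& lightPos) {
    Vec3 toLight = { lightPos.x - t.centroid.x,
                     lightPos.y - t.centroid.y,
                     lightPos.z - t.centroid.z };
    return t.faceNormal.x * toLight.x +
           t.faceNormal.y * toLight.y +
           t.faceNormal.z * toLight.z > 0.0f;
}

// A silhouette edge is one whose two adjacent triangles disagree about facing
// the light; those edges get extruded away from the light to form the sides
// of the shadow volume. Assumes a closed (2-manifold) mesh.
std::vector<Edge> findSilhouette(const std::vector<Triangle>& tris,
                                 const std::vector<Edge>& edges,
                                 const Vec3& lightPos) {
    std::vector<Edge> silhouette;
    for (const Edge& e : edges) {
        bool f0 = facesLight(tris[e.tri0], lightPos);
        bool f1 = facesLight(tris[e.tri1], lightPos);
        if (f0 != f1)
            silhouette.push_back(e);
    }
    return silhouette;
}
```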

Much of the hard work in doom3's rendering engine involves:

+ Good algorithms for generating shadow volumes out of a dynamically changing world.

+ Good algorithms for re-optimizing triangle meshes that are generated during the lighting passes. He believes he has a general way of doing this.

+ How to build a BSP without regenerating vertices.

+ How to generate completely optimized shadow beam trees. (Which would help when rendering static lighting scenes.)

Off-the-cuff comments include:

+ Typical scene needs 5 rendering passes, 9 textures. That fits in with the earlier comments of 1 base pass + 2 * lights.

My question is, how do the 9 textures work? 1 base + 1 bump map + 2 * 1 projected light texture. But that's only four textures. What are the other five textures used for?

+ Curved surfaces are a waste of time, both for scenes and for characters.

I wonder about the overlapping lights part... I'd imagine that you could collapse non-overlapping lights into one pass, because those would have discrete shadow volumes. Overlapping lights could have surfaces that are shadowed from one light, shadowed from both, or illuminated by both, and this would suggest a need for separate passes if I'm correct. Am I?
 
Shadow volumes are a problem for overlapping lights. Volumetric shadows require the ability to increase or decrease the values in the stencil buffer. Since most 3D hardware can only increase or decrease by one, there is no good way to separate an 8-bit stencil value into two 4-bit stencil values for two overlapping lights.
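To illustrate why the counter can't be shared, here's the classic z-pass stencil counting for a single light (OpenGL 1.x style; drawShadowVolumeGeometry() is a placeholder of mine). A second, overlapping light would need the same 8-bit counter, so it gets its own clear-and-recount pass instead:

```cpp
#include <GL/gl.h>

void drawShadowVolumeGeometry() { /* placeholder: submit the extruded volumes */ }

// Classic z-pass stencil counting for ONE light. The whole 8-bit stencil
// value acts as a single counter, which is why an overlapping light can't
// share it: the buffer has to be cleared and recounted per light.
void stencilShadowPass() {
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // invisible pass
    glDepthMask(GL_FALSE);                                // don't touch depth
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    // Front faces of the volume increment the counter...
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawShadowVolumeGeometry();

    // ...back faces decrement it. Pixels left at 0 are unshadowed.
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawShadowVolumeGeometry();

    // The light pass then draws only where stencil == 0:
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
```

(Doom3 is reported to use the depth-fail variant of this counting so it stays robust when the camera sits inside a volume, but the single-counter limitation is the same either way.)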
 
Which means that the engine can only collapse non-overlapping lights into one pass then. Thanks.
Although it might be reasonable to turn off shadow casting for some lights, like gunfire, except when it's the only light source in a room.

Level building for Doom3 will certainly not be easy...
 
My guess for the textures (see the sketch after the list):

- bump/normal map
- diffuse map: color of the diffuse reflected light
- gloss map: amount of specular reflection
- specular map: I have never known exactly what this is, but maybe some sort of color for the specular reflected light? Note that this is listed as one of the 5 textures for the Doom3 models.

Also needed:

- cube map for vector normalization
- a texture used for light attenuation (maybe just a 1D texture?)
- a cube map for the color/intensity of light emitted by a point source

That would be 9 textures...
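Purely to make that guess concrete, this is roughly how those maps could combine in one per-pixel lighting pass (plain C++ standing in for whatever the register combiners / pixel shaders actually do; every name below is mine, not id's):

```cpp
struct Vec3 { float x, y, z; };

static Vec3  mul(const Vec3& a, const Vec3& b)  { return { a.x*b.x, a.y*b.y, a.z*b.z }; }
static Vec3  add(const Vec3& a, const Vec3& b)  { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3  scale(const Vec3& a, float s)      { return { a.x*s, a.y*s, a.z*s }; }
static float dot(const Vec3& a, const Vec3& b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One light's contribution at one pixel, using the guessed texture set:
//   normal      - from the bump/normal map
//   diffuseTex  - diffuse map sample
//   glossTex    - gloss map sample (scalar "how shiny")
//   specColor   - specular map sample
//   lightColor  - from the light's cube map / projected texture
//   attenuation - from the attenuation texture lookup
//   L, H        - normalized light / half-angle vectors (normalization cube map)
Vec3 lightContribution(Vec3 normal, Vec3 diffuseTex, float glossTex, Vec3 specColor,
                       Vec3 lightColor, float attenuation, Vec3 L, Vec3 H) {
    float ndotl = dot(normal, L); if (ndotl < 0.0f) ndotl = 0.0f;
    float ndoth = dot(normal, H); if (ndoth < 0.0f) ndoth = 0.0f;

    Vec3 diffuse  = scale(diffuseTex, ndotl);
    Vec3 specular = scale(mul(specColor, { glossTex, glossTex, glossTex }),
                          ndoth * ndoth * ndoth * ndoth);  // cheap "power" term

    // Summed additively into the framebuffer, per light, per pass.
    return scale(mul(add(diffuse, specular), lightColor), attenuation);
}
```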

[edit] The first pass cannot be for color/dp3 bump mapping; bump mapping must be done by the lighting passes. AFAICS, laying down color before doing lighting won't work if light sources overlap - one light would get the base color from the frame-buffer, while the overlapping one would get color values modified by the first lighting pass...

My guess is the first pass simply lays down the z-buffer (required for the shadow volumes), and nothing else. This is basically the software deferred rendering approach discussed previously on this board.
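If that guess is right, the first pass would be little more than this (OpenGL 1.x style; drawOpaqueGeometry() is just a placeholder):

```cpp
#include <GL/gl.h>

void drawOpaqueGeometry() { /* placeholder: submit the opaque surfaces */ }

// Hypothetical depth-only first pass: fill the z-buffer, write no color.
void depthPrepass() {
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    drawOpaqueGeometry();

    // Later lighting passes re-draw the same geometry with color writes on,
    // depth writes off, and GL_EQUAL so only the visible surface is shaded.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
}
```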

Note: This is guess work on my part, NOT fact.

Regards,
Serge
 