Doom III, a step in the right direction...or not?

Nagorak said:
Maybe I misunderstood your post, because it seemed like you were implying that my only purpose in starting this thread was to criticize John Carmack and that such naysaying was unacceptable... Anyway, I'm sorry if I took it the wrong way and offended you.

Peace.
No long-term offense taken.

I'm also criticizing John Carmack for getting rid of the use key, but that's another story entirely.

I just think that a unified lighting model is the way to go for realtime 3D engines, especially games.

Will the engine be good for multiplayer purposes? Hell, (most likely) no!
Only if it comes with a "disable all" button (and a client-server architecture).

The truth is, I haven't played a really great id game yet, but as far as engines go, I'm completely willing to admit they are good. Although, by the same token, I don't really care what engine a game uses, as long as it's fun.

I agree that the engine isn't the part that makes a game fun, but it can certainly be a reason why an otherwise fun game turns out badly.

While I always loved id software's games (I honestly don't know why), I never felt that Unreal (which I heavily dislike) should run on the QuakeX engine or that QuakeX should run on any other engine.

So far there are only a very few games I wish to see in a specific engine - System Shock 3 running on the Doom3 engine would make my day, for example. Other than that, I don't have strong feelings about which franchise should use which engine.

Speaking from the standpoint of someone who produces custom content for gaming engines, I really have to admit that the insight offered by the leaked Doom alpha made me believe that this engine will hold strong for years. This opinion isn't exclusively based on dynamic lights or stencil shadows.

Funny thing is that this will be the least limited id software gaming engine ever, not the most limited.

But as you've also said - the Doom engine isn't designed to be the swiss army knife of gaming engines. John Carmack has stated time and again that this engine is tailored to Doom, not to any other game.
So a comparison of a special solution vs. general-purpose solutions just doesn't seem right to me.

But to me it seems fairly obvious that the Doom engine out of the box will make a bad flight simulator :)
 
I would like to see Jedi Knight and the lightsaber with the Doom3 engine.

Me too, me too :)

I think the Doom3 engine is a step in the right direction. Low polygon counts and such don't really matter much when things are moving. Besides, there is no reason poly counts can't be increased when the hardware performance is available.
 
JF_Aidan_Pryde said:
I would like to see System Shock 3 using the Doom3 engine.

My god, that might be a first in the PC game industry.

Unparalleled graphics with unparalleled gameplay!!

A sure-fire hit, I think... :D
 
The problem with "unified lighting systems" is that:
A) They are usually not unified at all. A lot of people have been using this term to mean that all geometry casts shadows.. where they got that interpretation is beyond me.

and

B) Right now the unification is done at great expense to quality - In Doom they are treating every surface the same from a lighting point of view in that they are treating every surface like a Blinn/Phong surface. Phong is just a radically simplified version of the BRDF equation used as an approximation for plastic-like surfaces, and Blinn is just a further simplification.
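
For anyone unfamiliar with the terms, here's a minimal C++ sketch of the two specular models being discussed. The Vec3 type and function names are just illustrative, not from any engine:

```cpp
#include <cmath>
#include <algorithm>

// Minimal 3-vector, just enough for the lighting math below.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return v * (1.0f / len);
}

// Classic Phong specular: reflect L about N, compare with the view vector V.
float phongSpecular(Vec3 N, Vec3 L, Vec3 V, float shininess) {
    Vec3 R = N * (2.0f * dot(N, L)) + L * -1.0f;  // R = 2(N.L)N - L
    return std::pow(std::max(dot(R, V), 0.0f), shininess);
}

// Blinn's simplification: use the half-vector H instead of the reflection.
float blinnSpecular(Vec3 N, Vec3 L, Vec3 V, float shininess) {
    Vec3 H = normalize(L + V);
    return std::pow(std::max(dot(N, H), 0.0f), shininess);
}
```

The point of B) is that something like the second function gets applied to every surface in the scene, even though it's an approximation for plastic.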

Yes, a real unified lighting system would be cool. And by real, I mean one where the lighting is done by evaluating the BRDF for the surface at hand. Then all surfaces are treated the same from a tech point of view, in that you are no longer using different shaders for metal, different shaders for plastic, etc., but one shader for all surfaces. From a physics and artistic point of view, every surface is getting treated as it is in the real world - differently. Every surface reacts differently to light, and treating every surface as plastic like Doom does is not the right thing to do looking forward. Granted, when Doom first went into production, hardware wasn't even close to being able to evaluate general BRDF equations at every pixel on the screen (and even now it isn't something you do without a good deal of thought; note that cube-map based "separable BRDF" does NOT count..), so they probably did make a pretty good decision for that point in time.


*note: in about 18 months from now, feel free to replace 'BRDF' with 'BSSRDF'
 
Dave H said:
Uh, Half-Life? Still possibly the best integration of plot and environment, with absolutely no disconnect or magic movement (I'm purposely erasing the alien planet part from my memory), entirely within the most brilliantly designed single building in gaming history. Ok, there are some brief outdoor areas ("outdoor" in terms of bright ambient lighting, but not in terms of draw distance, terrain, etc), but it's basically an indoor game.
Half-Life didn't take place in a single building (not even counting the alien planet...and yes, that part sucked). And yes, I thought the "outdoor" areas were poorly done, and did take away from the game.
 
Ilfirin said:
The problem with "unified lighting systems" is that:
A) They are usually not unified at all. A lot of people have been using this term to mean that all geometry casts shadows.. where they got that interpretation is beyond me.
I believe those were the words JC himself used.

The idea is that as far as the code goes, it appears that DOOM3 uses the same code for each polygon in the game, with only variations in shaders used. That sounds pretty unified to me.

What you've described is obviously the next step, but we're not there just yet. DX9-capable hardware may be capable of it, but it will be a while before we see it in games.
 
Chalnoth said:
Here's my two cents on the original topic:

2. Outdoor areas add a ton to any game with a story. It's just completely unrealistic to be inside all the time. I don't like the disconnect that being magically moved to another place gives (usually), and an entire game taking place in a single building these days is just poor design. It's always going to be too boring of an atmosphere.

When you are in a location without enough oxygen to survive, indoor areas actually make sense. But as I said before, it is simply a personal preference.
 
Not really a reply, but I wanted to say I actually like id games too; I am glad someone else out of all of you did. I find their graphics palette choice to be nice. Maybe I like drab stone walls :) And I actually liked the gameplay at the time I played it. Of course I liked Half-Life too. One of my favorite things about id is that they used fewer gimmicks. That allowed things like Quake2 Done Quick, and that is one of the coolest things ever to watch. If you haven't, download it and watch - seriously amazing.
 
Ilfirin said:
B) Right now the unification is done at great expense to quality - In Doom they are treating every surface the same from a lighting point of view in that they are treating every surface like a Blinn/Phong surface.

True, but that is mainly because they are still targeting DX7-level hardware. Thus there is no true trade-off here, as far as I can judge. Once you move up to DX9-level hardware as the minimum, you can start to treat every surface with the best-suited shader.

I still like to think of the way 3ds max has a number of different rendering options - 'shaders' - that fit different materials. It can be a bitch to set up, but it gives you very good speed over some generic ray-traced overall algorithm.

Anyway, I'm not sure we disagree here, but Carmack didn't have that many options. :arrow: Doom III is all about fairly realistic light/shadows before realistic shading of different materials.
 
LeStoffer said:
:arrow: Doom III is all about fairly realistic light/shadows before realistic shading of different materials.

From what I've seen, Doom is capable of setting up rather complex shaders with various options to shade a material.
It's not like everything looks the same.

To be honest - to me Doom is more about high-quality surface rendering than about ultimately realistic shadows.

This of course is based on the leaked alpha, so things may change, but even this build renders the most lifelike surfaces I've ever seen in a realtime engine.
 
For the unification stuff:

Unified lighting means that all lighting is done the same way. Or, in other words, all lighting uses the same lighting equation. For it to mean shadow and lighting are unified they would have to be done via the same algorithm/equation (like ray-tracing, for instance..).

Doom fits the unified lighting term pretty well in that every surface is lit as a Phong/Blinn surface. It definitely does not fit the shadowing definition.


To LeStoffer:
Yes, I said all that at the bottom of my original post :)

And I never implied that a different shader should be used for each surface; that is actually the opposite of what I said. Unified lighting from a programming point of view essentially means one lighting shader for all surfaces or, in other words, one lighting equation for every surface. Having a different lighting equation for each type of surface you support (i.e. a brushed-metal shader, a Blinn/Phong shader, a general anisotropic lighting shader, etc.) would be far from 'unified lighting' and is quite possible on DX8-class hardware (though it's complete hell on the programmer). What I was suggesting was a fully generalized BRDF shader - unified from a tech point of view, and yet still treating all surfaces uniquely.
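
To make that concrete, here's a rough C++ sketch of what I mean by one shader for all surfaces: a single lighting routine driven by per-surface measured data. The TabulatedBRDF structure and its layout are purely hypothetical, and it assumes an isotropic surface to keep the table 2D:

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

// Hypothetical tabulated BRDF: measured (or simulated) reflectance indexed
// by the elevation angles of the incoming and outgoing directions. A real
// table would also be indexed by the azimuthal difference; this 2D version
// assumes an isotropic surface for brevity.
struct TabulatedBRDF {
    int resolution;               // samples per angular axis
    std::vector<float> table;     // resolution * resolution entries

    // Look up f_r(thetaIn, thetaOut), both angles in [0, pi/2].
    float sample(float thetaIn, float thetaOut) const {
        const float halfPi = 1.5707963f;
        int i = std::min(int(thetaIn  / halfPi * resolution), resolution - 1);
        int o = std::min(int(thetaOut / halfPi * resolution), resolution - 1);
        return table[i * resolution + o];
    }
};

// One lighting routine for every surface: the material differences live
// entirely in the per-surface BRDF table, not in per-material shader code.
float shadePixel(const TabulatedBRDF& brdf,
                 float thetaIn, float thetaOut,
                 float lightIntensity) {
    float cosIn = std::cos(thetaIn);  // Lambert's cosine term
    return brdf.sample(thetaIn, thetaOut) * lightIntensity * std::max(cosIn, 0.0f);
}
```

The point being that switching from metal to plastic is then a data change (a different table), not a code change.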




There are some interesting problems with this, though, in that obtaining BRDF data has proven remarkably hard. Only a handful of universities around the world have BRDF-measuring devices, and these things are far from cheap. There are libraries of BRDF data, but many of those are either restricted or in proprietary binary formats.

Then there's the problem that many of the surfaces you might find in a game are out of science fiction and hence have obviously never been measured in the real world. Using a heightmap isn't quite adequate. I've played with the idea of building virtual surfaces at the microscopic level in normal modelling programs and then running a BRDF-measuring simulator over them to generate the BRDF for the surface.. but that would be hell and very counter-intuitive (trying to get the right lighting by putting microscopic scratches in the surface, I mean).

Definitely interesting that we have to start moving the R&D behind engine development into real science research labs.

*note: I haven't looked into image-based BRDF measurement too much. That might very well work well enough to use, and it only requires a digital camera and a light source.
 
LeStoffer said:
Ilfirin said:
B) Right now the unification is done at great expense to quality - In Doom they are treating every surface the same from a lighting point of view in that they are treating every surface like a Blinn/Phong surface.

True, but that is mainly because they are still targeting DX7 level hardware. Thus there is no true trade off here as far as I can judge. Once you move up to DX9 level hardware as minimum you can start to treat every surface with the best suited shader.

So what it means is that we should wait for DOOM4 or Quake 4, or maybe 5 (since Carmack said the next engine would be based on DX9), for better lighting/shadows AND polygon counts?

Another question that I would like to ask is why having some sort of n-patches renders stencil volumes useless. I mean, why can't we have better-looking models but keep the shadows based on the older (low-poly, before n-patches) models? Or better yet, why can't we apply n-patches to shadows after they have been calculated?

Here, I am assuming that everything displayed, be it a shadow or a hard object, is actually a polygon, just with a different set of colors.
 
To sabeehali:

There are a lot of reasons why stencil shadow volumes really suck, some of which you mention in your post.

The reason you can't do the shadowing on the low-res geometry and n-patch the displayed model is that self-shadowing would end up getting extremely screwy.

N-patching the shadow volume is something I haven't thought of though.. that might work if you preserve the normals from the silhouette, but then you will be doing the n-patching on horribly stretched geometry. I have my doubts as to whether it would look good or not.

All these problems with the low-poly geometry, no n-patching, etc. pretty much go away with shadow buffers/maps (where triangle counts don't go through the roof when you add a light to the scene), which is one of the many reasons why, looking forward, they sound so much better than shadow volumes. Granted, they have their precision problems, but that's what PCF is for.
 
Ilfirin said:
To sabeehali:

There are a lot of reasons why stencil shadow volumes really suck, some of which you mention in your post.

The reason you can't do the shadowing on the low-res geometry and n-patch the displayed model is that self-shadowing would end up getting extremely screwy.

N-patching the shadow volume is something I haven't thought of though.. that might work if you preserve the normals from the silhouette, but then you will be doing the n-patching on horribly stretched geometry. I have my doubts as to whether it would look good or not.

All these problems with the low-poly geometry, no n-patching, etc. pretty much go away with shadow buffers/maps (where triangle counts don't go through the roof when you add a light to the scene), which is one of the many reasons why, looking forward, they sound so much better than shadow volumes. Granted, they have their precision problems, but that's what PCF is for.

Thanks for the reply. I am not a programmer, but from what I understand, this is what is happening, or supposed to happen:

A shadow is just a polygon with a gradient color map, i.e. as the distance of the shadow from the object increases, the intensity/hue/luminance of the color (in our case gray) decreases.

Now here we have a polygon that is essentially skewed/elongated/distorted and has a color gradient on it. To calculate a shadow, we calculate the polygon itself, using the equations that we have for the polygon as well as the position of the light source plus its intensity, and get the color gradient of the polygon. The next step is to add the color component to the final scene, which should not be difficult as it's essentially an add operation on a LOT of pixels, and owing to the IMR nature of the current gen of 3D chips (i.e. dividing the 2D screen into multiple blocks), any good SIMD/vector/TnL unit should be able to do it. The result is that wherever there is a shadow, the objects' colors are muted/darkened, and together they give the impression of a shadow.

Now, given that a shadow by nature is a bit distorted, I don't think having a low-poly model to base a shadow on is that big a deal, as long as the shadow itself is OK. What I mean is this... The n-patches affect more the 3D nature of a model; shadows are by definition 2D. The edges of a model with a low number of triangles will be blocky, so to speak, but not horribly so with a sufficient number of polygons. On a 2D shadow this effect will not be as apparent as on the 3D model itself. So as long as the shadows are generated in the correct manner (i.e. their size, shape, color), I surely wouldn't mind my shadow having blocky elbows (not that they aren't in real life) ;)

If I understand your post correctly, shadow buffers/maps employ the same thing that I am proposing, so the question is: why aren't people using them?
 
Eh, not quite.

Shadow volumes work by determining the 3D silhouette of a model from the light's point of view (this is generally an expensive operation that has to be done on the CPU, or poorly done on the GPU - an expensive operation that grows with the complexity of the model) and then projecting the silhouette away from the light. The volumes are then drawn into the stencil buffer and pixels that aren't in shadow (stencil value of 0) are lit. Repeat for n lights. It's just a rudimentary binary mask; there is no gradient.
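
A rough C++ sketch of that silhouette-extraction step, just to show where the per-light CPU cost comes from (the mesh and edge layout here are invented for the example):

```cpp
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Edge { int v0, v1; int tri0, tri1; };  // edge shared by two triangles

// CPU silhouette extraction: an edge is on the silhouette when one of its
// two triangles faces the light and the other faces away. This walk over
// every triangle and edge is the per-light cost that grows with mesh size.
std::vector<Edge> findSilhouette(const std::vector<Vec3>& verts,
                                 const std::vector<int>& tris,   // 3 indices per tri
                                 const std::vector<Edge>& edges,
                                 Vec3 lightPos)
{
    std::vector<bool> facesLight(tris.size() / 3);
    for (std::size_t t = 0; t < tris.size() / 3; ++t) {
        Vec3 a = verts[tris[3 * t]];
        Vec3 b = verts[tris[3 * t + 1]];
        Vec3 c = verts[tris[3 * t + 2]];
        Vec3 n = cross(sub(b, a), sub(c, a));
        facesLight[t] = dot(n, sub(lightPos, a)) > 0.0f;
    }
    std::vector<Edge> silhouette;
    for (const Edge& e : edges)
        if (facesLight[e.tri0] != facesLight[e.tri1])
            silhouette.push_back(e);  // extrude this edge away from the light
    return silhouette;
}
```

The quads extruded from those silhouette edges are then rendered into the stencil buffer (incrementing on front faces, decrementing on back faces) to build the binary in/out-of-shadow mask described above.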

When using low-res geometry for the shadow volume and smooth, high-poly geometry for the actual display of the model, you will often end up with shadows that look just damn wrong because of sufficiently different 3D silhouettes.


Shadow maps/buffers (they have so many names..) work by rendering the scene from the light's point of view and only updating the depth. This is a [relatively] very quick pass that doesn't increase the polycount or anything. Then, when lighting, you project the depth buffer from the previous pass over the scene (rendering the 'depth to light' to a texture provides the same results, which is what you have to do on most hardware) and calculate the depth to the light again. If there is any difference between the calculated depth and the projected depth-buffer read, the pixel is in shadow, so you don't light it. Performance doesn't drop anywhere near as fast with increased polycounts when using shadow maps (vs. shadow volumes), and thus you can use the fully smoothed geometry with much less of a hit.
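
In rough C++ terms, the per-pixel test looks something like this (names and the bias constant are illustrative, not from any particular implementation):

```cpp
#include <vector>
#include <algorithm>

// A pixel's position after projection into light space; u/v in [0,1].
struct LightSpacePos { float u, v, depth; };

// Shadow-map test for one pixel: `lightDepthMap` holds the depths rendered
// from the light's point of view. If this pixel is farther from the light
// than what the light "saw" along the same ray, something occludes it.
// The bias fights precision artifacts ("shadow acne").
bool inShadow(const std::vector<float>& lightDepthMap, int mapSize,
              LightSpacePos p, float bias = 0.005f)
{
    int x = std::clamp(int(p.u * mapSize), 0, mapSize - 1);
    int y = std::clamp(int(p.v * mapSize), 0, mapSize - 1);
    float storedDepth = lightDepthMap[y * mapSize + x];
    return p.depth - bias > storedDepth;
}
```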

The reason not everyone uses shadow maps is that their accuracy depends heavily on the resolution and precision of the shadow map. Stair-stepping aliasing artifacts will pretty much always arise. Luckily there is percentage-closer filtering (PCF), which was specifically designed to take care of many of the aliasing artifacts. Good PCF wasn't possible at good enough frame-rates until DX9 shaders (well.. ps1.4 did it OK too).
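
PCF is then just several of those binary tests averaged over neighbouring shadow-map texels, giving soft edges instead of stair-stepping. Continuing the sketch above (reusing its LightSpacePos and inShadow(), same includes):

```cpp
// Percentage-closer filtering: average a small kernel of shadow tests
// around the pixel's light-space position instead of taking one sample.
float pcfShadow(const std::vector<float>& lightDepthMap, int mapSize,
                LightSpacePos p, int kernel = 1 /* 1 => 3x3 taps */)
{
    float texel = 1.0f / mapSize;
    int   taps  = 0;
    float lit   = 0.0f;
    for (int dy = -kernel; dy <= kernel; ++dy)
        for (int dx = -kernel; dx <= kernel; ++dx) {
            LightSpacePos q = {p.u + dx * texel, p.v + dy * texel, p.depth};
            lit += inShadow(lightDepthMap, mapSize, q) ? 0.0f : 1.0f;
            ++taps;
        }
    return lit / taps;  // 0 = fully shadowed, 1 = fully lit
}
```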

Now.. if the IHVs would just implement native adaptive shadow maps we could all be happy. ;)
 
Ilfirin said:
Good PCF wasn't possible at good enough frame-rates until DX9 shaders (well.. ps1.4 did it OK too).

I'm pretty sure GeForce3/4 can do this without the use of the pixel shader, i.e. they have it built into their texturing units.

Ilfirin said:
Now.. if the IHVs would just implement native adaptive shadow maps we could all be happy. ;)

Have you heard of perspective shadow maps? They're pretty good at reducing aliasing (they're not perfect though), and can be done on today's hardware. Adaptive shadow maps seem pretty poorly suited to hardware implementation.
 