PC polygon performance

If your trees have leaves made of polys, then finding the contour won't be hard: it's the contour of pretty much every poly in the scene. :( The fillrate requirements will of course (as you say) be horrible.

If the trees are made of transparent textures, with the outline of the leaves in the alpha channel, then you'll have a hard time finding the contour, since it doesn't exist at the geometry level.
 
Ack, I was afraid that was the answer. This means there would have to be some sort of "hack" to generate such things without the insane performance penalty of doom?
 
Well, the best way would be to use fully-hardware shadow generation techniques. This will probably be done in the next big engines that come after DOOM3 (not just from id...).

The only method that I see working in all scenarios would be one that renders the image from the viewpoint of the light (ortho transforms for directional lights, perspective transforms for point/spot lights) to detect shadows. On the game engine side, you'd probably turn off everything but alpha tests (there'd be no reason for an alpha blend here), or any other PS test that toggles transparency.
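
Something like this, in rough old-school OpenGL terms (the matrices, renderSceneGeometry() and so on are placeholders I'm making up for the sketch, not anything from a real engine):

[code]
// Depth-only pass from the light's point of view, with alpha test enabled
// so cut-out leaf textures punch proper holes in the shadow map.
#include <GL/gl.h>

extern GLfloat lightProjectionMatrix[16];   // ortho or perspective, set up elsewhere
extern GLfloat lightViewMatrix[16];         // looking out from the light
extern void    renderSceneGeometry();       // placeholder draw call

void RenderShadowMapPass()
{
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(lightProjectionMatrix);
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(lightViewMatrix);

    // Depth is all we care about here; skip color writes entirely.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glClear(GL_DEPTH_BUFFER_BIT);

    // No blending needed -- a binary alpha test is enough to keep
    // transparent leaf texels out of the depth buffer.
    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);

    renderSceneGeometry();

    // Restore state for the normal passes.
    glDisable(GL_ALPHA_TEST);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
[/code]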
 
The solution for trees is to just use a shadow map (which handles alpha textures properly and doesn't cause a huge fill hit).

I think for future engines (and even in Doom III, to some degree), you will see hybrid shadow approaches -- some objects will use stencil shadows, while others will use shadow maps or projected shadows. Both techniques have advantages and disadvantages, and where one technique fails miserably, the other might not have any problems at all.
 
Isn't this kinda like what NWO tries to do? It uses static shadow maps for very small and static world objects and dynamic for the rest.
 
I seriously doubt that any sort of hybrid method can ever look all that great.

I truly think that John Carmack has it right. The absolutely ideal way to do things is to treat every object in-game the same way. While there will always be performance penalties to this, it will most definitely give much more freedom to the artists, allowing them to create much more varied and complex content without the extra work of dealing with many different rendering paths for different types of objects.
 
I seriously doubt that any sort of hybrid method can ever look all that great

Doom III uses projected shadow textures for some things -- the trick to making things look good is to know what the strengths and limitations of each technique are, and design accordingly.
 
Saem said:
Isn't this kinda like what NWO tries to do? It uses static shadow maps for very small and static world objects and dynamic for the rest.

This was posted a while back by the programmer for the NWO engine. It was posted in the NWO games forum.

Ghost,

I'll give you quite a bit of info here. :)

It's all about speed. The way shadows work today, with current 3D-cards, is that basically you have two choices - projected (can be extended to shadow-maps on GF3/GF4) and stencil shadows for realtime moving shadows.

Stencil shadows are ultra-sharp at all times and can be quite heavy on the 3D-card as they eat up fillrate and you need extended geometry to create them.

Projected shadows are the fastest as they require no 'extra' passes and no extended geometry. They can also be smoothed out to create high-quality soft-shadows. Only problem with them is when you can't afford to smooth them enough, or you're out of texture memory so you can't allocate them big enough - then quality can suffer.

NWO uses projected shadows / hardware shadow-buffers. It is the technique that has been used in professional renderers such as RenderMan from Pixar and so on for quite some time - except this is realtime, so quality is not yet as high.

For comparison, Doom3 uses stencil shadows - which are very sharp - same as Blade Of Darkness a couple of years back.

Now, shadow-buffers have the nice property that they can be cached (saved in memory) over consecutive frames, if the light or the receiving geometry does not move. That means that all the time needed for the 3D-card to calculate these buffers is gone - you just re-use the already prepared content.

The DVA engine employs such tactics a lot. Everything that can be cached and re-used is. Think of it as DivX compression for games - instead of file size going down, frame rates go up.

As such, when you use a healthy amount of lights and moving lights (flashes and so on) you get the most out of the engine. If you want to create a show-off map with tons of moving/changing lights then the DVA engine will only run a tad faster than anything else coming out soon.

Quality is another issue. You don't want ultra-sharp shadows, you need a little softening. Also, the 32-bit blending of current 3D-cards is a huge problem. NWO employs a ton of tricks to get 64-bit blending of lighting and textures.

Keep in mind that NWO runs pretty much as fast as Counter-Strike on a P3 600 with a Matrox G400 or a TNT2 card.

To finally answer your question: when we see what we can get away with on medium-spec machines, we might move around more (or all) of the lights and shadows.

Keep your eyes peeled for the showing of the realtime player shadows and you will see what this engine can do with shadows.

Jimbo
Termite
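
In plain terms, the caching Jimbo describes boils down to something like this (all the types and names here are invented purely for illustration, not taken from the DVA engine):

[code]
// A light's shadow buffer is only re-rendered when the light itself or the
// geometry it sees has changed; otherwise the cached buffer is reused.
#include <cstddef>
#include <vector>

struct Light {
    bool movedThisFrame;
    bool sceneUnderLightChanged;   // e.g. a door inside its frustum opened
};

// Hypothetical expensive pass: render depth from the light's point of view.
void RenderShadowBuffer(const Light&) { /* ... */ }

void UpdateShadowBuffers(const std::vector<Light>& lights,
                         std::vector<bool>& cacheValid)
{
    cacheValid.resize(lights.size(), false);

    for (std::size_t i = 0; i < lights.size(); ++i) {
        const Light& l = lights[i];

        if (l.movedThisFrame || l.sceneUnderLightChanged)
            cacheValid[i] = false;

        if (!cacheValid[i]) {
            RenderShadowBuffer(l);   // pay the cost only when something changed
            cacheValid[i] = true;
        }
        // else: reuse the buffer already sitting in memory, as described above
    }
}
[/code]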
 
alexsok said:
noko said:
Doom 3 looks like it needs N-Patches :). Still not sure why J.C. has an issue with N-Patches and shadows. Is it that the shadows won't reflect the blocky head? Or will they just not work?

In any case seems like Doom3 really slows down the frame rate doing all the lighting it does.

TruForm is not an option, because the calculated shadow silhouettes would no longer be correct.

So rather than having the model look good and the shadow not quite matching... Carmack would rather have both the shadow and model match and both look like crap? That makes a LOT of sense... :-?
 
alexsok said:
Hyp-X - well, like Carmack said, id has already had a couple of proposals for licensing the engine, but his concern when writing the engine from the start wasn't to tailor its feature set to whatever would make licensing developers happy, but to make it suit what the artists & level designers were asking for.

Rest assured, id won't have any problems licensing the DOOM III engine, you can be sure of that! :D

I'm sure id will get people to license the engine, but your response is bordering on another naive "where id is being discussed, reality does not apply" response. I've noticed a lot of people using engines other than id's lately. Yeah, we still have some people using Quake 3, but they don't have nearly as many licensees as back in the Doom/Quake/Quake 2 eras, when their engines were the de facto standard.

If their engine can't do trees it is potentially a huge problem, and you'd better believe it may drive some developers away from the engine (depending on how bad the problem is). John Carmack may just go about doing his own thing and really not care what his licensees want. Unfortunately for id, other companies like Epic seem to take the opposite approach and tailor their engines to be as modular and easy to develop for as possible. So, if that's really Carmack's attitude, it must make Tim Sweeney and the Epic crew pretty happy.
 
Nagorak - well, the UT2k3 engine is nothing special, ya know... some improvements here and there, increased polycounts, huge outdoor areas (which is where that "100x more detail" comment of theirs comes from), etc...

Definitely not on par with DOOM III, but since it can handle outdoor areas very well and has some pretty modern features (cubemaps, etc...), I see a path of success for Epic here.
 
Nagorak said:
So rather than having the model look good and the shadow not quite matching... Carmack would rather have both the shadow and model match and both look like crap? That makes a LOT of sense... :-?

No, it's not like that.

If the shadow volume doesn't match, it can cause serious lighting problems on the object itself. (Remember that it does self-shadowing too, not just cast shadows!)

Also - depending on the optimization - it can cause light leakage.
That means if there's a hole between the object and the volume, then looking through that hole you will see everything inverted (in the simplest case): shadow where there shouldn't be any, and light where there shouldn't be any.
It's a highly annoying bug; everyone would notice it and say the engine sucks.
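
For anyone who hasn't seen how the stencil counting works, here's a rough sketch of the depth-fail ("Carmack's reverse") variant; drawShadowVolume() is just a placeholder. It also shows why a gap between the mesh and the volume throws the counters off and produces that inverted leakage:

[code]
// Two-pass stencil counting for one shadow volume.  The increments and
// decrements only cancel out correctly if the extruded volume is closed;
// any hole lets the per-pixel counter end up wrong, i.e. light/shadow flips.
#include <GL/gl.h>

extern void drawShadowVolume();   // assumed: renders the extruded, capped volume

void StencilShadowPass()
{
    glEnable(GL_STENCIL_TEST);
    glDepthMask(GL_FALSE);                                  // don't touch depth
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);    // or color
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    // Back faces: increment where the volume fails the depth test.
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
    drawShadowVolume();

    // Front faces: decrement where the volume fails the depth test.
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    drawShadowVolume();

    // Pixels with stencil != 0 are in shadow for this light; the lighting
    // pass is then masked with glStencilFunc(GL_EQUAL, 0, ~0u).
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glCullFace(GL_BACK);
}
[/code]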
 
I think that the NWO developer described the advantages of shadow-maps versus shadow-volumes quite well.

Notice he didn't say that shadow-volumes are not the right way.

He only stated that shadow-maps are:
- faster
- better looking
:LOL:
 
Well, those advantages are true in some cases.

Unfortunately, if an object casts a very long shadow, shadow maps will alias uncontrollably, while shadow volumes remain well defined over the entire distance (a SIGGRAPH paper this year on perspective shadow maps fixes this problem almost entirely, though).

Also, there isn't a particularly good shadow map solution available (yet) for handling visible point lights.

Shadow volumes are also the only way to get volumetric effects caused by occluders, without resorting to something really nasty like ray marching through a shadow map.
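
To illustrate why visible point lights are awkward for shadow maps: the brute-force workaround is six depth renders per light, one per cube face, roughly like this (all the helpers here are hypothetical, just to show the shape of the work):

[code]
// Six 90-degree-FOV shadow maps per point light, one per cube face.
struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };

extern Mat4 CubeFaceView(const Vec3& lightPos, int face); // looks down +X,-X,+Y,-Y,+Z,-Z
extern Mat4 PerspectiveFov90(float nearZ, float farZ);
extern void RenderDepthPass(const Mat4& view, const Mat4& proj, int targetFace);

void RenderPointLightShadow(const Vec3& lightPos)
{
    const Mat4 proj = PerspectiveFov90(0.1f, 1000.0f);
    for (int face = 0; face < 6; ++face) {
        // Six depth renders per light per frame -- this is why it's costly,
        // and why caching faces that don't change becomes attractive.
        RenderDepthPass(CubeFaceView(lightPos, face), proj, face);
    }
}
[/code]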
 
I don't think shadow volumes are the solution for the future. A whole lot of per-poly lighting calcs get wasted with shadow volumes.
Raytraced shadows are much more efficient: you just don't perform lighting calcs on pixels that don't "see" a particular light, i.e. you are calculating exactly what gets displayed in the end. It's kinda like a deferred renderer, spending computational resources only on pixels that really get displayed (see the sketch below).
With increasing poly and light counts, IMHO shadow volumes become prohibitively expensive. Remember the original GF256 tree demo? I don't think you'd ever get realistic shadows there with shadow volume techniques.
I'd prefer high-poly first (like those DOA3 shots) and only then clever texturing tricks (DOOM III "polybumps" etc.).
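
Something like this, conceptually (a purely made-up software-style sketch, just to show the "only shade pixels that see the light" idea):

[code]
// Per-pixel light accumulation: the expensive shading is skipped for pixels
// whose shadow ray is blocked, so no lighting work is wasted on them.
#include <cstddef>
#include <vector>

struct Pixel { float nx, ny, nz; float px, py, pz; };   // normal + position
struct Light { float x, y, z; float r, g, b; };
struct Color { float r, g, b; };

extern bool  RayHitsOccluder(const Pixel& p, const Light& l); // hypothetical shadow ray
extern Color Shade(const Pixel& p, const Light& l);           // hypothetical lighting calc

void AccumulateLight(const std::vector<Pixel>& gbuffer,
                     std::vector<Color>& frame,
                     const Light& light)
{
    for (std::size_t i = 0; i < gbuffer.size(); ++i) {
        // Cheap visibility test first; shadowed pixels never reach Shade().
        if (RayHitsOccluder(gbuffer[i], light))
            continue;

        const Color c = Shade(gbuffer[i], light);
        frame[i].r += c.r;
        frame[i].g += c.g;
        frame[i].b += c.b;
    }
}
[/code]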
 
My guess is that we need roughly a 1 GHz CPU (P3- or Athlon-class megahertz) for each 6-10 million polygons/second with a DDR card in a typical game.

Probably JC decided to push mainly the GPU. I say mainly because the CPU will be pushed too. With 100,000 polygons per scene it is as much as UT2003 and U2.

One of the problems is that the lights are very good and make the polygons much more noticeable.
 
alexsok said:
Nagorak - well, the UT2k3 engine is nothing special, ya know... some improvements here and there, increased polycounts, huge outdoor areas (which is where that "100x more detail" comment of theirs comes from), etc...

Definitely not on par with DOOM III, but since it can handle outdoor areas very well and has some pretty modern features (cubemaps, etc...), I see a path of success for Epic here.

Actually, Unreal and Unreal Tournament only have polycounts in the range of 200-300 per scene, in terms of world polygons. Performance started to really fall fast after about 500 world polys/scene. The reason is simple: The older versions of the Unreal engine implemented a form of software HSR.

The main benefits of the new rendering engine in Unreal 2 include some major performance optimizations for high geometry throughput on modern T&L video cards. Basically, Tim Sweeney looked at what was made possible by the GeForce, and took a different path than John Carmack. Personally, I think JC made the right decisions for the type of game that he made, and TS made the right decisions for his game. You see, JC's game can run at low framerates, but dramatic use of graphics is a must. TS's game must run at high framerates, and good graphics is secondary.

If you ask me, Unreal Tournament 2k3 will actually look a fair bit better than DOOM3 in screenshots, but DOOM3 will look better in motion (where it really counts). In other words: anybody disappointed by DOOM3 screenshots should see the game in motion before judging.

Oh, and I truly believe that a multiplayer-focused game with the dramatic lighting effects of DOOM3 will not be feasible until about a year after DOOM3's release.
 
I'm not trying to kiss your ass, Chalnoth, but I agree that Doom 3 will look better in motion. In motion you will be able to see the lighting change on the environment and people. It will look real sweet.
 
gking said:
a SIGGRAPH paper this year on perspective shadow maps fixes this problem almost entirely, though

Yeah, I've just recently implemented perspective shadow maps in our engine; it's quite an improvement.

Also, there isn't a particularly good shadow map solution available (yet) for handling visible point lights.

Hmm, it seems that GF4Ti supports depth textures as cube textures...
(I haven't tried it yet.)

Shadow volumes are also the only way to get volumetric effects caused by occluders, without resorting to something really nasty like ray marching through a shadow map.

I agree.
I wonder if it should really be called shadow volumes in this case though.
 