Doom III, a step in the right direction...or not?

From what I have seen and heard, Doom III sounds like a legacy engine even before any games using it are released. I would expect ISVs licensing the engine to upgrade it, or at least take better advantage of shaders than id has done.
 
Ilfirin said:
Luckily there is percentage-closer filtering (PCF) that was specifically designed to take care of many of the aliasing artifacts. Good PCF wasn't possible at good enough frame-rates until DX9 shaders (well.. ps1.4 did it OK too).

NVidia hardware has had decent shadow-map hardware since the GF3. Not until PS_2_0 could ATI produce the same kind of quality (PCF and 24-bit precision), and ATI have to use a lot of shader instructions to do what is effectively free in the NVidia architecture.
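The "effectively free" filtering in question is percentage-closer filtering. A minimal 2x2 PCF sketch in plain Python (the function name and the toy shadow map are mine, purely illustrative): each of the four nearest shadow-map texels is compared against the receiver's depth, and the pass/fail results are averaged into a soft shadow factor.

```python
def pcf_2x2(shadow_map, x, y, receiver_depth):
    """Fraction of a 2x2 texel footprint that is lit (0.0 .. 1.0).

    shadow_map is a 2-D list of depths as seen from the light; a texel
    'passes' when the receiver is not further from the light than the
    stored depth.
    """
    hits = 0
    for dy in (0, 1):
        for dx in (0, 1):
            if receiver_depth <= shadow_map[y + dy][x + dx]:
                hits += 1
    return hits / 4.0

depths = [[0.9, 0.9],
          [0.3, 0.9]]                   # one texel has a closer occluder
factor = pcf_2x2(depths, 0, 0, 0.5)     # 0.75: a soft edge, not a hard 0/1
```

The point is that the hardware does these four compares and the averaging per lookup; doing the same in an ATI-style pixel shader costs several instructions and texture reads.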

AFAIK that's the reason ARB_fragment_program disallows combining shadow mapping with fragment programs; they could conflict on the ATI architecture.
 
Mintmaster said:
Have you heard of perspective shadow maps? They're pretty good at reducing aliasing (they're not perfect though), and can be done on today's hardware. Adaptive shadow maps seem pretty poorly suited to hardware implementation.

Perspective shadow maps have their own set of problems; they really don't handle objects behind the viewer at all well.

IMHO standard shadow mapping (or its forward variant) is the best DX9-level shadow mapping technique, especially if you can afford to use Woo's shadow mapping. I've never been a big fan of stencil shadow techniques (though up to now they have been the better choice for PC games).

There was some interesting stuff about new research on shadow volumes at Dusk To Dawn, potentially a new hardware extension to early-reject stencil-shadow fragments against the depth buffer. You tell the API the min and max range of depth values that can possibly affect the shadow. I don't think any hardware does it yet, mind.
 
Right, yes. I forgot about nVidia's native implementation of shadow maps w/ PCF (I haven't used native HW shadow mapping in a while - it can be pretty limiting and setting it up in DX is just a series of hacks, basically). Though I tend to achieve significantly better results doing the filtering manually anyway.

The comment about adaptive shadow maps was just idealistic wishful thinking. I know it probably won't happen at all, and if it does it will be too late to matter anyway :)
 
Ilfirin, how are you doing your depth-map shadows in DX?

Anyone implemented perspective shadow maps like the article at http://www-sop.inria.fr/reves/ (year 2002) talks about?

My geometry self-shadows using shadow volumes with finite projections, done with the HW T&L pipe, and the results are OK. It's quite fast on my GF2, recreating the shadows all the time without using the shadow outline. It could go much faster if I made my lights static, pre-calculated the shadow volumes, and extracted the shadow outline in a vertex shader. I'm using per-pixel point/spot lights. The one thing I haven't figured out yet is how to handle planar textures with transparent areas like tree leaves, walkways, fences, etc. Depth-map shadows handle this automatically. Maybe I have to resort to projective-texture shadow casters/receivers without self-shadowing casters, which is OK since the casters are planar. I wonder how JC does this.
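The finite projection mentioned above can be sketched in a few lines of Python (function name and the extrusion distance are my own, purely illustrative): each silhouette vertex is pushed a fixed distance away from a point light to form the shadow volume's far cap.

```python
def extrude_from_light(vertex, light_pos, distance=1000.0):
    """Return the shadow-volume far-cap vertex for `vertex`.

    The vertex is moved along the direction from the light through the
    vertex by a finite `distance` (rather than to infinity), which is
    what a fixed-function / HW T&L path can do easily.
    """
    dx = vertex[0] - light_pos[0]
    dy = vertex[1] - light_pos[1]
    dz = vertex[2] - light_pos[2]
    length = (dx * dx + dy * dy + dz * dz) ** 0.5
    scale = distance / length
    return (vertex[0] + dx * scale,
            vertex[1] + dy * scale,
            vertex[2] + dz * scale)
```

With static lights, this per-vertex work could be pre-computed or moved into a vertex shader, as the post suggests.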
 
JD said:
Anyone implemented perspective shadow maps like the article at http://www-sop.inria.fr/reves/ (year 2002) talks about?

I haven't yet read the article :oops:, but I did perspective shadow maps in our engine. (I heard the explanation from a friend, then reinvented the math for it.)

It has some problems with shadow precision falling off with distance. Fixing that needs a non-linear projection transformation, and I was too lazy to rewrite everything in the VS just for that feature.

The great thing about shadow maps is that they have a trivial non-self-shadowing fallback for older cards / lower detail.
(Objects only cast shadows on the ground.)
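That fallback amounts to classic planar projected shadows. A minimal sketch (names and the ground plane at y = 0 are my assumptions): project each caster vertex along the light direction onto the ground and draw the flattened geometry dark.

```python
def project_to_ground(p, light_dir):
    """Project point `p` onto the plane y = 0 along `light_dir`.

    Solves p.y + t * light_dir.y == 0 for t, then walks the point
    that far along the light direction. Assumes light_dir.y != 0,
    i.e. the light isn't parallel to the ground.
    """
    t = -p[1] / light_dir[1]
    return (p[0] + t * light_dir[0],
            0.0,
            p[2] + t * light_dir[2])

# A vertex 2 units up, lit from a 45-degree angle, lands 2 units over.
shadow_vertex = project_to_ground((0.0, 2.0, 0.0), (1.0, -1.0, 0.0))
```

No depth compare is involved, which is exactly why the fallback cannot self-shadow.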

Implementing shadow maps in "software" with PS1.4 / PS2.0 is still to come...
 
Ilfirin said:
Right, yes. I forgot about nVidia's native implementation of shadow maps w/ PCF (I haven't used native HW shadow mapping in a while - it can be pretty limiting and setting it up in DX is just a series of hacks, basically). Though I tend to achieve significantly better results doing the filtering manually anyway.

I think the filtering in nVidia's hardware is conceptually broken; that's why it requires slope-scaled depth bias.

It compares all 4 samples in the bilinear footprint with the same depth value, then interpolates the comparison results (0 or 1).
This causes jaggy patterns all over the place.
Point-sampling doesn't fix this as it still compares the wrong value.

What should be done is bilinear interpolation of the depth-texture value, comparing that with the computed depth value.
Unfortunately there's no bilinear support for floating-point textures, so one should use 16-bit fixed-point textures (which might be sufficient precision).

The alternative of implementing bilinear with 'frc'/'sub' instructions, 4 dependent texture reads and a couple of 'mul's isn't very appealing.
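The two sample orderings described above can be contrasted in a minimal 1-D, 2-tap Python sketch (all names and values are mine, purely illustrative): on a surface sloping away from the light, compare-then-filter reports partial self-shadowing where filter-then-compare correctly reports fully lit.

```python
def compare_then_filter(d0, d1, ref, t):
    """Hardware-PCF-style: compare each stored depth against the same
    reference depth, then interpolate the 0/1 comparison results."""
    s0 = 1.0 if ref <= d0 else 0.0
    s1 = 1.0 if ref <= d1 else 0.0
    return (1.0 - t) * s0 + t * s1

def filter_then_compare(d0, d1, ref, t):
    """Interpolate the stored depths first, then do a single compare."""
    d = (1.0 - t) * d0 + t * d1
    return 1.0 if ref <= d else 0.0

# A surface sloping away from the light: stored texel depths 0.40 and
# 0.50; halfway between the texel centres the receiver's depth is 0.45.
lit_a = compare_then_filter(0.40, 0.50, 0.45, 0.5)  # 0.5: half self-shadowed
lit_b = filter_then_compare(0.40, 0.50, 0.45, 0.5)  # 1.0: fully lit
```

The 0.5 from the first ordering is the "jaggy" false self-shadowing that the slope-scaled depth bias has to paper over.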
 
I thought reinventing the wheel is what the gaming industry is all about, so if you're doing that you're on the right track :D Besides, it's better that you wrote it and understood it. Who knows, you might invent something better or improve old tech in the process.

Anyway, I'm now confused w.r.t. shadow mapping. I know the GF3/4 have that hardware thingy where you have to create a special depth texture, render into it, then set up the rendering stages with the proper flags, and the HW will do it. I thought the same applied to ATI cards, but instead they're showing how it's done with a pixel shader. I assume this is what you refer to as software shadow mapping, even though the PS is executed on the card? You don't mean the reference rasterizer, do you? I haven't worked with pixel shaders; do you think the same can be done with PS 1.0-1.3 on NVidia hardware? Is there a way to do shadow maps w/o HW support in DX9 so I could run them on my GF2? Thanks.
 
JD said:
I thought reinventing the wheel is what the gaming industry is all about, so if you're doing that you're on the right track :D Besides, it's better that you wrote it and understood it. Who knows, you might invent something better or improve old tech in the process.

Reinventing wheel rulez. :)

Anyway, I'm now confused w.r.t. shadow mapping. I know the GF3/4 have that hardware thingy where you have to create a special depth texture, render into it, then set up the rendering stages with the proper flags, and the HW will do it.

Yep,

I thought the same applied to ATI cards, but instead they're showing how it's done with a pixel shader. I assume this is what you refer to as software shadow mapping, even though the PS is executed on the card?

Yes. A shader program is a piece of software after all, isn't it? :)

You don't mean the reference rasterizer, do you? I haven't worked with pixel shaders; do you think the same can be done with PS 1.0-1.3 on NVidia hardware?

The GF3/GF4 only have 8-bit precision while the R200 can do 12-bit, which is an important difference here.
(And 12-bit is still not enough.)
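A toy illustration of why those bit counts matter for depth compares (my own sketch, not vendor behaviour): the smallest depth difference an N-bit fixed-point value can resolve sets a floor on how finely shadow depths can be distinguished.

```python
def depth_step(bits):
    """Smallest representable depth increment for `bits` of fixed-point
    precision over a normalised [0, 1] depth range."""
    return 1.0 / ((1 << bits) - 1)

step8 = depth_step(8)    # ~0.0039 of the light's depth range
step12 = depth_step(12)  # ~0.00024, roughly 16x finer
```

Any two surfaces closer together (in light space) than the step cannot be told apart at all, which is why even 12 bits can be "still not enough".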

Is there a way to do shadow maps w/o HW support in DX9 so I could run them on my GF2? Thanks.

You could run the shader version on the refrast if you don't mind waiting 10-20 seconds for every rendered frame...
I wouldn't actually call that "running" it.
 
Hyp-X, OK, I'm now starting to understand the HW support for shadow maps. I posted and then saw your new post. So you use the PS to do the filtering of the depth samples to avoid aliasing, I think.

I only skimmed the perspective shadow mapping article, but it seems somewhat complicated. It's also view-dependent, thus remaking the depth maps every frame (possibly every 2-4 frames). I was just wondering if all that is worth it instead of shadow volumes. I wonder about speed as well, but most likely shadow mapping is faster in high-poly scenes. The shots in that article are pretty impressive to me, especially the fine detail near the viewpoint. Can anyone explain their point rendering?
 
Hyp-X, thanks for explaining the precision thingy and everything else. I've got to stop posting in real time because it confuses me and everyone else :) I made another post above this one at the same time you posted the one with the precision explanation. Whew, I'm going to take a break. Thanks again.
 
I'm confused again - is self-shadowing possible with shadow maps? If so, how difficult is it compared to stencil shadows, where self-shadowing is automatic?
 
I'm confused again - is self-shadowing possible with shadow maps? If so, how difficult is it compared to stencil shadows, where self-shadowing is automatic?

Self-shadowing is possible with shadow maps. It's probably easier than with stencil shadows.
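The reason it comes for free is that the shadow test is just a depth compare; a minimal sketch (name and the bias value are mine, purely illustrative):

```python
def in_shadow(fragment_depth, stored_depth, bias=0.005):
    """A fragment is in shadow when its light-space depth is further
    from the light than the depth the shadow map recorded, plus a
    small bias to avoid false self-shadowing ('shadow acne')."""
    return fragment_depth > stored_depth + bias
```

Since every fragment is tested, an object's own front faces shadow its back faces with no extra silhouette or volume geometry, which is the work stencil shadows have to do explicitly.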
 
DeanoC said:
Perspective shadow maps have their own set of problems; they really don't handle objects behind the viewer at all well.

The paper I read had a solution to this. All they did was use a new post-perspective space slightly different from the camera's, so that objects behind the camera don't get screwed up.

With this solution, the worst-case scenario is that your perspective shadow map becomes a traditional shadow map. Since the light source is behind the camera when this happens, that is actually the best case for a normal shadow map.
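The core of the technique can be sketched as a transform chain: map the world into the camera's post-perspective space, then build and sample an ordinary shadow map there, so near-camera objects get more shadow-map resolution. A pure-Python 4x4 sketch (all names are mine; the actual matrices depend on the scene and are omitted):

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a homogeneous 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(v):
    """Homogeneous divide (the perspective division)."""
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3], 1.0]

def psm_coords(world_pos, camera_vp, light_vp_post):
    """world -> camera post-perspective space -> light clip space.

    camera_vp is the camera's view-projection matrix; light_vp_post is
    the light's view-projection matrix built IN post-perspective space.
    """
    post = project(mat_vec(camera_vp, world_pos))
    return project(mat_vec(light_vp_post, post))
```

When the light ends up directly behind the camera, the post-perspective warp contributes nothing useful and the chain degenerates to an ordinary shadow map, as the post notes.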
 
Hyp-X said:
I think the filtering in nVidia's hardware is conceptually broken, that's why it requires slope-scaled depth bias.

It compares all 4 samples in the bilinear footprint with the same depth value, then interpolates the comparison results (0 or 1).
This causes jaggy patterns all over the place.

What you're describing is the very definition of the percentage-closer filtering (PCF) that everyone else is talking about here. It gives you a soft edge on the shadow. NVidia even extended it to more than 4 shadow-map samples (up to 256, I think), though I'm not sure of the specifics.

If you want to do PCF, you need slope-scaled depth bias.
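The bias in question grows with the surface's depth slope in shadow-map space, in the spirit of D3D's D3DRS_SLOPESCALEDEPTHBIAS render state. A minimal sketch (function name and default parameter values are mine, not recommendations):

```python
def slope_scaled_bias(dzdx, dzdy, constant_bias=0.0005, slope_scale=2.0):
    """Depth bias that scales with the maximum depth slope of the
    surface, so steeply sloped receivers (where compare-then-filter
    PCF is most error-prone) get a proportionally larger offset."""
    max_slope = max(abs(dzdx), abs(dzdy))
    return constant_bias + slope_scale * max_slope
```

A flat receiver gets only the small constant bias; the steeper the surface tilts away from the light, the more headroom the compare is given.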
 
JF_Aidan_Pryde said:
I would like to see System Shock 3 using the Doom3 engine.

Actually... for sanity's sake, I might not. :oops:

SS3 would probably be based on the DX2 (modified Unreal) engine, since Ion Storm owns the rights to that franchise and they've already put so much effort into it.
 
sabeehali said:
So what this means is that we should wait for Doom 4 or Quake 4, or maybe 5 (since Carmack said the next engine will be based on DX9), for better lighting/shadows AND polygon counts?

I really hope that after Doom3 they stop rehashing the same 2-3 ideas over and over (also, Doom and Quake seem to be merging somewhat :oops: ). Then again, Quake2 had pretty much nothing to do with Quake1... except maybe DM?

Still, they really need something fresh for their next game. That might do a lot to make their games seem more unique (how many times can you run through killing Doom monsters?) ;)
 
Nagorak said:
SS3 would probably be based on the DX2 (modified Unreal) engine, since Ion Storm owns the rights to that franchise and they've already put so much effort into it.

There is no System Shock 3.
Warren Spector was the designer of System Shock 1 and he's at ION Storm now, but I believe the copyrights to System Shock are spread across various existent and non-existent companies.
 
BNA! said:
Nagorak said:
SS3 would probably be based on the DX2 (modified Unreal) engine, since Ion Storm owns the rights to that franchise and they've already put so much effort into it.

There is no System Shock 3.
Warren Spector was the designer of System Shock 1 and he's at ION Storm now, but I believe the copyrights to System Shock are spread across various existent and non-existent companies.

Well, I know...but hypothetically it would be them who made it.
 