PC polygon performance

Anonymous

Veteran
I'm sorry I've "pushed" (spammed?) this subject in other forums like nvnews.net, but I want your opinion as well.

Is the polygon performance of the PC platform in bad shape, i.e. too low in comparison to the consoles?

Before you answer please take a look at this:
http://217.8.136.112/root/pix/Doom3/fatty.jpg (edgy heads)
http://217.8.136.112/root/pix/Doom3/Doom3_05-01.jpg (fingers and head)
http://217.8.136.112/root/pix/Doom3/handy_smurf.jpg (fingers and thighs)
http://217.8.136.112/root/pix/Doom3/shot0233.jpg (pipes in the ceiling)

Here's a quick blurb I've written:
http://217.8.136.112/pc_low_poly.html

Am I onto something? Should we mention that Quake 3 mainly pushes the CPU (it scales very well) and the fill rate of the 3D card when presenting benchmarks from it?
 
You have to remember that the way Doom III works results in the same geometry being submitted multiple times. So a scene with maybe 20K polys can turn into something with over 100K simply because of how the lighting model works (1 initial depth and ambient pass, then x passes per light, depending on the HW capabilities). Also, do not underestimate the non-visible polygons, like the stencil polygons.
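
To make that arithmetic concrete, here is a toy calculation; the light count and passes-per-light figures are assumptions for illustration, and the stencil geometry mentioned above isn't counted:

Code:
#include <cstdio>

int main() {
    const int scenePolys     = 20000; // polygons visible in the scene
    const int depthAmbient   = 1;     // the initial depth + ambient pass
    const int visibleLights  = 4;     // assumed number of lights hitting the scene
    const int passesPerLight = 1;     // depends on the HW capabilities

    // Every pass re-submits (roughly) the same geometry.
    const int submitted = scenePolys * (depthAmbient + visibleLights * passesPerLight);
    std::printf("polygons submitted per frame: %d\n", submitted); // prints 100000
    return 0;
}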

You mention HOS, but neither the GF4 nor the Radeon 8500/9700 exposes true HOS in the form of RTPatches (per the D3D caps), just N-Patches with all their limitations and problems. Are you sure that people are using these on the XBOX? Do you have any links that indicate this?

K-
 
Doom 3 purposely sacrifices polycount in order to produce its nice lighting effects. It will have a low triangle count in comparison to other PC games out at the same time.

For example, if you want to look at some decently-high polycounts, take a look at Unreal Tournament 2003.

Another game that's out right now that has pretty good polycounts is Morrowind.
 
Carmack himself said that DOOM III pushes around 150k polygons, compared to 10k in Q3A (his words), and that's without taking into account the dynamic lighting & shadowing (which is the reason the sacrifice was made, as the guys here pointed out).
 
Take a look at this:

(attached image: pcvscon.gif)


http://www.cs.brown.edu/~tor/sig2002/ea-shader.pdf

Thomas
 
Doom 3 looks like it needs N-Patches :). Still not sure why J.C. has an issue with N-Patches and shadows. Is it that the shadows will still reflect the blocky head? Or will they just not work?

In any case, it seems like Doom 3 really slows down the frame rate with all the lighting it does.
 
The big problem is said to be that the shadow volumes will no longer fit the models. Now, unless I am mistaken, N-patches just work by interpolating the normals across the edges. Can anyone tell me why it would be a problem if the shadow volumes also had N-patches applied, using the same normal across the entire extruded section?

Now, the only issue I can think of is if the N-patches cause a convex join between triangles to become concave. Of course, I have a feeling there is something that I'm missing.
 
noko said:
Doom 3 looks like it needs N-Patches :). Still not sure why J.C. has an issue with N-Patches and shadows. Is it that the shadows will still reflect the blocky head? Or will they just not work?

In any case, it seems like Doom 3 really slows down the frame rate with all the lighting it does.

TruForm is not an option, because the calculated shadow silhouettes would no longer be correct.
 
We'd need hardware generated shadow volumes.

But it's not likely to happen.

'Cos hardware is going the shadow buffer way instead.

So no matter how great an engine writer JC is, what he does in Doom3 might actually turn out to be a dead end.

He can tell his artists not to make trees, because the engine doesn't work well with them, but how are they supposed to sell this engine to third-party developers? ;)

Ok, I know. Not all games require trees :D
 
Hyp-X - well, like Carmack said, id has already had a couple of proposals for licensing the engine, but his concern when writing the engine from the start wasn't to pick features so that the developers who license it would be happy, but to make it suit what the artists & level designers were asking for.

Rest assured, id won't have any problems licensing the DOOM III engine, you can be sure of that! :D
 
Hyp-X, wouldn't the P-10 be able to generate shadow volumes locally? Or is AGP bandwidth and then local storage too limiting?
 
Saem said:
Hyp-X, wouldn't the P-10 be able to generate shadow volumes locally?

I'm not totally aware of all the P10 features.
Can it create or delete vertices/triangles in the vertex shader?
If it does - then it can.

Saem said:
Or is AGP bandwidth and then local storage too limiting?

I don't think either of them has anything to do with its ability.
 
I doubt the P10 could do it, because you need to put the contour finding at a point in the gfx pipeline where you don't have any programmability: just before triangle setup, while you still have the vertex indices.

[Edit]

There is of course another way to generate the shadow volume. Nvidia has a demo where vertices whose normal points away from the light source are stretched away. This forms a shadow volume directly in the vertex shader, without finding the contour. Too bad that the shadow volume is incorrect at the contour. But maybe it's good enough as long as you have a highly tessellated model.
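
A minimal sketch of the per-vertex logic being described, written as plain C++ for readability (on the card it would live in the vertex shader). The point light, the fixed extrusion distance and all the names are assumptions for illustration, not taken from the NVIDIA demo:

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Vertices whose normal points away from the light get pushed along the
// light-to-vertex direction; the rest stay put. The silhouette itself is
// never found, which is why the volume is only approximate at the contour.
Vec3 extrudeIfBackfacing(Vec3 position, Vec3 normal, Vec3 lightPos, float extrudeDist)
{
    Vec3 toVertex = normalize(sub(position, lightPos));
    if (dot(normal, toVertex) > 0.0f) {
        return {position.x + toVertex.x * extrudeDist,
                position.y + toVertex.y * extrudeDist,
                position.z + toVertex.z * extrudeDist};
    }
    return position;
}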

Another problem with stencil shadows on high-poly models is that the contour will consist of many small segments, which means the shadow volume will consist of lots of very long, thin triangles. And that makes the rasterization less efficient.
 
Hyp-X, wouldn't the P-10 be able to generate shadow volumes locally?

No. To generate shadow volumes, you need information about entire primitives (rather than just vertices), and to generate efficient shadow volumes, you want to be able to loop back through the entire data set multiple times to remove (or insert) degenerate triangles and weld coplanar triangles.

Or is AGP bandwidth and then local storage too limiting?

Well, even if it were possible, since there would be no way to store the generated silhouette primitives, the silhouettes would need to be recalculated every frame. This is a waste of resources, since the silhouette could just be precalculated once on the CPU.

anyone tell me why it would be a problem if the shadow volumes also had Npatches applied using the same normal across the entire extruded section?

In order for N-Patches to work correctly on the silhouette, the normals on the back of the silhouette would have to not be backfacing.

And the caps would pose an additional problem, potentially a huge one if they intersect the near clipping plane (or the far clipping plane, if using Z-fail shadows).
 
Some more on finding contours:

The way I can see to find the contour would be to merge the triangles into a mesh with 2D connectivity, and then find the edges in this mesh where the surface normals change from facing to not facing the light. It could be done pretty much on the fly with some caching of edge info, and with a fallback for when the mesh is complex enough to flood the cached mesh info. The rendering would consist of one stage that renders the model's polys (where polys that don't face the light are moved away from it) and outputs a list of vertices for the contour, and a second stage that adds "contour polys".

It can't be done in the vertex shader, since it needs surface normals, and those are an unknown quantity in the vertex shader.
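
For what it's worth, here's a minimal CPU-side sketch of that contour finding, assuming a closed, indexed triangle mesh and a point light. All names and the data layout are illustrative, not from Doom 3 or any shipping engine; the connectivity map built in step 2 is the part you'd want to cache per mesh:

Code:
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Edge { uint32_t v0, v1; };  // one silhouette edge, as vertex indices

// Returns every edge shared by one light-facing and one non-light-facing triangle.
std::vector<Edge> findSilhouette(const std::vector<Vec3>& verts,
                                 const std::vector<uint32_t>& indices,
                                 Vec3 lightPos)
{
    // 1. Classify each triangle: does its surface normal face the light?
    const size_t triCount = indices.size() / 3;
    std::vector<bool> facesLight(triCount);
    for (size_t t = 0; t < triCount; ++t) {
        Vec3 a = verts[indices[3 * t]];
        Vec3 b = verts[indices[3 * t + 1]];
        Vec3 c = verts[indices[3 * t + 2]];
        Vec3 normal = cross(sub(b, a), sub(c, a));
        facesLight[t] = dot(normal, sub(lightPos, a)) > 0.0f;
    }

    // 2. Build the 2D connectivity: each undirected edge maps to the triangles
    //    sharing it. This is the edge info worth caching per mesh.
    std::map<std::pair<uint32_t, uint32_t>, std::vector<size_t>> edgeToTris;
    for (size_t t = 0; t < triCount; ++t) {
        for (int e = 0; e < 3; ++e) {
            uint32_t i0 = indices[3 * t + e];
            uint32_t i1 = indices[3 * t + (e + 1) % 3];
            if (i0 > i1) std::swap(i0, i1);
            edgeToTris[{i0, i1}].push_back(t);
        }
    }

    // 3. The contour is every edge where the facing flag flips.
    std::vector<Edge> silhouette;
    for (const auto& [edge, tris] : edgeToTris) {
        if (tris.size() == 2 && facesLight[tris[0]] != facesLight[tris[1]])
            silhouette.push_back({edge.first, edge.second});
    }
    return silhouette;
}

The edges it returns are what the extruded "contour polys" of the second stage would be built from.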
 
BTW, there were problems mentioned with trees and tree-like objects. Is this due to the rather intense amount of calculation needed to obtain the necessary shadow volumes, plus the very significant fill rate needed to get all of that into the scene? Or is it something entirely different?
 