If Doom 3 had been written in D3D....

Scali said:
Since you can process the mesh hierarchically for collision, you still don't need to skin the entire mesh.

Thank you for reiterating my point.

Besides, why would you only skin on the GPU when no triangle collision is needed?

I stated this in the context of taking only collision into account. The only time you'd want the CPU to perform the skinning is if you are going to do triangle-level collision on the mesh, and then only if collision is all you care about.

You can do the collision on the CPU and still do the complete skinning on the GPU.

True enough, but you will still end up doing some skinning on the CPU. Again, this holds true only if you are considering collision and rendering without lighting/shading.

Yes, but why would you want to do this in the first place?
You can simply do brute-force extrusion like Battle Of Proxycon, which works on vs1.1, and is very fast anyway.
As I said before, if the naive way is faster, do the naive way.

Why would you indeed? But again, you talk about extrusion; what of silhouette generation? Or by "brute force" do you mean a method that bypasses the need for silhouette generation? Could you please explain?
 
Proxycon uses degenerate edges in the model and extrudes them in the shader. The trade-off is more overdraw and more vertex work for less CPU usage. There's a paper on NVIDIA's site somewhere that describes the technique.

Proxycon is designed to be GPU-limited, so I'm sure it makes sense for 3DMark, but this isn't a technique I'd choose for a game that has to run adequately on low-end video cards.

Proxycon is also clearly a benchmark and not a game, and as such not directly comparable to D3, where trade-offs have to be made to accommodate minimum-spec and high-end machines. I doubt Carmack is getting bent out of shape by someone who doesn't understand the trade-offs criticising his decisions, so I'd just forget the argument... it's not worth it.
 
I stated this in the context of taking only collision into account. The only time you'd want the CPU to perform the skinning is if you are going to do triangle-level collision on the mesh, and then only if collision is all you care about.

From what I understood, you wanted to skin an entire mesh on the CPU if there was a collision with that mesh, and use the result for rendering as well.
I don't consider it "the CPU performing the skinning" if it only skins a handful of triangles that are actually in the collision area.
It doesn't perform 'the skinning'; it performs 'some skinning', which cannot be avoided. This, however, is obviously much less work than skinning the entire mesh, or the entire scene.
I guess you meant the same thing, but you formulated it in a way that made it sound like something else.

True enough, but you will still end up doing some skinning on the CPU. Again, this holds true only if you are considering collision and rendering without lighting/shading.

Again, this is what I mean. You do 'some skinning': only the positions, and perhaps the surface normals, of the possibly colliding triangles.
For rendering you will also need to skin the tangent and binormal vectors, and perhaps even more. However you put it, collision never requires the same amount of skinning work as rendering does. So offloading at least the work that is not required for collision to the GPU sounds like a good idea to me. A quick sketch of what I mean follows below.
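
To make it concrete, here is a minimal sketch of skinning only the positions of the candidate triangles on the CPU. This is my own illustration, not code from any actual engine; all the types, names and the data layout are assumptions:

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mat4x4 {
    float m[4][4];
    Vec3 transformPoint(const Vec3& p) const {
        return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                 m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                 m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
    }
};

struct SkinnedVertex {
    Vec3    position;       // bind-pose position
    uint8_t boneIndex[4];   // up to four influencing bones
    float   boneWeight[4];  // blend weights, summing to 1
};

// Skin only the vertices of triangles that survived a coarse
// (e.g. hierarchical bounding-volume) collision test.
void skinCollisionCandidates(const std::vector<SkinnedVertex>& verts,
                             const std::vector<Mat4x4>& boneMatrices,
                             const std::vector<uint32_t>& indices,
                             const std::vector<uint32_t>& candidateTris,
                             std::vector<Vec3>& outPositions)
{
    for (uint32_t tri : candidateTris) {
        for (int corner = 0; corner < 3; ++corner) {
            const SkinnedVertex& v = verts[indices[tri * 3 + corner]];
            Vec3 skinned = { 0.0f, 0.0f, 0.0f };
            for (int i = 0; i < 4; ++i) {   // blend the bone transforms
                Vec3 p = boneMatrices[v.boneIndex[i]].transformPoint(v.position);
                skinned.x += p.x * v.boneWeight[i];
                skinned.y += p.y * v.boneWeight[i];
                skinned.z += p.z * v.boneWeight[i];
            }
            outPositions.push_back(skinned);
        }
    }
    // Exact triangle-level tests then run against outPositions only;
    // tangents and binormals are never needed here, so those stay on
    // the GPU for rendering.
}

Vertices shared between candidate triangles get skinned more than once here; a real implementation would cache them, but either way the work is bounded by the collision area, not by the mesh.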

Why would you indeed? But again, you talk about extrusion; what of silhouette generation? Or by "brute force" do you mean a method that bypasses the need for silhouette generation? Could you please explain?

I thought it was common knowledge that you can simply insert degenerate quads for each edge into a mesh, and extrude only the vertices that belong to backfaces? This way you implicitly get near and far caps as well, in a single call (required for zfail volumes), because the original mesh takes care of that... Its backfaces get extruded, so they are projected away, forming the far cap... The frontfaces are untouched, forming the near cap.
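
For reference, this is roughly the per-vertex decision such an extrusion shader makes, written out in C++ as a sketch. The names are mine, and I use a finite extrude distance where a shader might instead project to infinity with w = 0:

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// position and faceNormal are in model space; each vertex carries the
// normal of the face it belongs to. lightPos is the point-light position,
// passed in as a shader constant.
Vec3 extrudeVertex(const Vec3& position, const Vec3& faceNormal,
                   const Vec3& lightPos, float extrudeDistance)
{
    Vec3 toVertex = { position.x - lightPos.x,
                      position.y - lightPos.y,
                      position.z - lightPos.z };

    if (dot(faceNormal, toVertex) > 0.0f) {
        // The face points away from the light: push the vertex out along
        // the light ray. The extruded backfaces form the far cap.
        float scale = extrudeDistance / std::sqrt(dot(toVertex, toVertex));
        return { position.x + toVertex.x * scale,
                 position.y + toVertex.y * scale,
                 position.z + toVertex.z * scale };
    }
    // The face points towards the light: leave the vertex alone. The
    // untouched frontfaces form the near cap, so zfail needs no extra pass.
    return position;
}

Since a degenerate quad along an edge carries the face normal of one adjacent face on each side, a quad sitting between a lit face and an unlit face automatically stretches out into a side of the shadow volume.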
 
Proxycon uses degenerate edges in the model and extrudes them in the shader. The trade-off is more overdraw and more vertex work for less CPU usage. There's a paper on NVIDIA's site somewhere that describes the technique.

Yes, and looking at the performance of Proxycon, you can clearly get away with this technique quite easily on a 9700 or better.

Proxycon is designed to be GPU-limited, so I'm sure it makes sense for 3DMark, but this isn't a technique I'd choose for a game that has to run adequately on low-end video cards.

Now, if you agree with me that a 9700 is not a low-end card... what method would you choose?
I agree that for first-generation shader cards this is probably not the best approach (depending also on polycount, of course), and the CPU approach, while slow, would be less slow than the GPU approach in this case. But I would put in a GPU path for the faster cards, since it relieves the CPU, giving higher framerates and allowing higher polycounts.

So far I have used two paths... a CPU-based path for non-shader cards, like GF1/2, and a vs1.1 path. For some early shader cards the CPU path would be better, I suppose, so that can be used; it already exists anyway.
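
Choosing between the paths can be as simple as a caps check at startup. A minimal sketch, assuming D3D9-style caps; the enum, function name and threshold are mine, not actual engine code:

#include <d3d9.h>

enum ShadowPath { PATH_CPU, PATH_VS11 };

ShadowPath chooseShadowPath(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    // Non-shader cards (GF1/GF2 class) get CPU extrusion; anything
    // exposing vs1.1 or better takes the shader path. An engine could
    // also weigh polycount here, since some early shader cards may
    // still favour the CPU path.
    if (caps.VertexShaderVersion >= D3DVS_VERSION(1, 1))
        return PATH_VS11;
    return PATH_CPU;
}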

Proxycon is also clearly a benchmark and not a game, and as such not directly comparable to D3, where trade-offs have to be made to accommodate minimum-spec and high-end machines.

That is the problem... why this trade-off? Why do I need to look at 6-sided cans of soda on a 4 GHz system with a 6800U because someone wants to play it on his 1.5 GHz with a GF2?
There is only one set of geometry, and only one approach to rendering the shadows.
That doesn't make the engine very future-proof, if you ask me. To me it already looks outdated.


I doubt Carmack is getting bent out of shape by someone who doesn't understand the trade-offs criticising his decisions, so I'd just forget the argument... it's not worth it.

No, I don't understand why he doesn't support faster hardware, since he wants to license this engine for the next 5 years or so.
I think he is going to lose a lot of customers, because they want higher polycounts and less CPU-heavy geometry processing. If Doom3 doesn't make that possible, but HL2 or UE3 does, the choice is easy.
 
Scali, the way the PC market works, the big bucks in the next 3 years are going to be made, on average, by selling to people whose computer specs max out around a Radeon 9000-9600 or less and an Athlon XP 2100-2500 (with the occasional cheap PC that has a "very new CPU and slow everything else in the system"), plus about 256 MB of RAM.

What about after 3 years? Id will have their next engine ready to license.
 