GeForce FX and displacement mapping?

About skinning and HOS: in offline rendering, we only perform the skinning on the control points of the higher-order surface, whether it's NURBS, Bezier patches or subdivision surfaces. We then let the tessellation algorithm treat the surface as if it were static - it does not have to be aware that the model is subject to any kind of deformation.
With enough control points, it usually looks perfectly right and is a lot faster, as well as easier to set up and animate, since you only have to deal with a relatively simple model. Although some guys I know here have their control meshes for the subdiv surface at 100,000 polygons... :))
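
(For illustration, here is a rough CPU-side sketch of that idea - the types and the skinControlCage name are made up, not taken from any particular package. The point is that only the small control cage is skinned, and the tessellator never sees the bones:)

// Rough sketch: matrix-palette skinning applied to the control cage only,
// before tessellation. Types and names are illustrative.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 bone matrix (rotation + translation), row-major.
struct Mat3x4 {
    float m[3][4];
    Vec3 transform(const Vec3& p) const {
        return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                 m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                 m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
    }
};

struct ControlPoint {
    Vec3  pos;
    int   bone[4];    // palette indices
    float weight[4];  // blend weights, sum to 1
};

// Skin only the (relatively small) control cage; the tessellator then
// treats the resulting positions as a static mesh.
std::vector<Vec3> skinControlCage(const std::vector<ControlPoint>& cage,
                                  const std::vector<Mat3x4>& palette)
{
    std::vector<Vec3> out(cage.size());
    for (std::size_t i = 0; i < cage.size(); ++i) {
        Vec3 p = {0.0f, 0.0f, 0.0f};
        for (int j = 0; j < 4; ++j) {
            if (cage[i].weight[j] == 0.0f) continue;
            Vec3 t = palette[cage[i].bone[j]].transform(cage[i].pos);
            p.x += cage[i].weight[j] * t.x;
            p.y += cage[i].weight[j] * t.y;
            p.z += cage[i].weight[j] * t.z;
        }
        out[i] = p;
    }
    return out;  // feed this to the tessellator as if it were static geometry
}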
 
arjan:
OK, I did misunderstand you.
But how would the VS know the matrix indices?

Equal to "slot number"?
Then you either need to have slots for all the matrices used in the mesh, or you need to switch matrices between base triangles.

In a VS constant?
Better, but you still need to change this constant between base triangles.

I think that changing constants between base polys would give too high a performance hit.


Actually, when I think of it, I suspect that nothing at all should be transformed before (N-patch) tessellation, even with matrix palette skinning.
But I agree with what you said first. The normals of the control points are needed as input to the tessellation, and they aren't known until after the T&L. How would it be possible to avoid transforming anything before tessellation?
 
About having skin sliding on the muscle tissue.

I haven't thought much about it in games. Probably because there are enough other errors to think about. But I've seen places where it was rather disturbing.

The Kaya movie had some sequences where she moved her eyebrows up and down. And instead of the skin sliding, the whole bone under the eyebrow moved.
Well, that's at least something I can't do. :)
 
Basic said:
arjan:
OK, I did misunderstand you.
But how would the VS know the matrix indices?

Equal to "slot number"?
Then you either need to have slots for all the matrices used in the mesh, or you need to switch matrices between base triangles.

In a VS constant?
Better, but you still need to change this constant between base triangles.

I think that changing constants between base polys would give too high a performance hit.
For the setup I had in mind, there would need to be a new piece of hardware alongside the tessellator, for the specific purpose of collecting a set of unique matrix indices and computing the appropriate set of blend weights for each generated vertex in the N-patch. The same way that the tessellator itself supplies coordinates and normal vectors per vertex, this circuit would supply per-vertex blend indices and weights (although its exact operation would be quite different). As seen from the vertex shader itself, these data would appear in its input registers just like any other per-vertex input data.

When collecting the indices, you will need one slot for each matrix that affects the control vertices - for a total of no more than 3N slots in the worst case (when allowing N indices per vertex). There is no need to have a 1:1 mapping between the slots and the entire matrix palette or to change the matrix palette/vertex shader constants for each patch.

Actually, when I think of it, I suspect that nothing at all should be transformed before (N-patch) tessellation, even with matrix palette skinning.
But I agree with what you said first. The normals of the control points are needed as input to the tessellation, and they aren't known until after the T&L. How would it be possible to avoid transforming anything before tessellation?
The tessellation would normally work on an object-space representation of the object to tessellate - at that point, there is not yet a need to transform coordinates and normal vectors.
 
OK, new hardware, with a new "indexed stuff repacker/interpolator".
Yep, that should work.

The base mesh would have at most N {index, weight} tuples per vertex, just as many as needed for that vertex. The repacker makes a vector with at most 3N tuples, where the weights are interpolated and the indices constant. (Just to make everything as uniform as possible, it might be easier to interpolate everything; the repacker has already made sure that the indices are "interpolated" between two identical constants.) The final number of tuples is also supplied, so it's possible to do the right number of loops in the VS.
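
(A rough software model of that repacker, just to pin the idea down - the types, the barycentric scaling of the weights and the merging of duplicate indices are all my guess at its behaviour, not a description of any real unit:)

// Software model of the hypothetical "indexed stuff repacker".
// For a vertex generated at barycentric position (b0,b1,b2) inside a base
// triangle, each corner's {index, weight} tuples are scaled by that corner's
// barycentric coordinate; tuples sharing a matrix index are merged, so the
// output has at most 3*N tuples (N = tuples per base vertex).
#include <vector>

struct IndexWeight { int index; float weight; };

struct PackedInfluences {
    std::vector<IndexWeight> tuples;  // <= 3*N entries
    int count() const { return static_cast<int>(tuples.size()); }  // loop count for the VS
};

PackedInfluences repack(const std::vector<IndexWeight>& c0,
                        const std::vector<IndexWeight>& c1,
                        const std::vector<IndexWeight>& c2,
                        float b0, float b1, float b2)
{
    PackedInfluences out;
    const std::vector<IndexWeight>* corners[3] = { &c0, &c1, &c2 };
    const float bary[3] = { b0, b1, b2 };

    for (int c = 0; c < 3; ++c) {
        for (const IndexWeight& iw : *corners[c]) {
            float w = iw.weight * bary[c];
            if (w == 0.0f) continue;
            // Merge into an existing slot if this matrix index is already present.
            bool merged = false;
            for (IndexWeight& slot : out.tuples) {
                if (slot.index == iw.index) { slot.weight += w; merged = true; break; }
            }
            if (!merged) out.tuples.push_back({ iw.index, w });
        }
    }
    return out;  // weights still sum to 1 if each corner's weights did
}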

And an "indexed stuff repacker" could be useful for other things as well.

One question, though, is whether we could expect to get such a unit. It's rather special-purpose yet complex hardware, so I guess we'll need a programmable PPP.


But back to what I meant with my initial comment.
With the current hardware, it seems kinda messy to mix matrix palette skinning and TruForm (and, as an extension, DM). So I wonder how often it will be used.
I thought that UT2003 used that mix; does that mean that I have missed some easy trick, or have they actually used some of the methods mentioned here?

vogel:
Would you care to comment on that, or is it internal stuff that you can't talk about publicly?


Laa-Yosh:
Are you saying that matrix palette skinning (MPS) is useless in the long run? Or that we need MPS with some texture coordinate shifter to simulate "skin sliding", and tools to add "muscle bones" that simulate muscle movements?
I.e. an arm would consist of the actual bone with the joints, and then have an added "biceps bone" that is coupled to elbow angle and tension.



arjan de lumens said:
The tessellation would normally work on an object-space representation
Uhmm... DOH :oops: :D Too late, going to bed now.
 
Umm... I think tessellation should be done in camera/clip space, i.e. after transform, otherwise how do you do it adaptively based on depth value? A triangle's depth value isn't known before all three vertices are through the VS.
This way you wouldn't need to worry about indices or any other custom per-vertex component either, because VS outputs are limited to clip-space position, texcoords, color, fog and point size.
All of those can easily be interpolated/copied/perturbed in the tessellator, as specified by a "tessellator program".
Now when I think about it, if NV and ATI indeed have dedicated hardware to do HOS, have they really placed it in front of the VS in the pipeline? Between VS & PS sounds like a much more logical place.
 
Performing tessellation in clip space, after the perspective transform, will yield weird results, since the mapping/scaling between object space and clip space is not the same for each of the X, Y and Z axes. You risk getting artifacts such as bulges on objects that depend on camera location and view frustum size - which would look weird as hell.

ATI does tessellation in hardware before vertex shading, Nvidia is probably doing (emulating?) the same.

Adapting tessellation based on depth values can be done with two-pass vertex shading: first, you run each control vertex through the shader once to get the transform results - this would give you just the depth values needed for adaptive tessellation (with proper dead-code elimination, this should typically take about 2 to 10 vertex shader instructions per control vertex). Then you tessellate the polygon based on the collected depth values and pass the resulting vertices to the vertex shader again.
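
(For what it's worth, a small sketch of how the pass-1 depths could drive the per-patch tessellation level before the full vertex shader runs in pass 2 - the heuristic and the constants are entirely made up:)

#include <algorithm>
#include <cmath>
#include <vector>

// Pass 1 result: one depth value per control vertex, from the stripped-down,
// depth-only run of the vertex shader. This helper picks a per-patch
// tessellation level from those depths so that nearby patches get more
// triangles; the actual hardware/driver policy would differ.
int tessellationLevel(const std::vector<float>& controlDepths,
                      float nearPlane, int maxLevel)
{
    float nearest = *std::min_element(controlDepths.begin(), controlDepths.end());
    // Drop one level for each doubling of distance from the near plane.
    int level = maxLevel - static_cast<int>(std::log2(std::max(nearest / nearPlane, 1.0f)));
    return std::max(level, 1);
}
// Pass 2: the full vertex shader then runs on every vertex the tessellator
// generates at that level.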
 
arjan de lumens said:
Performing tessellation in clip space, after the perspective transform, will yield weird results, since the mapping/scaling between object space and clip space is not the same for each of the X, Y and Z axes. You risk getting artifacts such as bulges on objects that depend on camera location and view frustum size - which would look weird as hell.

Use homogeneous clipping and a rational representation instead then (it still saves you from having to transform each generated vertex; it just needs the perspective divide, so it cuts back on the feedback latency for adaptive tessellation).
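
(Spelling that out for a Bezier-style tensor patch, just as a sketch of my reading of the idea: transform only the control points P_{ij} by the full modelview-projection matrix M, tessellate the resulting 4-component patch, and do a single perspective divide per generated vertex:

\[ \tilde{S}(u,v) = \sum_i \sum_j B_i(u)\,B_j(v)\,\bigl(M P_{ij}\bigr) = \bigl(X(u,v),\,Y(u,v),\,Z(u,v),\,W(u,v)\bigr), \]
\[ S_{\mathrm{ndc}}(u,v) = \left(\frac{X(u,v)}{W(u,v)},\,\frac{Y(u,v)}{W(u,v)},\,\frac{Z(u,v)}{W(u,v)}\right), \]

i.e. the projected surface is a rational patch, and evaluating \tilde{S} plus one divide replaces a full matrix transform per generated vertex.)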
 
Which would probably work for vertex coordinates - how well does it work with normal vectors? You will need the normal vectors for per-pixel lighting since, when doing tessellation after vertex shading, you cannot rely on the vertex shader to do any meaningful lighting calculations.
 
Well, I was assuming you would use a sensible HOS representation from which the hardware would just determine the normal by itself (via partial derivatives; you can do this before the world-to-eye transform since it produces completely independent HOSs, which you can tessellate separately to the same depth as the main surface).

Marco

PS: if you want to use differences instead of differentials (so without the partial derivatives) to determine the normal, working in eye space has the slight drawback that you will have to perform an inverse transform to world space, of course - win some, lose some (but if you want to perform fully adaptive tessellation, lowering the feedback latency is probably more important).
 
The problem with processing a HOS in clip space, even with rational functions and partial derivatives, is that you still end up generating normal vectors in clip space, which then need to be reverse-transformed (and normalized) back into eye space to actually be useful for lighting.
 
The partial derivatives can be taken in world space... if we use tensor patches, the derivatives themselves are tensor patches; they can be determined analytically from the tensor representation of the original HOS (in whatever space you like).
 
I should have said polynomial tensor product patches actually, but it is such a mouthful... you might be more familiar with the specific representation using Bernstein-Bezier polynomials :)
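
(For reference, the standard formulas being alluded to, for a degree (n,m) Bezier tensor product patch - the u-partial is itself a Bezier patch of degree (n-1,m) in the difference control points, so it can be evaluated analytically at the same (u,v) samples, and the normal comes from the cross product of the two partials:

\[ S(u,v) = \sum_{i=0}^{n}\sum_{j=0}^{m} B_i^n(u)\,B_j^m(v)\,P_{ij}, \qquad \frac{\partial S}{\partial u}(u,v) = n \sum_{i=0}^{n-1}\sum_{j=0}^{m} B_i^{n-1}(u)\,B_j^m(v)\,\bigl(P_{i+1,j}-P_{i,j}\bigr), \]
\[ N(u,v) \propto \frac{\partial S}{\partial u} \times \frac{\partial S}{\partial v}. \])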
 
DemoCoder said:
Walt,
Adaptive tessellation is possible and displacement mapping is possible, but the two cannot be enabled at the same time, because the R300 doesn't support "true" displacement map sampling. It only supports a kind of displacement map sampling that doesn't work unless tessellation is turned off.

Pre-sampled displacement maps use the index of each vertex to find the sample. The developer has to line up the mesh and the map ahead of time, statically. Adaptive tessellation changes the number and index (position) of vertices in the stream, so they will no longer match up with the pre-sampled map; hence, tessellation has to be disabled. ATI's Flash presentation isn't developer documentation, it is a bullet-pointed feature list that does not list the limitations of their bullet points.

To do "real DM", the R300 would need a texture unit in the vertex shader. This texture unit would be able to convert vertex positions into interpolated texture coordinates and fetch displacement map values (possibly with at least bilinear filtering). I don't believe this unit exists.

Thanks much for the clarification, DM--I didn't realize it was an either-or situation. And, as the displacement map feature is under the vertex shader listings in the demo, and the adaptive tessellation remark I quoted is under the TruForm section--it's obvious the n-patch correlation I thought might exist does not in this case. And, right--I don't see evidence of a texture unit in the vertex shader. Thanks again for pointing this out, and I'll let this post serve as thanks to anyone else who may have taken the time to respond.
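
(To restate DemoCoder's distinction in code terms - a toy CPU-side sketch, with made-up names and a hand-rolled bilinear lookup standing in for the hypothetical vertex texture unit:)

// Toy contrast between the two kinds of displacement-map sampling discussed
// above. Names are illustrative only.
#include <cstddef>
#include <vector>

struct Vec2 { float u, v; };

// "Pre-sampled" DM: the map was resampled offline so that entry i lines up
// with vertex i of a fixed mesh. Breaks as soon as adaptive tessellation
// changes the number/order of vertices in the stream.
float presampledDisplacement(const std::vector<float>& presampled,
                             std::size_t vertexIndex)
{
    return presampled[vertexIndex];
}

// "Real" DM: a texture fetch in the vertex pipeline at an arbitrary (u,v),
// which works for vertices the tessellator has just generated. Shown here as
// a bilinear lookup into a width*height displacement texture, uv in [0,1].
float sampledDisplacement(const std::vector<float>& texels,
                          int width, int height, Vec2 uv)
{
    float x = uv.u * (width  - 1);
    float y = uv.v * (height - 1);
    int   x0 = static_cast<int>(x), y0 = static_cast<int>(y);
    int   x1 = (x0 + 1 < width)  ? x0 + 1 : x0;
    int   y1 = (y0 + 1 < height) ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;
    float a = texels[y0 * width + x0] * (1 - fx) + texels[y0 * width + x1] * fx;
    float b = texels[y1 * width + x0] * (1 - fx) + texels[y1 * width + x1] * fx;
    return a * (1 - fy) + b * fy;
}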
 
arjan de lumens said:
Performing tessellation in clip space, after the perspective transform, will yield weird results...
Hm, yes, I see that when tessellating after projection, things could get messy. In eye space though (after the model->world and world->eye transforms) there shouldn't be any problem. But we have the projection matrix in the vertex shader, often concatenated with the modelview.
Doesn't sound like there is any "clean" solution. When tessellating before the vertex shader, you get into the problem of how to deal with data in vertices that is not meant for interpolation (do you copy it for new vertices, interpolate linearly, or..?). Also, is it desirable to run vertex shaders on all tessellated polys?
Tessellation in eye space would be "cleanest", but then we have to have a separate unit doing the projection, further complicating things.
Tessellation in clip space is problematic because of the "nonlinearity".

Well... GL2 doesn't even touch on the tessellation subject anywhere in the pipeline, so obviously graphics HW will not do any DM or HOS for the next ten years :-?
 
Basic said:
Laa-Yosh:
Are you saying that matrix palette skinning (MPS) is useless in the long run? Or that we need MPS with some texture coordinate shifter to simulate "skin sliding", and tools to add "muscle bones" that simulate muscle movements?
I.e. an arm would consist of the actual bone with the joints, and then have an added "biceps bone" that is coupled to elbow angle and tension.

Yes, I think MPS will have to be replaced.
The problem is that in reality, skin does not simply rotate or translate around joints; it is pushed and pulled in all kinds of directions by the underlying muscles. Thus, it cannot be simulated by simply letting some bones transform it.
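
(For reference, this is the standard matrix palette / linear blend skinning being discussed: each skin vertex is a fixed weighted blend of a handful of rigid bone transforms,

\[ v' = \sum_i w_i\, M_i\, v, \qquad \sum_i w_i = 1, \]

with M_i the palette matrices and w_i the per-vertex blend weights - which is exactly why, on its own, it can only rotate/translate the skin around joints.)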

'Muscle bones' have been a trick used for many years now - that is, inserting extra bones to modify the skinning. You could scale such bones as well, to simulate muscle flexing, but the whole thing is both too complicated to set up and not realistic enough to be worth it. Dragging around texture coordinates might help, but once again you shouldn't force artists / technical directors to mess around with it.

The already existing solution is to do real muscle simulation. I'm not familiar with the mathematical background, but I'm sure it's not too complex, as some of these systems can already run in real time on x86 CPUs. In this case you treat the muscle as a real object, with volume, maybe as a polygonal mesh. Then the skin would be wrapped around this complex system of muscles and pushed and pulled by it... Can't explain it any better, but here's an article:
http://www.3dfestival.com/story.php?story_id=399
 
Hate to bring this one up again, since Dave is going to ask Nvidia about their DM.
Anyway, just was made aware of this from NvNews:
Reactor Critical previously posted a story about NVIDIA's GeForce FX lacking Displacement Mapping support, which was quite surprising as it's one of the requirements for a DX9 capable card. Jason from Computer Games Magazine contacted NVIDIA about it and this is what they had to say:
GeForce FX supports several types of displacement mapping. The defining features of DX9 are vertex and pixel shader 2.0 (and beyond), GeForce FX has the most complete support for DX9 vertex and pixel shaders, and is clearly a DX9 GPU.

I believe that statement ends the debate... Heh.
http://www.nvnews.net/#1038318858

So if they support "real" DM, why are they moving the conversation towards PS/VS...
 
It will probably turn out to be the same as ATI's DM, and/or

o use vertex shader + D-map stored in constant registers
o use render-to-vertexbuffer
 
As I've mentioned earlier, neither the R300 nor the NV30 has 'real' HW displacement mapping, unlike the Parhelia.
 