GeForceFX and displacement mapping?

Discussion in 'Architecture and Products' started by tb, Nov 19, 2002.

  1. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    About skinning and HOS: in offline rendering, we only perform the skinning on the control points for the higher order surface, whether it's NURBS, Bezier patches or subdivision surfaces. We then let the tessellation algorithm treat the surface as if it were static - it does not have to be aware that the model is subject to any kind of deformations.
    With enough control points, it usually looks perfectly right and is a lot faster, easier to set up and animate as well, as you only have to deal with a relatively simple model. Although some guys I know here have their control meshes for the subdiv surface at 100,000 polygons... :))
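    The per-control-point step described above is just standard linear blend skinning; a minimal sketch in Python/NumPy (the helper name skin_point is made up for illustration):

```python
import numpy as np

def skin_point(p, bone_matrices, indices, weights):
    """Linear blend skinning of a single control point: transform it by
    each referenced bone matrix and blend the results by the weights."""
    p_h = np.append(p, 1.0)  # homogeneous coordinate
    blended = sum(w * (bone_matrices[i] @ p_h) for i, w in zip(indices, weights))
    return blended[:3]

# Two bones: identity, and a translation of 2 units along x.
bones = [np.eye(4), np.eye(4)]
bones[1][0, 3] = 2.0

# A control point weighted 50/50 between the bones moves 1 unit along x;
# the tessellator then treats the skinned control cage as a static surface.
print(skin_point(np.zeros(3), bones, [0, 1], [0.5, 0.5]))  # -> [1. 0. 0.]
```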
     
  2. Basic

    Regular

    Joined:
    Feb 8, 2002
    Messages:
    846
    Likes Received:
    13
    Location:
    Linköping, Sweden
    arjan:
    OK, I did misunderstand you.
    But how would the VS know the matrix indices?

    Equal to "slot number"?
    Then you either need to have slots for all the matrices used in the mesh, or you need to switch matrices between base triangles.

    In a VS constant?
    Better, but you still need to change this constant between base triangles.

    I think that changing constants between base polys would give too high a performance hit.


    But I agree with what you said first. The normals of the control points are needed as input to the tessellation, and they aren't known until after the T&L. How would it be possible to avoid transforming anything before tessellation?
     
  3. Basic

    Regular

    Joined:
    Feb 8, 2002
    Messages:
    846
    Likes Received:
    13
    Location:
    Linköping, Sweden
    About having skin sliding on the muscle tissue.

    I haven't thought much about it in games. Probably because there are enough other errors to think about. But I've seen places where it was rather disturbing.

    The Kaya movie had some sequences where she moved her eyebrows up and down. And instead of the skin sliding, she moved the whole bone under the eyebrow.
    Well, that's at least something I can't do. :)
     
  4. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    For the setup I had in mind, there would need to be a new piece of hardware alongside the tessellator for the specific purpose of collecting a set of unique matrix indices for use with each generated vertex in the N-patch and compute the appropriate set of blend weights for each generated vertex. The same way that the tessellator itself supplies coordinates and normal vectors per vertex, this circuit would supply per-vertex blend indices and weights (although its exact operation would be quite different). As seen from the vertex shader itself, these data would appear in its input registers just like any other per-vertex input data.

    When collecting the indices, you will need one slot for each matrix that affects the control vertices - for a total of no more than 3N slots in the worst case, since a patch has three control vertices with up to N indices each. There is no need to have a 1:1 mapping between the slots and the entire matrix palette or to change the matrix palette/vertex shader constants for each patch.

    The tessellation would normally work on an object-space representation of the object to tessellate - at that point, there is not yet a need to transform coordinates and normal vectors.
     
  5. Basic

    Regular

    Joined:
    Feb 8, 2002
    Messages:
    846
    Likes Received:
    13
    Location:
    Linköping, Sweden
    OK, new hardware, with a new "indexed stuff repacker/interpolator".
    Yep that should work.

    Base mesh would have at most N {index, weight} tuples per vertex, just as many as needed for that vertex. The repacker makes a vector with at most 3N tuples, where the weights are interpolated and the indices constant. (Just to make everything as uniform as possible, it might be easier to interpolate everything. The repacker has already made sure that the indices are "interpolated" between two identical constants.) The final number of tuples is also supplied, so it's possible to do the right number of loops in the VS.
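    A toy sketch of such a repacker in Python (names and data layout invented for illustration): collect the union of bone indices over the patch's control vertices into one common slot layout, pad missing weights with zero, and then plain linear interpolation of the weight vectors is safe because every vertex agrees on which slot holds which bone:

```python
def repack(control_vertices):
    """Collect the union of bone indices used by a patch's control
    vertices into one slot layout, and expand each vertex's weights
    into that layout (weight 0.0 where a bone has no influence)."""
    slots = sorted(set().union(*control_vertices))
    expanded = [[cv.get(i, 0.0) for i in slots] for cv in control_vertices]
    return slots, expanded

def lerp(wa, wb, t):
    """With identical index slots everywhere, weights interpolate linearly."""
    return [(1 - t) * a + t * b for a, b in zip(wa, wb)]

# Three control vertices with up to 2 bones each -> at most 3*2 = 6 slots.
slots, (wa, wb, wc) = repack([{0: 0.7, 1: 0.3}, {1: 0.4, 2: 0.6}, {2: 1.0}])
print(slots)              # [0, 1, 2] - only 3 unique bones in this patch
print(lerp(wa, wb, 0.5))  # [0.35, 0.35, 0.3]
```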

    And an "indexed stuff repacker" could be useful for other things as well.

    One question, though, is whether we could expect to get such a unit. It's rather special-purpose but complex for hardware, so I guess we'll need a programmable PPP.


    But back to what I meant with my initial comment.
    With the current hardware, it seems kinda messy to mix matrix palette skinning and TruForm (and as an extension, DM). So I wonder how often it will be used.
    I thought that UT2003 used that mix. Does that mean that I have missed some easy trick, or have they actually done some of the methods mentioned here?

    vogel:
    Would you care to comment on that, or is it internal stuff that you can't talk about publicly?


    Laa-Yosh:
    Are you saying that matrix palette skinning (MPS) is useless in the long run? Or that we need MPS with some texture coordinate shifter to simulate "skin sliding", and tools to add "muscle bones" that simulate muscle movements?
    I.e. an arm would consist of the actual bone with the joints, and then have an added "biceps bone" that is coupled to elbow angle and tension.



    Uhmm... DOH :oops: :D Too late, going to bed now.
     
  6. no_way

    Regular

    Joined:
    Jul 2, 2002
    Messages:
    301
    Likes Received:
    0
    Location:
    estonia
    Umm... I think tessellation should be done in camera/clip space, i.e. after transform - otherwise how do you do it adaptively based on depth value? A triangle's depth value isn't known before all three vertices are through the VS.
    This way you wouldn't need to worry about indices or any other custom per-vertex component either, because VS outputs are limited to clip space position, texcoord, color, fog and point size.
    All those can be easily interpolated/copied/perturbed in the tessellator, as specified by a "tessellator program".
    Now that I think about it, if NV and ATI indeed have dedicated hw to do HOS, have they really placed it in front of the VS in the pipeline? Between VS & PS sounds like a lot more logical place.
     
  7. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Performing tessellation in clip space, after the perspective transform, will yield weird results since the mapping/scaling between object space and clip space is not the same for each of X, Y and Z axes. You risk getting artifacts like e.g. bulges on objects that depend on camera location and view frustum size - which would look weird as hell.
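    A quick way to convince yourself of this (a toy Python/NumPy example with made-up numbers): perspective projection is not affine, so the projected midpoint of an edge is not the midpoint of the projected endpoints - exactly the mismatch that produces those view-dependent bulges:

```python
import numpy as np

def project(p):
    """Toy perspective projection: divide x and y by depth."""
    x, y, z = p
    return np.array([x / z, y / z])

# A straight edge in eye space, with endpoints at different depths.
a = np.array([-1.0, 0.0, 2.0])
b = np.array([1.0, 0.0, 10.0])

mid_then_project = project((a + b) / 2)           # tessellate first: (0, 0)
project_then_mid = (project(a) + project(b)) / 2  # tessellate in projected space: (-0.2, 0)
print(mid_then_project, project_then_mid)  # the two midpoints disagree
```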

    ATI does tessellation in hardware before vertex shading, Nvidia is probably doing (emulating?) the same.

    Adapting tessellation based on depth values can be done with two-pass vertex shading: first, you run each control vertex through the shader once to get the transform results - this would give you just the depth values needed for adaptive tessellation (with proper dead-code elimination, this should typically take about 2 to 10 vertex shader instructions per control vertex). Then you tessellate the polygon based on the collected depth values, and pass the resulting vertices to the vertex shader again.
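    The depth-feedback step of that two-pass scheme might look like this (a sketch in Python/NumPy; the level heuristic and names are invented for illustration):

```python
import numpy as np

def tess_level(control_pts, view, max_level=8):
    """Pass 1: transform only the control vertices (a few instructions'
    worth of work each) and read back their eye-space depths; pick a
    tessellation level from the nearest one.  Pass 2 would tessellate at
    this level and send the generated vertices through the full VS."""
    depths = [-(view @ np.append(p, 1.0))[2] for p in control_pts]
    near = max(min(depths), 1.0)
    # Invented heuristic: halve the level each time the patch's distance
    # from the camera doubles.
    return max(1, min(max_level, int(max_level / near)))

eye = np.eye(4)  # camera at the origin, looking down -z
print(tess_level([[0, 0, -2], [1, 0, -3]], eye))  # near patch -> finer level
print(tess_level([[0, 0, -100]], eye))            # far patch  -> coarsest level
```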
     
  8. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Use homogeneous clipping and a rational representation instead then (it still saves you from having to transform each generated vertex; you just need the perspective divide, so it cuts back on the feedback latency for adaptive tessellation).
     
  9. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Which would probably work for vertex coordinates - how well does it work with normal vectors? You will need the normal vectors for per-pixel lighting since, when tessellating after vertex shading, you cannot rely on the vertex shader to do any meaningful lighting calculations.
     
  10. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Well, I was assuming you would use a sensible HOS representation from which the hardware would just determine the normal by itself (via partial derivatives; you can do this before the world-to-eye transform since it produces completely independent HOSs which you can tessellate separately to the same depth as the main surface).

    Marco

    PS. If you want to use differences instead of differentials (so without the partial derivatives) to determine the normal, working in eye space has the slight drawback that you will have to perform an inverse transform back to world space, of course - win some, lose some (but if you want to perform fully adaptive tessellation, lowering the feedback latency is probably more important).
     
  11. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    The problem with processing a HOS in clip space, even with rational functions and partial derivatives, is that you still end up generating normal vectors in clip space, which then need to be reverse-transformed (and normalized) back into eye space to actually be useful for lighting.
     
  12. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    The partial derivatives can be taken in world space ... if we use tensor patches the derivatives themselves are tensor patches, they can be determined analytically from the tensor representation of the original HOS (in whatever space you like).
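    For a Bezier (Bernstein basis) tensor product patch this is easy to show concretely (a Python/NumPy sketch, not any particular hardware's method): each partial derivative is itself a lower-degree patch built from differences of the control points, and the cross product of the two tangents gives the normal:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

def patch_point(P, u, v):
    """Evaluate a tensor product Bezier patch with control net P[i][j]."""
    n, m = len(P) - 1, len(P[0]) - 1
    return sum(bernstein(n, i, u) * bernstein(m, j, v) * np.asarray(P[i][j])
               for i in range(n + 1) for j in range(m + 1))

def patch_normal(P, u, v):
    """The u- and v-partials of a degree (n, m) patch are themselves
    Bezier patches of degree (n-1, m) and (n, m-1), built analytically
    from control point differences; their cross product is the normal."""
    n, m = len(P) - 1, len(P[0]) - 1
    du = [[n * (np.asarray(P[i + 1][j]) - np.asarray(P[i][j]))
           for j in range(m + 1)] for i in range(n)]
    dv = [[m * (np.asarray(P[i][j + 1]) - np.asarray(P[i][j]))
           for j in range(m)] for i in range(n + 1)]
    nrm = np.cross(patch_point(du, u, v), patch_point(dv, u, v))
    return nrm / np.linalg.norm(nrm)

# A flat bilinear patch in the z = 0 plane has normal (0, 0, 1) everywhere.
P = [[(0, 0, 0), (0, 1, 0)], [(1, 0, 0), (1, 1, 0)]]
print(patch_normal(P, 0.5, 0.5))  # -> [0. 0. 1.]
```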
     
  13. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
    Hmmm ... any links to documents that actually explain tensor patches?
     
  14. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    I should have said polynomial tensor product patches actually, but it is such a mouthful... you might be more familiar with the specific representation using Bernstein-Bezier polynomials :)
     
  15. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Thanks much for the clarification, DM--I didn't realize it was an either-or situation. And, as the displacement map feature is under the vertex shader listings in the demo, and the adaptive tessellation remark I quoted is under the TruForm section--it's obvious the n-patch correlation I thought might exist does not in this case. And, right--I don't see evidence of a texture unit in the vertex shader. Thanks again for pointing this out, and I'll let this post serve as thanks to anyone else who may have taken the time to respond.
     
  16. no_way

    Regular

    Joined:
    Jul 2, 2002
    Messages:
    301
    Likes Received:
    0
    Location:
    estonia
    Hm, yes, I see that when tessellating after projection, things could get messy. In eye space, though (after model->world and world->eye transforms), there shouldn't be any problem. But we have the projection matrix in the vertex shader, often concatenated with the modelview.
    Doesn't sound like there is any "clean" solution. When tessellating before the vertex shader, you get into the problem of how to deal with data in vertices that is not meant for interpolation (do you copy it for new vertices, interpolate linearly, or...?) Also, is it desirable to run vertex shaders on all tessellated polys?
    Tessellation in eye space would be "cleanest", but then we have to have a separate unit doing the projection, further complicating things.
    Tessellation in clip space is problematic because of the "nonlinearity".

    Well... GL2 doesn't even touch on the tessellation subject anywhere in the pipeline, so obviously graphics HW will not do any DM or HOS for the next ten years :-?
     
  17. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    Yes, I think MPS will have to be replaced.
    The problem is that in reality, skin does not simply rotate or translate around joints; it is pushed and pulled in all kinds of directions by the underlying muscles. Thus, it cannot be simulated by simply letting some bones transform it.

    'Muscle bones' have been a trick used for many years now - that is, to insert extra bones to modify the skinning. You could scale such bones as well, to simulate muscle flexing, but the whole thing is both too complicated to set up and not realistic enough. Dragging around texture coordinates might help, but once again you shouldn't force artists / technical directors to mess around with it.
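    A purely illustrative sketch of that muscle bone trick in Python (the function, curve and constants are all made up - exactly the kind of hand-tuned coupling that makes it tedious to set up):

```python
import math

def biceps_scale(elbow_angle_deg, rest_scale=1.0, max_bulge=0.35):
    """Hypothetical 'muscle bone' driver: scale the extra biceps bone as
    the elbow flexes (0 deg = arm straight, 180 deg = fully bent)."""
    flex = min(max(elbow_angle_deg / 180.0, 0.0), 1.0)
    return rest_scale + max_bulge * math.sin(flex * math.pi / 2)

print(biceps_scale(0))              # 1.0  - relaxed
print(round(biceps_scale(180), 2))  # 1.35 - fully flexed
```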

    The already existing solution is to do real muscle simulation. I'm not familiar with the mathematical background, but I'm sure it's not too complex, as some of these systems can already work in real time on x86 CPUs. In this case you treat the muscle as a real object, with volume, maybe as a polygonal mesh. Then the skin would be wrapped around this complex system of muscles and pushed and pulled by it... Can't explain it any better, but here's an article:
    http://www.3dfestival.com/story.php?story_id=399
     
  18. Tokelil

    Regular

    Joined:
    Mar 27, 2002
    Messages:
    329
    Likes Received:
    0
    Location:
    Denmark
    Hate to bring this one up again since Dave is going to ask Nvidia about their DM.
    Anyway, just was made aware of this from NvNews:
    http://www.nvnews.net/#1038318858

    So if they support "real" DM, why are they moving the conversation towards PS/VS...
     
  19. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    It will probably turn out to be the same as ATI's DM, and/or

    o use vertex shader + D-map stored in constant registers
    o use render-to-vertexbuffer
     
  20. CMKRNL

    Newcomer

    Joined:
    Jul 12, 2002
    Messages:
    91
    Likes Received:
    0
    As I've mentioned earlier, neither the R300 nor the NV30 has 'real' HW displacement mapping, unlike the Parhelia.
     