Basic said:
Another problem with combining matrix palette skinning with adaptive tessellation (or any tessellation, for that matter) is how to "interpolate" the matrix indices.
It's possible to solve with an abundance of indices, but that is inefficient.
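A minimal illustration of why naively interpolating the indices breaks (the bone numbers here are made up for the example):

    // Matrix palette indices are discrete labels, not quantities. Treating
    // them like weights and interpolating them across a tessellated patch
    // yields a reference to an unrelated matrix.
    float lerp(float a, float b, float t) { return (1.0f - t) * a + t * b; }

    int boneAtVertexA = 3;                             // bone used at one end
    int boneAtVertexB = 7;                             // bone used at the other
    int boneAtMidpoint = (int)lerp(3.0f, 7.0f, 0.5f);  // = 5 (!)
    // Bone 5 is unrelated to either endpoint, so a vertex generated halfway
    // along the edge would be skinned by the wrong matrix entirely.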
arjan de lumens said:
Also, a question (to all): barring the case of amplified/on-the-fly generated geometry, is there anything that can be done with VS3.0 that cannot be done with, say, PS3.0 and render-to-vertex-array?
ERP said:
At the risk of offending every Matrox fan on the board: Matrox-style displacement mapping is really nothing more than a stopgap solution; the right way to do it is texture reads in the vertex shader (as in VS3.0).
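A CPU-side sketch of what texture reads in the vertex shader buy you for displacement mapping: every vertex samples a height map and is pushed along its normal. All types and names here are illustrative; real VS3.0 code would do this per vertex on the GPU with tex2Dlod-style fetches.

    #include <algorithm>

    struct Vec3   { float x, y, z; };
    struct Vertex { Vec3 pos; Vec3 normal; float u, v; };

    // Stand-in for the vertex texture fetch (nearest-neighbour sample).
    float heightAt(const float* heightMap, int w, int h, float u, float v) {
        int x = std::min(w - 1, std::max(0, int(u * w)));
        int y = std::min(h - 1, std::max(0, int(v * h)));
        return heightMap[y * w + x];
    }

    // Offset the vertex along its (unit-length) normal by the sampled height.
    void displace(Vertex& vtx, const float* heightMap, int w, int h, float scale) {
        float d = scale * heightAt(heightMap, w, h, vtx.u, vtx.v);
        vtx.pos.x += d * vtx.normal.x;
        vtx.pos.y += d * vtx.normal.y;
        vtx.pos.z += d * vtx.normal.z;
    }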
Kristof said:
Could you define "render to vertex array" and how it differs from other mechanisms, like texture reading in the vertex shader (possibly linked to render-to-texture using the pixel shader in a previous pass), or output to memory from the vertex shader?
Does render to vertex array come with the disadvantage of a one-to-one mapping between the vertex array you rendered to and the pass that uses the result?
In short, just define render to vertex array.
K-

arjan de lumens said:
OK... you render to a standard pbuffer/offscreen color buffer using the usual rendering pipelines/pixel shaders. Then you point the vertex shader at the buffer and tell it to interpret the data in the buffer as a vertex data stream. It's conceptually rather similar to "render-to-texture", although the outcome is obviously very different. This technique does come with the issue/disadvantage that there must be a 1:1 correspondence between the "pixels" in the buffer and the vertices that are sent to the vertex shader. It is, of course, an advantage if the pixel shader supports a floating-point pixel format.
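A sketch of how that idea maps to OpenGL 2.1-class hardware, using a pixel-pack buffer as the bridge. This assumes a context with buffer objects and a float-renderable surface; renderPositionsPass, width and height are hypothetical stand-ins for the application's first pass.

    #include <GL/gl.h>

    extern void renderPositionsPass();   // hypothetical: PS writes one vertex per "pixel"

    void renderToVertexArray(int width, int height) {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4 * sizeof(GLfloat),
                     0, GL_STREAM_COPY);

        renderPositionsPass();

        // Copy the framebuffer into the buffer object; the data never leaves
        // the GPU, and the 1:1 pixel-to-vertex mapping is explicit here.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        // Reinterpret the same buffer object as a vertex array and draw from it.
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        glVertexPointer(4, GL_FLOAT, 0, 0);
        glEnableClientState(GL_VERTEX_ARRAY);
        glDrawArrays(GL_POINTS, 0, width * height);
    }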
Humus said:
If it can render to an offscreen float buffer, it can do render to vertex array. It's just a matter of getting the driver to accept the rendered texture as a vertex buffer. So it shouldn't be a problem for the R300.
Kristof said:
Well, the main problem I have with such a buffer is how you tell the pixel shader to act this way. You need to move the pixel shader from processing pixels to processing what are, in essence, vertices. This means you need a special mode, or you need to generate "triangles" (since that is what the PS traditionally processes) to match up with the positions in the pbuffer, and that most likely means generating one-pixel triangles, which are very inefficient... unless, as said, you have a special mode. But even then it gets tricky, since the pipelines of the PS tend to depend on processing 2x2 areas of a triangle...
I am also not fully convinced about using pixel shader resources to do things the vertex shader should do. We are moving towards similar capabilities between PS and VS: they have the same capabilities, but one processes pixels and the other vertices. What you seem to propose is to link the PS and VS for "some" processing, so that texture accesses can be handled by the PS, which for a moment becomes part of the VS?
K-
Basic said:
arjan: I think that what you described as the "abundance of indices" was what I was thinking of. But if the base triangle is part of a mesh, then you'd need to add empty index slots for the indices that the neighbouring triangles use too. The empty slots could often be reused, but not always.
So it would need even more than 3N slots.

arjan de lumens said:
I don't see why: for vertices that lie on the edge between two control vertices, you would only need to fill the slots corresponding to those 2 vertices, as the weights of the other vertices of the 2 original triangles compute to exactly 0 on that edge - so you would need 2N slots on the edge and 3N slots elsewhere. You do, of course, get a problem if the edge is a crease (that is, if the normal vector is discontinuous across the edge), but that is a well-known problem with e.g. N-patches in any case.
Basic said:
As a simple example, think of a tetrahedron with at most 1 matrix per vertex. (Not much need for matrix palette skinning here, but I'm just trying to keep it simple.) It would need 4 indices per vertex, just to make the positions of the indices in the interpolated vector line up.
Kristof: IIRC, this is why Matrox actually proposes doing the skinning on the CPU on the low-tessellation model.

arjan de lumens said:
Umm, no - wouldn't you use normal vectors for that, to control the tessellation? Ummm - OK, now I see the problem: before you tessellate, you need to transform the normal vector using the matrix palette, which means you need to do a lot of work both before and after tessellation. Still nothing unsolvable, but we're moving fast towards the shade-tessellate-shade-again scheme that gking suggested here.
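A small sketch of Basic's tetrahedron example above, assuming a fixed 4-slot layout (all names illustrative): each vertex carries a weight for all 4 palette matrices, one-hot since each vertex uses exactly one matrix.

    #include <array>

    // Slot i always means "matrix i", so the indices are implicit in the
    // layout and only the weights ever get interpolated.
    struct TetraVertex {
        std::array<float, 4> weight;   // weight[i] belongs to palette matrix i
    };

    TetraVertex makeVertex(int matrixIndex) {
        TetraVertex v{};               // all four slots start at 0
        v.weight[matrixIndex] = 1.0f;  // the single matrix this vertex uses
        return v;
    }

    // Interpolating along an edge is now safe: the slots stay aligned, so the
    // result blends the two endpoint matrices while the other slots stay 0.
    TetraVertex lerp(const TetraVertex& a, const TetraVertex& b, float t) {
        TetraVertex r{};
        for (int i = 0; i < 4; ++i)
            r.weight[i] = (1.0f - t) * a.weight[i] + t * b.weight[i];
        return r;
    }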
no_way said:
Hmm.. indeed, maybe a transform - tessellate - light pipeline would be the most universal solution?
The light stage and fragment shaders could even be unified, given that the tessellator is guaranteed to output fragment-sized polygons.. er, micropolygons. Now, where is this all heading? 8)
Nappe1 said:
no_way: does "säteen seuranta" ring a bell?
For the others: it's Finnish for ray tracing.
Basic said:
Maybe I see the difference in how we're thinking.
I wanted the base mesh (control points) to be designed so that it has good T&L and cache efficiency even when not being used for N-patches. So each vertex should be one entity, and that includes the matrix indices and weights. And each vertex must carry matrix indices and weights for all vertices of all triangles that use the first vertex.
You can't reuse a slot in vertex A so that it holds one index in triangle ABC and another index in triangle ADE, even if the weight at vertex A is 0, because the index is also interpolated and would have some messed-up value in the middle of the base triangle.

arjan de lumens said:
Ummm, my whole idea was a scheme to avoid interpolating the indices at all. Consider, for example, a triangle/N-patch where the matrix indices at the vertices/control points are, say, {M0, M1, M2}, {M2, M3}, and {M1, M3, M4}. In this case you would have 5 distinct matrix indices for the triangle/N-patch - {M0, M1, M2, M3, M4} - and thus, for a generated vertex within the N-patch, you need 5 slots to collect the 5 distinct matrix indices. For their weights, you determine the weight of each of the 5 matrices at each control point (inserting weight = 0 if a matrix isn't listed for a control point) and interpolate them linearly across the N-patch. This scheme should work just the same regardless of the order in which the vertices/control points are laid out in/fetched from memory.
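A CPU-side sketch of the per-patch scheme just described, under illustrative names: take the union of the matrix indices used by the three control points, store one weight per union entry at every control point (0 where a control point does not use that matrix), and interpolate only the weights.

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <vector>

    struct ControlPoint {
        std::vector<int>   bone;    // palette indices used by this vertex
        std::vector<float> weight;  // matching weights, same length as bone
    };

    struct PatchSkinning {
        std::vector<int> bones;                     // union, e.g. {M0..M4}
        std::vector<std::array<float, 3>> weights;  // per union entry: the
                                                    // weight at each control point
    };

    PatchSkinning buildPatchSkinning(const ControlPoint cp[3]) {
        PatchSkinning p;
        for (int v = 0; v < 3; ++v)                 // collect the union
            for (int b : cp[v].bone)
                if (std::find(p.bones.begin(), p.bones.end(), b) == p.bones.end())
                    p.bones.push_back(b);
        p.weights.resize(p.bones.size());
        for (std::size_t s = 0; s < p.bones.size(); ++s)
            for (int v = 0; v < 3; ++v) {
                auto it = std::find(cp[v].bone.begin(), cp[v].bone.end(), p.bones[s]);
                p.weights[s][v] = (it == cp[v].bone.end())
                                      ? 0.0f        // matrix unused here: weight 0
                                      : cp[v].weight[it - cp[v].bone.begin()];
            }
        return p;
    }

    // Weights of every union matrix at barycentric coordinates (u, v, 1-u-v);
    // a vertex generated inside the patch skins with these blended weights.
    std::vector<float> weightsAt(const PatchSkinning& p, float u, float v) {
        std::vector<float> out(p.bones.size());
        float w = 1.0f - u - v;
        for (std::size_t s = 0; s < p.bones.size(); ++s)
            out[s] = u * p.weights[s][0] + v * p.weights[s][1] + w * p.weights[s][2];
        return out;
    }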
Basic said:
The "transform normal before tessellation" issue is indeed a problem (one that I didn't think of). But it should already be solved, since it's a problem even without matrix palette skinning. It's probably solved by first T&L'ing the control points, tessellating, and then T&L'ing the tessellated values.

arjan de lumens said:
Actually, when I think of it, I suspect that nothing at all should be transformed before (N-patch) tessellation, even with matrix palette skinning.
gking said:
That's why movie studios use blend shapes and displacement maps to handle character animation, in addition to conventional skinning.