991060 said:
> The texture filtering algorithm depends on the size and position of the fragments on the screen.
> When rendering to a VB, you're effectively rendering to an Nx1 render target; I don't know how the filtering unit can do any reasonable work in this situation.

This is exactly the same for real vertex texturing. If you want a meaningful texture LOD in a vertex texture fetch, you must compute that LOD yourself. There is no automatic LOD at the vertex level.
And you can do the same thing at the fragment level. If you match up the "texels" from your "source vertex buffer texture" to the "pixels" you're going to write into the new vertex buffer, you automatically get a 1:1 mapping of texels to pixels. And you really should do that, because you want to generate one new set of vertex attributes for every input vertex, after all. Minification and magnification don't make a whole lot of sense there.

In the 1:1-mapping case, no matter how fancy your filtering settings are, you'll get unfiltered samples from the base mipmap level. To sample from anywhere else you need LOD bias and/or explicit (computed) LOD. This, too, is exactly on par with what you get at the vertex level.

991060 said:
> Also, a vertex is a mathematical definition; it occupies no space or area in 3D space or on the screen. That a fragment can cover more than one texel is the reason why we need texture filtering in the first place. For vertices, why do we need such a capability?

Linear filtering is useful for making smooth transitions.
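To make the "explicit (computed) LOD" point concrete, here is a minimal sketch of the math you'd have to do yourself, assuming the usual mip selection rule (LOD = log2 of the texel footprint per sample). The function name and the footprint parameter are illustrative, not part of any real API, and this ignores anisotropy and derivative details:

```python
import math

def explicit_lod(texels_per_sample: float) -> float:
    """Mip LOD as the rasterizer would derive it: log2 of the texel
    footprint covered per sample, clamped to the base level."""
    return max(0.0, math.log2(texels_per_sample))

# With a 1:1 texel-to-pixel mapping the footprint is exactly one texel,
# so sampling stays on the base mipmap level (LOD 0):
assert explicit_lod(1.0) == 0.0
# A vertex shader sampling a 4x-coarser region would have to pass LOD 2
# explicitly, since nothing computes it automatically:
assert explicit_lod(4.0) == 2.0
```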
Say you put a heightfield into a vertex texture and put it onto a highly tessellated rectangle. If you scroll the heightfield around over the surface by smoothly varying the texcoords (this could easily be done by adding an FP32 VS constant to the texcoords before the lookup), the heightfield entries will over time "pop" around without filtering. A high peak surrounded by lows can't smoothly move to a position between two vertices. It can either be at vertex A or vertex B. It cannot be halfway across unless there is another vertex between these two, but then we could recursively continue ad infinitum. This is no solution.
So you really want a linear filter to produce smooth in-betweens, and also to hide the limited resolution of your height field.
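A minimal sketch of what that linear filter buys you, using a 1D heightfield for brevity (function names are illustrative; a real vertex texture fetch would do the equivalent in hardware):

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b by fraction t in [0, 1]."""
    return a + (b - a) * t

def sample_height_linear(heights: list[float], u: float) -> float:
    """Linearly filtered 1D heightfield lookup; u is a texel-space
    coordinate in [0, len(heights) - 1]. Without the lerp, the peak
    would 'pop' from one texel to the next as u scrolls."""
    i = int(u)
    i1 = min(i + 1, len(heights) - 1)
    return lerp(heights[i], heights[i1], u - i)

peaks = [0.0, 10.0, 0.0]
# Halfway between texels 0 and 1, the peak is smoothly blended in
# instead of snapping between 0.0 and 10.0:
assert sample_height_linear(peaks, 0.5) == 5.0
```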
Beyond basic linear filtering, making the case for mipmapping on vertex textures is more complicated, but I'd rather have that capability as well.
991060 said:
> What I mean by the loss of topology is that you can't use the original index buffer (by which the topology of the mesh is determined) when doing R2VB.

You don't need it. Rearranging the vertices into the order given by the index buffer
a) would produce an incompatible "R2VB" result. You'd have to write the rearranged original vertices as well, to match them up.
b) can lead to reprocessing of duplicates. The output buffer would be larger (in terms of vertices) than the input buffer.
You don't want that.
But no problem. You can process the vertices in any order, because vertex processing can't access neighbours anyway.
By virtue of the 1:1-mapping, the produced vertex buffer has the same ordering as the input buffer, so you can still use your original index buffer for rendering.
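A CPU-side sketch of the argument, under the assumption that one R2VB pass is just a per-vertex function applied in buffer order (names and the toy transform are illustrative):

```python
def r2vb_pass(vertex_buffer, transform):
    """Process each input vertex exactly once, in buffer order --
    the 1:1 texel-to-pixel mapping onto an Nx1 target. The output
    order matches the input order, so the original index buffer
    stays valid for rendering."""
    return [transform(v) for v in vertex_buffer]

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
indices = [0, 1, 2, 2, 1, 3]  # two triangles sharing an edge

# What "rearranging by the index buffer" would produce instead:
# shared vertices get duplicated and reprocessed.
rearranged = [verts[i] for i in indices]
assert len(rearranged) > len(verts)  # 6 > 4: the growth you don't want

# The 1:1 pass keeps the buffer the same size and the same order:
displaced = r2vb_pass(verts, lambda v: (v[0], v[1] + 1.0))
assert len(displaced) == len(verts)
assert displaced[0] == (0.0, 1.0)  # vertex 0 is still at slot 0
```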