991060 said:
A few questions concerning R2VB:
1. How do I take advantage of texture filtering in the PS unit when topology information is lost during the R2VB pass?
You can't access topology information in a vertex program either, so this is no loss.
991060 said:
2. Only in certain situations can R2VB be faster than VTF: when the texture can be used directly as a vertex buffer. In most other cases, when a vertex needs to access an arbitrary location in the texture, a "synthesis" pass is inevitable for R2VB, and hence it's slower than VTF. Do I understand it correctly?
Not sure what you're referring to.
The instant you write a vertex buffer out of fragment processing, you already have your synthesis pass; you shouldn't need another one. My knowledge of the more advanced DirectX topics is rather sketchy, so I should mention that for this to work you need a mechanism to reinterpret a pre-existing vertex buffer as a texture, so that it can be read during fragment processing.
(The {NV|ARB}_pixel_buffer_object extensions allow this in OpenGL.)
Otherwise you'd just move your vertex shader code more or less verbatim over to the fragment processor. Instead of vertex attributes, you now have texture samples, but the math will be the same. If you sample the "vertex buffer texture" at the same place where you're going to write the matching "vertex texture lookup" to the render target, this instantly matches up.
I.e. just render a quad with trivial vertex processing and let the texcoord interpolators handle your "vertex fetch".
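As a rough sketch of that mapping (plain Python standing in for the GPU passes; all names here are illustrative, not a real API): each output texel's interpolated texcoord selects one slot of the vertex buffer, and the "fragment shader" runs the same math the vertex program would have.

```python
# Hypothetical CPU-side sketch of the R2VB "synthesis" pass: the vertex
# buffer is reinterpreted as a 1D texture, a quad is drawn over the
# render target, and each fragment's texcoord picks which vertex to
# transform. The per-vertex math here (a translate) is just a stand-in.

def vertex_math(pos, offset):
    """The math the original vertex program would have done."""
    return tuple(p + o for p, o in zip(pos, offset))

def r2vb_pass(vertex_buffer_texture, offset):
    """One 'fragment' invocation per output texel; texcoord i reads vertex i."""
    out_buffer = []
    for texcoord in range(len(vertex_buffer_texture)):
        # The texture fetch replaces the vertex attribute fetch.
        sample = vertex_buffer_texture[texcoord]
        out_buffer.append(vertex_math(sample, offset))
    return out_buffer

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
new_verts = r2vb_pass(verts, (0.5, 0.5, 0.0))
```

The point being: because read position and write position coincide texel for texel, no extra gather or reorder pass is needed.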
A much more pressing question would be: how often will you have to re-render the vertex buffer? And that depends on the effect. E.g. if the original vertex program calculates vertex texture fetch coordinates depending on transform matrices, you may be forced to repeat the whole process whenever your matrices change.
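One way to keep that cost under control is to regenerate the buffer only when its inputs actually change. A minimal sketch of such dependency tracking, assuming the synthesis pass depends only on a transform matrix (the class and function names are made up for illustration):

```python
# Illustrative cache: re-run the R2VB synthesis pass only when the
# dependency it bakes in (here, a transform matrix) has changed.
class R2VBCache:
    def __init__(self):
        self._key = None
        self._buffer = None

    def get(self, matrix, render_pass):
        key = tuple(map(tuple, matrix))  # hashable snapshot of the dependency
        if key != self._key:             # matrix changed -> regenerate buffer
            self._buffer = render_pass(matrix)
            self._key = key
        return self._buffer

calls = []
def fake_pass(matrix):
    calls.append(1)          # count how often the GPU pass would run
    return "vertex buffer"

cache = R2VBCache()
identity = [[1, 0], [0, 1]]
cache.get(identity, fake_pass)
cache.get(identity, fake_pass)            # unchanged -> reuse buffer
cache.get([[2, 0], [0, 1]], fake_pass)    # changed -> re-render
```

With a per-frame animated matrix this degenerates to one synthesis pass per frame, which is exactly the cost the paragraph above warns about.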