DemoCoder said: With VTF, you only render to texture if you need to update the texture every frame (e.g. iterated physics), and when you do, you can render it once and reuse it with multiple vertex streams. With R2VB you must both update the texture you are sampling (e.g. iterated physics) AND render a new vertex buffer. Every new vertex stream requires a separate render.
That's incorrect. There's no more need to update the texture every frame with R2VB than with VTF. You can render to it once, then reuse it as many times as you want. This is not at all different from VTF; everything regarding the texture update is exactly the same. The only difference is how the data is accessed in the vertex shader: with VTF the vertex shader accesses it through a sampler, while R2VB reinterprets the texture memory as a vertex buffer.
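To make that concrete, here's a minimal D3D9-style sketch of the "render once, reuse many times" point. The helpers (DrawUpdatePass, DrawMeshInstance) and the texture name posTex are hypothetical, and the actual R2VB binding goes through ATI's vendor-specific extension, so it is only indicated by a comment; only the final binding step differs between the two techniques.

```cpp
#include <d3d9.h>

// Hypothetical helpers: a pixel-shader pass that writes per-vertex data into
// the bound render target, and a draw call that consumes that data.
void DrawUpdatePass(IDirect3DDevice9* dev);
void DrawMeshInstance(IDirect3DDevice9* dev, int instance);

void UpdateOnceReuseMany(IDirect3DDevice9* dev, IDirect3DTexture9* posTex, int instanceCount)
{
    // Remember the current render target so it can be restored afterwards.
    IDirect3DSurface9* backbuffer = NULL;
    dev->GetRenderTarget(0, &backbuffer);

    // One-off update: a pixel shader writes new per-vertex data into posTex.
    IDirect3DSurface9* posSurf = NULL;
    posTex->GetSurfaceLevel(0, &posSurf);
    dev->SetRenderTarget(0, posSurf);
    DrawUpdatePass(dev);
    posSurf->Release();

    dev->SetRenderTarget(0, backbuffer);
    backbuffer->Release();

    // VTF path: bind the texture to a vertex sampler; the vertex shader
    // fetches it with tex2Dlod.
    dev->SetTexture(D3DVERTEXTEXTURESAMPLER0, posTex);

    // R2VB path (alternative): the driver exposes the same memory as a vertex
    // buffer, bound with SetStreamSource through the vendor extension (not
    // shown); the vertex shader then reads it as a plain input register.

    // Either way, the texture written above feeds any number of draws.
    for (int i = 0; i < instanceCount; ++i)
        DrawMeshInstance(dev, i);
}
```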
DemoCoder said: VTF requires extra instructions to figure out texture addresses, but R2VB requires extra texture samples to look up the input vertices.
Huh, "look up the input vertices"? What's that supposed to mean?
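For reference, the "extra instructions" on the VTF side are just a few ALU ops that turn a vertex index into texel coordinates before the fetch. A sketch of that mapping, assuming a square texWidth x texWidth texture with one texel per vertex (plain C++ only to show the arithmetic that would normally live in the vertex shader):

```cpp
struct Texcoord { float u, v; };

// Map a flat vertex index to the center of its texel in a texWidth x texWidth
// texture. This is the address math a VTF vertex shader has to do itself.
Texcoord IndexToTexcoord(int index, int texWidth)
{
    int row = index / texWidth;
    int col = index % texWidth;
    Texcoord tc;
    tc.u = (col + 0.5f) / texWidth;  // +0.5 hits the texel center
    tc.v = (row + 0.5f) / texWidth;
    return tc;
}
```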
DemoCoder said: The assumption that vertex textures are created by the rasterizer is, I think, flawed. That may be the case in some instances, assuming physics is done on the GPU, but it is just as likely that they are generated by the CPU, which is certainly more likely the case on the XB360/PS3, since the very system architecture was designed to allow CPU-driven procedural synthesis.
That's the whole purpose of VTF to begin with: so that you can feed results from the end of the pipe back to the beginning. If you're updating stuff on the CPU, there's absolutely no reason to use textures. Use a vertex buffer.
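A minimal sketch of that CPU path, assuming a dynamic vertex buffer created elsewhere with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY and a hypothetical Vertex layout: you lock the buffer and write the new data directly, no texture involved.

```cpp
#include <d3d9.h>
#include <cstring>

struct Vertex { float x, y, z; };  // hypothetical vertex layout

// CPU-side update: write freshly computed vertices straight into a dynamic
// vertex buffer. No render-to-texture and no vertex texture fetch needed.
void UploadCpuVertices(IDirect3DVertexBuffer9* vb, const Vertex* src, unsigned count)
{
    void* dst = NULL;
    // D3DLOCK_DISCARD lets the driver hand back fresh memory instead of
    // stalling until the GPU has finished with the old contents.
    if (SUCCEEDED(vb->Lock(0, count * sizeof(Vertex), &dst, D3DLOCK_DISCARD)))
    {
        memcpy(dst, src, count * sizeof(Vertex));
        vb->Unlock();
    }
}
```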
DemoCoder said: If R2VB is such a huge win, why don't we just get rid of vertex shading on GPUs, get rid of unified shading, and just make pure PS rasterization cards? I think it is the wrong model, and that the current segmented model (vertex shader, pixel shader, and future geometry/tessellation shaders) is the correct one going forward.
It doesn't get rid of vertex shading or setup, but you can certainly shift workload over from the vertex pipes to the fragment pipes. On a unified architecture that kind of shifting isn't needed, since the hardware balances the load by itself. That doesn't mean R2VB isn't something you'll see in the future, since DX10 in many ways builds on similar concepts.