Multi-pass shaders for VS 3.0?

ET

It occurred to me recently that VS3 may be able to save the result of shader processing for further passes -- allowing, for example, skinning to be done only once, with the result reused for several lighting passes.

The reason I suspect this is mainly the ProcessVertices function, which in DX9, only when used with VS3, allows providing an output vertex declaration into which the results will be saved. VS3 no longer has specific outputs (oPos...) but declares them like inputs, so it's possible to declare a normal as an output, for example.
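A minimal vs_3_0 assembly sketch of what I mean (the register choices and constant layout here are just my assumptions):

```
vs_3_0
; inputs
dcl_position v0
dcl_normal   v1
; outputs -- declared just like inputs in vs_3_0, so a normal can be
; an output and ProcessVertices can write it to the destination buffer
dcl_position o0
dcl_normal   o1
m4x4 o0, v0, c0    ; transform position (world-view-proj assumed in c0-c3)
m3x3 o1, v1, c4    ; rotate normal (world matrix assumed in c4-c6)
```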

But I'm wondering whether it's a required part of VS3 or an option (or whether I'm missing the mark completely).
 
I don't believe the runtime will use HW to accelerate ProcessVertices. I haven't tried this, but in theory you could do it on PS2.0 or later HW by setting a simple passthrough shader in the PS and using floating point render targets, which you then reuse as inputs to the VS. The 9700 may have problems with this due to its 24 bit PS precision, although that really depends on whether they could just bypass the rast pipe and pump the VS results back out to memory...
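Something like this hypothetical ps_2_0 passthrough is what I have in mind -- the VS packs its result into an interpolator and the PS just writes it out untouched:

```hlsl
// hypothetical passthrough pixel shader (ps_2_0), assuming the VS placed
// its result (e.g. the skinned position) in TEXCOORD0 and the render
// target is a floating point format such as D3DFMT_A32B32G32R32F
float4 main(float4 skinned : TEXCOORD0) : COLOR0
{
    return skinned;
}
```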

John.
 
No, you can't do multipass for vertex shaders in DX9 easily (not even with the vs_3_0 model). Of course ProcessVertices asks for an output declaration, since vs_3_0 introduces output usage semantics (and ProcessVertices outputs into a vertex buffer, not a render target).
ProcessVertices doesn't use hardware, so multipass would at best be possible in software -- but then you could just use vs_3_sw, which has very high instruction limits anyway.
Even vs_3_0 hardware still accepts triangles and outputs only pixels (not a vertex buffer). Of course you could render into a floating point render target and then copy the data back into a vertex buffer, but this is a tricky thing to do in DX (you can't Lock default-pool textures unless the texture is dynamic...).
 
Hmm, yes, for some reason D3DUSAGE_RENDERTARGET has been excluded as a valid creation flag for a VB. However, in VS3.0 you could just use the sampler to access previously rendered textures as vertex data...
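For instance, a vs_3_0 shader could fetch the previously rendered results with texldl (register choices are my assumption; note that real hardware may restrict vertex textures to a few floating point formats):

```
vs_3_0
dcl_texcoord v0     ; per-vertex address into the rendered texture (assumed)
dcl_2d       s0     ; sampler bound to the earlier render target
dcl_position o0
texldl r0, v0, s0   ; vertex texture fetch; the LOD is taken from v0.w
mov o0, r0
```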

John.
 
MDolenc said:
Of course ProcessVertices asks for an output declaration, since vs_3_0 introduces output usage semantics (and ProcessVertices outputs into a vertex buffer, not a render target).

I don't see the "of course" in this. Previous versions of ProcessVertices only allowed outputting transformed and lit vertices. Microsoft could easily have kept this for DX9. Instead, they allow VS3 to output whatever it wants, and the output can include not only standard vertex shader outputs (the ones pixel shaders read) but things like normals. These are mapped correctly to the declarations, at least with the software vertex shaders, which are the only ones currently available.

While this serves as an interesting means of integrating software shaders with the fixed function pipeline, it doesn't make sense to me that Microsoft would enhance ProcessVertices for this purpose only. After all, it seems very reasonable to me that shaders will be able to output into buffers.
 
Though you can declare that you'll use a specific register to output a normal, it doesn't matter much; they are just values sent to the interpolators. The DX9 documentation has this to say (under "dcl_usage (Vertex Shader)"): "There always has to be one o register with _positiont0 declaration when not used for process vertices. The positiont0 semantic and the pointsize0 semantic are the only ones that have meaning to Microsoft® Direct3D®, beyond simply allowing linkage from vertex to pixel shaders."
I haven't worked much with ProcessVertices, but I think the behaviour is still the same in DX9 as it was in DX8. All this declaration stuff only serves to bind vertex buffers to vertex shaders as easily as possible. You can easily use a register that has been declared as a normal for position data, texture coordinates (vs_3_0)...
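For example, nothing stops you from routing a normal through an output declared as a texture coordinate (a hypothetical fragment; constant layout assumed):

```
vs_3_0
dcl_position  v0
dcl_normal    v1
dcl_position  o0
dcl_texcoord0 o1     ; declared as a texcoord...
m4x4 o0, v0, c0      ; world-view-proj assumed in c0-c3
mov  o1, v1          ; ...but fed the normal; the declaration only handles
                     ; linkage, Direct3D attaches no meaning to it
```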
 