Chalnoth said:
> MfA said:
> > Geometry is not sent very often as it is now; it will almost never be sent in the future. Geometry will be referenced, or created on the fly by the GPU. If you know beforehand, from the bounding volume, which tile the geometry is going to end up in, without creating and/or transforming every vertex of it, then you can defer the creation and/or transformation until you are rendering the tile (in which case, just like an immediate mode renderer, you can use the result and throw it away). So your scene buffer just has to store the rendering commands and the geometry references instead of the transformed vertices, which constitutes vastly less data.
>
> But to do this completely, you'd need to transform every vertex twice.
If you have the (hierarchical) bounding volumes and they are sufficiently fine-grained (compared to the tile size), then no, you don't have to do that.
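The binning step being described could be sketched roughly like this: only the bounding volume is transformed to decide which tiles a draw might land in, so the vertices themselves stay untouched until the tile is actually rendered. The layouts and the simple scale-plus-offset screen mapping here are illustrative assumptions, not any particular hardware's scheme.

```cpp
#include <algorithm>
#include <cmath>

struct AABB { float min[3], max[3]; };    // object bounding box, world space
struct TileRange { int x0, y0, x1, y1; }; // inclusive tile coordinates

// Sketch: map a bounding box to the screen tiles it may cover, assuming a
// simple scale+offset screen mapping. A general projection matrix would
// transform all 8 corners instead of 2 -- but still never the vertices.
TileRange binToTiles(const AABB& box, int tileSize,
                     float scale, float offX, float offY) {
    float sx0 = box.min[0] * scale + offX, sy0 = box.min[1] * scale + offY;
    float sx1 = box.max[0] * scale + offX, sy1 = box.max[1] * scale + offY;
    TileRange r;
    r.x0 = (int)std::floor(std::min(sx0, sx1) / tileSize);
    r.y0 = (int)std::floor(std::min(sy0, sy1) / tileSize);
    r.x1 = (int)std::floor(std::max(sx0, sx1) / tileSize);
    r.y1 = (int)std::floor(std::max(sy0, sy1) / tileSize);
    return r;
}
```

With a hierarchy of such volumes, whole subtrees can be rejected or binned at once, so the per-object cost is a handful of corner transforms rather than one transform per vertex.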
> UT2k3 has the ability to reference the same geometry multiple times, but it turns out that it is slower to do it that way.
I am not going to touch that.
Let me ask a simple question ... do you think that future engines (of the kind which could push enough polygons that storing a scene buffer would be a problem) would not, 99% of the time, be using drawing commands referencing vertex buffers created well ahead of time?
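To make the size argument concrete, here is a rough sketch of the two things a scene buffer could store per draw. The struct layouts are purely illustrative assumptions, not any real API or hardware format.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// What a command-and-reference scene buffer records per draw (illustrative).
struct DrawCommand {
    uint32_t vertexBufferId;  // reference to geometry created ahead of time
    uint32_t firstIndex;      // range within that buffer
    uint32_t indexCount;
    uint32_t transformId;     // transform to apply when the tile is rendered
};

// What storing post-transform geometry would cost per vertex (illustrative).
struct TransformedVertex {
    float position[4];
    float normal[3];
    float uv[2];
};

struct TileSceneBuffer {
    std::vector<DrawCommand> commands;  // bytes per draw, not per vertex
};

// Scene-buffer cost of one draw of `verts` vertices, each way.
inline size_t commandCost()            { return sizeof(DrawCommand); }
inline size_t vertexCost(size_t verts) { return verts * sizeof(TransformedVertex); }
```

Under these assumed layouts, a 10,000-vertex draw costs 16 bytes as a command reference versus roughly 360 KB as transformed vertices, which is the "vastly less data" point in concrete terms.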
I can't imagine us disagreeing on this, really ... to be able to use the available performance in the future, engines will have to almost never touch a vertex themselves (not to change it, anyway). That is just how it will have to be, and since from DX9 onward vertex shaders will be flexible enough to handle all animation/deformation/etc. needs, it won't be a problem for developers either.
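The "never touch a vertex" point can be quantified with a sketch: when skinning runs in the vertex shader, the CPU's per-frame work for a character is a small block of bone matrices (shader constants), while the vertex buffer created at load time is never rewritten. All sizes and names here are illustrative assumptions.

```cpp
#include <cstddef>
#include <cstdint>

struct Mat4 { float m[16]; };  // 64-byte matrix

// Hypothetical per-draw record for a GPU-skinned character: the vertex
// buffer is created once at load time and never rewritten by the CPU;
// only the bone matrices are uploaded as shader constants each frame.
struct SkinnedDraw {
    uint32_t staticVertexBufferId;  // created ahead of time, stays static
    uint32_t boneCount;
    Mat4     boneMatrices[64];      // the only per-frame data
};

// Per-frame bytes the CPU touches with vertex-shader skinning...
inline size_t perFrameUploadBytes(size_t bones) { return bones * sizeof(Mat4); }
// ...versus rewriting every vertex with CPU-side skinning.
inline size_t cpuSkinningBytes(size_t verts, size_t vertexSize) {
    return verts * vertexSize;
}
```

For an assumed 50,000-vertex character with 36-byte vertices and 64 bones, that is 4 KB of constants per frame instead of about 1.8 MB of vertex rewrites, which is why engines that stop touching vertices can actually use the available performance.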
Marco