JohnH said:
DiGuru, you're not telling me anything that allows me to form an opinion, e.g.
To do all that and more, we need a clever way to store all those surfaces, so we can look up all relevant (sub-) pixels as fast as possible. And we need descriptors, for things like textures, transparency, fog and light sources (intensity).
So far you haven't suggested a "new" model; you've basically said "if you do something properly it'll work really well". Well, duh.
Maybe if you elaborate in more specific terms people could comment.
John.
Ok.
First, we need a scene. Our input would probably consist of a bunch of vertices that have to be fitted together to form triangles. (Although whole boned objects would be better.)
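To make that input concrete, something like this (just a sketch, the names are placeholders and mean nothing in particular):

```cpp
#include <cstdint>
#include <vector>

// A minimal sketch of the raw input: a shared vertex pool plus index
// triples that stitch vertices into triangles. Illustrative only.
struct Vertex {
    float x, y, z;    // position
    float u, v;       // texture coordinates
};

struct Triangle {
    uint32_t i0, i1, i2;  // indices into the vertex pool
};

struct SceneInput {
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
};
```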
That would not be a whole scene; that would just be a bunch of sorted triangles. We want a structure that puts them all together, that makes a single entity of them. And what would work better (especially given that input) than a single mesh that describes all of them?
Of course, when you throw a ball (or fire a bullet), that would be a free-floating object without interconnects. But that doesn't matter.
When you have your input of lots of individual triangles, you want to transform them, occlude them and do things like displacement mapping. So, we fit them together. And we use a structure that describes the volume, just to have an index. (Like a 3D array that stores pointers to the foremost points of all triangles and their maximum XY extent, and grows backwards: a Z-buffer for vertices.)
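One possible reading of that "Z-buffer for vertices", again only a sketch of the layout I have in mind, not a fixed design:

```cpp
#include <cstdint>
#include <vector>

// A regular grid over the XY plane. Each cell keeps references to
// triangles, ordered front-to-back by the depth of their foremost
// vertex, together with their XY footprint. Field names are made up.
struct TriangleRef {
    uint32_t triangleIndex;     // index into SceneInput::triangles
    float    nearestZ;          // depth of the foremost vertex
    float    extentX, extentY;  // maximum XY footprint of the triangle
};

struct GridCell {
    std::vector<TriangleRef> refs;  // sorted by nearestZ, growing "backwards"
};

struct SceneIndex {
    int width, height;            // grid resolution over the XY plane
    std::vector<GridCell> cells;  // width * height cells

    GridCell& cell(int x, int y) { return cells[y * width + x]; }
};
```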
At that moment, all triangles that are not part of the surface would have been culled or transformed. And we would cap the remaining space to a single volume. That leaves us with a single mesh (albeit with some loose objects floating around inside) that represents the outer surface, curved to encapsulate everything within, and with a usable index.
Now all we have to do is render the inside surface of that mesh. We can start by texturing it, procedurally if possible. Fix the camera somewhere and do some ray-tracing to get the desired image.
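The per-pixel step would look roughly like this, building on the sketches above. It's a standard ray/triangle test plus a walk over one grid cell's list; nothing here is meant as the final implementation:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct TriVerts { Vec3 v0, v1, v2; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Standard Moeller-Trumbore ray/triangle test; returns the distance t
// along the ray on a hit, or a negative value on a miss.
float rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float eps = 1e-6f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return -1.0f;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    return dot(e2, q) * inv;                  // hit distance
}

// Return the index of the nearest triangle hit by the ray, or -1 if
// none. With the cell's list already sorted front to back, a fuller
// version could stop as soon as the next entry's nearest depth lies
// beyond the best hit found so far.
int nearestHit(Vec3 orig, Vec3 dir, const std::vector<TriVerts>& tris, float& tOut)
{
    int best = -1;
    float bestT = 0.0f;
    for (std::size_t i = 0; i < tris.size(); ++i) {
        float t = rayTriangle(orig, dir, tris[i].v0, tris[i].v1, tris[i].v2);
        if (t > 0.0f && (best < 0 || t < bestT)) { best = static_cast<int>(i); bestT = t; }
    }
    if (best >= 0) tOut = bestT;
    return best;
}
```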
But when we have a scene like that, we can do some very neat optimizations. For starters, we can do collision detection while generating it. And we can have the objects interact while doing the displacement mapping, to produce nice waves in water, for example.
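For the collision part, the cheapest version I can think of is to check for overlaps at the moment a triangle is inserted into its grid cell. Very rough sketch, and the fields (object id, depth range) are assumptions on top of the index above:

```cpp
#include <cstdint>
#include <vector>

// When a triangle goes into a cell, compare its depth range against
// entries already there that belong to a different object; overlapping
// ranges in the same cell are flagged as candidate contacts.
struct CellEntry {
    uint32_t objectId;
    uint32_t triangleIndex;
    float    zNear, zFar;   // depth range of the triangle in this cell
};

struct Contact {
    uint32_t objectA, objectB;
};

void insertWithCollisionCheck(std::vector<CellEntry>& cell,
                              const CellEntry& e,
                              std::vector<Contact>& contacts)
{
    for (const CellEntry& other : cell) {
        if (other.objectId != e.objectId &&
            e.zNear <= other.zFar && other.zNear <= e.zFar) {
            contacts.push_back({other.objectId, e.objectId});
        }
    }
    cell.push_back(e);
}
```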
As it would be a waste to texture everything, it would be much nicer to use ray-tracing and only texture a texel at the moment a ray hits it. Nice and smooth. And while doing that, why not split the beam when it hits an edge? That gives us real anti-aliasing.
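Something in this direction, purely to show the idea (the procedural texture, the edge threshold and the way the second sample is taken are all made up):

```cpp
#include <cmath>

// The surface colour is evaluated procedurally at the hit point, so
// nothing off-screen or occluded is ever textured. When the hit lies
// close to a triangle edge (a barycentric coordinate near zero), the
// beam is split and the samples averaged, which is where the built-in
// anti-aliasing would come from.
struct Colour { float r, g, b; };

// Hypothetical procedural texture, evaluated lazily at the hit point.
Colour proceduralTexture(float u, float v)
{
    float c = (std::fmod(std::floor(u * 8.0f) + std::floor(v * 8.0f), 2.0f) == 0.0f) ? 1.0f : 0.3f;
    return {c, c, c};
}

bool nearEdge(float u, float v, float eps = 0.02f)
{
    float w = 1.0f - u - v;   // third barycentric coordinate
    return u < eps || v < eps || w < eps;
}

Colour shadeHit(float u, float v)
{
    Colour c = proceduralTexture(u, v);
    if (nearEdge(u, v)) {
        // Split the beam: in a full tracer the second sample would be a
        // new ray nudged across the edge; here a neighbouring texture
        // sample stands in for it.
        Colour c2 = proceduralTexture(u + 0.01f, v + 0.01f);
        c = { (c.r + c2.r) * 0.5f, (c.g + c2.g) * 0.5f, (c.b + c2.b) * 0.5f };
    }
    return c;
}
```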
While the hardware would have to transform all vertices and create all those triangles from them every frame for the whole volume, that is not as bad as it sounds, as the surfaces themselves and the pixels (texels) they contain would only have to be produced at the moment a ray hits them.
Can you think of a method that requires less work with better results? Because all effects you can imagine could be done this way. And only the actions that are needed to display the image are executed after the mesh is built.
Edit: I know this sounds just like a generic ray-tracer. And it is. (Although you could use rasterization as well, but why would you?) Then again, it would solve the current problems with brute-force hardware nicely, wouldn't it? Without using more resources, just by using a clever way of transforming and referencing the whole scene.