Less is more

DiGuru, you're not telling me anything that allows me to form an opinion, e.g.

To do all that and more, we need a clever way to store all those surfaces, so we can look up all relevant (sub-) pixels as fast as possible. And we need descriptors, for things like textures, transparency, fog and light sources (intensity).

So far you haven't suggested a "new" model; you've basically said "if you do something properly it'll work really well". Well, duh.

Maybe if you elaborate in more specific terms people could comment.

John.
 
JohnH said:
Maybe if you elaborate in more specific terms people could comment.

Ok.

First, we need a scene. Our input would probably consist of a bunch of vertices that have to be fitted together to form triangles. (Although whole objects with bones would be better.)

That would not be a whole scene; that would just be a bunch of sorted triangles. We want a structure that puts them all together and makes a single entity of them. And what would work better (especially when looking at the input) than a single mesh that describes all of them?
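Something like a single indexed mesh, in other words. A rough sketch of what I mean (the structure and the names are just illustrative, not any particular API):

Code:
// Minimal indexed ("shared-vertex") mesh: every triangle is three indices
// into one common vertex array, so the whole input becomes a single entity.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3>     vertices;  // one vertex pool for the whole scene
    std::vector<uint32_t> indices;   // three indices per triangle

    // Append a triangle into the shared pool. (A real implementation would
    // weld duplicate vertices here so neighbouring triangles share corners.)
    void addTriangle(const Vec3& a, const Vec3& b, const Vec3& c) {
        uint32_t base = static_cast<uint32_t>(vertices.size());
        vertices.push_back(a);
        vertices.push_back(b);
        vertices.push_back(c);
        indices.push_back(base);
        indices.push_back(base + 1);
        indices.push_back(base + 2);
    }

    size_t triangleCount() const { return indices.size() / 3; }
};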

Of course, when you throw a ball (or fire a bullet), that would be a free-floating object without interconnects. But that doesn't matter.

When you have your input of lots of individual triangles, you want to transform them, occlude them and do things like displacement mapping. So we fit them together, and we use a structure that describes the volume, just to have an index. (Like a 3D array that stores pointers to the foremost points of all triangles and their maximum XY length and grows backwards: a Z-buffer for vertices.)
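As a rough sketch of that index (the uniform XY grid, the cell size and the names are only my assumptions about one possible layout):

Code:
// A "Z-buffer for vertices": a uniform XY grid where each cell keeps
// references to triangles ordered by their front-most Z, together with
// their maximum XY extent. Layout and naming are illustrative only.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Entry {
    uint32_t triangle;  // index of the triangle in the scene mesh
    float    frontZ;    // nearest Z of the triangle (its foremost point)
    float    extentXY;  // maximum XY length, kept for conservative coverage
};

struct GridIndex {
    int   width, height;  // number of cells in X and Y
    float cellSize;       // world-space size of one cell
    std::vector<std::vector<Entry>> cells;  // width * height buckets

    GridIndex(int w, int h, float size)
        : width(w), height(h), cellSize(size), cells(w * h) {}

    // Insert a triangle reference into every cell its XY bounds touch,
    // keeping each bucket sorted front (small Z) to back (large Z).
    void insert(const Entry& e, float minX, float minY, float maxX, float maxY) {
        int x0 = std::max(0, static_cast<int>(minX / cellSize));
        int y0 = std::max(0, static_cast<int>(minY / cellSize));
        int x1 = std::min(width  - 1, static_cast<int>(maxX / cellSize));
        int y1 = std::min(height - 1, static_cast<int>(maxY / cellSize));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x) {
                auto& bucket = cells[y * width + x];
                bucket.push_back(e);
                std::sort(bucket.begin(), bucket.end(),
                          [](const Entry& a, const Entry& b) { return a.frontZ < b.frontZ; });
            }
    }

    // Front-most candidates for a given cell, already depth ordered.
    const std::vector<Entry>& lookup(int cx, int cy) const {
        return cells[cy * width + cx];
    }
};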

At that moment, all triangles that are not part of the surface would have been culled or transformed. And we would cap the remaining space to a single volume. That leaves us with a single mesh (albeit with some loose objects floating around inside) that would represent the outer surface, curved to encapsulate everything within, and with a usable index.

Now all we have to do is render the inside surface of that mesh. We can start by texturing it, procedurally if possible. Fix the camera somewhere and do some ray-tracing to get the desired image.
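A minimal sketch of that fixed-camera ray cast, using the standard Möller-Trumbore ray/triangle test (brute force over the triangle list here; an index like the one above would narrow the candidates):

Code:
// Cast a ray from a fixed camera against a list of triangles and return
// the nearest hit. Standard Moller-Trumbore intersection.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// True if the ray (orig, dir) hits the triangle; returns the distance t
// and the barycentric coordinates (u, v) of the hit point.
bool intersect(Vec3 orig, Vec3 dir, const Triangle& tri,
               float& t, float& u, float& v) {
    Vec3 e1 = sub(tri.v1, tri.v0);
    Vec3 e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 1e-6f;                           // hit in front of the camera
}

// Nearest triangle along one camera ray.
int trace(Vec3 orig, Vec3 dir, const std::vector<Triangle>& tris,
          float& tHit, float& uHit, float& vHit) {
    int best = -1;
    tHit = 1e30f;
    for (std::size_t i = 0; i < tris.size(); ++i) {
        float t, u, v;
        if (intersect(orig, dir, tris[i], t, u, v) && t < tHit) {
            best = static_cast<int>(i);
            tHit = t; uHit = u; vHit = v;
        }
    }
    return best;
}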

But when we have a scene like that, we can do some very neat optimizations. For starters, we can do collision detection while generating it. And we can have the objects interact while doing the displacement mapping, to produce nice waves in water, for example.
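As a sketch, the collision detection could fall out of filling the index itself: when a cell already holds a piece of another object over an overlapping depth range, you have a candidate pair to test further. The names and thresholds here are just illustrative:

Code:
// Collision candidates found "for free" while filling the index: each cell
// remembers which objects touch it and over what depth range; two different
// objects overlapping in both the cell and in Z become a pair to refine later.
#include <cstdint>
#include <utility>
#include <vector>

struct CellEntry {
    uint32_t object;   // which object this surface piece belongs to
    float minZ, maxZ;  // depth range it covers in this cell
};

struct CollisionGrid {
    int width, height;
    std::vector<std::vector<CellEntry>> cells;
    std::vector<std::pair<uint32_t, uint32_t>> candidates;  // pairs to test further

    CollisionGrid(int w, int h) : width(w), height(h), cells(w * h) {}

    void insert(int cx, int cy, const CellEntry& e) {
        auto& bucket = cells[cy * width + cx];
        for (const CellEntry& other : bucket) {
            if (other.object != e.object &&
                e.minZ <= other.maxZ && other.minZ <= e.maxZ) {
                candidates.emplace_back(other.object, e.object);
            }
        }
        bucket.push_back(e);
    }
};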

As it would be a waste to texture everything, it would be much nicer to use ray-tracing and texture the texel just before the ray hits it. Nice and smooth. And when doing that, why not split the beam when it hits an edge? That gives us real anti-aliasing.
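Sketched out, shading on demand plus that edge test could look something like this (the checker shader and the edge threshold are only placeholders):

Code:
// Shade a texel only at the moment a ray hits it, and decide whether the
// beam should be split because the hit lies near a triangle edge.
#include <cmath>

struct Color { float r, g, b; };

// Procedural texel, evaluated on demand from the hit's barycentric (u, v).
// A simple checker pattern stands in for any procedural texture.
Color shadeOnDemand(float u, float v) {
    int check = (static_cast<int>(std::floor(u * 8.0f)) +
                 static_cast<int>(std::floor(v * 8.0f))) & 1;
    float c = check ? 1.0f : 0.2f;
    return {c, c, c};
}

// A hit close to a triangle edge has one barycentric coordinate near zero;
// that is where extra sub-rays (the split beam) are worth tracing.
bool nearEdge(float u, float v, float eps = 0.02f) {
    float w = 1.0f - u - v;
    return u < eps || v < eps || w < eps;
}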

While the hardware would have to transform all vertices and create all those triangles from them every frame for the whole volume, that is not as bad as it sounds, as the surfaces themselves and the pixels (texels) they contain would only have to be produced at the moment a ray hits them.

Can you think of a method that requires less work with better results? Because all the effects you can imagine could be done this way. And only the actions that are needed to display the image are executed after the mesh is built.

Edit: I know this sounds just like a generic ray-tracer. And it is. (Although you could use rasterization as well, but why would you?) Then again, it would solve the current problems with brute-force hardware nicely, wouldn't it? Without using more resources, just by using a clever way of transforming and referencing the whole scene.
 
You seem to be describing some kind of approach to scene management that, at a guess, encloses objects in bounding meshes for occlusion queries, throwing in ray tracing and suggesting that it solves many problems.

I don't see it myself (at least not from what you've written). Besides, ray tracing is not the solution to all problems; e.g. you're still left with the problem of global illumination, to mention but one.

John.
 
Yes, the main thing is the scene management and transformations. When you have that, you can use a clever model to render it. I'm not sure what the best model would be.

What would you suggest, for good global illumination, for instance?
 
DiGuru said:
Another thing that is very interesting right now is to calculate the texture per pixel on demand. So, no pre-calculated textures, but just a formula that calculates how the pixel looks when a certain (procedural) texture would be applied. Some things are rather hard to calculate purely mathematically and would require a lookup table (i.e. a texture) or flow control to do right. Or you could (but you wouldn't want to) just calculate all iterations to be sure the result would be correct.

Procedural textures can never have the richness and artistic quality of a hand-painted bitmap (even if the painting involves a lot of procedural steps).
You should also consider the calculation overhead. Layers upon layers of fractal noise maps will result in longer render times - a lot longer, actually - and then you want to add raytracing... Think about textures as the cached results of a procedural shader: you can trade processing power for memory space and bandwidth, and because there can be many other calculations for each pixel, it makes sense to cache as much of the data as you can. Or think about the light maps in Quake - Carmack simply chose to cache the results of the lighting calculations for the levels. And I'm sure lightmaps will be used 10 years after Quake as well...
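To make the trade-off concrete, here's a rough sketch of a procedural formula baked once into a bitmap and then sampled at render time; the noise formula itself is only a placeholder:

Code:
// "A texture as the cached result of a procedural shader": evaluate a
// per-pixel formula once into a bitmap, then sample the bitmap while
// rendering. Trades processing power for memory and bandwidth.
#include <algorithm>
#include <cstdint>
#include <vector>

// Any per-pixel procedural formula; a cheap hash-noise stand-in here.
static float proceduralShader(float u, float v) {
    uint32_t n = static_cast<uint32_t>(u * 4096.0f) * 374761393u +
                 static_cast<uint32_t>(v * 4096.0f) * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return static_cast<float>(n & 0xFFFFu) / 65535.0f;
}

struct CachedTexture {
    int width, height;
    std::vector<float> texels;

    // Pay the processing cost once, up front (like baking a light map).
    CachedTexture(int w, int h) : width(w), height(h), texels(w * h) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                texels[y * w + x] =
                    proceduralShader((x + 0.5f) / w, (y + 0.5f) / h);
    }

    // At render time a lookup replaces the whole formula (u, v in [0, 1)).
    float sample(float u, float v) const {
        int x = std::min(width  - 1, std::max(0, static_cast<int>(u * width)));
        int y = std::min(height - 1, std::max(0, static_cast<int>(v * height)));
        return texels[y * width + x];
    }
};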

An interesting example of this approach is being used in offline rendering for character animation. Skin deformations, cloth simulations and such can get so complex that the lighting and rendering of the scene (which is a process of iterative refinement) can get unbearably slow. Or you want to render a fleet of a thousand Greek ships, complete with sails blowing in the wind and people walking around the decks, which would also take a lot of time to calculate for every frame.
With the amount of memory and the increased speed of hard disks in today's computers, many studios choose to simply write out the results of these deformations for each vertex, for every frame. It can take up a lot of space, and might sound a bit absurd at first, but the speed difference is night and day for the lighters, especially for cloth animation, which would take hours to re-simulate each time the scene is loaded. Or you can just load the same deformations multiple times for the sails and little CG humans while rendering a massive scene.
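As a rough sketch of that kind of point cache (the flat file layout is just an assumption; real pipelines use their own cache formats):

Code:
// Bake the deformed vertex positions of every frame into a flat binary file,
// then stream a frame back at render time instead of re-running the
// simulation. Both streams must be opened with std::ios::binary.
#include <cstddef>
#include <fstream>
#include <vector>

struct Vec3 { float x, y, z; };

// Append one frame's worth of deformed vertices to the cache file.
void writeFrame(std::ofstream& cache, const std::vector<Vec3>& verts) {
    cache.write(reinterpret_cast<const char*>(verts.data()),
                static_cast<std::streamsize>(verts.size() * sizeof(Vec3)));
}

// Read one frame back by seeking to frame * vertexCount * sizeof(Vec3).
std::vector<Vec3> readFrame(std::ifstream& cache, std::size_t frame,
                            std::size_t vertexCount) {
    std::vector<Vec3> verts(vertexCount);
    cache.seekg(static_cast<std::streamoff>(frame * vertexCount * sizeof(Vec3)));
    cache.read(reinterpret_cast<char*>(verts.data()),
               static_cast<std::streamsize>(vertexCount * sizeof(Vec3)));
    return verts;
}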
 
Ok. But isn't it hard to use such textures and maps when you transform everything first? How does that work with displacement mapping?
 