Voxel rendering (formerly Death of the GPU as we Know It?)

Morkins: correct but do note that Carmack's original plan for id Tech 6 is/was a hybrid solution of SVO for static world geometry + traditional polygon/shaders for animated/dynamic geometry.
 
Morkins: correct but do note that Carmack's original plan for id Tech 6 is/was a hybrid solution of SVO for static world geometry + traditional polygon/shaders for animated/dynamic geometry.

Like Outcast in 1999 then, except it was using a different voxel approach.
 
You can do animation with voxels, it's just that there will be massive tooling problems (do the tools even exist?). There is no real technical reason why you can't transform voxels using a skeleton like you do with vertices. Conceptually each voxel could be treated the same as a vertex as far as transformations go. It's obviously far more computationally expensive though, since a model built from voxels is likely to have far more voxels to transform than a texture-mapped trimesh has vertices. That assumes there is only one 'colour' per voxel. You could have coarser geometry and finer texture detail on a voxel model (but then why not just use a trimesh instead).
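
To make the "each voxel is just a vertex" idea concrete, here's a minimal skinning sketch (plain C++; SkinnedVoxel and the other names are made up for illustration). It assumes the voxels sit in a flat array with per-voxel bone weights, which, as the next reply points out, is not how an SVO actually stores them:

#include <array>
#include <vector>
#include <cstddef>

// Minimal 3D vector and 3x4 bone matrix types for the sketch.
struct Vec3 { float x, y, z; };
struct Mat3x4 { float m[3][4]; }; // rotation + translation

Vec3 transformPoint(const Mat3x4& b, const Vec3& p) {
    return { b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
             b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
             b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3] };
}

// One voxel treated exactly like a skinned vertex: a centre position plus
// up to four bone influences, as in standard linear blend skinning.
struct SkinnedVoxel {
    Vec3 restCentre;
    std::array<int, 4>   bone;
    std::array<float, 4> weight; // assumed to sum to 1
};

// Skins every voxel centre by its weighted bone matrices. Per element the
// cost is the same as vertex skinning; there are just far more elements.
void skinVoxels(const std::vector<SkinnedVoxel>& voxels,
                const std::vector<Mat3x4>& boneMatrices,
                std::vector<Vec3>& outCentres)
{
    outCentres.resize(voxels.size());
    for (std::size_t i = 0; i < voxels.size(); ++i) {
        const SkinnedVoxel& v = voxels[i];
        Vec3 p = { 0.0f, 0.0f, 0.0f };
        for (int j = 0; j < 4; ++j) {
            Vec3 t = transformPoint(boneMatrices[v.bone[j]], v.restCentre);
            p.x += v.weight[j] * t.x;
            p.y += v.weight[j] * t.y;
            p.z += v.weight[j] * t.z;
        }
        outCentres[i] = p;
    }
}

Per element it's the same maths as vertex skinning; the problem is purely the element count and the storage format.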
 
There is no real technical reason why you can't transform voxels using a skeleton like you do with vertices. Conceptually each voxel could be treated the same as a vertex as far as transformations go.
It's not that simple. Vertices in polygon graphics are simply projected to the screen as part of the rasterization process. You can move vertices freely, since the structure is just a simple array and there is no location dependence.

If you use an SVO (sparse voxel octree) to store voxels, you cannot simply move single voxels. The position/scale/connectivity of each voxel depends entirely on its location in the data structure. You need to rebuild the data structure if you want to move some voxels. You cannot simply update the position of some voxels (voxels do not even have a property called position).
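
To illustrate the "no position property" point, here's a rough sketch (made-up node layout, not any real engine's format) of how a voxel's cube is reconstructed purely from the path of child indices taken from the root; moving a voxel means changing that path, i.e. editing the tree:

#include <cstdint>
#include <vector>

// Illustrative node: note there is no position or size stored anywhere.
struct SvoNode {
    uint8_t  childMask;   // which of the 8 children exist
    uint32_t firstChild;  // index of the first child in a flat node array
};

struct Box { float min[3]; float size; };

// Given the child indices (0..7, bit 0 = x, bit 1 = y, bit 2 = z) chosen at
// each level, recover the cube the final voxel occupies.
Box boxFromPath(Box root, const std::vector<int>& path)
{
    Box b = root;
    for (int child : path) {
        b.size *= 0.5f;
        b.min[0] += ((child >> 0) & 1) * b.size;
        b.min[1] += ((child >> 1) & 1) * b.size;
        b.min[2] += ((child >> 2) & 1) * b.size;
    }
    return b;
}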

Simple transformations such as rotating a whole node (and all its children) by 90 degrees are easy to do, but free rotation, scaling and movement for a subset of voxels pretty much require you to have multiple static SVOs laid on top of each other (making the raycasting more expensive in areas where multiple SVOs intersect). And this allows only linear transformations, since you do not modify the SVO (only the ray entering the SVO is transformed by the SVO's inverse transform matrix).

For blended skinning (a non-linear transform) you would need to modify the SVO (regenerate parts of it every frame) and that's pretty expensive. Or you need some other data structure that is designed to hold dynamic data. Of course, rendering skinned objects using polygons is also possible.
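
A hedged sketch of the "transform the ray, not the voxels" trick described above (traceLocal stands in for whatever local-space octree raycaster you already have; the matrix helpers are only there to keep the example self-contained):

#include <functional>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; }; // row-major; assumed to be a rigid transform

static Vec3 mulPoint(const Mat4& a, const Vec3& p) {
    return { a.m[0][0]*p.x + a.m[0][1]*p.y + a.m[0][2]*p.z + a.m[0][3],
             a.m[1][0]*p.x + a.m[1][1]*p.y + a.m[1][2]*p.z + a.m[1][3],
             a.m[2][0]*p.x + a.m[2][1]*p.y + a.m[2][2]*p.z + a.m[2][3] };
}
static Vec3 mulVector(const Mat4& a, const Vec3& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

// Render a rigidly transformed SVO instance without touching its data:
// only the ray is transformed, so anything beyond a linear/rigid transform
// is out of reach, exactly as described above.
bool traceInstance(const Mat4& worldFromLocal, const Mat4& localFromWorld,
                   Vec3 rayOrigin, Vec3 rayDir, Vec3* hitWorld,
                   const std::function<bool(Vec3, Vec3, Vec3*)>& traceLocal)
{
    // Move the ray into the SVO's local space instead of moving the voxels.
    Vec3 o = mulPoint(localFromWorld, rayOrigin);
    Vec3 d = mulVector(localFromWorld, rayDir);

    Vec3 hitLocal;
    if (!traceLocal(o, d, &hitLocal))
        return false;

    // Map the hit point back into world space for shading.
    *hitWorld = mulPoint(worldFromLocal, hitLocal);
    return true;
}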
 
I also wouldn't like to see a character rigger's reaction to skinning a multi-million-voxel character, painting weights and editing influences and all. Chances are the guy wouldn't be happy and would instead start beating you with his keyboard.
 
Relevant document:
http://bautembach.de/wordpress/wp-content/uploads/asvo.pdf

There's an error in that document however, since a voxel is NOT a little cube (nor a cuboid...).

et tu brute? At first I thought he used that in the abstract or whatever, but yeah, having someone write a thesis on voxels and not describe them correctly in the body of the thesis seems a bit sloppy.

also a question:

thesis said:
Voxels are inappropriate for representing flat surfaces, which can be found in architecture and furniture for example. Such surfaces can be modeled with very few triangles, whereas voxel models must always have maximal resolution to appear smooth - no matter the kind of shape they represent. This is the main reason why one should not stick to one technology exclusively but mix both, tapping their full potential.

That's the full entry. Does he mean inappropriate in terms of storage or rendering? If the former, it can be argued that it gives artists extra flexibility if later on that surface is no longer flat; and if he means the latter, maximal resolution (I'm assuming he means voxel LOD level?) is screen-resolution and depth dependent, so can anyone explain what exactly is meant by that?
 
Morkins: correct but do note that Carmack's original plan for id Tech 6 is/was a hybrid solution of SVO for static world geometry + traditional polygon/shaders for animated/dynamic geometry.

I would be interested to know how you would implement something like a trapdoor in the ground opening - if the static surroundings were done using SVO and the trapdoor was done with traditional polygons, would the trapdoor not stick out among the rest of the floor due to using a "lower quality" rendering? A bit like older cartoons where anything that moved was obviously a different colour to its surroundings.
 
It'd actually be the other way around IMHO - the voxel parts of the level would be more constrained in the level of detail they can achieve, compared to the more flexible polygons.

Deferred shading could be used on both voxels and polygons to get a similar, cohesive look.

Complex curved surfaces like cars and anything coming from industrial design would indeed be very problematic with voxels, especially when using shiny reflective materials (which tend to exaggerate surface continuity problems). Even something as simple as a sphere or a pipe could become problematic if the level of detail (size of voxels) is not sufficient. Polygons let you match vertex density to the curvature of the surface to maintain a constant level of smoothness; you can't do that with voxels and octrees. This kind of geometry is better suited to rough, organic stuff like terrain, vegetation, characters and so on.
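
To put a rough number on "matching vertex density to curvature" (back-of-the-envelope sketch only; the 0.3 m radius and 0.5 mm tolerance are just picked for illustration):

#include <cmath>
#include <cstdio>

// Number of straight segments needed so a circle of radius r deviates from
// its polygonal approximation by at most maxError (the sagitta of each
// segment is s = r * (1 - cos(theta / 2))).
int segmentsForCircle(double radius, double maxError)
{
    const double kPi = 3.14159265358979323846;
    double theta = 2.0 * std::acos(1.0 - maxError / radius);
    return static_cast<int>(std::ceil(2.0 * kPi / theta));
}

int main()
{
    // A wheel rim of radius 0.3 m kept within half a millimetre of a true
    // circle needs only ~55 segments; a voxel grid would need ~0.5 mm cells
    // over the whole surface to hold the same tolerance.
    std::printf("%d segments\n", segmentsForCircle(0.3, 0.0005));
    return 0;
}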
 
Even something as simple as a sphere or a pipe could become problematic if the level of detail (size of voxels) is not sufficient. Polygons let you match vertex density to the curvature of the surface to maintain a constant level of smoothness; you can't do that with voxels and octrees.

Note that this is only true if you don't have the storage capability to have voxels smaller than screen-size pixels. Once you reach that point, a sphere or whatever constructed with voxels would be just as "round" as any poly-based sphere, because in both cases, rasterised pixels still form a non-circular silhouette.
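
Back-of-the-envelope for when voxels drop below pixel size (a sketch using the standard pinhole projection; the 100 m root node, 5 m distance and 60 degree FOV are arbitrary example values):

#include <cmath>
#include <cstdio>

// Octree depth needed so that a voxel, seen at the given distance, projects
// to at most one pixel on screen (pinhole camera, vertical FOV in radians).
int depthForSubPixelVoxels(double rootSize, double distance,
                           double verticalFovRadians, int screenHeightPixels)
{
    // World-space size covered by one pixel at that distance.
    double pixelWorldSize =
        2.0 * distance * std::tan(verticalFovRadians * 0.5) / screenHeightPixels;

    // Each octree level halves the voxel size: rootSize / 2^depth <= pixelWorldSize.
    double depth = std::log2(rootSize / pixelWorldSize);
    return depth <= 0.0 ? 0 : static_cast<int>(std::ceil(depth));
}

int main()
{
    // E.g. a 100 m root node viewed from 5 m away at 60 degrees FOV, 1080p:
    // roughly 15 levels are needed for sub-pixel voxels at that distance.
    std::printf("%d levels\n",
                depthForSubPixelVoxels(100.0, 5.0, 60.0 * 3.14159265 / 180.0, 1080));
    return 0;
}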

There are probably some differences in what types of AA you could then apply in either case (still reading up on that) but if push came to shove, SSAA ought to work the same in both situations.

Of course, a 120 poly basketball + 128^2 texture * 9 channels would probably take up less space than a voxel basketball AND be hardware accelerated.
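
Rough arithmetic for that comparison (the voxel-side numbers, a 512^3 grid and 4 bytes per surface voxel, are pure guesses):

#include <cstdio>

int main()
{
    // Polygon basketball from above: 120 triangles plus a 128x128 texture
    // with 9 channels (assume 1 byte per channel).
    double polyBytes = 120 * 3 * (3 * 4)      // ~3 verts/tri, 3 floats each
                     + 128.0 * 128.0 * 9.0;   // texture data
    // Voxel basketball: only surface voxels matter, roughly pi * N^2 of them
    // for an N^3 grid; assume N = 512 and, say, 4 bytes per voxel for colour.
    double n = 512.0;
    double voxelBytes = 3.14159265 * n * n * 4.0;

    std::printf("polygon ball: ~%.0f KB, voxel ball: ~%.0f KB\n",
                polyBytes / 1024.0, voxelBytes / 1024.0);
    return 0;
}

On those guesses the voxel ball comes out around twenty times larger, before counting any octree overhead.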
 
Yes, but even in the case of a car's wheel and rims, just try to imagine the amount of data you'd need with voxels to make it reasonably smooth... just to reach the quality level you can have today with polygons, like in GT 5. It'd be several orders of magnitude more than a simple polygonal representation, and then there's going to be tessellation and displacement as a realistic approach within a few years on the next-gen consoles, making the poly-based datasets even more efficient.
Not to mention that even a relatively rough polygon-based model (think previous generation) is a lot more pleasing to the eye compared to a voxel version. And this is a real-life scenario; game devs have to deal with it in nearly every kind of game.

Now car bodies are a lot more complex: sculpted surfaces with sharp edges on the main lines, smooth transitions between them, panel separation lines just a few millimetres wide, and so on. And the reflections make every tiny discontinuity a lot more evident, which is why all normal maps for such objects have to be very high quality (I recall demonstrations of 8-bit vs 16-bit normals some years ago). It would probably take gigabytes of voxel data to properly represent such a surface.

Not to mention that a car has to, you know, move ;) at least in most games...
 
Heh. ;)

Well, the huge storage requirements are par for the course. You have a much better point about lower-quality rendering. Today if you run the game at minimum you may get simpler lighting, maybe no environment mapping (or the OLD environment mapping, heh), but usually models stay the same (maybe you go back one LoD level or whatever).

With voxels, if you go back one LoD level and start having voxels twice the size of on screen pixels you start seeing staircase artefacts everywhere (like a really bad aliasing day AKA Grand Theft Auto 4).

So, while normal quality voxels can look as detailed (if not better) than polys, once you drop down some quality rungs in the options screen you're back to Comanche circa 1992.

WRT cars moving around, you can do displacement. But yeah, animation looks to be a much more important problem to solve. (see this, animated car starts at 2m27s)
 
Characters are quite obviously polygons, but software rendered and with a form of bump mapping.

Some years ago there was extensive information available online about the various techniques they were using for water, shadows and so on. I can't seem to find the web pages now, though...
 
Characters are quite obviously polygons, but software rendered and with a form of bump mapping.

Some years ago there was extensive information available online about the various techniques they were using for water, shadows and so on. I can't seem to find the web pages now, though...

And it's not exactly a shining example of animation: when he runs, the character appears to have a full size baseball bat inserted in his rectum.
 
Note that this is only true if you don't have the storage capability to have voxels smaller than screen-size pixels. Once you reach that point, a sphere or whatever constructed with voxels would be just as "round" as any poly-based sphere, because in both cases, rasterised pixels still form a non-circular silhouette.

Are voxels axis-aligned? If yes, you will have severe problems with surface curvature, as expressed in normals. I can only imagine that voxels are better treated as sampled positions (as in a heightfield) and that you reconstruct the surface by connecting the samples along the surface with, e.g., a quad patch. Though that is then some form of sample-volume-to-polygon conversion again, in (let's say) the vertex shader.
I mean there are a lot of attributes which express sub-pixel information. I don't doubt you can reconstruct those via voxels, but the storage cost then becomes extremely prohibitive (position + orientation + extent + ...) and the back-end (renderer) more expensive than the traditional one ... no?

Maybe I should read one of those papers ... :)
 
Are voxels axis-aligned?

In the context of modern voxel implementations using sparse octrees, no. They aren't cubes, their faces aren't aligned.

If yes, you will have severe problems with surface curvature, as expressed in normals. I can only imagine that voxels are better treated as sampled positions (as in a heightfield) and that you reconstruct the surface by connecting the samples along the surface with, e.g., a quad patch. Though that is then some form of sample-volume-to-polygon conversion again, in (let's say) the vertex shader.

Possibly, but not necessarily. Instead, think of how a bump map is stored as a texture without itself being drawn as texels: its information is used elsewhere to compute screen pixels. The voxel structure describes the position and relation of each voxel, both in terms of 3D coordinates and depth/detail, but it doesn't have to be rendered directly. For instance you could cast rays into the structure.
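
A very naive illustration of "casting rays into the structure" (not how a real SVO tracer works, those use a proper parametric traversal, but it shows the structure itself answering the query; the node layout is invented):

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Made-up node layout: 8 child slots, -1 meaning "no child here".
struct Node {
    int32_t  child[8];
    bool     isLeaf;
    uint32_t colour;   // e.g. packed RGBA for shading
};

// Descend from the root (index 0) to find the leaf containing point p, if any.
// Position is never read from the nodes; it falls out of the descent.
int findLeaf(const std::vector<Node>& nodes, Vec3 p, Vec3 boxMin, float boxSize)
{
    int idx = 0;
    while (!nodes[idx].isLeaf) {
        boxSize *= 0.5f;
        int cx = p.x >= boxMin.x + boxSize;
        int cy = p.y >= boxMin.y + boxSize;
        int cz = p.z >= boxMin.z + boxSize;
        boxMin.x += cx * boxSize;
        boxMin.y += cy * boxSize;
        boxMin.z += cz * boxSize;
        int next = nodes[idx].child[cx | (cy << 1) | (cz << 2)];
        if (next < 0) return -1;   // empty space at this point
        idx = next;
    }
    return idx;
}

// Naive ray march: step the ray and query the octree at each sample point.
int marchRay(const std::vector<Node>& nodes, Vec3 origin, Vec3 dir,
             Vec3 boxMin, float boxSize, float step, int maxSteps)
{
    Vec3 p = origin;
    for (int i = 0; i < maxSteps; ++i) {
        int leaf = findLeaf(nodes, p, boxMin, boxSize);
        if (leaf >= 0) return leaf;   // hit a filled voxel
        p.x += dir.x * step; p.y += dir.y * step; p.z += dir.z * step;
    }
    return -1;
}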

I mean there are a lot of attributes which express sub-pixel information. I don't doubt you can reconstruct those via voxels, but the storage cost then becomes extremely prohibitive (position + orientation + extent + ...) and the back-end (renderer) more expensive than the traditional one ... no?

In theory you should only need colour information, deriving all other necessary information from the structure. There seems to be some discussion on what exactly you have to store though. Regardless, it requires much more storage and because it's not a collection of "solid vertices" floating in worldspace you can't animate them as easily.
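
One common way of deriving extra information from the structure alone is to estimate a normal from the occupancy of neighbouring cells (a generic density-gradient sketch, not any specific engine's method; isSolid is a placeholder query):

#include <cmath>

struct Vec3 { float x, y, z; };

// isSolid is whatever query tells you whether the cell at (x, y, z) is filled.
Vec3 normalFromOccupancy(bool (*isSolid)(int, int, int), int x, int y, int z)
{
    // Central differences of occupancy approximate the surface gradient;
    // the negated, normalised gradient points out of the solid region.
    float gx = float(isSolid(x + 1, y, z)) - float(isSolid(x - 1, y, z));
    float gy = float(isSolid(x, y + 1, z)) - float(isSolid(x, y - 1, z));
    float gz = float(isSolid(x, y, z + 1)) - float(isSolid(x, y, z - 1));
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    if (len == 0.0f) return { 0.0f, 0.0f, 1.0f };   // flat/ambiguous neighbourhood
    return { -gx / len, -gy / len, -gz / len };
}

In practice you'd filter over a larger neighbourhood or store a distance/density field, otherwise the normals come out as blocky as the voxels, which is part of why there's debate about what has to be stored per voxel.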
 
After MineCraft creator Notch called shenanigans (the actual word he used was "scam") on Euclideon's recently released Unlimited Detail Real-Time Rendering Technology Preview Video, many suspected that would be the last word on this for a while. Contrary to that, there's now a video interview on HARDOCP where Euclideon founder and lead engineer Bruce Dell offers a lengthy defense of his claims, attempting to address some of the questions that have been raised about all this, such as why we don't see any animation, whether what they're creating is a voxel engine, and more.

http://www.hardocp.com/article/2011/08/10/euclideon_unlimited_detail_bruce_dell_interview
 
Thanks Davros,

So, a fairly high-end machine, running at 15-25 fps and below the display's resolution.
 