Can anyone recommend any good algorithms for converting a 3D Polygonal Mesh to a Volume (3D) Texture (or voxelized data)? Specifically I'm looking for a way to do this with some level of hardware acceleration (D3D9 level hardware).
One idea I've had is to use the stencil buffer to mask out the pixels for individual volume slices; however, this fails miserably in the pathological case where polygons are coplanar with the camera frustum planes. For example, a double-sided square whose faces are each labeled with their direction (-x, +x, -y, +y, ...) will come out with the -x/+x faces missing when rendered with an orthographic projection, because those faces are edge-on to the camera and never rasterize.
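For reference, here's roughly what I have in mind, sketched against D3D9 in C++. The helpers (`SetSliceTarget`, `SetOrthoNearPlane`, `RenderMesh`, `FillWhereStencilOdd`) are placeholders, not real API. The twist over plain masking is to test *solidity* via stencil parity: for each slice, render everything behind the slice plane and flip the stencil per surface crossing, so an odd count means "inside". On a watertight mesh this should be immune to the edge-on face problem, since it never depends on the face itself rasterizing:

```cpp
#include <d3d9.h>

// Hypothetical helpers -- placeholders, not an existing API.
void SetSliceTarget(IDirect3DDevice9*, int slice);
void SetOrthoNearPlane(IDirect3DDevice9*, float z);
void RenderMesh(IDirect3DDevice9*);
void FillWhereStencilOdd(IDirect3DDevice9*);   // full-screen quad pass

void VoxelizeSlices(IDirect3DDevice9* dev, int depth, float voxelSize)
{
    for (int slice = 0; slice < depth; ++slice)
    {
        SetSliceTarget(dev, slice);
        dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_STENCIL, 0, 1.0f, 0);
        SetOrthoNearPlane(dev, slice * voxelSize);  // clip at the slice plane

        // Pass 1: count surface crossings behind the slice plane.
        dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
        dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);  // both windings
        dev->SetRenderState(D3DRS_ZENABLE, FALSE);          // count every layer
        dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0);     // stencil only
        dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
        dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_INVERT); // parity
        RenderMesh(dev);

        // Pass 2: fill pixels where the low stencil bit is set (odd = inside).
        dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0xF);
        dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
        dev->SetRenderState(D3DRS_STENCILREF, 1);
        dev->SetRenderState(D3DRS_STENCILMASK, 1);
        FillWhereStencilOdd(dev);
    }
}
```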
Another crazy idea was to build a 3D voxel field and, for each voxel, render a cube map containing only the polygons _inside_ that voxel (so the render origin sits at the middle of a voxel wall, looking toward the voxel's center). The colors are then summed and averaged, which yields an RGB "color" value for that voxel, and the z-buffer can be used in a similar way to estimate the voxel's "solidness". A 32x32 cube map is probably more than sufficient, and the summing can be done by taking numerous cube map samples inside a special shader (no CPU touching anything). While slow, this would technically still be hardware accelerated.
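Sketched out (again C++/D3D9, with `TrianglesInVoxel`, `SetCubeFaceTarget`, and `RenderTriangles` as placeholder helpers), the loop looks something like the following. The cost is worth noting up front: a 64^3 grid is 262,144 voxels, i.e. up to ~1.6M face renders at 6 faces each, so a one-time CPU pass binning triangles into the grid, and skipping empty voxels, seems mandatory to make this tractable:

```cpp
#include <d3d9.h>
#include <vector>

// Hypothetical helpers -- placeholders, not an existing API.
std::vector<int> TrianglesInVoxel(int x, int y, int z);  // from a binning pre-pass
void SetCubeFaceTarget(IDirect3DDevice9*, int face);     // 32x32 render target
void RenderTriangles(IDirect3DDevice9*, const std::vector<int>& tris);

void VoxelizeByCubemaps(IDirect3DDevice9* dev, int N /* grid size, e.g. 64 */)
{
    for (int z = 0; z < N; ++z)
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x)
    {
        // Indices of the triangles overlapping this voxel.
        std::vector<int> tris = TrianglesInVoxel(x, y, z);
        if (tris.empty())
            continue;                       // most voxels render nothing

        for (int face = 0; face < 6; ++face)
        {
            SetCubeFaceTarget(dev, face);
            dev->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                       D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
            RenderTriangles(dev, tris);
        }

        // Average the 6 x 32 x 32 texels in a downsampling shader pass and
        // write rgb + occupancy for voxel (x,y,z) into the 3D texture; the
        // z-buffer contents feed the "solidness" estimate (not shown).
    }
}
```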
Other options are rasterizing the polygons and voxelizing the resulting pixels, or taking the vertices, inserting them into a voxel grid, and marking a voxel solid if a vertex lands in it. That means missing texture data, though I could probably just manually sample the texture map at each vertex's UV to get an approximate vertex color... ugh.
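The vertex-scatter fallback is at least trivial to write on the CPU. A minimal sketch (the `Vertex`/`Voxel` layouts and `SampleTexture` are placeholders I made up) also makes the weakness obvious: a large triangle spanning many voxels only marks the cells its three vertices land in, leaving holes:

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z, u, v; };
struct Voxel  { bool solid; uint8_t r, g, b; };

// Hypothetical helper: point-sample the diffuse texture at (u, v).
void SampleTexture(float u, float v, uint8_t& r, uint8_t& g, uint8_t& b);

// Drop each vertex into the grid cell containing it, mark the cell solid,
// and approximate the cell color from the texture at the vertex's UV.
void ScatterVertices(const std::vector<Vertex>& verts,
                     std::vector<Voxel>& grid, int N,
                     float minX, float minY, float minZ, float voxelSize)
{
    for (const Vertex& vtx : verts)
    {
        int ix = (int)((vtx.x - minX) / voxelSize);
        int iy = (int)((vtx.y - minY) / voxelSize);
        int iz = (int)((vtx.z - minZ) / voxelSize);
        if (ix < 0 || iy < 0 || iz < 0 || ix >= N || iy >= N || iz >= N)
            continue;                       // vertex outside the grid

        Voxel& cell = grid[(iz * N + iy) * N + ix];
        cell.solid = true;
        SampleTexture(vtx.u, vtx.v, cell.r, cell.g, cell.b);
    }
}
```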
Note that the efficiency of whatever algorithm I choose only matters for reducing development time; it has no bearing on the end result as long as the output is of sufficient quality. The eventual goal is to voxelize large data sets (1 million+ polygons) into a collection of volume textures at a specified granularity, but for now I'm just hoping to render a 10,000-polygon model into a 64x64x64 3D texture in under a minute.
I'm likely overthinking this, so some thinking outside the box would be very helpful. Thanks ahead of time for any suggestions.