The impact of streaming architectures on Voxels?

ector said:
Let's see, an American football field is 110x49 meters. To get good-looking grass you need voxels far below millimeter size, but just for the argument let's say we're gonna use 1x1x1 mm voxels. The voxel field would need to be at least 15 cm high. So, 150x110000x49000 voxels; let's estimate 1 byte per voxel. We need 808 GB of RAM just to store that data. Now you want to run a physical simulation on that to get waving grass, chunks of turf etc., which would also increase the necessary height of the voxel field tenfold. Yeah right...
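For what it's worth, the figures in that post do check out; a quick back-of-the-envelope script (assuming 1 byte per voxel, as the post does):

```python
# Reproducing the memory estimate from the post above.
depth_voxels = 150        # 15 cm of grass at 1 mm per voxel
length_voxels = 110_000   # 110 m at 1 mm per voxel
width_voxels = 49_000     # 49 m at 1 mm per voxel

total_voxels = depth_voxels * length_voxels * width_voxels
gigabytes = total_voxels / 1e9  # 1 byte per voxel, decimal GB

print(total_voxels)  # 808500000000
print(gigabytes)     # 808.5
```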

Why couldn't the field just be stored as a height map?


As far as the massively increased storage requirements go, holographic disc drives are projected to store over a terabyte of information with a data transfer rate of a gigabyte per second. I'm assuming a stripped-down, read-only HVD drive would be reasonably priced by 2011. A game should be able to stream off an HVD fast enough.
 
Am I understanding this technology right? A sphere rendered in voxels is divided into cube spaces. The file storing the sphere object holds an x-by-y-by-z box of these cube 'building blocks'. A voxel space of resolution 100x100x100 could hold a sphere of radius 50 blocks. Rendered to screen at a distance where one block covers one pixel, we get a 100x100 circle.

If so, surely the moment you move close into the scene and the voxels become larger than one pixel in size, you get aliasing. This is the aliasing Mr. Wibble is talking about, but it'd be a trillion times worse than the aliasing we all know and hate. Zoomed in close on a 720p display so the sphere reaches from top to bottom, each voxel would fill 1% of the vertical, or about 7 pixels per block. At a time when we're wanting AA to smooth out single-pixel aliasing, people are suggesting a technology that creates multi-pixel aliasing?! :oops:
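The coverage arithmetic in that example is easy to verify:

```python
# Voxel-to-pixel coverage in the example above: a sphere that is
# 100 voxels across, blown up to fill a 720-pixel-tall display.
sphere_diameter_voxels = 100
screen_height_pixels = 720

pixels_per_voxel = screen_height_pixels / sphere_diameter_voxels
print(pixels_per_voxel)  # 7.2 -> each voxel covers roughly a 7x7 pixel block
```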
 
Brimstone said:
Why couldn't the field just be stored as a height map?

I thought you wanted actual blades of grass and lumps of dirt flying around? A heightmap lawn would look like a fakir's bed of nails :p

Voxels aren't really practical for much other than MRI brain scans.
 
MrWibble said:
We pretty much surpassed that ratio with PS2. The issue is LOD - which voxels are likely to be a lot worse at solving than a surface based technique.
we still have a way to go before each pixel on the screen has 1 or more polygons occupying it. while my ps2 library isn't all-encompassing, every game i own has polygons that are several pixels big.

MrWibble said:
Organic shapes will probably look worse with voxels, because voxels build everything up from tiny cubes. Try making an organic shape from Lego...

I think you'd be much better off using level sets and implicit surfaces.
the problem with voxels will always be that they look poor at low resolutions. the problem with polygons is the opposite: at higher resolutions you can start to see the geometry that makes up an object. neither solution is artifact-free.

MrWibble said:
This is a hopeless argument. Your "theory", trying to make your primitives so small that they'll never be bigger than a pixel, is doomed to failure and pointlessness. You can apply the same principle to polygons and they'll look better, be cheaper, and use less resources.
that would be interesting to benchmark. given the football field example given earlier in this thread, is there any hardware capable of rendering an 808,500,000,000-polygon (making an equal polygon:voxel ratio) football field?
 
ector said:
Let's see, an American football field is 110x49 meters. To get good-looking grass you need voxels far below millimeter size, but just for the argument let's say we're gonna use 1x1x1 mm voxels. The voxel field would need to be at least 15 cm high. So, 150x110000x49000 voxels; let's estimate 1 byte per voxel. We need 808 GB of RAM just to store that data. Now you want to run a physical simulation on that to get waving grass, chunks of turf etc., which would also increase the necessary height of the voxel field tenfold. Yeah right...
I'm rather dubious about some of the memory calculations in this thread.

A set of voxels is stored as an octree. The root node represents the entire modelled object; each child of the root represents one eighth of the model.

There seems to be an assumption that every branch in the octree is the same length, so each leaf node represents the same quantity of volume. Why? If a given node, even a large one, represents a homogeneous volume, why subdivide it further? Even if we don't paint textures onto voxels, there's no need to subdivide nodes that aren't visible. For the above deformation example, the octree need only be extended as the object is deformed.

Not that I'm saying voxels are a good solution for the above...
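The sparse-subdivision point can be sketched in a few lines; this is a hypothetical, illustrative octree only (the `Node` class and its fields are my own invention, not anyone's engine):

```python
# Minimal sparse octree sketch: homogeneous regions stay as a single
# leaf instead of being subdivided all the way down.

class Node:
    def __init__(self, value=None):
        self.value = value      # set for a homogeneous leaf
        self.children = None    # becomes a list of 8 Nodes once subdivided

    def subdivide(self):
        # Split a homogeneous leaf into 8 children inheriting its value.
        if self.children is None:
            self.children = [Node(self.value) for _ in range(8)]
            self.value = None

def count_nodes(node):
    if node.children is None:
        return 1
    return 1 + sum(count_nodes(c) for c in node.children)

# A solid (homogeneous) cube costs 1 node regardless of its size...
solid = Node(value=1)
# ...and the tree only grows where the object is actually deformed:
solid.subdivide()            # e.g. a chunk of turf gets torn out here
solid.children[0].value = 0  # one octant becomes empty
print(count_nodes(solid))    # 9 nodes, not 8**depth
```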

The best use for voxels would be where data on the innards of an object is needed. For example, a fog cloud with varying density. Whether this would be better than using particles is a different matter.

"Voxel" terrains, as used by Novalogic in the past, use heightmaps rather than octrees. They are highly simplified. Late Novalogic voxel games were capable of 6DOF, whereas most voxel terrains were limited to 4DOF. I assume they could be implemented easily on a GPU using shaders, and using the GPU to sample the heightmap: whether there's any advantage over polygonal techniques though...
 
see colon said:
the problem with voxels will always be that they look poor at low resolutions. the problem with polygons is the opposite: at higher resolutions you can start to see the geometry that makes up an object.
Uhm, what are you talking about? :oops: Just compare Quake 1 enemies and Painkiller enemies for example. Surely you would not say the latter has MORE obvious geometry?

given the football field example given earlier in this thread, is there any hardware capable of rendering an 808,500,000,000-polygon (making an equal polygon:voxel ratio) football field?
The example is a bit silly (and, I suspect, fundamentally flawed), because there's no need to store that many polygons. For starters, there's no need to cover the empty air, so there goes a shitload of polys. Second, no need to store all the separate blades of grass either; it's enough to store just one polygon and how it should be distorted, which could be done in some much more efficient format than storing the position of every vertex of every blade in the entire field...
 
Well, I'm a total layman, but from what you're describing, voxels seem ideal for some sort of procedural algorithm to generate or tessellate detail. They are pixels... so if a little 1000-voxel sphere is zoomed in on, couldn't a CPU just keep subdividing the voxels to make it appear smooth no matter the distance?
 
[URL="member.php?u=2708 said:
see colon[/URL]"]we still have a way to go before each pixel on the screen has 1 or more polygons occupying it.
Not in terms of polycounts. What was said was "getting more polygons on screen than there are pixels", and there are PS2 games that do exactly that.

Like MrWibble said - the issue is with polygon distribution (of which LOD is a very large part) - drawing 300k polygons (~640x480) is not the issue - distributing them evenly across those 300k pixels is.

that would be interesting to benchmark. given the football field example given earlier in this thread, is there any hardware capable of rendering an 808,500,000,000-polygon (making an equal polygon:voxel ratio) football field?
You should really make up your mind here - do you want one polygon per pixel or one million polygons per pixel?
 
GwymWeepa said:
Well, I'm a total layman, but from what you're describing, voxels seem ideal for some sort of procedural algorithm to generate or tessellate detail. They are pixels... so if a little 1000-voxel sphere is zoomed in on, couldn't a CPU just keep subdividing the voxels to make it appear smooth no matter the distance?

In order to adaptively subdivide the voxels you're going to need to know where to put the new voxels. Either you just guess based on where the existing ones are, which is going to be pretty much the same as drawing polygons over the convex hull of the voxel locations except slower and more complex, or you have some kind of higher-order representation of the surface. Like a polygon.
 
amk said:
The best use for voxels would be where data on the innards of an object is needed. For example, a fog cloud with varying density. Whether this would be better than using particles is a different matter.

"Voxel" terrains, as used by Novalogic in the past, use heightmaps rather than octrees. They are highly simplified. Late Novalogic voxel games were capable of 6DOF, whereas most voxel terrains were limited to 4DOF. I assume they could be implemented easily on a GPU using shaders, and using the GPU to sample the heightmap: whether there's any advantage over polygonal techniques though...

As I said earlier, for representing a participating medium, using a 3D texture to hold what are essentially voxels and then using the GPU to render it seems like a reasonable approach to that particular problem.

Regarding the heightmap idea, recently people have been doing precisely that on GPUs - not to render whole models (where I think polygons are much more suitable) but for rendering surfaces with faked displacement maps.

So I agree that similar techniques are being used to good effect already, but only for small subsets of the problem.

Also, you're quite right about not having to subdivide space where surfaces aren't intersecting a given volume of it, but I would still maintain that for the majority of environments, and even objects, a voxel representation in an octree will be significantly more data than a polygonal or other surface-based approach, as well as being a far worse fit for the geometry even at extremely high levels of subdivision.
 
see colon said:
we still have a way to go before each pixel on the screen has 1 or more polygons occupying it. while my ps2 library isn't all-encompassing, every game i own has polygons that are several pixels big.

the problem with voxels will always be that they look poor at low resolutions. the problem with polygons is the opposite: at higher resolutions you can start to see the geometry that makes up an object. neither solution is artifact-free.

I challenge this. Voxels and polygons both have size. Both get bigger when you get close to them. Higher resolutions will make both cover more pixels.

You seem to think that voxels magically solve the problem of LOD. They do not.

Where an object's surfaces are all planar, a polygonal representation will be completely accurate using only as many polygons as there are surfaces, while a voxel representation will need as many voxels as there are pixels at the closest possible viewpoint.

For curved surfaces, both techniques can only approximate the surface, and both would require more primitives than there are pixels in order to be really accurate. However, with voxels I'd probably need to store all that in advance, whereas with polygons it would be fairly trivial to store a higher-order representation and use adaptive subdivision.

Unless the voxels happen to be axis-aligned with the surfaces of the object, the voxel method will probably be a bad choice. Even if it isn't, the polygonal representation will typically look better and be much, much cheaper.

The only example I can think of where a voxel rep would be cheaper for actual geometry would be a Lego model, and that's only if you ignore the little bits on top for sticking the bricks together or the hollow insides...

that would be interesting to benchmark. given the football field example given earlier in this thread, is there any hardware capable of rendering an 808,500,000,000-polygon (making an equal polygon:voxel ratio) football field?

Neither would be practical with that many primitives. The point is that with polygons you wouldn't need to match the ratio, because they could do the job much cheaper and better using far fewer. With voxels you'd probably need significantly more, and the cost of processing them to make them move as described would be astronomical.
 
MrWibble said:
As I said earlier, for representing a participating medium, using a 3D texture to hold what are essentially voxels and then using the GPU to render it seems like a reasonable approach to that particular problem.
But - correct me if I'm mistaken here - 3D textures only render the texels that the polygonal plane actually touches when it intersects the 3D texture volume. So it wouldn't be a true representation of a transparent object with depth, but rather just a thin slice of it.
 
Guden Oden said:
But - correct me if I'm mistaken here - 3D textures only render the texels that the polygonal plane actually touches when it intersects the 3D texture volume. So it wouldn't be a true representation of a transparent object with depth, but rather just a thin slice of it.

3D textures don't render anything - pixel shaders render things.

There's nothing stopping you from writing a pixel shader to iterate through a 3D volume to accumulate samples.
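A CPU-side sketch of what such a shader would do: step a ray through a 3D density volume and accumulate opacity front to back. The function name, compositing model, and constants are illustrative assumptions, not any particular shader:

```python
# Ray marching through a volume: the software equivalent of a pixel
# shader iterating through a 3D texture and accumulating samples.

def march(volume, origin, direction, steps, step_size):
    """Accumulate opacity front-to-back along one ray through `volume`,
    a nested list indexed [x][y][z] with densities in [0, 1]."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    x, y, z = origin
    dx, dy, dz = direction
    alpha = 0.0
    for _ in range(steps):
        ix, iy, iz = int(x), int(y), int(z)
        if 0 <= ix < nx and 0 <= iy < ny and 0 <= iz < nz:
            sample = volume[ix][iy][iz] * step_size
            alpha += (1.0 - alpha) * sample  # front-to-back "over" compositing
            if alpha > 0.99:                 # early ray termination
                break
        x, y, z = x + dx * step_size, y + dy * step_size, z + dz * step_size
    return alpha

fog = [[[0.8] * 4 for _ in range(4)] for _ in range(4)]  # uniform fog cloud
print(round(march(fog, (0.0, 2.0, 2.0), (1.0, 0.0, 0.0), 16, 0.5), 3))  # 0.983
```

The early-termination test is the software analogue of the occlusion/z-testing mentioned above: once a ray is nearly opaque, further samples contribute nothing visible.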
 
MrWibble said:
3D textures don't render anything
Sorry, poor wording. Polygons using 3D textures would (under normal circumstances) only draw texels that touch the intersecting plane, afaik.

What would a pixel shader that iterates through a reasonably large texture volume from any arbitrary angle and accumulates the texels do to pixel shader instruction count and performance? Pretty horrendous things, I would think... Perhaps point sprites would be faster/more manageable; that way the renderer could use the vidcard's standard alpha-blending functions.
 
Guden Oden said:
Sorry, poor wording. Polygons using 3D textures would (under normal circumstances) only draw texels that touch the intersecting plane, afaik.

What would a pixel shader that iterates through a reasonably large texture volume from any arbitrary angle and accumulates the texels do to pixel shader instruction count and performance? Pretty horrendous things, I would think... Perhaps point sprites would be faster/more manageable; that way the renderer could use the vidcard's standard alpha-blending functions.

It doesn't have to be prohibitively expensive. Bear in mind you're getting a continuous effect and can leverage the GPU for occlusion/z testing.

Techniques using the GPU to step through a texture (2D or 3D) are getting increasingly popular for view-dependent things such as parallax mapping.

I don't know if the number of samples required for a really complex volume is completely practical right now for in-game effects (it probably is for some things), but doing lots of samples from an array of data is precisely the kind of thing GPUs are tuned to do really fast.

Point sprites, if they are to be alpha-blended, would generally need to be sorted and might not give as good a result. You'd be pushing an awful lot more through the setup engine, too.
 
MrWibble said:
The only example I can think of where a voxel rep would be cheaper for actual geometry, would be a lego model, and that's only if you ignore the little bits on top for sticking the bricks together or the hollow insides...

Actually, I believe that voxels are OK for a grass field. Let's say you have 2 or 3 volumetric textures (64*64*16) which are repeated all over the field. They are rendered using horizontal alpha-blended slices (no ray-casting, please!). OK, you kill the fill-rate, but you can use a different number of slices according to the distance from the camera. LOD is done using conventional 2D mip-maps.

You can animate the grass as well by slightly moving the slices (not too much). But you cannot have foot tracks, because of the use of repeating 3D patterns.
This is one of the biggest drawbacks of volumetric representation: it's hard to animate, or you blow the memory.

Still, fill-rate is the killer. But with next-gen consoles you can achieve something like 30 screens per frame (in theory), which gives you some room to do a nice grass field. Of course, the grass mustn't be too high. Or perhaps it can be turned into 3D geometry just when the grass is really close...
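The distance-based slice LOD described here is easy to sketch; the near/far constants and the linear falloff are illustrative assumptions, not from the post:

```python
# Fewer alpha-blended slices (and thus less fill-rate) as the camera
# moves away from the grass volume.

MAX_SLICES = 16  # matches the 64*64*16 volume texture suggested above

def slices_for_distance(distance, near=5.0, far=100.0):
    """Linearly drop from the full slice count at `near` to 1 slice at `far`."""
    if distance <= near:
        return MAX_SLICES
    if distance >= far:
        return 1
    t = (distance - near) / (far - near)
    return max(1, round(MAX_SLICES * (1.0 - t)))

# Fill-rate cost scales with slice count, so distant grass gets cheap:
for d in (0, 30, 60, 100):
    print(d, slices_for_distance(d))  # 16, 12, 7, 1 slices respectively
```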
 
purpledog said:
Actually, I believe that voxels are OK for a grass field. Let's say you have 2 or 3 volumetric textures (64*64*16) which are repeated all over the field. They are rendered using horizontal alpha-blended slices (no ray-casting, please!). OK, you kill the fill-rate, but you can use a different number of slices according to the distance from the camera. LOD is done using conventional 2D mip-maps.

You can animate the grass as well by slightly moving the slices (not too much). But you cannot have foot tracks, because of the use of repeating 3D patterns.
This is one of the biggest drawbacks of volumetric representation: it's hard to animate, or you blow the memory.

Still, fill-rate is the killer. But with next-gen consoles you can achieve something like 30 screens per frame (in theory), which gives you some room to do a nice grass field. Of course, the grass mustn't be too high. Or perhaps it can be turned into 3D geometry just when the grass is really close...

That's no different from how fur and grass are rendered now (except typically it would use a 2D texture and not a 3D one). The example was using voxels to provide a fully deformable pitch, not to make some grass wave about.
 
Guden Oden said:
Uhm, what are you talking about? :oops: Just compare Quake 1 enemies and Painkiller enemies for example. Surely you would not say the latter has MORE obvious geometry?
that's exactly my point. rendered at, say, 320*240 a quake 1 model might look pretty decent. crank up the resolution and you'll be able to see the geometry that makes up the model. i think i should have specified pixel (or output) resolution, because i think we are seeing the same words and drawing different meanings.

MrWibble said:
You seem to think that voxels magically solve the problem of LOD. They do not.
when did i say this? i can't remember saying (or typing, as the case may be) this, nor do i believe it. LOD will always be an issue.

MrWibble said:
Where an object's surfaces are all planar, a polygonal representation will be completely accurate using only as many polygons as there are surfaces, while a voxel representation will need as many voxels as there are pixels at the closest possible viewpoint.

i can agree with this in an overall sense. but can you answer me a question? maybe i'm taking what you say (or how you say it) the wrong way, because it feels like you're trying to "break me of my heretic voxel ways", but i don't think i have any. i reviewed what i'd posted here, and we're agreeing most of the time, or at least i think we are, but you keep poking at me with the polygon stick.
 
purpledog said:
Still, fill-rate is the killer. But with next-gen consoles you can achieve something like 30 screens per frame (in theory)
Why wait for next-gen? There have been games on PS2, at least since 2003, that fill more than 30 screens per frame.
 