Voxel rendering (formerly Death of the GPU as we Know It?)

This is definitely interesting and should be of interest to everyone who plays games.

Whether or not investors jump on the bandwagon will determine if this is going to be vapourware or the real deal.

Hopefully they will consider PC, Mac and console versions of the editor.
 
I wouldn't buy into this.

It sounds like the typical “Ray tracing could handle more geometry in the same time as rasterisation.” But I am missing a statement about the amount of memory that is needed to store this “better” geometry. Additionally, I could not find a useful word about how texturing and shading work with this. But the biggest problem isn’t even mentioned: how does this technique handle skinning? Most accelerated raytracing technologies break apart when it comes to modifying the geometry in real time.
 
Hehe, all this voxel stuff lately is giving me flashbacks to my elementary school science fair project (a voxel terrain renderer)... back then voxels were very cool, but they haven't really come a long way in the last decade compared to polygons. That said, I'm willing to revisit the idea, and indeed voxels still do have several desirable characteristics.

Let me respond to a few points in the article though. The guy did admit that he wasn't a graphics guy, so I'll cut him some slack ;)

When displaying a forest, for example, it makes a tree and then puts another tree in front.
It has been a while since we rendered using the painter's algorithm ;)

their system has unlimited power.
Ooh, so all that remains is to formulate the halting problem, or even TSP/3SAT/etc. in terms of voxel ray casting and I've put a lot of CS people out of their jobs, not to mention solved a lot of terribly hard problems. Reminds me of this fun paper :).

Point cloud data is much more efficient than polygon data. That’s not in dispute. It’s more accurate, and models can be hand made or laser scanned in, but either way the result is that it looks better.
Meh... that's kind of what you're trying to prove one way or another - you can't just "declare" it to be true. Particularly absurd considering no performance figures are given, which means you can't really draw any "efficiency" conclusions whatsoever. The whole conclusion is actually pretty simplistic and misleading, but I appreciate what they were trying to say.

They're also falling into the ray tracing trap of making claims like "we can render a bazillion peta-tera-bytes of data!" where such a measure is completely irrelevant and misleading. I can render a bazillion polygons too, if the majority are occluded, offscreen or at lower LOD/tessellation ;)

Anyways there is one, big, huge argument for using voxels in my opinion: easy, efficient, scalable LOD. They mention it briefly in the article, but really this is entirely the motivation for using such a data structure IMHO. Certainly doing good LOD with polygons isn't impossible (and it can be done without popping contrary to what the article says - check out the skydive from Crysis!), but it's also pretty complicated.

I'm interested to see where this stuff goes, and I can understand Carmack's interest in that the design elegantly flows together with virtual texturing (although arguably with voxels you could represent the texture data right in your data set as well, without needing parameterization... maybe that's what he's planning).

Thanks for the link!
 
It could be interesting if a mod tried to invite Bruce Dell to this board, no?
It seems he is reachable on the tkarena forum :)
 
This all looks very similar to something someone was trying to sell to us. I never thought it looked very good.
 
It could be interesting if a mod tried to invite Bruce Dell to this board, no?
It seems he is reachable on the tkarena forum :)

Definitely invite him to this thread. I'd be very interested to hear about the technology.

BTW, anyone have a link to download the demo?
 
I wouldn't buy into this.

It sounds like the typical “Ray tracing could handle more geometry in the same time as rasterisation.”
Yes and No.

By far the most significant point of ray tracing is that it is much more efficient at rendering geometric detail that has greater resolution than the render target.
But the real strong point of this approach compared to triangle meshes is that you can solve the issue of oversampling with voxels very easily and that you can compress them much more efficiently (like pixels).
Ray tracing adaptively compressed voxels is definitely very promising.

But I am missing a statement about the amount of memory that is needed to store this “better” geometry.
Adaptive 3D space partitioning can bring down the amount of data very significantly.
You can choose the maximum resolution depending on resources and desired quality. Think of clip maps in 3D space.
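
A quick back-of-the-envelope sketch of why that helps: subdivide only where there is actually surface, and stop at whatever depth the budget allows. The sphere test, byte-free node counting and depth range below are purely illustrative stand-ins, not anything from the actual system.

Code:
#include <cstdio>
#include <cmath>

// Adaptive 3D partitioning sketch: split a cube only where it (conservatively)
// straddles the surface of a unit sphere, and stop at a chosen maximum depth.
// Storage then scales roughly with surface area * 4^depth instead of
// volume * 8^depth, and maxDepth is the quality/resource dial.

bool straddlesSurface(double cx, double cy, double cz, double h) {
    double half = h * std::sqrt(3.0);                 // half-diagonal of the cell
    double d = std::sqrt(cx*cx + cy*cy + cz*cz);      // distance of centre to origin
    return d - half <= 1.0 && d + half >= 1.0;        // conservative surface test
}

long long countLeaves(double cx, double cy, double cz, double h, int depth, int maxDepth) {
    if (!straddlesSurface(cx, cy, cz, h)) return 0;   // empty or solid: store nothing
    if (depth == maxDepth) return 1;                  // one "atom" per finest cell
    long long n = 0;
    double q = h / 2.0;
    for (int i = 0; i < 8; ++i)
        n += countLeaves(cx + ((i & 1) ? q : -q),
                         cy + ((i & 2) ? q : -q),
                         cz + ((i & 4) ? q : -q),
                         q, depth + 1, maxDepth);
    return n;
}

int main() {
    for (int maxDepth = 5; maxDepth <= 9; ++maxDepth) {
        long long leaves = countLeaves(0, 0, 0, 2.0, 0, maxDepth); // root cube [-2,2]^3
        long long dense  = 1LL << (3 * maxDepth);                  // full grid for comparison
        std::printf("depth %d: %lld surface cells vs %lld dense cells\n",
                    maxDepth, leaves, dense);
    }
}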

Additionally, I could not find a useful word about how texturing and shading work with this.
You could do it just like with normal surface fragments.

But the biggest problem isn’t even mentioned: how does this technique handle skinning? Most accelerated raytracing technologies break apart when it comes to modifying the geometry in real time.
That is the most complicated part of course, but I am confident that good enough solutions for this will emerge.
 
Yes and No.

By far the most significant point of ray tracing is that it is much more efficient at rendering geometric detail that has greater resolution than the render target.
But the real strong point of this approach compared to triangle meshes is that you can solve the issue of oversampling with voxels very easily and that you can compress them much more efficiently (like pixels).
Ray tracing adaptively compressed voxels is definitely very promising.

At least it seems a better approach than ray tracing the geometry that we have today.

Adaptive 3D space partitioning can bring down the amount of data very significantly.
You can choose the maximum resolution depending on resources and desired quality. Think of clip maps in 3D space.

I know. But I am still curious how much memory the objects in the example shots need.

You could do it just like with normal surface fragments.

My fault. I was thinking about sphere voxels instead of cubic ones.
 
There was a 3DMark demo some years ago of a point cloud horse that spun around to test the maths processor of your machine. That horse was roughly 384,000 points, whereas this model is roughly 1.5 million points.

That demo was labelled as point sprites - is that what this point cloud data is?

PS: how is it better to store a triangle as lots of points rather than 3 points (or vertices) like we do today?
 
By far the most significant point of ray tracing is that it is much more efficient at rendering geometric detail that has greater resolution than the render target.
But the real strong point of this approach compared to triangle meshes is that you can solve the issue of oversampling with voxels very easily and that you can compress them much more efficiently (like pixels).
Yes indeed ray tracers' ability to turn "slow" into "aliasing hell" isn't of particular interest to me, but I am very interested in how this sort of voxel approach lends itself very nicely to simple, dynamic LOD in a very similar manner to texture filtering. Slap on some paging on top of it and it's not so surprising to see this as a natural development on the MegaTexture idea...
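
Roughly what I have in mind, as a minimal sketch: descend the octree only to the level whose cells project to about one pixel, exactly the way a mip level is picked from screen-space derivatives in texture filtering. The function name, FOV and constants below are placeholders, not anyone's actual implementation.

Code:
#include <cmath>
#include <algorithm>
#include <cstdio>

// "Texture-filtering style" LOD for a voxel octree: stop descending once a
// cell's projected footprint shrinks to roughly one pixel.
// rootSize   : world-space edge length of the root cell
// pixelAngle : angle subtended by one pixel (vertical FOV / vertical resolution)
int levelOfDetail(double dist, double rootSize, double pixelAngle, int maxLevel) {
    double pixelFootprint = dist * pixelAngle;            // world size of one pixel here
    double level = std::log2(rootSize / pixelFootprint);  // cell size halves per level
    return std::clamp(static_cast<int>(std::ceil(level)), 0, maxLevel);
}

int main() {
    double fov = 60.0 * 3.14159265 / 180.0;
    double pixelAngle = fov / 768.0;                       // 768 rows, as in the demo
    for (double dist : {1.0, 4.0, 16.0, 64.0, 256.0})
        std::printf("distance %6.1f -> descend to octree level %d\n",
                    dist, levelOfDetail(dist, 16.0, pixelAngle, 12));
}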

I'd certainly be interested in seeing some demos of this in action, although it seems to introduce its share of issues as well - efficiently handling data structure updates being one of them.

You were coding a voxel terrain renderer when you were in elementary school?!?
It's not really as impressive as it sounds... in the simple case of 2D height fields, voxel terrain rendering simplifies to just marching along your height field and comparing heights while drawing vertical scan lines up the screen. Come to think of it, it may have actually been Grade 9 that I did the project, but it was many years ago in any case.
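
For anyone curious, here is roughly what that boils down to, as a rough sketch in the spirit of those old height-field renderers (the map contents, camera values and constants below are all made up for illustration):

Code:
#include <cstdint>
#include <cmath>
#include <vector>
#include <algorithm>

// Bare-bones "voxel terrain": for every screen column, march a ray across a
// 2D height map and draw the visible vertical slice, tracking the highest row
// drawn so far so nearer terrain occludes farther terrain.

constexpr int W = 320, H = 200, MAP = 256;

void renderTerrain(const uint8_t height[MAP][MAP], const uint32_t color[MAP][MAP],
                   float camX, float camY, float camZ, float horizon,
                   std::vector<uint32_t>& frame /* W*H pixels, pre-cleared to sky */) {
    for (int col = 0; col < W; ++col) {
        float angle = (col - W / 2) / float(W);           // crude fan of rays, ~1 radian wide
        float dx = std::sin(angle), dy = std::cos(angle);
        int topmost = H;                                  // lowest row still uncovered
        for (float dist = 1.0f; dist < 300.0f; dist += 1.0f) {
            int mx = int(camX + dx * dist) & (MAP - 1);   // wrap around the map
            int my = int(camY + dy * dist) & (MAP - 1);
            // Project the terrain height at this map cell onto the screen.
            float h = (camZ - height[mx][my]) / dist * 120.0f + horizon;
            int row = std::max(0, int(h));
            // Fill every still-uncovered pixel between this slice and the last one.
            for (int y = row; y < topmost; ++y)
                frame[y * W + col] = color[mx][my];
            topmost = std::min(topmost, row);
        }
    }
}

int main() {
    static uint8_t  height[MAP][MAP];
    static uint32_t color[MAP][MAP];
    for (int x = 0; x < MAP; ++x)
        for (int y = 0; y < MAP; ++y) {
            height[x][y] = uint8_t(60 + 40 * std::sin(x * 0.05f) * std::cos(y * 0.05f));
            color[x][y]  = 0xFF000000u | (height[x][y] << 8);  // greenish by altitude
        }
    std::vector<uint32_t> frame(W * H, 0xFFAACCFFu);            // sky colour
    renderTerrain(height, color, 128.0f, 128.0f, 120.0f, 60.0f, frame);
}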
 
I should also mention that this company (Unlimited Detail) are giving a presentation at our company in early April. So I can probably give some more details on any non-NDA stuff then.

They also mention that at this time they are only looking at a "hybrid" solution of using this point cloud technology for game backgrounds and using traditional polygons for game characters.

I think they are presenting at a few game companies, so if you work at a game company you may be able to request a presentation.
 
Isn't "Image Based Rendering" a better way to render those kinds of objects showed in the screenshots? For game background, an IBR based algorithm would be much more efficient, like Concentric Mosaic, Lumigraph, etc.

Also, point cloud rendering is nothing new; it has already been used for massive rendering such as huge crowds. Those kinds of things just come and go and never reach a full-fledged stage.
 
Yes indeed ray tracers' ability to turn "slow" into "aliasing hell" isn't of particular interest to me, but I am very interested in how this sort of voxel approach lends itself very nicely to simple, dynamic LOD in a very similar manner to texture filtering. Slap on some paging on top of it and it's not so surprising to see this as a natural development on the MegaTexture idea...

I think being reminded of http://www.pcper.com/article.php?aid=532 (id Tech 6) was a natural reaction for many people visiting this thread ;).
 
Hi everyone, I'm Bruce Dell (though I'm not entirely sure how I prove that on a forum).

Anyway: firstly, the system isn't ray tracing at all, or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operations and so isn't very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3D atoms (I won't say voxels any more; it seems that word doesn't have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen. It displays them using a very different procedure from individual 3D-to-2D conversion; instead we use a mass 3D-to-2D conversion that shares the common elements of the 2D positions of all the dots combined. And so we get lots of geometry and lots of speed. The speed isn't fantastic yet compared to hardware, but it's very good for a software application that's not written for dual core. We get about 24-30 fps at 1024x768 for that demo of the pyramids of monsters. The media is hyping up the death of polygons, but really that's just not practical; this will probably be released as "backgrounds only" for the next few years, until we have made a lot more tools to work with.

SQRT, may I ask what company you are from? All appointments in America were pushed till May.
Please contact me unlimited_detail@hotmail.com

Kindest Regards
Bruce Dell
 
This all looks very similar to something someone was trying to sell to us. I never thought it looked very good.

Hi everyone, I'm Bruce Dell (though I'm not entirely sure how I prove that on a forum).
That's easy. Just answer "Where did the government suggest that Bruce Dell take a trip to for a trade mission?"

Seriously though, you seem to be the same person who contacted us in 2005. What struck me was, to be honest, that it didn't actually look that good.

Anyway: firstly, the system isn't ray tracing at all, or anything like ray tracing. Ray tracing uses up lots of nasty multiplication and divide operations and so isn't very fast or friendly.
Unlimited Detail is a sorting algorithm that retrieves only the 3D atoms (I won't say voxels any more; it seems that word doesn't have the prestige in the games industry that it enjoys in medicine and the sciences) that are needed, exactly one for each pixel on the screen. It displays them using a very different procedure from individual 3D-to-2D conversion.
So how does this differ from algorithms of ~20 years ago that modelled objects with a multitude of spheres and then had a fast "blatting" algorithm?

If this is a point-based modelling system, how does it compare to the work that has been presented recently at, say, SIGGRAPH? It seems to me that research in this area looks far better.
 
Impressive when you consider that each pixel on the screen gets probably under 80 cycles of CPU time. Of course, if the code is vectorized, then the effective number of cycles per pixel increases.
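
For what it's worth, here is the back-of-the-envelope maths behind that figure, assuming a single ~2 GHz core (the clock speed is my assumption; only the resolution and frame rates come from Bruce's post):

Code:
#include <cstdio>

int main() {
    double clockHz = 2.0e9;                 // assumed single ~2 GHz core
    double pixels  = 1024.0 * 768.0;        // resolution quoted for the demo
    for (double fps : {24.0, 30.0})
        std::printf("%.0f fps -> %.0f cycles per pixel\n",
                    fps, clockHz / (pixels * fps));
}
// Roughly 85 cycles/pixel at 30 fps and 106 at 24 fps on this assumed clock;
// a slower core, or time spent outside the per-pixel loop, brings the per-pixel
// budget down toward the 80-cycle estimate.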

I wonder how GPUable this algorithm is? If it was on the GPU I'm sure you could solve the aliasing issues with filtering. Problem is that I would guess this algorithm is a re-projection and hole filling style algorithm which only adds a small number of new points (searched in the data structure) per frame, and point scatter simply isn't very GPU friendly. If you were going to do a GPU version of point splatting, you would only want to draw a small subset of the points per frame and then have an image space algorithm hole fill and "search" for the proper pixels (which is SIMD + TEX cache friendly).
 
By far the most significant point of ray tracing is that it is much more efficient at rendering geometric detail that has greater resolution than the render target.
Just because object order renderers don't generally perform hierarchical culling of primitives which don't intersect rays doesn't mean they can't. It's just not a very good idea most of the time.

As for compressing octree representations of geometry, what's so special about it? For a small thought experiment, let's take a triangle: what is the more efficient way to describe it, voxels or an explicit surface description?

Hierarchical voxels/points will compress well enough in general and inherently give you LODs, but their main advantage over explicit surface descriptions (which can be hierarchical too, with inherent LODs) is ease of use, not efficiency.
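
To put some rough numbers on that thought experiment (the per-vertex and per-cell byte counts are assumptions, just for scale):

Code:
#include <cstdio>

// Explicit surface description: three vertices, regardless of triangle size.
// Voxelisation: roughly (edge length / voxel size)^2 / 2 surface cells.
int main() {
    double edge = 1.0;                          // triangle with 1 m legs
    double triangleBytes = 3 * 12;              // 3 vertices * 3 floats
    for (double voxel : {0.1, 0.01, 0.001}) {   // 10 cm, 1 cm, 1 mm cells
        double cellsPerEdge = edge / voxel;
        double cells = cellsPerEdge * cellsPerEdge / 2.0;  // triangle covers half the square
        double voxelBytes = cells * 4;          // assume ~4 bytes per packed cell
        std::printf("%5.3f m voxels: ~%9.0f cells, ~%10.0f bytes vs %.0f bytes for the triangle\n",
                    voxel, cells, voxelBytes, triangleBytes);
    }
}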
 