Voxel rendering (formerly Death of the GPU as we Know It?)

I saw the (non-NDA) demo presentation a few days ago and have a few mixed feelings about it.

He showed a lot of videos and one real-time demo (which he apologized for being a little broken, as it had some bad artifacts).

I would have preferred to see some of the more game-like levels in real time. Some of the videos seemed game-like and impressive. (Also, some of the videos were a bit old, and you can see he has come a long way over the years.)

And getting actual mouse/keyboard control of a game-scene demo would have convinced me much more, as being able to fly around and get up close to geometry to look for artifacts is the real test for me. (It is not as if anyone could steal anything by providing an interactive demo.)

The developer admitted that he has been working in a vacuum for the last 10+ years, so he knows very little about how current renderers work.
(Which was very apparent during the Q&A.) It appeared that the demos used DX8 to blit to the screen. It was also stated that the demos were single-core and written in plain, non-optimized C, so someone who knew what they were doing could make them run much, much faster. (It was a bit distressing to learn he did not know what a memory cache was, however.)

The main claim is that he has found an efficient way to extract point data from an octree without doing ray-casts into the data structure. The octrees were compressed and supported instancing of objects throughout the level.
(I do not know enough about octrees to really comment on this.)
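To make those two terms concrete, here is a minimal sketch in plain C of what a compressed octree with instancing could look like. This is entirely my own guess, not anything he showed; every field name here is an assumption:

```c
#include <stdint.h>

/* Hypothetical sparse octree node (my guess, not his format).
 * A set bit in child_mask means that child exists; existing children
 * are stored contiguously from first_child, so empty space costs
 * nothing - that is the "compression". An instance node points its
 * first_child back at the root of a shared subtree, so a repeated
 * object is stored once and referenced many times.                   */
typedef struct {
    uint8_t  child_mask;   /* bit i set => child i is present         */
    uint8_t  is_instance;  /* nonzero => first_child is a shared root */
    uint32_t first_child;  /* index of first child node               */
    uint8_t  color[3];     /* averaged color for this cube of space   */
} OctNode;

/* Slot of child c: count the set bits of child_mask below bit c, so
 * only the children that actually exist take up storage.             */
static uint32_t child_slot(const OctNode *n, int c)
{
    uint8_t below = n->child_mask & (uint8_t)((1u << c) - 1u);
    uint32_t count = 0;
    while (below) { count += below & 1u; below >>= 1; }
    return n->first_child + count;
}
```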

I don't think it was a "smoke and mirrors" presentation, but without seeing some better real-time demos I am not 100% convinced. (Demos that I can actually control, and with Task Manager visible to see memory usage, etc.)
 
By chance I received the following in my inbox earlier this week (in terms of "point based rendering", I would think the state of the art would be seen there):
Point-Based Graphics 2008 - Call for Papers

Los Angeles, CA, USA - August 9-10, 2008 http://www.point-graphics.org

Submission Deadline: April 30, 2008

Co-sponsored by Eurographics and the IEEE-CS Visualization and Graphics Technical Committee (VGTC)

The drive for increasingly complex 3D geometric models, especially those scanned from the real-world, has brought about a growing interest in methods that build on point primitives. Following the highly successful 2004, 2005, 2006, and 2007 Symposia on Point-Based Graphics, the 5th symposium of its series, PBG08, aims to further demonstrate the applicability of point-based methods in modeling, rendering, and simulation, and in a wide range of application domains. PBG08 will take place in Los Angeles, CA, USA, from August 9-10, 2008, co-located with ACM SIGGRAPH 2008 and co-organized with the International Symposium on Volume Graphics (VG08).


We invite your original contributions in areas including, but not limited to, the following:

- Data acquisition and surface reconstruction
- Geometric modeling using point primitives
- Sampling, approximation, and interpolation
- Transmission and compression of point-sampled geometry
- Rendering algorithms for point primitives
- Geometry processing of point models
- Topological properties of point clouds
- Hardware architectures for point primitives
- Animation and morphing of point-sampled geometry
- Hybrid representations and algorithms
- Use of point-based methods in real-world applications


PBG was established to develop and leverage the newly created field of point-based graphics and to establish a community of its own. With the 5th symposium of this kind in 2008 and the broad spectrum of scientific publications on the subject, we believe that our initial mission is accomplished. At the same time we observe a natural evolution and confluence of point graphics and volume graphics into the broader field of "Sample-Based Graphics". In order to address this development, VG'08 and PBG'08 will organize a joint track on the topic to evaluate its suitability as a new direction into which both symposia might evolve in the years to come.

For more information about submission, please visit http://point-graphics.org


Important Dates:
Paper submission deadline: Apr 30, 2008
Notification of acceptance: Jun 4, 2008
Camera-ready copy: Jun 18, 2008
Symposium: Aug 9-10, 2008



General Chairs:
Mario Botsch, ETH Zurich
Matthias Zwicker, University of California, San Diego

Papers Chairs:
Renato Pajarola, University of Zurich
Oliver Staadt, University of Rostock

Incidentally, regarding the thread title "Death of the GPU as we Know It?", at SIGGRAPH 2007 there was a presentation on a "standard" GPU that was extended to support point-based rendering.
 
sqrt[-1];1144873 said:
The developer admitted that he has been working in a vacuum for the last 10+ years, so he knows very little about how current renderers work. (Which was very apparent during the Q&A.) It appeared that the demos used DX8 to blit to the screen. It was also stated that the demos were single-core and written in plain, non-optimized C, so someone who knew what they were doing could make them run much, much faster. (It was a bit distressing to learn he did not know what a memory cache was, however.)

That kills my idea that it might have been vectorized! Still, 80 cycles/pixel in something probably using the x86 float stack... and not cache-optimized? It seems like it might really help the guy to try building a GPGPU version of this.
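For anyone wondering where a cycles-per-pixel figure like that comes from, the arithmetic is just clock rate × frame time ÷ pixel count. With assumed numbers (a 2 GHz core, 30 ms per frame, 1024×768 ≈ 786k pixels; none of these were confirmed in the presentation): 2×10^9 × 0.03 / 786432 ≈ 76 cycles per pixel, which is the right ballpark.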
 
First hit on a Google search for "memory cache": http://en.wikipedia.org/wiki/Cache - in this case, the CPU cache.

Basically, if you access memory near where previous accesses were, memory retrieval is much faster. (There is no special C-language access for it; it happens automatically at the hardware level.)
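A minimal sketch of the effect in C (generic, nothing to do with his code): both loops touch every element of the same array, but the first walks memory sequentially and stays in cache, while the second strides across it and misses almost every time:

```c
#include <stddef.h>

#define ROWS 2048
#define COLS 2048

static float grid[ROWS][COLS];  /* 16MB: bigger than any cache */

/* Row-major walk: consecutive accesses share cache lines - fast. */
float sum_sequential(void)
{
    float s = 0.0f;
    for (size_t r = 0; r < ROWS; r++)
        for (size_t c = 0; c < COLS; c++)
            s += grid[r][c];
    return s;
}

/* Column-major walk: each access lands 8KB away from the last, so
 * nearly every load is a cache miss. Same math, several times slower. */
float sum_strided(void)
{
    float s = 0.0f;
    for (size_t c = 0; c < COLS; c++)
        for (size_t r = 0; r < ROWS; r++)
            s += grid[r][c];
    return s;
}
```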

This is a problem when demoing scenes that contain a lot of small, duplicated geometry, as the entire data structure is probably running out of the cache; in a real game scene the entire level would probably not fit. (Modern CPU caches can be up to 8-12MB in size, I think.)

And things are generally worse on consoles, where the CPUs do not play as nicely as desktop CPUs. (Not to mention they do not handle branching code well, so having a lot of if() statements can be bad.)
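To illustrate the branching point with a generic example (again, not anything from the demos): an unpredictable if() in a per-pixel loop stalls an in-order console CPU badly, and the same test can often be rewritten branch-free:

```c
#include <stdint.h>

/* Branchy depth test: the if() mispredicts on noisy depth data. */
void depth_test_branchy(uint32_t *fb, float *zbuf,
                        const float *z, const uint32_t *col, int n)
{
    for (int i = 0; i < n; i++) {
        if (z[i] < zbuf[i]) {          /* unpredictable per pixel */
            zbuf[i] = z[i];
            fb[i]   = col[i];
        }
    }
}

/* Branch-free version: turn the comparison into an all-ones/all-zeros
 * mask and blend with it, so there is nothing to mispredict.          */
void depth_test_branchless(uint32_t *fb, float *zbuf,
                           const float *z, const uint32_t *col, int n)
{
    for (int i = 0; i < n; i++) {
        int pass   = z[i] < zbuf[i];            /* 0 or 1        */
        uint32_t m = (uint32_t)-(int32_t)pass;  /* 0 or all ones */
        zbuf[i] = pass ? z[i] : zbuf[i];        /* compiles to a select */
        fb[i]   = (col[i] & m) | (fb[i] & ~m);
    }
}
```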
 
Bruce, it is somewhat intriguing that you managed to get where you have without diving into some lower-level hardware understanding involving cache issues... well, actually, probably most programmers these days are not cache-aware. In fact, this might very well put you in a good position to take what you are working on to the next level, i.e., do yourself a favor and learn everything there is to know about modern CPU (and GPU) architecture at the lowest level, and modify your algorithm to take advantage of what you have learned... and don't stop at caches; learn how to vectorize your code using C hardware intrinsics...
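For concreteness, here is what "C hardware intrinsics" look like in practice - a minimal SSE sketch (whether his inner loop is actually a good fit for this is unknown):

```c
#include <xmmintrin.h>  /* SSE intrinsics; on x86 CPUs since ~1999 */

/* Plain C: one multiply per iteration. */
void scale_scalar(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* Same loop with SSE: four multiplies per instruction. Assumes dst
 * and src are 16-byte aligned and n is a multiple of 4.             */
void scale_sse(float *dst, const float *src, float k, int n)
{
    __m128 kk = _mm_set1_ps(k);          /* broadcast k to 4 lanes */
    for (int i = 0; i < n; i += 4)
        _mm_store_ps(dst + i, _mm_mul_ps(_mm_load_ps(src + i), kk));
}
```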
 
P.S. there are advantages to working in a vacuum
Occasionally, but usually the only advantage is "bliss" :smile:

I see someone else has already pointed you at links explaining caching but, in a nutshell, most of a computer's memory, i.e. DRAM, is (relatively) s l o w. Caching is a hardware scheme that hides this from the program[mer] much of the time.
 
Since this got posted on the console boards because of the videos, and it's still rather interesting (lots of artifacts in the comparison video, though), let's bump it and see if there is some life in the discussion.

As for that 1999 paper, cutting up the view directions around an object into visibility masks for subsets of the points is interesting, but the way they do it, I don't quite see how they can guarantee conservativeness:

To compute the visibility masks, we render the entire set of blocks orthographically from a number of directions in each triangle (typically ten directions per triangle). Each point is tagged with a pointer to its block; this tag is used to determine the subset of blocks which contribute to the orthographic images. The visibility masks in this subset of blocks are updated by setting the appropriate triangle bit.

What if it's visible only in between the directions they use to compute the visibility mask?
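To spell out my worry: the runtime side of those masks is just a bit test, something like the sketch below (my paraphrase of the paper; the octant-based direction classifier is a crude stand-in for their finer triangulated sphere):

```c
#include <stdint.h>

/* Crude stand-in for the paper's triangulated sphere of directions:
 * classify a view direction into one of 8 octants.                  */
static int direction_to_triangle(float dx, float dy, float dz)
{
    return (dx < 0.0f) | ((dy < 0.0f) << 1) | ((dz < 0.0f) << 2);
}

/* One bit per direction-triangle: bit t set => the block was seen
 * from at least one of the ~10 sampled directions in triangle t.    */
typedef struct {
    uint8_t visibility_mask;
} Block;

/* The runtime test is cheap, but it is only as good as the sampling:
 * if the block is visible only from a direction between the sampled
 * ones, its bit was never set and it gets wrongly culled.            */
int block_maybe_visible(const Block *b, float dx, float dy, float dz)
{
    int t = direction_to_triangle(dx, dy, dz);
    return (b->visibility_mask >> t) & 1;
}
```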

I'm not a big fan of sparse point rendering anyway; I don't see what there is to gain from throwing away the connectivity data of normal voxels and trying to make ad hoc guesses about where surfaces are.
 
It looks as if he has no undersampling/interpolation implemented for his point volumes. At least he did not demonstrate smooth/flat surfaces, which is critical, of course. But this is hard to make out from these low-quality videos, and maybe it could be added to his algorithms.
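One common way to close the gaps between projected points - purely an illustration of what "interpolation" could mean here, not his method - is to splat each point as a small screen-space square whose size grows as the point gets closer:

```c
#include <stdint.h>

/* Gap-closing splat (illustrative; assumes roughly unit spacing
 * between points in world space and a unit focal length). Each point
 * covers a square of radius ~1/z pixels, so close-up points fill the
 * holes between their neighbors instead of leaving single pixels.    */
void splat_point(uint32_t *fb, int w, int h,
                 int px, int py, float z, uint32_t color)
{
    int r = (int)(1.0f / z + 0.5f);  /* projected radius in pixels */
    for (int y = py - r; y <= py + r; y++)
        for (int x = px - r; x <= px + r; x++)
            if (x >= 0 && x < w && y >= 0 && y < h)
                fb[y * w + x] = color;
}
```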

The problem, definitely, is that he does not demonstrate his approach transparently enough, which makes it impossible to invest a second thought in it. He has to be very specific on all the key points that are relevant for real-time graphics if he wants someone to be interested in it. Just showing some videos of horrible quality will never lead to success.

On a side note: the music in the videos is annoying as hell! If you don't have background music that doesn't totally suck, don't use any! The same goes for his logo, which could lead someone to think he is trying to sell spiritual software that collects cosmic energy in order to render his "infinite details". Keep it simple and plain if you don't have the resources for something better.
 
The problem, definitely, is that he does not demonstrate his approach transparently enough, which makes it impossible to invest a second thought in it. He has to be very specific on all the key points that are relevant for real-time graphics if he wants someone to be interested in it. Just showing some videos of horrible quality will never lead to success.

It was the same problem several years ago, when "a certain company" was told it was not a point cloud. Apparently it now is a point cloud. <shrug>
I've been trying to see if he has any published patents, but these don't seem to have made it to a public state.

I'm sure there is some interesting technology buried in there somewhere but whether it is of practical use is another matter.
 
It was the same problem several years ago, when "a certain company" was told it was not a point cloud. Apparently it now is a point cloud. <shrug>
I've been trying to see if he has any published patents, but these don't seem to have made it to a public state.
No wonder, considering the title alone: "An improved computer graphics method and software product". Holy shit!

I'm sure there is some interesting technology buried in there somewhere but whether it is of practical use is another matter.
Yes, there really is something interesting about it, if it is true that his algorithm makes it possible to render the demonstrated scenes in 30ms on a single CPU core.
Someone should try to guide him on what he needs to do to properly demonstrate his algorithm, so that there is at least a chance something fruitful can come out of this.
 
I was wondering if this would ever come up again, and I must say I find the progression of the story more fascinating than the technology. If this does go further, I am sure Simon would choke on his weekly doughnut if he pitched the original version to Imgtec over 20 years ago :p

It is all frankly a bit bizarre that someone who has previously seemed to demonstrate a lack of understanding of some important principles, such as caches and how GPUs operate and what direction they are heading, is teasing something like this with psychedelic imagery and explaining it via search in Microsoft Word! Don't get me wrong, I wish Bruce every success, and I am sure Simon and others do too.

If he really has come up with something novel and wants to build a middleware company from it, he really needs a partner who has knowledge in the technical areas he doesn't, and also a business partner, to stand any chance of capitalising on this. If Bruce is listening: read up on a comparable middleware company such as http://www.geomerics.com/ for a good example of what you should be aiming for.

On the actual technique itself, I am guessing from what Simon said about his first stab using lots of spheres (which reminds me, I used to love that game Ecstatica) that part way through he perhaps just had an 'efficient' point or sphere projection system that dealt with gaps by oversizing the spheres, causing bad aliasing. The point-cloud data is presumably stored in an octree and subdivided down to a uniform level, maybe with some LOD achieved by throwing away points that are too small. I suspect the novel feature, if there is one, could be in how he says he searches the point set; see the sketch below.
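If that guess is anywhere near right, the LOD part of the traversal might look something like this - an entirely speculative sketch (reusing the OctNode layout sketched earlier in the thread) that stops descending once a node projects to about a pixel:

```c
#include <stdint.h>

/* Speculative screen-size LOD cut through an octree; my reconstruction
 * of the guess above, not the actual method. OctNode and child_slot()
 * are from the sketch earlier in the thread.                          */
void traverse(const OctNode *nodes, uint32_t idx, float node_size,
              float dist_to_eye, float pixels_per_unit)
{
    const OctNode *n = &nodes[idx];

    /* Apparent size in pixels shrinks linearly with distance. */
    float projected = node_size * pixels_per_unit / dist_to_eye;

    if (projected <= 1.0f || n->child_mask == 0) {
        /* Node covers at most ~a pixel (or is a leaf): draw one point
         * with the node's averaged color and stop - that is the LOD.  */
        /* emit_point(n->color, ...); */
        return;
    }
    for (int c = 0; c < 8; c++)
        if (n->child_mask & (1u << c))
            traverse(nodes, child_slot(n, c), node_size * 0.5f,
                     dist_to_eye /* per-child distance omitted */,
                     pixels_per_unit);
}
```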

I thought the 'animated but rigid' character model was cute - well intentioned, but not quite what people would be looking for. I think large chunks of highly detailed static environment with traditional skinned polygon characters are fine for a lot of games, though - and so does id, from my understanding.

But this also makes me think of Cranberry Sauce's Sprout technology and AnimaTek's Caviar technology from years ago for animated characters, the latter being RLE-compressed 3D sprites, I believe.

It will be interesting to see how this turns out compared to the implementations of PBRs, SVOs, ADFs, hybrids, and all that OTOY stuff within the next few years. But I am pretty sure it won't be the death of the GPU, and I suspect he really wants the GPU to be helping him out with this, as other experiments are already showing ;-)

This looks cooler: http://www.atomontage.com/
 
Sounds like just making a new picture every time. I feel it is more like you color every pixel on the screen with a 3-dimensional framework "guide" that shows the computer what to color the screen. It's like having a 3D picture in memory, then coloring the pixels to match the view from a certain distance. By using some math (I think, I can't program...) you can determine how much of your model is shown. This is like a 2D animation colored in 3 dimensions. Smart idea! But I'm a skeptical person.
PopSci says Bruce Dell needs to get the IP protection sorted, so I await an awesome demo!
 