Unlimited Detail, octree traversals

My reservation is that I'm assuming they're doing this with a lookup table accessed by camera position and view direction, rather than a standard sparse voxel octree (they have said several times over the years that it's not just an SVO, if we take them at their word).

If that is the case then omnidirectional lights might get painful, as unlike a camera they're not projecting/looking along a single view direction.

Obviously this all works on the assumption that they aren't just streaming an SVO and are actually doing a 'search engine' for the points. The simplest way I can think of to build something like that would be to use the precomputation stage to populate a big lookup table with view-direction-dependent trigonometry tests.
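Purely as a guess at the idea, a minimal sketch of what such a precomputed lookup might look like: quantize the view direction into cube-face buckets and store, per bucket, something precomputed per direction (here, a front-to-back child-visit order for octree nodes). The bucket resolution, key layout and payload are my own assumptions, not anything Euclideon has confirmed.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Hypothetical sketch: quantize a (non-zero) view direction into one of
// 6 * kGrid * kGrid cube-face buckets, then use the bucket to index
// precomputed per-direction data, e.g. a front-to-back child-visit order.
constexpr int kGrid = 32; // buckets per cube-face axis (assumed)

struct DirectionTable {
    // One precomputed child-visit order per direction bucket (filled offline).
    std::array<std::array<uint8_t, 8>, 6 * kGrid * kGrid> childOrder{};

    static int bucketFor(float dx, float dy, float dz) {
        float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
        int face; float m, u, v;
        if (ax >= ay && ax >= az) { face = dx > 0 ? 0 : 1; m = ax; u = dy; v = dz; }
        else if (ay >= az)        { face = dy > 0 ? 2 : 3; m = ay; u = dx; v = dz; }
        else                      { face = dz > 0 ? 4 : 5; m = az; u = dx; v = dy; }
        // Map u/m and v/m from [-1, 1] onto [0, kGrid - 1].
        int iu = static_cast<int>((u / m * 0.5f + 0.5f) * (kGrid - 1));
        int iv = static_cast<int>((v / m * 0.5f + 0.5f) * (kGrid - 1));
        return (face * kGrid + iu) * kGrid + iv;
    }
};
```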

Well, with deferred rendering you would only ever need to look at the voxel data again for dynamic shadow mapping. In that case, you could:
- Just live without it. Well, that's hard to ask of a next-gen game.
- Precompute and store depth from the voxel objects and only render dynamic ones in real time. Now you're stuck with static lighting, but keep dynamic shadowing.
- Render the voxel scene six additional times into a cubemap for omnidirectional shadowed lights, just as it's done with rasterisation (see the sketch after this list). Fully dynamic lighting, but it might be slow.
- Use proxy geometry for the static environment during shadow-map generation. Fast and simple, but it adds to the production pipeline and produces less detailed shadows.
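For the cubemap option above, a minimal sketch of the six-pass loop. renderVoxelDepth() is a hypothetical hook into the voxel renderer, not a real API:

```cpp
#include <cstddef>

struct Vec3 { float x, y, z; };
struct DepthFace { /* depth render target for one cube face */ };

// Hypothetical hook: render the voxel scene's depth from 'eye' looking along
// 'forward' with a 90-degree FOV (stubbed; a real depth pass goes inside).
void renderVoxelDepth(const Vec3& eye, const Vec3& forward,
                      const Vec3& up, DepthFace& target) { /* stub */ }

// One omnidirectional light = six depth passes, one per cube face,
// exactly as with rasterized point-light shadow maps.
void buildPointLightShadowCube(const Vec3& lightPos, DepthFace faces[6]) {
    static const Vec3 kDirs[6] = {{ 1,0,0},{-1,0,0},{0, 1,0},{0,-1,0},{0,0, 1},{0,0,-1}};
    static const Vec3 kUps[6]  = {{0,-1,0},{0,-1,0},{0,0, 1},{0,0,-1},{0,-1,0},{0,-1,0}};
    for (std::size_t f = 0; f < 6; ++f)
        renderVoxelDepth(lightPos, kDirs[f], kUps[f], faces[f]);
}
```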
 
Yes, we've commented on it a lot in the past.

With all those deleted posts, it shouldn't surprise anyone if I'm cautious. So now Euclideon concentrates on the geospatial industry; the question is whether they finally got at least one patent granted or not.
 
I hope they will achieve the breakthrough. I am not going to follow this, or watch the video, until they offer a working solution I can benefit from...
 
With all those deleted posts, it shouldn't surprise anyone if I'm cautious. So now Euclideon concentrates on the geospatial industry; the question is whether they finally got at least one patent granted or not.

The posts were deleted because they were noise (at best).
 
With all those deleted posts, it shouldn't surprise anyone if I'm cautious. So now Euclideon concentrates on the geospatial industry; the question is whether they finally got at least one patent granted or not.

They can sell software licenses (including licensing the engine to be used in other software, if they go that way); no patent is needed.
Do id Software, Epic, Crytek, Frostbite have software patents?
 
This is pretty much how I thought Unlimited Detail worked when they released the first demos with lots of repetitive geometry. If you have a tree, you can easily make multiple child nodes point to the same subtrees. Technically the result is no longer a tree, but a directed acyclic graph (DAG). Trees and DAGs cannot represent "unlimited detail" without unlimited memory.
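A tiny sketch of that instancing trick, just to make the shared-subtree idea concrete (names are mine):

```cpp
#include <memory>

// Because children are pointers, several parents can reference the same
// subtree; the structure is then a DAG rather than a tree.
struct Node {
    std::shared_ptr<Node> child[8]; // nullptr = empty octant
    // ... per-node payload (color etc.) would go here
};

// Eight "copies" of the same tree for the memory cost of one extra node.
std::shared_ptr<Node> makeForest(const std::shared_ptr<Node>& tree) {
    auto root = std::make_shared<Node>();
    for (int i = 0; i < 8; ++i)
        root->child[i] = tree;
    return root;
}
```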

But how do you get the "unlimited detail"? Simply by removing the acyclic rule, and by not having any leaf nodes with null child lists. The child pointers of these nodes should instead point to another node closer to the "tree" root. This results in infinitely repeating detail when zoomed in (somewhat like fractals). Cycles in the graph do not cause an infinite loop in the ray-tracing algorithm, since the algorithm automatically cuts off once the detail is smaller than one pixel on the screen (each child iteration cuts the voxel size down by 2x2x2).
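A rough sketch of why that cut-off makes cycles safe: the voxel edge halves on every descent, so the projected size shrinks geometrically and the recursion must bottom out, no matter where the child pointers lead. The screen-scale parameter is an assumption for illustration:

```cpp
// Raw pointers on purpose: a cyclic graph cannot use owning smart pointers.
struct GNode { GNode* child[8]; /* payload (color etc.) */ };

// 'pixelsPerUnit' converts world size at unit distance to pixels (treated as
// constant here; a real tracer would also track per-child distance).
void traverse(const GNode* n, float voxelSize, float distance, float pixelsPerUnit) {
    if (!n) return;
    float projected = voxelSize / distance * pixelsPerUnit;
    if (projected < 1.0f) {
        // Sub-pixel: stop and shade here, even if the children form a cycle.
        return;
    }
    for (int i = 0; i < 8; ++i) // each level halves the voxel edge (2x2x2 split)
        traverse(n->child[i], voxelSize * 0.5f, distance, pixelsPerUnit);
}
```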

I like voxel rendering, because it offers perfect level of detail (no popping, and a memory access pattern that scales ideally as geometry gets further away), is easy to stream (like virtual texturing) and is easier to process (triangle geometry, by contrast, is just an empty shell and can have very complex topology).

Are you talking about this video?
This video shows how well voxels handle LOD (especially those zoom-ins from above). There's no popping, no visible object mesh transitions and no disappearing objects. Every single small object is still visible at a great distance.

However I dislike most of their marketing bullshit. The best part was when he tried to explain how their system is not heavily bottlenecked by seek time on a HDD... and then introduced a "slow, $3" USB 2.0 flash memory stick as an example. Flash memory bandwidth might be lower than HDD bandwidth, but it has much faster seek times. I have been testing our virtual texture system on USB 2.0 sticks, and the latency is definitely lower compared to HDDs (at least on consoles). This kind of marketing might be effective on laypeople, but it fails badly when used on technical people :)

That information about "keeping just one point of data in memory for each pixel on screen" is basically correct. However, you need to stream the data in as larger pages (just like with virtual texturing). In general, voxel octree streaming is pretty much identical to virtual texture quadtree streaming; the biggest difference is that the data set has one extra dimension. Virtual texturing needs less than 100 MB of system memory to display perfect texture quality (a 1:1 texel/pixel ratio) for every pixel at 720p (basically "infinite" texture detail with a fixed RAM footprint). I would expect SVO/DAG/DG streaming to use a bit more RAM, since 3D data has higher overhead (a higher percentage of streamed-in page bytes goes unused). But the difference shouldn't be that big.
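To make the "one extra dimension" point concrete, a small sketch of the page keys involved; the field widths are arbitrary assumptions:

```cpp
#include <cstdint>

// Virtual texturing streams pages keyed by (mip, x, y); voxel octree
// streaming keys pages by (level, x, y, z). Same machinery, one more axis.
struct TexturePageId { uint8_t mip;   uint16_t x, y;    };
struct VoxelPageId   { uint8_t level; uint16_t x, y, z; };

// Both resolve a cache miss the same way: fall back to the coarsest resident
// ancestor while the exact page streams in.
VoxelPageId parentOf(VoxelPageId p) {
    return { static_cast<uint8_t>(p.level - 1),
             static_cast<uint16_t>(p.x >> 1),
             static_cast<uint16_t>(p.y >> 1),
             static_cast<uint16_t>(p.z >> 1) };
}
```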

Voxel rendering (including UD technology) is good for static backgrounds. Render the voxels to g-buffers (deferred rendering), and the data goes through the same GPU-based lighting & post-processing pipeline as rasterized data. That allows you to combine it with moving objects (and moving light sources). Shadow mapping, however, might be difficult to do efficiently, but you could cast secondary shadow rays directly against the voxel acceleration structure (just like most triangle ray tracers do)... though as with any ray tracer, it's the secondary rays that will likely kill the performance.
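A sketch of that hybrid, with the octree ray query stubbed out as a hypothetical function; the point is only that the shadow test happens per g-buffer pixel, after the voxels have already been splatted:

```cpp
struct Vec3 { float x, y, z; };
struct GBufferPixel { Vec3 position, normal, albedo; };

// Hypothetical octree ray march from the surface point towards the light;
// stubbed here, a real implementation would walk the voxel structure.
bool occludedByVoxels(const Vec3& from, const Vec3& to) { return false; }

// Deferred shading: voxels and triangles both ended up in the g-buffer, so
// lighting is uniform; only the shadow query goes back to the voxel data.
Vec3 shadePixel(const GBufferPixel& px, const Vec3& lightPos) {
    if (occludedByVoxels(px.position, lightPos))
        return {0.0f, 0.0f, 0.0f};  // secondary (shadow) ray says: in shadow
    // ... regular N.L lighting would follow; albedo stands in here.
    return px.albedo;
}
```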

But id Software did release a game (Rage) that had all the environment lighting (and shadows) baked into their virtual texture. It's not out of the question that someone could release a game with voxel geometry and static (baked) lighting + shadows. Add in some dynamic (non-shadow-casting) lights, and it could look pretty good.
 
They can sell software licenses (including licensing the engine to be used in other software, if they go that way); no patent is needed.
Do id Software, Epic, Crytek, Frostbite have software patents?

Begs the question, then, why they've tried on numerous occasions to file patents which have been rejected, or, even better, why they tried at various times to sell their portfolio to several IHVs.
 
For the patents, I'd guess prior art.
As for selling to IHVs, I'd guess they would have to disclose the shortcomings, something they never did in any of their videos.
"You can have unlimited detail in your games."
Well yes, you can, if you have terabytes of data.
 
This thing is strictly for visualization of terabytes or gigabytes of data, anyway.

The one gaming application I can see for now is a chess game, where pieces can be entirely static and move by translation only. Make the pieces crazy detailed, and the board too, down to tiny cracks in the board, where each square is a piece of wood that is sometimes slightly tilted, so one edge sits half a millimeter higher than it should and the opposite edge half a millimeter lower.

Let you do a fly-by, FPS-style, or click to get a preset camera position; have a cool environment surrounding the board. Multiple environments / piece sets / boards and a "tournament" mode where you fight computer opponents of varying abilities (though it's probably hard for a computer to be both a very good player and make dumb mistakes at the right moments so you can still win).

Or do a 3D Mahjong Solitaire (I remember buying a really great one: good graphics, gameplay and sound/music). Checkers? Any better ideas? :)
More shareware-type games than AAA titles.
 
When it started out years ago as "Unlimited Detail" it was (nearly) presented as a revolution in 3D graphics. As I said, there was more than one attempt to file patents which, from what I recall, weren't accepted, and I also clearly recall a forum member here who works at one of the graphics IHVs saying in public that they had been approached (probably amongst other IHVs) to buy the aforementioned technology.

Now of course they've changed targets with Euclideon, which might very well lead them to success in the geospatial industry. And yes, I honestly wish them every possible success.

Apart from that, how often have we heard of supposed revolutions in 3D? Funny how graphics IHVs still stubbornly stick to rasterizing up to now, for a reason.
 
Apart from that, how often have we heard of supposed revolutions in 3D?
Quite a few, to be sure. However...
Funny how graphics IHVs still stubbornly stick to rasterizing up to now, for a reason.

I'm not always convinced that "for a reason" means thinking of the best way forward. Keep in mind: if some phenomenally awesome new method came through that was elegant, easy and intuitive to program, and provided a full order-of-magnitude improvement in rendering, would the IHVs immediately jump on it?

Maybe, so long as their existing IP could wrap around it. But what if instead it was fundamentally incompatible with triangle-based rasterizing? Chances are they'd pooh-pooh it, and it would get scuttled to the bottom along with lots of other technologies.

There are very certainly "reasons", but I cannot be convinced that all of them have proper merit. I don't mean to insinuate that Euclideon had the right answer, as I'm keenly aware of all the obvious drawbacks. Nevertheless, I'm still unconvinced that IHV adoption is the right yardstick.
 
I'm not always convinced that "for a reason" means thinking of the best way forward. Keep in mind: if some phenomenally awesome new method came through that was elegant, easy and intuitive to program, and provided a full order-of-magnitude improvement in rendering, would the IHVs immediately jump on it?

There are lots of cases where IHVs absorb smaller firms, one way or another, exactly because there seems to be something behind their ideas.

Maybe, so long as their existing IP could wrap around it. But what if instead it was fundamentally incompatible with triangle-based rasterizing? Chances are they'd pooh-pooh it, and it would get scuttled to the bottom along with lots of other technologies.

In that case I'd say the answer is quite difficult, and yes, it's likely they'd just ignore it. However, are you aware of even one example where that was actually the case?

There are very certainly "reasons", but I cannot be convinced that all of them have proper merit. I don't mean to insinuate that Euclideon had the right answer, as I'm keenly aware of all the obvious drawbacks. Nevertheless, I'm still unconvinced that IHV adoption is the right yardstick.

In order for IHVs to dump rasterizing they'd have to face a method/technology whose advantages outweigh its disadvantages compared to rasterizing; again, I honestly haven't ever heard or read of anything that came close to such a paradigm. If there ever was one, I'd gladly read about it out of pure technical curiosity.

While I fully understand your notion above, I'm afraid that without even a single paradigm that had, or nearly had, the potential to replace rasterizing, it's somewhat of a moot point in the end.
 
But what if instead it was fundamentally incompatible with triangle-based rasterizing? Chances are they'd pooh-pooh it, and it would get scuttled to the bottom along with lots of other technologies.

Well, it's not just the IHVs that the finger should be pointed at, surely. Any switch to a technique that's fundamentally incompatible with rasterization will have to bring along all the software developers who target the IHVs' hardware, and will potentially involve them rewriting a hell of a lot of software. So the new thing is going to have to be pretty damned spectacular to get everyone on board.
 
There have to be "LOD" transitions, else you would get ugly aliasing in motion; e.g. a complex object will have "voxel" information which is used when the scale is smaller than one pixel, and child nodes when it isn't.
Now there is the problem of the voxel having the same color from all perspectives, and absolutely no surface information.

Whereas with traditional polygons you have surface normals for specular and other material properties, this approach only contains solid colors.
Every detail about the surface has to be generated from neighboring voxels, and this is something you can't do easily, especially if you want consistent results.
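One common way to fake a surface normal from neighbours is a central-difference gradient over occupancy, sketched below. density() is a hypothetical sampler over the voxel set, and the degenerate case shows exactly where the consistency problem bites:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

// Hypothetical sampler: 1.0f where a voxel is solid, 0.0f where empty.
float density(int x, int y, int z) { return 0.0f; /* stub */ }

// Central-difference gradient of occupancy as a normal estimate.
V3 estimateNormal(int x, int y, int z) {
    V3 g = { density(x - 1, y, z) - density(x + 1, y, z),
             density(x, y - 1, z) - density(x, y + 1, z),
             density(x, y, z - 1) - density(x, y, z + 1) };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len < 1e-6f)
        return {0.0f, 1.0f, 0.0f}; // flat/ambiguous neighbourhood: no stable normal
    return { g.x / len, g.y / len, g.z / len };
}
```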

So this is all solid, static, no-surface-information stuff; more complex effects have to be done in screen space to get any kind of performance. It's rather limiting for anything, really.
 