Unlimited Detail, octree traversals

Terrible demonstration video from them as well. Nothing has changed at all with their tech and its limitations. Their VR "game" was far too low quality and short to make much of, but it basically looked like one of their flat-lit detailed terrain areas with some really poor quality animated standard models for the "game" part (the monster). They need to focus their tech on an area where it might actually be useful, but that is definitely not games.
 
It might be a very good tech for games if used right (by talented devs). But nobody knows, as Euclideon are not telling enough about their tech. They are also not game developers, so they don't know how to market to game developers. We don't care about PR bullshit. We want to know exactly how every piece of tech we integrate works, especially all the limitations. Every veteran programmer in the games tech industry has burned their fingers multiple times by chasing perfect tech, only to later notice a "minor" shortcoming that completely invalidates an otherwise good idea. I believe many of us would also be interested to know whether their algorithm could be ported to GPUs; that could make it much more viable. Their performance is pretty good on CPU (albeit only for primary rays), but how well does it parallelize across millions of SIMD lanes?

I am not blinded by their bad assets (models, animation, etc). I have done lots of bad coder art for our internal tech demos. Tech needs good assets to look good. However, if they don't have good tools to convert high-poly meshes to point clouds and/or can't directly export animation from existing DCC software, then they have a problem. It's really hard to reach wide adoption with tech that doesn't support industry-standard DCC tools. Media Molecule did their own tools for Dreams, but it took them a very long time, and they had a talented content creation team giving the programmers feedback during the whole process. And their game is heavily based around user-generated content: users will be using the same tools as the developers, and those tools are quite different from traditional PC DCC tools geared towards professional artists.
 
I am not blinded by their bad assets (models, animation, etc). I have done lots of bad coder art for our internal tech demos. Tech needs good assets to look good. However, if they don't have good tools to convert high-poly meshes to point clouds...
That's where they completely contradict themselves. Other devs, stuck with their stone-age ideas of triangles, spend hundreds of man-hours creating triangle-based assets, whereas Euclideon has invented SolidScan, which enables one guy to get photorealistic assets in no time at all. They showcase it in the video: photorealistic environments captured from the real world. Having boasted how awesome their asset creation is, how then do they justify their crap game art?

Interestingly, I politely asked as much in a comment on the video, but the comment's not there...
 
The biggest flaw in those captured forest scenes is the complete lack of specular lighting. Every material has some specular response, not only shiny ones. id Software's Rage had the same problem with fully baked lighting (remember, it was a 60 fps game on Xbox 360 and PS3).

If Euclideon could scan normal maps and extract material properties (specularity, roughness) from the surfaces, the results would look significantly better. Also, these scenes are very small: the super-low-resolution background is just a few meters away. Unique scanned geometry simply takes too much storage space for big environments. Scanning and storing a single tiny forest would take terabytes. Imagine how much storage the Everquest or WoW game worlds would take :)
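To put rough numbers on that (my assumptions, not measured data): a 100 m x 100 m forest patch scanned at 1 mm point spacing is (10^5 mm)^2 = 10^10 points for a single flat layer. Foliage multiplies the surface area; assume a modest 10x, giving 10^11 points. At ~10 bytes per point (octree-encoded position, color, normal) that is already ~1 TB, before any coarser LOD levels of the octree are counted.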
 
In the second video I posted, at minute 11 you can see some animated animals with specular lighting.

As for normal maps, isn't the point of unlimited detail... unlimited detail? Why would they need normal maps if they can allegedly create intricate objects with all kinds of forms, rugosity, etc.? Normal maps are needed in polygonal environments, where you fake a higher polygon density in order to achieve prettier results without spending the polygons you would need to model all those details. This is why I don't understand you saying that they could scan normal maps. :-S
 
Normal information is stored so that you know the surface normal per pixel.
In polygonal engines you can access the surface normal, and possibly a few overlapping normal maps as well.

With voxel engines you do not have correct surface normal information unless you store it.
You could try to reconstruct it from the depth buffer, but it wouldn't be stable.
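For reference, here is a minimal sketch of what that depth-buffer reconstruction looks like (my own illustrative code, assuming linear view-space depth and a simple pinhole camera; nothing Euclideon-specific). The cross product of neighboring position differences gives the normal, and you can see exactly why it is unstable: at depth discontinuities the finite differences straddle two unrelated surfaces.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// View-space position of pixel (x, y), assuming depth[] holds linear
// view-space Z and a pinhole camera with focal length f in pixels.
static Vec3 unproject(const float* depth, int w, int h, int x, int y, float f) {
    float z = depth[y * w + x];
    return { (x - 0.5f * w) * z / f, (y - 0.5f * h) * z / f, z };
}

// Central-difference normal from reconstructed neighbor positions.
// Unstable wherever the neighborhood crosses a silhouette edge or thin feature.
Vec3 normalFromDepth(const float* depth, int w, int h, int x, int y, float f) {
    Vec3 dx = sub(unproject(depth, w, h, x + 1, y, f), unproject(depth, w, h, x - 1, y, f));
    Vec3 dy = sub(unproject(depth, w, h, x, y + 1, f), unproject(depth, w, h, x, y - 1, f));
    return normalize(cross(dx, dy));
}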

You need some way to describe the surface properly for lighting (location, color, specular color, normal, roughness, occlusion).

Currently UD seems to have color and a reconstructed normal.

What horrified me was their comment that they cannot use normal lighting methods and had to invent a new way to light objects using the CPU.

If you have all the necessary information, the lighting equation doesn't care if you drew the source data in Deluxe Paint.
It just works.
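To illustrate the point with a generic sketch (standard Blinn-Phong with my own arbitrary roughness-to-exponent mapping, not Euclideon's actual shading): once the per-pixel surface description above exists, the lighting code is identical no matter where the data came from.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)    { return mul(v, 1.0f / std::sqrt(dot(v, v))); }

// The surface description from above: it doesn't matter whether these values
// came from rasterized triangles, a voxel traversal, or Deluxe Paint.
struct Surface {
    Vec3  position;
    Vec3  albedo;        // diffuse color
    Vec3  specularColor;
    Vec3  normal;        // stored or reconstructed
    float roughness;
    float occlusion;
};

// One point light, Blinn-Phong. The roughness-to-exponent mapping is an
// arbitrary choice for the sketch.
Vec3 shade(const Surface& s, Vec3 lightPos, Vec3 lightColor, Vec3 eyePos) {
    Vec3  L = normalize(sub(lightPos, s.position));
    Vec3  V = normalize(sub(eyePos, s.position));
    Vec3  H = normalize(add(L, V));
    float nDotL = std::max(0.0f, dot(s.normal, L));
    float nDotH = std::max(0.0f, dot(s.normal, H));
    float exponent = std::exp2(10.0f * (1.0f - s.roughness) + 1.0f);
    Vec3  color = add(mul(s.albedo, nDotL),
                      mul(s.specularColor, std::pow(nDotH, exponent) * nDotL));
    return { color.x * lightColor.x * s.occlusion,
             color.y * lightColor.y * s.occlusion,
             color.z * lightColor.z * s.occlusion };
}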

Sadly they seem adamant about reinventing the wheel, and the result seems to be a classic environment map with lighting painted in, plus an XY displacement on read depending on the surface normal. (Really, we used this in good old Pentium 1-3 era demos.)

Sorry for the rant, but they have a nice, fast primary-ray search method and they could make it good.
Sadly, as they withhold information, it's hard to know how it scales to multiple views and so on. (It could be great if a new view were cheap.)
 
Unique scanned geometry simply takes too much storage space for big environments. Scanning and storing a single tiny forest would take terabytes. Imagine how much storage the Everquest or WoW game worlds would take :)
Gamestop/GAME/Walmart should partner with Sony to bring its new 3.3TB optical storage medium to Xbox and PC gamers, then support developers who utilize datapoint rendering and this drive, to combat the rise of digital games taking its market share.
http://www.computerworld.com/articl...y-cranks-up-optical-disc-storage-to-33tb.html

If everything shifts back to physical media, Sony could benefit from this because it would combat Microsoft's datacenter infrastructure advantage.
 
Thank you for the explanation.
 
As for normal maps, isn't the point of unlimited detail... unlimited detail? Why would they need normal maps if they can allegedly create intricate objects with all kinds of forms, rugosity, etc.? Normal maps are needed in polygonal environments, where you fake a higher polygon density in order to achieve prettier results without spending the polygons you would need to model all those details. This is why I don't understand you saying that they could scan normal maps. :-S
Yes, you can obviously reconstruct the surface normal from geometry if your scanned data is dense enough. We do it as well in our renderer. But normal + albedo is not enough for high-quality PBR lighting. You need an RGB specular map (especially for non-dielectric materials such as metals) and a roughness map (or two, for anisotropic specular). More complex materials need even more data.
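As an aside, the standard way to get that normal from a dense scan is a local plane fit: build the covariance of a point's neighborhood and take the direction of least variance. A compact sketch (my own illustration, with the neighbor search and the 3x3 eigensolver left as declared helpers, since any real implementation would use a library for both, e.g. a k-d tree and Eigen's SelfAdjointEigenSolver):

#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical helpers, assumed to exist for the sketch.
std::vector<Vec3> kNearestNeighbors(const std::vector<Vec3>& cloud, Vec3 p, int k);
Vec3 smallestEigenvector(const float cov[3][3]);

// Estimate the surface normal at p as the least-variance direction of its
// neighborhood: the eigenvector of the covariance matrix with the smallest
// eigenvalue. Works when the scan is dense enough to be locally planar.
Vec3 estimateNormal(const std::vector<Vec3>& cloud, Vec3 p, int k = 16) {
    std::vector<Vec3> nbrs = kNearestNeighbors(cloud, p, k);

    Vec3 c = { 0, 0, 0 };                     // centroid of the neighborhood
    for (const Vec3& q : nbrs) { c.x += q.x; c.y += q.y; c.z += q.z; }
    float inv = 1.0f / nbrs.size();
    c.x *= inv; c.y *= inv; c.z *= inv;

    float cov[3][3] = {};                     // 3x3 covariance matrix
    for (const Vec3& q : nbrs) {
        float d[3] = { q.x - c.x, q.y - c.y, q.z - c.z };
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                cov[i][j] += d[i] * d[j];
    }
    return smallestEigenvector(cov);          // normal (sign is ambiguous)
}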
In the second video I posted, at minute 11 you can see some animated animals with specular lighting.
Rage also had dynamic lighting only for moving objects. Static scenery had baked lighting with no specular (a "printed cardboard" look), and dynamic objects sometimes looked out of place. These problems were OK for a last-gen 60 fps console game, but a modern lighting pipeline needs a unified lighting model that applies the same lighting data to all surfaces. Specular occlusion is also very important; otherwise objects seem to float (geometry is not blocking specular reflections). I am just wondering how well Euclideon's tech is suited for querying specular occlusion (spatially varying query points with highly incoherent directions).
 
So @sebbbi when do we get to see something? :)

 
Rage also had dynamic lighting only for moving objects. Static scenery had baked lighting with no specular (a "printed cardboard" look), and dynamic objects sometimes looked out of place. These problems were OK for a last-gen 60 fps console game, but a modern lighting pipeline needs a unified lighting model that applies the same lighting data to all surfaces. Specular occlusion is also very important; otherwise objects seem to float (geometry is not blocking specular reflections).
Yeah, I remember that.

I am just wondering how well Euclideon's tech is suited for querying specular occlusion (spatially varying query points with highly incoherent directions).
I wonder the same... whether Euclideon's tech has room for improvement (or maybe a hybrid solution), or whether the limitations are in the very roots of it.
 
The thing that bugs me is that I don't know. Euclideon's structure might be a perfect data structure for incoherent ray queries, or it might be a very bad data structure for them.

A data structure with fast ray queries and fast cone queries would be perfect for modern lighting pipelines. A cone query should return the sum (integral) of all surface area hit by the visible part of the cone. Voxel cone tracing (https://research.nvidia.com/sites/default/files/publications/GIVoxels-pg2011-authors.pdf) kind of does this, but we need improved data structures to make it faster and higher quality. I would hope that we can soon say goodbye to baked cube-map based lighting.
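As a sketch of what such a cone query does (following the general idea of the linked paper, but with my own invented names and a hypothetical sampleVoxelMip() lookup): step along the cone, sampling a prefiltered mip level whose voxel footprint matches the current cone diameter, so one fetch integrates many surfaces at once.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };   // rgb = prefiltered radiance, a = occlusion

static Vec3 madd(Vec3 p, Vec3 d, float t) { return { p.x + d.x * t, p.y + d.y * t, p.z + d.z * t }; }

// Hypothetical lookup: trilinearly filtered sample from a mipmapped voxel volume.
Vec4 sampleVoxelMip(Vec3 pos, float mipLevel);

// March a cone of half-angle `halfAngle` from `origin` along `dir`.
// Returns gathered radiance (rgb) and how occluded the cone is (a).
Vec4 coneTrace(Vec3 origin, Vec3 dir, float halfAngle, float maxDist, float voxelSize) {
    Vec4 acc = { 0, 0, 0, 0 };
    float dist = voxelSize;                  // offset to avoid self-intersection
    while (dist < maxDist && acc.a < 0.99f) {
        float diameter = 2.0f * dist * std::tan(halfAngle);
        float mip = std::max(0.0f, std::log2(diameter / voxelSize));
        Vec4 s = sampleVoxelMip(madd(origin, dir, dist), mip);
        float t = (1.0f - acc.a) * s.a;      // front-to-back compositing
        acc.r += t * s.r; acc.g += t * s.g; acc.b += t * s.b;
        acc.a += t;
        dist += 0.5f * diameter;             // step grows with the cone width
    }
    return acc;
}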

Also, you can bake specular lighting as well, but it is significantly more complex than baking diffuse albedo. Here's a good presentation about this:
https://readyatdawn.sharefile.com/d-s9979ff4b57c4543b
 
But the fully lit look, with its very limited lighting, does fool me: it reminds me of the good old days of the 90s. Jedi Knight 1 was one of my favorites and was impressive once running on a 3dfx Voodoo. It had some impressive room/outdoor sizes. Obviously, every surface is painted with a single layer of texture and that's all.

What doesn't fool me is the animation; it seems obvious they use one complete voxel statue for each frame of animation (the dumb and easy way), and I'm curious exactly how many voxel statues you would need for a Blizzard game (their example).
Is a game possible, but requiring 64GB of RAM and 1TB of local hard drive space?
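Napkin math with made-up but plausible numbers: a character captured at ~2 million points is on the order of 20 MB (at ~10 bytes per point with octree overhead). At 30 fps, one second of unique animation is 30 statues, roughly 600 MB per character-second. Multiply by a Blizzard-sized roster of characters and animation sets and you are in the terabyte range before the environment is even counted, which fits the 64GB / 1TB intuition.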

The first Jedi Knight also had full motion video scenes with real actors. It looks like cutscenes with actors would similarly be needed if you want storytelling moments.
Otherwise... I'm half-expecting a helicopter seek-and-destroy game in a fantasy setting of hills and dales.
 
His claims were bollocks and insulting. He's experiencing the reaction of ordinary people responding as they normally would when insulted. This isn't a case of medieval thinkers refusing to see the truth he discovered. He's no Galileo or Darwin, nor a victim of closed-minded thinking.

I am wondering if that's what happened to Giordano Bruno. He's remembered for claiming that stars are actually other Suns, and that there may be an infinite number of them with worlds orbiting them. That's like an unlimited universe where you can zoom at will to other Earths. But the man did manage to piss off everybody else, on actually unrelated matters, and that ended badly.
 
I watched the holo-dec video, and found it funny that the only person he mentions as having seen potential in Euclideon's tech was the CEO of Crytek, which has bet on the wrong horse more than once and, like Euclideon, seems to depend more on money from investors than revenue from shipped products.
 
Why so salty? Numerous other people recognize the potential. The problem is that the potential of their approach is not explorable, verifiable, or developable. If the guy who proposed string theory hadn't been open to scrutiny, he would be known as a big troll and charlatan. Because this didn't happen, it found its niche and recognition.

Edit: I'm not implying it's the same class of invention, I'm just noting that crazy-seeming ideas can end up objectively valuable.
 
Also, you can bake specular lighting as well, but it is significantly more complex than baking diffuse albedo. Here's a good presentation about this:
https://readyatdawn.sharefile.com/d-s9979ff4b57c4543b
It almost takes on a different meaning.

"Baked specular" in that case still calculates the reflection at runtime based on information about incident lighting and the materials. Which means that information that will play nicely with dynamic stuff also exists.

Whereas Euclideon's environments possibly just bake the final radiance of the point. An analogous method for specular might be to bake a view-angle-dependent radiance distribution for every point.
 
Yes. There are many methods for baking a view-angle-dependent radiance distribution. Spherical Gaussians (SGs) are one way; spherical harmonics (lightmaps or volumes) are another. One of the earliest methods was called "radiosity normal mapping" and was used in Valve's Source engine (http://www.valvesoftware.com/publications/2006/SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf). They store 3 lighting values per surface pixel, each for a different incoming lighting direction. At runtime these 3 lighting samples are blended based on the per-pixel tangent-space normal. This is pretty good already, and only 3x uncompressed storage cost (it compresses pretty well, so the real storage cost is not 3x higher).
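A sketch of that runtime blend as I understand it from the Valve paper (the basis constants are the published ones; the code around them is my own illustration):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The three Half-Life 2 basis directions in tangent space (z = surface normal).
static const Vec3 kHL2Basis[3] = {
    {  std::sqrt(2.0f / 3.0f),  0.0f,                   1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f),  1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f), -1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
};

// Blend the three baked lightmap samples using the tangent-space normal
// fetched from the normal map. Squared weights, renormalized.
Vec3 radiosityNormalMapping(const Vec3 baked[3], Vec3 normalTS) {
    float w[3], sum = 1e-6f;                 // epsilon guards against divide-by-zero
    for (int i = 0; i < 3; ++i) {
        float d = std::max(0.0f, dot(normalTS, kHL2Basis[i]));
        w[i] = d * d;
        sum += w[i];
    }
    Vec3 out = { 0, 0, 0 };
    for (int i = 0; i < 3; ++i) {
        float k = w[i] / sum;
        out.x += baked[i].x * k;
        out.y += baked[i].y * k;
        out.z += baked[i].z * k;
    }
    return out;
}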

Surface scanning technologies are also able to capture the directional lighting of each surface point, as long as there are enough scans from different directions. Of course this results in completely baked lighting (no possibility of moving light sources or modifying the environment at runtime), but it would result in very good looking baked lighting. Baked lighting is still used in many AAA games and is a valuable technique in mostly static scenes. But it is important that you are also able to add moving (local) lights on top of the baked lighting result. This requires a full set of scanned surface properties (normal, roughness, specular.rgb, diffuse.rgb). I can't imagine modern AAA games without these common moving light sources: car headlights, light sources carried by the player (torch, flashlight, etc), flames (bonfires), explosions / lightning / futuristic effects (plasma, etc). A dynamic day and night cycle is also starting to become common in open world games; some game genres just wouldn't work without it.
 
I'm far from an expert, but even if this technology were relevant, it already seems outdated by photogrammetry.


Photogrammetry is already used in some games.
 