Unlimited Detail, octree traversals

So, they've finally hired someone with common sense?

Visualizing LIDAR data quickly is something this tech is much more suited for - no need for animation, re-lighting, or otherwise changing the source data.
They can also skip dealing with the content creation aspect.

I guess all of us who didn't for a second believe that this is suitable for video games are now justified :) And the government money isn't wasted either, so the only people left unhappy are those who thought this would bring some sort of revolution...
 
So, they've finally hired someone with common sense?

Visualizing LIDAR data quickly is something this tech is much more suited for - no need for animation, re-lighting, or otherwise changing the source data.
They can also skip dealing with the content creation aspect.

I guess all of us who didn't for a second believe that this is suitable for video games are now justified :) And the government money isn't wasted either, so the only people left unhappy are those who thought this would bring some sort of revolution...
nooooooooooooooooooooooooo, my dreams!
 

Yes, very good video; one of the things I like is that there appears to be zero aliasing going on. Their suggestion of being able to go for higher resolution once GPU assistance is added also sounds promising. 4K console games might be viable with GPU assistance if a single laptop CPU is able to render tens of frames per second at what seems like HD resolution.

So, they've finally hired someone with common sense?

Visualizing LIDAR data quickly is something this tech is much more suited for - no need for animation, re-lighting, or otherwise changing the source data.
They can also skip dealing with the content creation aspect.

I guess all of us who didn't for a second believe that this is suitable for video games are now justified :) And the government money isn't wasted either, so the only people left unhappy are those who thought this would bring some sort of revolution...


What do you mean? The linked interview video suggests animation would not impede realtime framerates, and it clearly pitched the tech as applicable to games.
 
They haven't given up on gaming, at least officially:

http://www.euclideon.com/company/history/

Euclideon is still developing exciting new technologies for the gaming industry.
Despite its interest in the spatial industry, Euclideon has not forgotten its roots in gaming technologies, and is continuing to develop exciting and revolutionary technologies to be used in next-gen games.
Watch this space for future announcements.
 
They haven't given up on gaming, at least officially:

http://www.euclideon.com/company/history/

Euclideon is still developing exciting new technologies for the gaming industry.
Despite its interest in the spatial industry, Euclideon has not forgotten its roots in gaming technologies, and is continuing to develop exciting and revolutionary technologies to be used in next-gen games.
Watch this space for future announcements.

Hmmm. Have Euclideon actually developed any "exciting new technologies" that are used in current-gen gaming? Is anyone planning on using any of their technology in next-gen gaming? I mean, there must be something, right? If your roots are in gaming technologies, then somebody out there must be doing something with your tech?
 
They haven't given up on gaming, at least officially:

http://www.euclideon.com/company/history/

Euclideon is still developing exciting new technologies for the gaming industry.
Despite its interest in the spatial industry, Euclideon has not forgotten its roots in gaming technologies, and is continuing to develop exciting and revolutionary technologies to be used in next-gen games.
Watch this space for future announcements.
Object scanning, maybe. Claiming their roots are in gaming is stretching it a bit; they never had a real, competitive gaming tech. There's been talk of voxelised terrain, so maybe they can roll that out? But with terrain tessellation, I'm not even sure that has value. I'm willing to write Euclideon off at this point until they show something new and realistic.
 
In case it is unclear, spamming these forums won't get you editing rights, but it might get you a vacation. The offending posts have been deleted, as is rather obvious.
 
Fast access into completely static scenes was always going to be a good use for this.
They did themselves no favours with all the nonsense about unlimited detail gaming, but it's nice to see they've finally started applying the technique to its strengths!

I wonder if they ever tried to get the patent on the core technique. There's got to be a mountain of prior art in the area.

edit: just watching the video - he does still talk some crap though. Laser scanners aren't exactly producing the bulk of the world's data, geo applications aren't one of mankind's biggest achievements, quoting data sizes in trillions of bytes, etc. It's like he's targeting PR at teenagers rather than companies - then again, a YouTube demo with some fuzzy science words may be the best way to sell things these days *shrug*
 
For it to be used that way, it would probably need two things:
1) the ability to produce an accurate depth buffer (they probably can do this; a rough sketch of the idea follows below)
2) dynamic lighting and shadows (this may not be an easy task, but pre-baked would probably be good enough for some games).
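On point 1: if the traversal gives back a view-space hit distance per pixel, converting that into the non-linear value a hardware depth buffer expects is just standard projection maths. A minimal sketch, assuming an OpenGL-style perspective projection (function name and conventions are mine, not anything Euclideon has shown):

Code:
// Convert the view-space distance of a traversal hit (measured along the
// camera's forward axis, not the raw ray length) into the [0,1] non-linear
// depth a GPU depth buffer stores, so voxel output can be z-tested against
// normally rasterised geometry such as animated characters.
float viewZToDepthBuffer(float zView, float zNear, float zFar)
{
    // NDC z in [-1,1] for a standard OpenGL perspective projection.
    float zNdc = (zFar + zNear - 2.0f * zNear * zFar / zView) / (zFar - zNear);
    // Remap to the [0,1] range used by the depth buffer.
    return zNdc * 0.5f + 0.5f;
}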
 
For it to be used that way, it would probably need two things:
1) the ability to produce an accurate depth buffer (they probably can do this)
2) dynamic lighting and shadows (this may not be an easy task, but pre-baked would probably be good enough for some games).

I think it would be pretty straightforward with deferred rendering. Each voxel would store albedo colour and normal, and that would be rendered to a G-buffer. It could also have other stuff like specular and glossiness if you want to go fancy, but then the already huge size of the dataset of a full sparse voxel octree becomes even larger.
Yet you run into the interactivity problem. Completely static environments with a couple of characters in them is not exactly where most engines are heading.
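A rough sketch of the per-voxel payload and G-buffer write that idea implies (struct layout and names are purely illustrative, not Euclideon's format):

Code:
#include <cstdint>

// Per-voxel surface attributes for deferred shading: albedo plus a packed
// normal as the minimum, with specular/glossiness as the optional extras
// mentioned above (at the cost of an even fatter dataset).
struct VoxelAttributes
{
    uint8_t albedo[3];    // RGB albedo
    uint8_t normal[2];    // octahedron-encoded unit normal
    // uint8_t specular;   // optional
    // uint8_t glossiness; // optional
};

// One texel of a (simplified) G-buffer. Once voxel hits are written here,
// the deferred lighting pass treats them exactly like rasterised geometry.
struct GBufferTexel
{
    float   depth;
    uint8_t albedo[3];
    uint8_t normal[2];
};

// Writing a resolved voxel hit into the G-buffer; nothing voxel-specific
// survives into the lighting pass.
inline void writeHit(GBufferTexel& dst, const VoxelAttributes& v, float depth)
{
    dst.depth = depth;
    for (int i = 0; i < 3; ++i) dst.albedo[i] = v.albedo[i];
    for (int i = 0; i < 2; ++i) dst.normal[i] = v.normal[i];
}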
 
Just use it for non-interactive static scenery. There's always plenty of that. The data set can be kept on a server? It seems too good to be true. Maybe MS should have had a look at this for their XB1 cloud.
 
I think it would be pretty straightforward with deferred rendering. Each voxel would store albedo colour and normal, and that would be rendered to a G-buffer. It could also have other stuff like specular and glossiness if you want to go fancy, but then the already huge size of the dataset of a full sparse voxel octree becomes even larger.
Yet you run into the interactivity problem. Completely static environments with a couple of characters in them is not exactly where most engines are heading.

My reservation is that I'm assuming they're doing this by creating a lookup table that you access using camera position and view direction, rather than a standard sparse voxel octree (they have said it's not just an SVO several times over the years, if we choose to take them at their word).

If that is the case, then omnidirectional lights might get painful, as unlike a camera they're not projecting/looking along a single view direction.

Obviously this all works on the assumption that they aren't just streaming an SVO and are actually doing a 'search engine' of the points. The simplest way I can think of to build something like that would be to use the precomputation stage to populate a big lookup table with view-direction-dependent trigonometry tests.
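For contrast, the 'plain SVO' alternative would look roughly like the classic front-to-back splat traversal below: descend the octree from the camera and stop subdividing once a node projects to about a pixel. This is purely illustrative (hypothetical types and stubbed camera/framebuffer helpers), not a claim about what their 'search engine' actually does:

Code:
struct Vec3 { float x, y, z; };

// Sparse octree node: null child pointers mark empty octants; each node keeps
// an averaged colour so traversal can stop early at any level of detail.
struct OctNode
{
    OctNode*      child[8];
    unsigned char albedo[3];
};

// Hypothetical camera helpers (declared as stubs): project a world point to
// pixel coordinates, and estimate how many pixels a node of a given size covers.
struct Camera
{
    bool  project(const Vec3& p, float& px, float& py) const;     // false if behind camera
    float projectedSize(const Vec3& center, float halfSize) const;
};

void drawPixel(float px, float py, const unsigned char rgb[3]);    // framebuffer write, stub

// Descend until a node covers roughly one pixel, then splat its averaged
// colour. Front-to-back child ordering and occlusion tests are omitted.
void traverse(const OctNode* node, const Vec3& center, float halfSize,
              const Camera& cam)
{
    if (!node) return;

    float px, py;
    if (!cam.project(center, px, py)) return;

    if (cam.projectedSize(center, halfSize) <= 1.0f) {
        drawPixel(px, py, node->albedo);
        return;
    }

    const float h = halfSize * 0.5f;
    for (int i = 0; i < 8; ++i) {
        Vec3 c { center.x + ((i & 1) ? h : -h),
                 center.y + ((i & 2) ? h : -h),
                 center.z + ((i & 4) ? h : -h) };
        traverse(node->child[i], c, h, cam);
    }
}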
 
Just use it for non-interactive static scenery. There's always plenty of that. The data set can be kept on a server? It seems too good to be true. Maybe MS should have had a look at this for their XB1 cloud.

That's actually a good idea for cloud processing. Performance-free, very high quality backgrounds would be amazing, especially for sports or racing games.

As I said, they've hit a gold mine with this one; good for them.
 
Maybe the video explains this, but it's been taken down. What gold mine did they hit? I.e., is there something new?
 
Shame they took down the video.

This latest demo was very good. It looked a bit like a super-high-resolution realtime flythrough of Google Street View at times. They were parsing static laser scanning data to produce flythroughs of real-world scenes, with at least position and colour for every point. The data sets were things like city road networks and building models (i.e. up to kilometres of roadside imagery).

Interestingly, they were streaming the data directly off a conventional hard drive (and at one stage a USB2 thumb drive). So that means they're not simply seeking through the data. They had realtime viewing running on a mid-range laptop. They were also able to jump from one view to another near-instantaneously, so it's not relying solely on streaming data. When they did a jump you could see a fraction of a second where a low-res model popped in and was then refined. It was very fast, however, so it wouldn't be distracting in use, but it gives some big clues to how the data is laid out within the file.

There's a non-realtime pre-computation stage to reorganise the data. I can't remember exact values, but it wasn't horrifically slow: minutes to hours for very large data sets. There was no info on what they do during the pre-computation. Very high levels of compression - the indexed/reordered data set is apparently smaller than the original data (my suspicion is they're comparing their compressed file size with the uncompressed point data, though).
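The jump-then-refine behaviour, plus a pre-computation pass that reorders the data, is what you'd expect if the processed file is simply laid out coarse-to-fine, e.g. octree levels written breadth-first so the first bytes read already form a complete low-resolution model. A speculative sketch of that idea (the on-disk fields and names are invented, not their format):

Code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Speculative on-disk node, written level by level (breadth-first): reading
// the file front to back yields a complete coarse model first and then refines
// it -- matching the low-res pop followed by refinement seen after a view jump.
// (Struct padding and endianness are glossed over for the sake of the sketch.)
struct DiskNode
{
    uint8_t  albedo[3];
    uint8_t  normal[2];
    uint8_t  childMask;    // which of the 8 octants exist in the next level
    uint32_t firstChild;   // index of the first child within the next level
};

// Stream one octree level at a time; the viewer can redraw after each level,
// giving coarse-to-fine refinement without waiting for the whole file.
bool streamLevel(std::FILE* f, std::size_t nodeCount, std::vector<DiskNode>& out)
{
    out.resize(nodeCount);
    return std::fread(out.data(), sizeof(DiskNode), nodeCount, f) == nodeCount;
}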

Presentation style was similar to all the others, though, with some slightly odd claims; that said, the tech looked solid this time (which is really all that matters).
 