Unlimited Detail, octree traversals

Oh come on, this is just scanning existing real world locations and converting them to some static dataset that can be cached in real time.

And you don't even see any dynamic lighting or shading, so it's not obvious that the surfaces are super noisy and messy, that there's no clear distinction between objects at large scales, and so on.

Same bullshit, only now they have more varied material to display. Still impossible to make a good game out of it, especially anything comparable to GTA or even COD.

I'd really rather not see anything more of this but I guess it'll flood the internet once again...
 
Oh come on, this is just scanning existing real world locations and converting them to some static dataset that can be cached in real time.

Just? 3D scanning from multiple views and reconstructing complex environments is not trivial. Assuming they didn't use some third party software for it or spend eons cleaning shit up it's quite an achievement.
 
First observed by scholars of the Achaemenid empire excited by the apparent rendering capabilities of the abacus, Comet Euclideon makes another pass through the inner solar system.
 
Oh come on, this is just scanning existing real world locations and converting them to some static dataset that can be cached in real time.

And you don't even see any dynamic lighting or shading, so it's not obvious that the surfaces are super noisy and messy, that there's no clear distinction between objects at large scales, and so on.

Same bullshit, only now they have more varied material to display. Still impossible to make a good game out of it, especially anything comparable to GTA or even COD.

I'd really rather not see anything more of this but I guess it'll flood the internet once again...

Party pooper :p
 
Just? 3D scanning from multiple views and reconstructing complex environments is not trivial. Assuming they didn't use some third party software for it or spend eons cleaning shit up it's quite an achievement.
Laa-Yosh's point isn't to undervalue the effort of effective capture, but the massive limitations of the end data. 3D captures are great for historical building preservation. They'll be great for fairly static adventure games with fixed locales to look around (maybe their game is a murder mystery set in a mansion?). But Euclideon are placing themselves as superior to all the other 3D tech companies because they have broken through the photorealistic-detail barrier that no-one else can, completely ignoring that everyone else is pursuing solutions that are dynamic. What good is a cathedral interior if it can only be rendered at one time of day, at one time of the year, in one weather condition?

Also, their realtime comparisons were with characters (Avatar), yet they haven't shown characters, only scenery. Let's see what their engine does with Avatar. Or how about picking up one of those cathedral candlesticks and putting it on the floor? Rubbishy old Elder Scrolls can't manage that photorealistically because they're not using Teh Awsomist FuturTech Euclideon!

Euclideon is perfectly entitled to trumpet their own achievements, but they shouldn't hype it unrealistically and basically lie about their tech by leaving questions unanswered and letting people fill in the details for themselves. There's a fundamental data storage and access issue, especially with datasets that are dynamically changing. They've never shown anything to suggest they've solved this, while from a technical viewpoint it'd have to be an effin' amazing solution if it exists. If they want to talk about realtime graphics, they need to talk about dynamic graphics, realtime lighting, movable objects, and physics. An engine that can't handle all that is very niche, suitable for little beyond virtual walkthroughs and 3D virtual photographs.
 
Just? 3D scanning from multiple views and reconstructing complex environments is not trivial. Assuming they didn't use some third party software for it or spend eons cleaning shit up it's quite an achievement.

We've spent about 2 years with R&D on scanning and tested many different solutions from LIDAR through handheld lasers to stereo photogrammetry, and we've built our own system for characters and props. I wouldn't say I'm an expert, but I am familiar with the methods and the possible results.

Photogrammetry for a static interior is actually quite easy, you need to do a lot of shots for good coverage but you don't need to worry about a zillion things. LIDAR isn't that hard either.

Also, where did they say anything about cleanup? I'm willing to bet that their data is super noisy and inaccurate in a lot of cases. However if you also project the photographs back as a texture and render just that with no lighting or shading, then these inaccuracies get pretty hard to spot.
Also notice how they're using different environments - the temple interior gets no close-ups, the stairs get no wide shots. The accuracy of these two scenes must be wildly different, but they're suggesting that they can get that level of detail in large interiors or exteriors too - let's just call this a sleight of hand...

Processing this data to a quality level where you can use dynamic lighting, shading, shadows and such, so where it can fit into an FPS for example, is a huge effort.
Even when movie VFX needs a digital set replacement, it's much easier - they do a LIDAR to get proper size and placement, and shoot a lot of highres photographs. Then they build simple geometry and project the photos and render without shading or lighting. Detail is once again rough, but at least not noisy - however they can scale it depending on the fixed camera movement of the shots.
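(A minimal sketch of that projection step, in Python/NumPy with an assumed pinhole camera; the names are illustrative, not any studio's actual tooling. The simple proxy geometry just looks up the high-res photograph through the survey camera, so the photo supplies the detail the geometry doesn't have.)

```python
import numpy as np

def project_to_photo_uv(points_world, K, R, t, photo_size):
    """Project world-space points through a pinhole camera (K, R, t)
    and return UV coordinates into the photograph. Illustrative only."""
    cam = R @ points_world.T + t.reshape(3, 1)      # 3 x N points in camera space
    pix = K @ cam                                   # homogeneous pixel coordinates
    pix = pix[:2] / pix[2]                          # perspective divide
    w, h = photo_size
    uv = np.stack([pix[0] / w, pix[1] / h], axis=1) # N x 2 UVs into the photo
    return uv

# The rough proxy geometry (e.g. a planar wall) gets its texture coordinates
# straight from the survey photograph, so photographic detail masks how simple
# (or how noisy) the underlying geometry really is.
```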

But for an interactive environment, you'd still have to remodel everything, process the photo textures to remove shading, highlights and reflections (look at the cupid statues in the temple: baked reflections), add material properties and so on. Funnily enough there's a nice example of that, The Vanishing of Ethan Carter.
http://www.gamersyde.com/news_the_vanishing_of_ethan_carter_new_trailer-15377_en.html

Characters get even more complex and that game didn't really get them right so far, but there are countless other examples where it went well, not to mention movies.

So no, there isn't really that much of an achievement in their datasets, in fact I believe they'd look crap in a proper renderer.
 
Euclideon is perfectly entitled to trumpet their own achievements, but they shouldn't hype it unrealistically and basically lie about their tech by leaving questions unanswered and letting people fill in the details for themselves.

What their tech seems to be doing, and quite well, is realtime visualization of huge static datasets. But we already knew that from the previous demo and the only difference here seems to be that it works fast enough without relying on massive instancing, too. This is great tech for a number of industries, but video games or anything interactive wouldn't benefit from it at all.
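(Speculating a bit, since the details of their algorithm aren't public as far as I know: the usual way to visualise huge static point/voxel datasets is a sparse-octree LOD descent that stops as soon as a node covers roughly one pixel, so per-frame cost tracks screen resolution rather than dataset size. A toy sketch, with all names made up and no claim that this is what Euclideon actually does:)

```python
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple          # world-space centre of this cube
    size: float            # edge length
    color: tuple           # averaged colour of everything inside the node
    children: list = field(default_factory=list)   # up to 8 children; empty means leaf

def collect_visible(node, camera_pos, focal_px, out):
    """Descend the octree and stop at any node whose projected footprint is
    roughly one pixel, emitting its averaged colour as a single 'atom'."""
    dist = max(1e-6, sum((c - p) ** 2 for c, p in zip(node.center, camera_pos)) ** 0.5)
    projected_px = node.size * focal_px / dist      # crude pinhole projection of the node
    if projected_px <= 1.0 or not node.children:
        out.append((node.center, node.color))       # about one splat per covered pixel
        return
    for child in node.children:                     # a real renderer would sort front-to-back
        collect_visible(child, camera_pos, focal_px, out)
```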
 
Perhaps good for an old-style adventure game in the vein of Myst, Riven, 7th Guest, as you say. Pre-rendered 360-degree stills and FMV transitions solve the storage and rendering issues :)

Real time renderer and asset management would be used by the content creators only.
 
Well if you're going for that, why not generate the views straight from the photographs? I think there already are camera systems to capture a full 360 range in a single shot...
 
I always sell a product by sitting in my kitchen with a lavalier mic... really professional.

Either way. That video, man. Not that I don't find it good looking (I've worked with laser scanning in the past, but for very different reasons), but scanning a piece of architecture and reproducing it will not save you from needing artists... I guess this guy/company has no real idea of what makes a game (or a movie, for that matter). Especially today, dynamic lighting is "what it's about". And that problem isn't solvable by scanning, really.
 
I'm beginning to think that this... campaign has very little to do with acquiring video game customers. Looks more like an ego thing, really.

I mean any sane developer would immediately see through the bullshit, there's no way to actually sell them the tech; and all the talk and video is aimed at clueless gamers instead.
 
And that problem isn't solvable by scanning, really.
As a talking point, it could be with enough scanning and calculating variances. Imagine a whole load of shots of the same scene at different times of day. As the colours change, you could derive a neutral albedo from the integral of every point, say. Then you can compute the variance from the albedo at a given time and store that as just a delta, with a representation that allows for tweening between different time points. I'm sure there's a lot that could be done with scanning, including geometry, surface properties, and lighting data. It's just nothing like the cure-all it's being represented as. And, of course, it can't handle anything that's not scanned, which is probably more what you're saying (no flaming torches or first-person flashlights happening in any Euclideon scene ;) ).
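(A toy version of that idea, assuming the captures are already aligned per surface point; NumPy, names made up:)

```python
import numpy as np

def split_albedo_and_deltas(samples):
    """samples: array of shape (num_times, num_points, 3) with RGB captures of
    the same surface points at different times of day (already aligned).
    Returns a crude 'neutral albedo' plus per-time deltas for tweening."""
    albedo = samples.mean(axis=0)      # integral over the capture times, per point
    deltas = samples - albedo          # lighting variation relative to the albedo
    return albedo, deltas

def relight(albedo, deltas, t0, t1, alpha):
    """Tween between two captured times: alpha = 0 gives time t0, 1 gives t1."""
    return albedo + (1.0 - alpha) * deltas[t0] + alpha * deltas[t1]
```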
 
The problem with your idea is that most scanning methods for large interiors are producing pretty noisy geometry, and there's also going to be a lot of unnecessary detail. The walls for example wouldn't be flat at all, and instead of relatively simple surfaces you'd get a super dense mesh (or lots of voxels if you convert the data).

High quality surfaces are of course possible, but the methods aren't really feasible for large scenes, they're for scanning people and props.

Portable laser scanners need dozens of passes for a human sized subject, impossible to apply to a temple interior. Photogrammetry's most advanced implementation is the Lightstage developed by Paul Debevec; it's a huge and heavy instrument in a big room - a geodesic sphere with computer controlled LED lights - so it's not really portable at all. LIDAR resolution will depend on distance from the device and it's not really feasible to take a scan for every square meter of the scene either.

Then there's the problem with shading, you'd need to differentiate between stone and metal and such. Capturing material attributes requires controlled lighting as well, just look at the scanner Ready at Dawn has built for The Order - once again relying on computer controlled LED lights and lots of photos. I can't even begin to imagine a portable device that could work in a large interior scene.

Maybe, maybe, it's possible to develop software that can analyze multiple scans and refine the data, but it'd still lag far behind what a few talented artists could do with the raw scans. At this time it's more economical to rely on manual work to get high quality results.
 
So... is there ANY lighting then? I mean, the laser scanner can't really replace a gonioreflectometer for deriving the BRDF materials, as I don't think they move light sources or the cameras around to capture the different values for different angles.

It's sort of like Resident Evil... capture high-res textures (well, in RE's case, offline renders) and texture simple room-approximating sheets where the player character can walk around. Use some light probes and shade the character from "within the world" (though that would need additional work from them instead of using the laser scanner). But it still wouldn't allow them to actually do anything besides recreating a real-life room, and virtually nothing else.
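(The light-probe part is at least well-trodden ground; a toy sketch of blending a few baked probes to get an ambient colour for a dynamic character inside the scanned room, Python/NumPy, nothing to do with their actual engine:)

```python
import numpy as np

def sample_probes(probe_positions, probe_colors, point, power=2.0):
    """Inverse-distance-weighted blend of baked light probes: a crude way to
    shade a dynamic character placed inside an otherwise pre-baked scene."""
    d = np.linalg.norm(probe_positions - point, axis=1) + 1e-6
    w = 1.0 / d ** power
    w /= w.sum()
    return (probe_colors * w[:, None]).sum(axis=0)   # ambient RGB at the point
```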
 
Photogrammetry's most advanced implementation is the Lightstage developed by Paul Debevec; it's a huge and heavy instrument in a big room - a geodesic sphere with computer controlled LED lights - so it's not really portable at all.
There was a portable lightstage shown at Siggraph. I don't know how good it is and it's smaller than the original lightstage.
 
I'm beginning to think that this... campaign has very little to do with acquiring video game customers. Looks more like an ego thing, really.

I mean any sane developer would immediately see through the bullshit, there's no way to actually sell them the tech; and all the talk and video is aimed at clueless gamers instead.
Or clueless customers who need some 3D engine for visualisation, or game companies with CEOs who are totally detached from the technology.

You would be surprised how many plainly idiotic decisions are made within big companies when they're not driven by engineers.
 
They are back.

The surfaces look way too static. There are no specular highlights and no reflections (*). Any modern PBR pipeline produces a better looking lighting result. They have lots of geometry detail, and that's very nice, but otherwise their pipeline is not that impressive. It's only rendering single-textured, unlit surfaces (all diffuse lighting is "offline" baked into the "textures").

Let's assume that their future scanning technology will be able to extract BRDF materials. This will at least triple the captured data amount. But the biggest problem isn't the data amount, it's the fact that since there's no baked lighting, the game needs to start rendering shadow maps (secondary "atom" projection/viewport) or casting secondary rays. A single dynamic light source (sun) would (at least) double the processing power requirement (likely more since the secondary rays are not as coherent as the primary rays). In modern games you expect to have more dynamic light sources in addition to the sun, and that would drag down the performance even further.
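(Back-of-envelope version of that argument; the secondary-ray overhead factor below is purely an assumption for illustration, not a measurement:)

```python
def frame_cost(primary_cost, num_shadowed_lights, secondary_factor=1.2):
    """Toy cost model: each shadow-casting light adds roughly one extra
    projection / ray pass; the 1.2 factor is an assumed penalty for secondary
    rays being less coherent than primary ones."""
    return primary_cost * (1.0 + num_shadowed_lights * secondary_factor)

# Fully baked, unlit scene:       frame_cost(1.0, 0) -> 1.0x
# One dynamic sun:                frame_cost(1.0, 1) -> 2.2x (at least doubled)
# Sun plus three local lights:    frame_cost(1.0, 4) -> 5.8x
```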

Let me compare this to real time ray-tracing. It's possible to ray-trace quickly on current hardware if you only need primary rays. But primary rays alone don't bring anything fancy over standard rasterization. Nobody would be excited about a ray tracer without any reflections or shadows. As soon as you add secondary rays into the mix for shadows and reflections, the ray tracing performance will plummet. There is no real time ray-tracer that can render complex scenes with dynamic lighting and secondary rays (reflections) at an acceptable quality and acceptable frame rate.

You could also compare this technology to a completely static, virtually textured game scenario. Virtual texturing allows you to have "unlimited" texture data in a single scene (using only a constant amount of runtime memory). id Software, for example, baked all the lighting in Rage into the virtual texture. This way you don't need to light anything at runtime (or render any shadow maps), freeing up lots of performance. id used this to achieve 60 fps on last-gen consoles, while many other AAA games achieved only 30 fps with "worse" image quality. However, their game data required three dual-layer DVDs!

JPEG XR is a very efficient data compression algorithm. Compressing the texture data of voxels is slightly less efficient. Assuming you want a similar world size and similar "texture" quality, a voxel-based approach would take even more DVDs. First person games are very demanding on texture/voxel resolution, since the camera (character head) frequently gets very close to the walls and objects in the game world, and the floor (or terrain) is not that far away from the camera either (much closer compared to 3rd person games, for example).

Rage didn't have dynamic lighting either. With the required normal maps and material maps (roughness, etc.), the amount of texture data would have at least tripled. That would have been too many DVDs to be profitable.
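(Rough arithmetic, assuming roughly 8.5 GB per dual-layer DVD:)

```python
DVD_DL_GB = 8.5                           # approximate dual-layer DVD capacity

rage_texture_gb = 3 * DVD_DL_GB           # "three dual-layer DVDs" ~ 25.5 GB of baked texture
with_materials_gb = 3 * rage_texture_gb   # normals + roughness etc.: at least tripled

print(f"Baked diffuse only:        {rage_texture_gb:.1f} GB")
print(f"With normal/material maps: {with_materials_gb:.1f} GB "
      f"(~{with_materials_gb / DVD_DL_GB:.0f} dual-layer DVDs)")
```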

(*) This is actually a perfect use case for reverse reprojection. Reprojection should cover roughly 80%/90% of the scene pixels, saving quite a bit of work.
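(For reference, a toy version of reverse reprojection for a static scene, in Python/NumPy; it assumes depth buffers store NDC z and is written for clarity rather than speed:)

```python
import numpy as np

def reverse_reproject(depth_cur, inv_viewproj_cur, viewproj_prev,
                      color_prev, depth_prev, eps=1e-3):
    """For each current-frame pixel, rebuild its world position from depth,
    project it into the previous frame, and reuse last frame's colour when
    the stored depth agrees (i.e. the point was visible last frame too)."""
    h, w = depth_cur.shape
    color_out = np.zeros((h, w, 3), dtype=np.float32)
    reused = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            ndc = np.array([2.0 * (x + 0.5) / w - 1.0,
                            2.0 * (y + 0.5) / h - 1.0,
                            depth_cur[y, x], 1.0])
            world = inv_viewproj_cur @ ndc
            world /= world[3]
            prev = viewproj_prev @ world
            prev /= prev[3]
            px = int((prev[0] * 0.5 + 0.5) * w)
            py = int((prev[1] * 0.5 + 0.5) * h)
            if 0 <= px < w and 0 <= py < h and abs(depth_prev[py, px] - prev[2]) < eps:
                color_out[y, x] = color_prev[py, px]   # reused; typically ~80-90% of pixels
                reused[y, x] = True
    return color_out, reused
```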
 