Unlimited Detail, octree traversals

Yeah, I'd like to see someone build a Mass Effect game by "importing" real world stuff. And that was a fairly realistic art style I picked, but we could go with Gears or Ratchet or this new one, WildStar... Scanned real-life stuff wouldn't even work for 5% of games.

Also, it's not as easy or cheap as one would think; I've looked into proper scanning solutions and they get hideously expensive as you go up in precision. There are various photo-based tools getting a lot of interest, but those aren't precise enough for closeups either.

But rebuilding the Euclideon demo with poly-based assets would certainly be an interesting challenge. Some Crysis fans might even give it a try...
 
Yeah, I'd like to see someone build a Mass Effect game by "importing" real world stuff. And that was a fairly realistic art style I picked, but we could go with Gears or Ratchet or this new one, WildStar... Scanned real-life stuff wouldn't even work for 5% of games.
To be fair, they did suggest going back to traditional art, so sculpting abstract designs in clay and scanning them in. Of course, scanning complex objects isn't going to be any easier than modelling them, I'm sure. I think, fundamentally, UD doesn't answer any of the questions of content creation, even though Dell suggests it'll make everything cheaper and better. It eliminates a step in converting real-world data to polygons, but since we don't know that voxel data is directly usable, there could be a lot of work in converting it to formats that allow skinning and the like. As always, he just offered a miracle solution without a valid explanation or decent proof.
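
Just to make the "lots of work" part concrete, here's a toy sketch (plain Python with NumPy, hypothetical occupancy-grid input) of the most naive voxel-to-polygon step: emitting a quad for every exposed voxel face. Everything that actually matters for a game asset (clean topology, UVs, skin weights) would still have to happen after this:

```python
import numpy as np

def voxels_to_quads(occ):
    """Naive surface extraction from a boolean occupancy grid: emit one
    quad per voxel face that borders empty space. The result is a blocky
    shell; a real pipeline would still need remeshing, UV layout and
    weight painting on top of it."""
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    nx, ny, nz = occ.shape
    quads = []
    for x, y, z in zip(*np.nonzero(occ)):
        for dx, dy, dz in dirs:
            n = (x + dx, y + dy, z + dz)
            outside = not (0 <= n[0] < nx and 0 <= n[1] < ny and 0 <= n[2] < nz)
            if outside or not occ[n]:
                quads.append(((x, y, z), (dx, dy, dz)))  # cell + face normal
    return quads

# A solid 2x2x2 block exposes 24 faces (4 per side of the cube):
grid = np.zeros((4, 4, 4), dtype=bool)
grid[1:3, 1:3, 1:3] = True
print(len(voxels_to_quads(grid)))  # -> 24
```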

Also, it's not as easy or cheap as one would think; I've looked into proper scanning solutions and they get hideously expensive as you go up in precision...
Maybe that new Kinect tech can make it cheap?! :D That would be pretty awesome. I'm guessing it's using voxels. Converting its 3D space to usable poly models would be as much an amazing software tech as the capture itself!
 
To be fair, they did suggest going back to traditional art, so sculpting abstract designs in clay and scanning them in.
That idea is utterly stupid.
Sculpting on the computer using ZBrush or Mudbox or such is a lot faster: you have automatic symmetry, undo, the ability to quickly change proportions drastically while maintaining smaller-scale details, and so on. Texture painting using photo sources, complex brushes, layers, filters, and so on.

It just goes to show how completely in the dark Dell is about modern content creation pipelines. He doesn't even bother to sit down and ask a few questions of someone who's actually done some real-world game art production.

Not to mention that you can voxelize ZBrush/Mudbox sculpted, textured models just as well as you can scanned stuff.
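
That direction is almost trivial by comparison; a crude point-sampling voxelizer is only a few lines (toy sketch: uniform triangle sampling into a unit-cube grid, no watertightness guarantees):

```python
import numpy as np

def voxelize(tris, res=64):
    """Crude mesh voxelizer: splat random surface samples from each
    triangle into a res^3 occupancy grid. A production tool would use
    conservative rasterization plus a flood fill for solid interiors."""
    occ = np.zeros((res, res, res), dtype=bool)
    for a, b, c in tris:
        a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
        r1, r2 = np.random.rand(2, 256)      # uniform barycentric samples
        s = np.sqrt(r1)
        pts = (1 - s)[:, None] * a + (s * (1 - r2))[:, None] * b + (s * r2)[:, None] * c
        idx = np.clip((pts * res).astype(int), 0, res - 1)
        occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ

# One triangle inside the unit cube:
tri = [((0.1, 0.1, 0.5), (0.9, 0.1, 0.5), (0.5, 0.9, 0.5))]
print(voxelize(tri).sum(), "voxels set")
```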

Maybe that new Kinect tech can make it cheap?! :D That would be pretty awesome. I'm guessing it's using voxels. Converting its 3D space to usable poly models would be as much an amazing software tech as the capture itself!

The Kinect stuff is totally inaccurate too, about 1-2 centimeters of precision at most. Everything would look like it was made of melted wax.
It's good for reference, to scan real-world faces, body proportions etc., but then again those are usually boring for the typical gamer.

And anyway, although it's not realtime, Autodesk's Photofly can offer a lot more freedom if you're able to photograph the object or scene you want to get into the computer. It'll still be a mess, but you only need a digital camera.
 
The Kinect stuff is totally inaccurate too, about 1-2 centimeters of precision at most.
The latest 3D construction demo shows a Dell logo on the back of the monitor, 1mm in depth. In terms of tech, we're a long way from getting instant scan-and-use object import, but with MS's latest showing, I'm wondering how far away we are from that? I think I may end up being surprised!
 

That's a nice big post you got there, mad props :)

Most of what you said is true. I always have issues with looking at Steam HW statistics, because most Valve games are so under-specced (well, optimized would be the nicer word) that I suspect a lot of Steam users use it exclusively to play something like TF2 or even CS: Source, which are obviously content to run on 7-year-old PCs.

And expect the HW averages to take even more of a hit once Dota 2 is released and the Internet descends on it:

Dota 2 Minimum System Requirements:

* OS: Windows® 7 / Vista / Vista64 / XP
* Processor: Pentium 4 3.0GHz
* Memory: 1 GB for XP / 2GB for Vista
* Graphics: DirectX 9 compatible video card with 128 MB, Shader model 2.0. ATI X800, NVidia 6600 or better
* Hard Drive: At least 2.5 GB of free space
* Sound: DirectX 9.0c compatible sound card

So I would posit that the average Steam user's hardware is not (and will not be) necessarily reflective of the hardware of the average gamer who actually likes to consistently play new AAA games.
 
The latest 3D construction demo shows a Dell logo on the back of the monitor, 1mm in depth.

Hmm, do you have a video of that? And are you sure it's properly picked up in the geometry, and it's not just some strange bump with the photo texture creating the illusion of precise detail?

When I'm talking about high quality scan data, I mean stuff like this:
[image: efty_04.jpg]


The equipment for this level of precision and detail is very, very expensive, usually immobile as well. I don't think a Kinect or such device can ever get close to this quality.


In terms of tech, we're a long way from getting instant scan-and-use object import, but with MS's latest showing, I'm wondering how far away we are from that? I think I may end up being surprised!

It's one thing to scan something that looks reasonably OK, but has a terrible mesh layout, terrible UVs and textures with all the lighting, shadows and such baked into it, with no object separation and so on. And it's a completely different thing to turn it into an asset that can be animated, shaded without artifacts, modified, and so on. That part still takes a lot of time.
 
Hmm, do you have a video of that? And are you sure it's properly picked up in the geometry, and it's not just some strange bump with the photo texture creating the illusion of precise detail?

The Dell logo is highlighted at 1:23.

Doesn't look like they are doing mesh conversion, but the quality is remarkable for a point-and-shoot solution using a consumer device. A professional solution could do much better, at which point, if voxel rendering worked in all areas, 3D capture would offer one means of capturing objects. Of course voxels are very unlikely to work that well, and what's needed is to get from this 3D data to 3D meshes, which, as you say, is still a mountain to climb. This Kinect Fusion is definitely a major accomplishment, though.
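
For what it's worth, the published KinectFusion work fuses depth frames into a truncated signed-distance volume rather than a mesh. Stripped of camera tracking and ray casting, the per-voxel update is roughly this (heavily simplified sketch: camera fixed at the origin, made-up intrinsics):

```python
import numpy as np

def integrate(tsdf, weight, depth, fx, fy, cx, cy, trunc=0.03, scale=0.01):
    """One simplified TSDF integration step. Each voxel is projected into
    the depth image; the signed distance to the observed surface is
    truncated and blended into a running weighted average, which is how
    noisy frames smooth out over time."""
    res = tsdf.shape[0]
    for ix, iy, iz in np.ndindex(tsdf.shape):
        # Voxel centre in camera space, grid centred on the view axis.
        p = np.array([ix - res / 2, iy - res / 2, iz + 1]) * scale
        u = int(fx * p[0] / p[2] + cx)   # project into the depth map
        v = int(fy * p[1] / p[2] + cy)
        if 0 <= u < depth.shape[1] and 0 <= v < depth.shape[0]:
            sdf = depth[v, u] - p[2]     # + in front of the surface, - behind
            if sdf > -trunc:             # ignore voxels far behind the surface
                d = min(1.0, sdf / trunc)
                w = weight[ix, iy, iz]
                tsdf[ix, iy, iz] = (tsdf[ix, iy, iz] * w + d) / (w + 1)
                weight[ix, iy, iz] = w + 1

# Toy usage: fuse one flat synthetic depth frame into a small volume.
vol, wts = np.zeros((32, 32, 32)), np.zeros((32, 32, 32))
integrate(vol, wts, np.full((480, 640), 0.3), fx=525, fy=525, cx=320, cy=240)
```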
 
The equipment for this level of precision and detail is very, very expensive, usually immobile as well. I don't think a Kinect or such device can ever get close to this quality.
I very much doubt there is anything exotic about their hardware, almost certainly just a couple of high FPS cameras and scanned structured laser light patterns ... not altogether dissimilar to Kinect (Kinect just uses a fixed pattern rather than moving it).

Give a decent EE student $10K and a couple of months, let him ignore patents, and he can build it for ya.
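
The geometry behind all of these, Kinect included, is plain triangulation: depth falls out of the pixel disparity between the camera and the projector (or a second camera). A sketch with the textbook pinhole formula and Kinect-ish, purely illustrative numbers:

```python
# Depth from disparity in a calibrated stereo / structured-light rig.
def depth(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f, b = 580.0, 0.075          # Kinect-like focal length and baseline (illustrative)
d_at_2m = f * b / 2.0        # disparity observed for a surface 2 m away
z  = depth(f, b, d_at_2m)            # -> 2.0 m
z1 = depth(f, b, d_at_2m + 1.0)      # same surface, one pixel of disparity error
print(z - z1)  # ~0.09 m: why error grows so fast with range on these devices
```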
 
The problem is that proper reflective materials require precise, smooth surfaces. The mess of points that this approach can generate would never look good enough with such shaders. It's a lot faster to manually build these objects and use polygons to get smooth surfaces, both planar and curved.

It'd be okay for rough stone walls or tree bark or such surfaces which also tend to be quite diffuse. But anything industrial is never really going to look good enough with this approach.
I have no idea how hard it would be to automate the remodeling, though. For now I'm not aware of any production ready tools, it's all manual work.

(Ironically, ZBrush has a lot of hard-surface sculpting tools now, so it's far easier to use for voxelization or normal map extraction, at least for today's games.)
 
I very much doubt there is anything exotic about their hardware, almost certainly just a couple of high FPS cameras and scanned structured laser light patterns ... not altogether dissimilar to Kinect (Kinect just uses a fixed pattern rather than moving it).

Give a decent EE student $10K and a couple of months, let him ignore patents, and he can build it for ya.

In that case, why aren't there more facilities? How is it that only a few such studios can offer these services - and movie VFX studios are going to them instead of building their own stuff?

I think you probably seriously underestimate the issues, especially at this precision level.
 
In that case, why aren't there more facilities? How is it that only a few such studios can offer these services - and movie VFX studios are going to them instead of building their own stuff?
It's a boutique, high-margin industry. They have something that works, and the volume of work and the way budgets are generally set per project make it hard to justify the expense ... you can't really use an EE student, because you need someone capable of maintaining it too, so you basically end up needing a full-time employee.

But if the volumes go up (for instance because of the games industry) and the hardware gets commoditized I doubt it will remain very expensive ... assuming it's not a patent death trap.
 
I still think that the sub-millimeter precision makes it complicated, but then again I'm no laser engineer...
 
In that case, why aren't there more facilities? How is it that only a few such studios can offer these services - and movie VFX studios are going to them instead of building their own stuff?

I think you probably seriously underestimate the issues, especially at this precision level.

Getting the raw data is not the problem...

Writing comprehensive software packages that can take that raw data and bake / massage it into something useful is where the money is.... ;)
 
There are well-developed tools to compare high-res (multi-million polygon) objects with a standard poly model or the limit surface of a subdiv model and write the results into normal/displacement maps. Off-the-shelf sculpting tools like ZBrush and Mudbox all have this functionality.
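
The core of those bakers is simple to sketch, by the way: from each point on the low-poly surface, cast a ray along the normal and record where it hits the high-res mesh. A minimal Python version (Moller-Trumbore intersection, unit normals assumed; real tools do this per texel inside a projection cage and also write tangent-space normals):

```python
import numpy as np

def ray_tri(o, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = o - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = d.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def bake_displacement(samples, normals, hires_tris):
    """Cast from one unit behind each low-poly sample along its (unit)
    normal; the nearest high-res hit, minus the pre-step, is the signed
    displacement that would go into the map at that texel."""
    out = []
    for o, n in zip(samples, normals):
        hits = [t for tri in hires_tris
                if (t := ray_tri(o - n, n, *tri)) is not None]
        out.append(min(hits) - 1.0 if hits else 0.0)
    return out

# High-res "detail" sits 0.1 units above the low-poly plane:
tri = (np.array([-1., -1., .1]), np.array([3., -1., .1]), np.array([-1., 3., .1]))
print(bake_displacement([np.zeros(3)], [np.array([0., 0., 1.])], [tri]))  # ~[0.1]
```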

Building the model automatically is still a bit too much of a challenge, especially for anything that has to be animated.

But high-end scanning equipment is very, very expensive. The $15-20K devices can usually get you something like this:
[image: headscan-rough-1024x583.jpg]

[image: 3994326126_1d10820401_o.jpg]


I think the difference is obvious. And this is what the average student could build you, IMHO... and you usually also need some remeshing software to combine the results of several scans together.
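
Combining the scans is at least well understood math-wise: the workhorse is rigid registration (ICP), and its inner step, the best-fit rotation between matched point sets, has a closed form. A minimal sketch assuming correspondences are already known:

```python
import numpy as np

def best_rigid_fit(src, dst):
    """Kabsch/Procrustes: the rotation R and translation t minimising
    ||R @ src + t - dst||. Full ICP wraps this in a loop that re-picks
    nearest-point correspondences between the two scans each iteration."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

# Toy check: recover a known 90-degree rotation about Z plus a shift.
pts = np.random.rand(50, 3)
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
R, t = best_rigid_fit(pts, pts @ Rz.T + [1., 2., 3.])
print(np.allclose(R, Rz), np.allclose(t, [1., 2., 3.]))  # True True
```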
 
I was always under the impression that scans like that were mainly used as reference material... you build a mesh as closely as possible to that one, except built in a way that it can be properly UV-mapped and animated, and then maybe do a quick point snap to the scan mesh, and you've got a low(ish)-poly, animatable head mesh that looks every bit as good as the scan.
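
And that "quick point snap" really is quick; with a k-d tree it's a couple of lines (sketch using SciPy, with a distance cutoff so clean edge loops don't get dragged onto scanner noise):

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_to_scan(mesh_verts, scan_pts, max_dist=0.01):
    """Move each hand-built vertex onto its nearest scan point, but only
    within max_dist. (Projecting onto the scan's triangles would be more
    accurate than nearest-vertex, but this is the quick version.)"""
    d, i = cKDTree(scan_pts).query(mesh_verts)
    snapped = mesh_verts.copy()
    close = d <= max_dist
    snapped[close] = scan_pts[i[close]]
    return snapped

verts = np.array([[0., 0., 0.], [5., 5., 5.]])
scan = np.array([[0.004, 0., 0.], [9., 9., 9.]])
print(snap_to_scan(verts, scan))  # first vertex snaps, second stays put
```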
 
About scanning: Rockstar is using something called MotionScan and it looks good. Unfortunately the textures look low-res, but that may be a RAM limitation on the consoles rather than a MotionScan limitation...
 
It's not comparable to the XYZRGB scan I've posted above. They're using stereo-photography-based approaches, but with dozens of digital cameras, whereas regular scanning is done with a laser.
 
Are those live human heads? If so, that's working quite well, as they need to stand still.

What about using actual Roman busts, Greek statues and Renaissance material? That would make for interesting game content, though with new issues: "yet another Caligula"
 