You are limited to the performance captured on the light stage.
If during the performance you put your hand in your hair, the hair will move and have volumetric shadows, anisotropic highlights, and everything else real hair has, because you recorded it while doing it for real.
If instead you animate a voxelated version of the data and move the voxelated hand into the voxelated hair, nothing will happen. You might be able to add voxel lighting, shadows, physics, etc. through some other technique, but the light stage data itself is prerecorded and fixed.
Edit: More rambling.
I think they /can/ use the light stage as a technique to capture high-res normal and albedo maps (and maybe even voxel positions). That is a really nice alternative to hand modeling and painting and deserves great kudos, but it can only be used to create data for a traditional lighting pipeline. My beef with OTOY's presentations is that they show the 100% accurate (prerecorded) lighting and then show the albedo and normals that can be used for not-100%-accurate simulated lighting. Both techniques come from the same tech, but the fact that they are separate and incompatible is quietly left unmentioned.
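To make the distinction concrete, here is a rough sketch of what the simulated-lighting path looks like (my own illustration in Python/NumPy, not OTOY's pipeline; the file layout, the single directional light, and the plain Lambertian model are all assumptions). The point is that once you drop to albedo + normals, every effect the light stage recorded for free (occlusion, scattering, the hair highlights) has to be re-approximated by whatever shading model you bolt on:

    # Hypothetical sketch, not OTOY's actual pipeline: relight captured
    # albedo + normal maps with a simple Lambertian directional light.
    import numpy as np

    def relight(albedo, normals, light_dir, light_color=(1.0, 1.0, 1.0)):
        # albedo: HxWx3 in [0,1]; normals: HxWx3 unit vectors; light_dir: 3-vector
        l = np.asarray(light_dir, dtype=np.float64)
        l /= np.linalg.norm(l)
        # per-pixel N dot L, clamped to zero for back-facing pixels
        ndotl = np.clip(np.einsum('ijk,k->ij', normals, l), 0.0, None)
        return albedo * ndotl[..., None] * np.asarray(light_color)

    # Usage (albedo_map / normal_map are whatever you decoded from the capture):
    # shaded = relight(albedo_map, normal_map, light_dir=(0.3, 0.8, 0.5))

You can now light the asset from any direction, which the prerecorded data can't do, but the accuracy is only as good as the shading model, which is exactly the trade-off the presentations gloss over.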