Ignoring content generation for a moment: how exactly do you render an animated character inside a scene with neural descriptions?
For a character, do you give the animation engine a bunch of neural blobs for sections of the character, together with an animated skeleton, and let it try to condense them into a plenoptic neural model that can be composed with the rest of the scene by a ray tracer?
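A minimal sketch of what that composition might look like, with everything here (the `NeuralBlob` class, the Gaussian stand-in for a learned field, the transform layout) being illustrative assumptions rather than any real engine's API. The structural idea is standard volume rendering: sample each blob in its bone's rest frame via the animated skeleton's inverse transforms, take the union of the densities, and alpha-composite along the ray.

```python
import numpy as np

class NeuralBlob:
    """Stand-in for a small learned field: point -> (density, rgb).
    A real blob would be a tiny MLP; a Gaussian fakes the density here."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=np.float64)
        self.radius = radius

    def query(self, pts):
        d2 = np.sum((pts - self.center) ** 2, axis=-1)
        density = np.exp(-d2 / (2.0 * self.radius ** 2))
        rgb = np.full((pts.shape[0], 3), 0.8)
        return density, rgb

def composite_ray(origin, direction, blobs, world_to_bone_mats,
                  n_samples=64, t_far=4.0):
    """March one ray through all blobs, each queried in its bone's rest
    frame, then alpha-composite (standard volume-rendering quadrature)."""
    ts = np.linspace(0.0, t_far, n_samples)
    pts = origin + ts[:, None] * direction        # world-space sample points
    dt = t_far / n_samples

    sigma = np.zeros(n_samples)
    color = np.zeros((n_samples, 3))
    for blob, m in zip(blobs, world_to_bone_mats):
        # Pull samples back into the bone's bind pose before querying,
        # so the blob never has to learn the animation itself.
        local = (m[:3, :3] @ pts.T).T + m[:3, 3]
        d, c = blob.query(local)
        color = np.where((d > sigma)[:, None], c, color)  # crude: densest blob wins
        sigma = np.maximum(sigma, d)                      # union of part densities

    alpha = 1.0 - np.exp(-sigma * dt)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)         # final pixel RGB

# One blob at the origin, identity skeleton transform:
pixel = composite_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]),
                      [NeuralBlob([0, 0, 0], 0.3)], [np.eye(4)])
```

The appeal of this layout is that the expensive per-part fields stay static while only cheap rigid transforms change per frame; the hard part the question is pointing at is whether the "condense into one plenoptic model" step can be fast enough to re-run every frame.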
Or do you use one giant textual description of the scene and everything important in it, then every frame tell it what changed and have it make something up in 15 msec?
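The per-frame-delta version might be structured like the sketch below. The `generate_frame` call, the state dicts, and the budget handling are all hypothetical; the point is only the shape of the loop: a persistent scene description, a terse diff of what changed, and a hard real-time budget the generator has to hit or be skipped.

```python
import time

FRAME_BUDGET_S = 0.015  # ~15 msec per frame

scene_text = ("A rain-slicked street at night. One knight in dented plate "
              "armor stands under a flickering lamp, sword lowered.")

def diff_state(prev, curr):
    """Describe only what changed since the last frame, as terse text."""
    changes = [f"{k}: {prev.get(k)} -> {v}"
               for k, v in curr.items() if prev.get(k) != v]
    return "; ".join(changes) if changes else "no change"

def generate_frame(scene_text, delta_text):
    # Placeholder: a real system would condition an image/video model on the
    # persistent scene description plus the per-frame delta.
    return f"[frame | {delta_text}]"

prev_state = {"sword": "lowered", "lamp": "flickering"}
curr_state = {"sword": "raised", "lamp": "flickering"}

t0 = time.perf_counter()
frame = generate_frame(scene_text, diff_state(prev_state, curr_state))
elapsed = time.perf_counter() - t0
if elapsed > FRAME_BUDGET_S:
    # Missed the budget: reuse or extrapolate the previous frame?
    print(f"over budget: {elapsed * 1000:.1f} msec")
```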