I'm not sure where to put this, so I stuck it here. Mods, feel free to move it wherever it will reach the best audience.
I've been looking at how close a game engine can get to (non-realtime) CG-quality rendering for making a movie. My goal is to make sense of the realtime/VR/gaming space and see how viable it is for CG movie-making. How limited is the hardware? How limited is the software, even with some of the most advanced graphics engines on the market today? My goals here are twofold: (1) possibly create CG movies using a game engine, and (2) be able to reuse the same assets, light rigs, animation rigs, etc. with scaled-down features for the VR sector. Seeing Pixar's latest realtime lighting tool was an inspiration.
After about two weeks playing with a published game engine, my impressions have been mixed. Here is a list of things that are immediately concerning:
1) Memory. There just isn't enough of it right now. I'm not interested in tricking the hardware to squeeze out FPS; I'm interested in moving assets from the CG space to the gaming space, and 12-16 GB of VRAM just isn't going to cut it (more details below).
2) Physically Based Lighting/Shading. Right now most game engines tout that feature, but there really isn't any consistency here. PBS in the film world means you can move the same shader into various lighting conditions and it will hold up (i.e. it obeys all the basics like energy conservation, etc.). I was amazed at how great one character looked under a set of lights for a particular scene, but when I imported another character with the same shader and different parameter values, that character's shading broke under the same lighting. This forced me to think about how I could get consistency without putting a big burden on the lighters to hand-tweak every single shot.
3) Fur. It's just not there. I was a bit disappointed to find that some of the more popular game engines have no implementation for rendering actual curves for hair. Why would they, though? They live in a world where FPS vs. quality is a continuous balancing act. But I can't use transparency cards for hair and expect the look not to break at certain angles. Even with fur strips in offline rendering we see some of these artifacts, and we can just add more strips to fill the gaps. Rigging of curve primitives is also a big need. I'm not looking for simulation so much as the ability to hand-tweak curves to control how they move. From a director's point of view, they basically want control over everything.
Things that are good:
1) Excellent editing tools that make it quick to iterate on shots by moving cameras around and switching takes in realtime.
2) Incredibly robust screen space features which will pretty much remove the need for doing comp work in Nuke.
3) Unmatched rendering speed (even though I hate the slow shader compile times). I'm still waiting to see how much I can throw at the hardware before the render times equal CPU offline rendering. I have a long way to go for that, so there is room to push more onto the hardware.
Another pet project of mine was to research using the fully programmable vertex/pixel pipeline in the latest OpenGL version for realtime texturing/lighting in Maya's viewport, displaying textured and lit assets straight off an Alembic cache. I was excited about this prospect and immediately went to work on a small proof-of-concept realtime renderer that could give me a picture of how much the hardware can handle given a typical current texture workflow.
I first needed to make a GPU memory manager class (not yet finished) that could query the hardware and give me all the pertinent information.
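The query side is the easy part. Core GL doesn't expose VRAM numbers, so the minimal sketch below leans on the NVIDIA-only GL_NVX_gpu_memory_info extension (fine for a Titan X) and assumes GLEW plus a current GL context; the real class will wrap a lot more than this:

```cpp
// Minimal VRAM query sketch -- relies on the NVIDIA-only GL_NVX_gpu_memory_info
// extension; values come back in kilobytes. Assumes GLEW and a current context.
#include <GL/glew.h>

#ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
#define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX   0x9048
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif

struct GpuMemInfo
{
    GLint totalKb = 0;      // total VRAM reported by the driver (KB)
    GLint availableKb = 0;  // VRAM currently free (KB)
};

bool queryGpuMemInfo(GpuMemInfo& out)
{
    if (!glewIsSupported("GL_NVX_gpu_memory_info"))
        return false;  // non-NVIDIA card or old driver
    glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &out.totalKb);
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &out.availableKb);
    return true;
}
```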
This is on my newly built machine with the following specs:
i7-6800K
64 GB DDR4 RAM
Titan X with 12 GB GDDR5 VRAM
840 EVO 1 TB SSD
OpenGL driver: 4.5
What concerns me here is the maximum number of texture units that can be bound for any given mesh. I also saw significant latency while all the texture objects were being initialized and uploaded to the GPU.
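For reference, those binding limits are queryable at runtime; something along these lines (the actual numbers vary by GPU and driver):

```cpp
// Quick check of the per-draw texture binding limits. Assumes a current GL context.
#include <GL/glew.h>
#include <cstdio>

void printTextureUnitLimits()
{
    GLint perFragmentStage = 0, allStagesCombined = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &perFragmentStage);            // fragment shader
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &allStagesCombined);  // all stages together
    std::printf("texture units: %d per fragment shader, %d combined\n",
                perFragmentStage, allStagesCombined);
}
```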
Using multithreading to load 75 unique textures (single-threading was a non-starter), most of them 4K with some 8x8 textures mixed in (Mari writes these to fill empty UDIM spaces in a row), I was a bit surprised at the memory usage:
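In case anyone wants the shape of the threading: the decode work fans out to worker threads, but all the GL calls stay on the context thread, since a GL context can only be current on one thread at a time. A simplified sketch (loadImageRGBA8() is a stub standing in for the real image reader):

```cpp
// Simplified threaded-loading sketch: worker threads decode image files into CPU
// buffers, then the thread that owns the GL context does the uploads.
// loadImageRGBA8() is a stub -- swap in a real EXR/TIFF reader.
#include <GL/glew.h>
#include <future>
#include <string>
#include <vector>

struct CpuImage { int width = 0, height = 0; std::vector<unsigned char> rgba; };

CpuImage loadImageRGBA8(const std::string& /*path*/)
{
    CpuImage img;                       // stub: pretend every file is an 8x8 grey tile
    img.width = img.height = 8;
    img.rgba.assign(8 * 8 * 4, 128);
    return img;
}

std::vector<GLuint> loadTextures(const std::vector<std::string>& paths)
{
    // 1) Decode in parallel -- no GL calls on the worker threads.
    std::vector<std::future<CpuImage>> jobs;
    for (const auto& path : paths)
        jobs.push_back(std::async(std::launch::async, loadImageRGBA8, path));

    // 2) Upload sequentially on the GL thread.
    std::vector<GLuint> textures(paths.size());
    glGenTextures(static_cast<GLsizei>(textures.size()), textures.data());
    for (size_t i = 0; i < textures.size(); ++i)
    {
        const CpuImage img = jobs[i].get();
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.width, img.height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, img.rgba.data());
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    return textures;
}
```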
I'm also wondering how to control the mip-mapping levels in OpenGL. Some of these textures have very high-frequency detail, and it seems like the renderer can't help but introduce undersampling, as seen here:
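For my own notes, the sampler-state knobs I'm aware of in core GL are the LOD clamp and bias parameters, plus anisotropic filtering via the (near-universal) EXT_texture_filter_anisotropic extension; the values below are only illustrative:

```cpp
// Mip/LOD controls for the currently bound GL_TEXTURE_2D.
#include <GL/glew.h>

void setTextureSampling()
{
    // A positive LOD bias selects blurrier mips (less shimmer); a negative one
    // selects sharper mips (more aliasing). Values here are only illustrative.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, 0.5f);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 0.0f);   // clamp the usable mip range
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 8.0f);

    // Anisotropic filtering (EXT_texture_filter_anisotropic) usually helps most
    // with high-frequency textures seen at grazing angles.
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
}
```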
I went for the gusto and tried to load in 200 unique textures (these were diffuse only) and got this:
Basically consumed all of the Titan X's memory. LOL!
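That lines up with a quick back-of-the-envelope estimate, assuming most of those 200 were uncompressed 4K RGBA8: a 4096x4096 texture at 4 bytes per pixel is 64 MB, and a full mip chain adds roughly a third more, so call it ~85 MB per texture. 200 of those is on the order of 17 GB, well past the Titan X's 12 GB before you even count geometry, framebuffers, and the other texture channels.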
Lots to learn, but I'm enjoying the exploration and discovery. More to come!