Accurate human rendering in game [2014-2016]

Yeah, very nice! Two tiny issues: the hair immediately gives it away as real-time (I wonder how much time's left before we get really good hair, though), and the anatomy is a bit off in terms of proportions and pose.

Top notch work though.
 
and the anatomy is a bit off in terms of proportions and pose.

It could have fit well in an "accurate dwarf rendering" thread.

Joke aside, I'm very much impressed. Btw, you can see the LOD kick in on the beard when the camera pulls back.
 
Dwarf or not, a realistic character should've taken its lead from the actor IMHO. And the legs would still be strange.

Also, I think this is the same guy that did the Emilia Clarke UE4 character... That one was quite good as well.
 
Dwarf or not, a realistic character should've taken its lead from the actor IMHO. And the legs would still be strange.

Also, I think this is the same guy that did the Emilia Clarke UE4 character... That one was quite good as well.

Yep, that's the one:
 
How much time does it take to make these HQ assets? Has the length of time changed for current gen vs last gen, or is it pretty much the same due to the high res/poly versions still needed to be made + creating the in-game version?
 
I believe creating the actual high-poly models takes about the same time as before, with a potential increase in many cases where artists have to aim harder for realism now that real-time graphics are supposed to be more capable than before. What we considered realistic last gen is less realistic by today's standards (e.g. see Snake in MGS4 and compare him with Venom Snake in MGS5). Shaders, textures and the like are also expected to have jumped in quality. Artists need to be more precise if their game is less stylized and more realistic, because viewers pick out the unrealistic parts more easily now.
Of course, in many cases last gen, artists were producing their models regardless of technical limitations, for use in cutscenes or FMVs, or even just to get an idea of a model's aesthetics before making the in-game version. Because of this, times haven't increased as much as most people think.
 
Is there an automatic path from a high poly model to a low poly model? Or do the tools just create approximations that have to be hand crafted for lower poly models?
 
Is there an automatic path from a high poly model to a low poly model? Or do the tools just create approximations that have to be hand crafted for lower poly models?

Quite a few games are using Simplygon for that task, but I'm not aware of the specifics (for example, whether you have to create your LOD_0 for the game beforehand).
 
Quite a few games are using Simplygon for that task, but I'm not aware of the specifics (for example, whether you have to create your LOD_0 for the game beforehand).
Cheers for the link Clukos, delighted to see 'auto-magic' hasn't left the lexicon yet too! They certainly have some glowing testimonials, but then again my company doesn't put the complaints on the front page either. If it works as well as they claim, hell, even if it only cut 50% of the time needed for multiple LODs, it would be a very useful tool.
 
Oh wowza! Is that some kind of POM texture on Dawlin's face? It seems to have a lot of depth. The hair rendering and lighting are both very convincing too; not sure if that's the kind of quality we could get even in the PS5 gen though, since the rest of the environment detail has to keep up too.
 
Well yeah, I don't think you would ever use a max-detail LOD in regular gameplay segments, unless you are on PC of course. I think that level of detail is doable in real-time cutscenes in the future.
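For anyone curious how that gameplay-vs-cutscene split might look in code, here is a minimal, entirely hypothetical sketch of distance-based LOD selection with a cinematic override; the threshold values are made up for illustration:

```python
# Hypothetical sketch: picking a character LOD by camera distance,
# with a separate "cinematic" flag forcing the max-detail model in cutscenes.

def select_lod(distance: float, in_cutscene: bool, thresholds=(5.0, 15.0, 40.0)) -> int:
    """Return a LOD index: 0 = max detail, higher = coarser.

    thresholds are hypothetical world-space distances (metres) at which
    the renderer steps down to the next LOD.
    """
    if in_cutscene:
        return 0  # cutscenes can always afford the full-detail model
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: coarsest LOD

print(select_lod(3.0, False))    # close-up gameplay -> 0
print(select_lod(25.0, False))   # mid distance      -> 2
print(select_lod(100.0, True))   # cutscene override -> 0
```

Real engines blend or dither between levels rather than hard-switching, which is why you sometimes still catch the pop (like on the beard mentioned earlier).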

I think materials are getting there, but things like hair, and the complex rigs and weighting that come with them, are still a steep hill to climb. No matter how good some of these in-game models are, in motion, clipping, UV oddities and weird-looking deformations are still common, and I don't see that going away anytime soon.
 
Yeah, convincing muscle deformation kind of requires a layer of muscle simulation underneath the skin that makes things wobble and jiggle in motion and be affected by the sliding of bones. That's gotta be some ILM stuff that won't be available in real time for a while, I assume.
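The cheap real-time stand-in for that kind of flesh sim is usually a damped spring ("jiggle bone") that lags behind the bone-driven target. A toy 1D sketch, with made-up stiffness/damping numbers, just to show the idea:

```python
# Toy sketch of secondary "jiggle" motion: a damped spring that makes a
# soft-tissue point lag behind and wobble around its bone-driven target.
# Real muscle/flesh sims (film pipelines) are far more involved than this.

def simulate_jiggle(targets, stiffness=60.0, damping=8.0, dt=1 / 60):
    """Integrate a 1D damped spring toward a moving target position."""
    pos, vel = targets[0], 0.0
    out = []
    for target in targets:
        accel = stiffness * (target - pos) - damping * vel
        vel += accel * dt          # semi-implicit Euler keeps this stable
        pos += vel * dt
        out.append(pos)
    return out

# The bone snaps from 0 to 1; the jiggle point overshoots, then settles.
trace = simulate_jiggle([0.0] * 10 + [1.0] * 110)
print(f"peak {max(trace):.3f}, settled {trace[-1]:.3f}")
```

Because the spring is underdamped it overshoots past the target before settling, which is exactly the wobble you want on soft tissue.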
 
PC version of RotTR
For-Fansites-2.png

For-Fansites-1.png
 
Was going to post the exact same thing :smile:

Not sure if we get a photo mode in RotTR so we can replicate those shots, though. Already bought the game on Steam, will post some screens of Lara and other characters when I get my hands on it. Also, this time around it seems like they based her model on her actress more than the reboot and the EE did.
 
It's certainly fun to play around with that character creator, though I'm finding it's not that easy to change a face to look like someone you know. Some great matches have been created in that NeoGAF thread, but that seems to be mostly because the presets were already close enough. IMHO one of the biggest limitations is that the hairlines are so fixed that you cannot change the proportions of the head much. Still, I really like the way this editor works; it is certainly one of the best so far, and you can get some results that look a lot better than most.

@AlNets I think DOA should copy this ... ;)

sirjjhk.jpg
 
How much time does it take to make these HQ assets? Has the length of time changed for current gen vs last gen, or is it pretty much the same due to the high res/poly versions still needed to be made + creating the in-game version?

I'm not that familiar with current workflows, or even just how many wildly different approaches there might be, but I can give a few notes - partially based on some of the stuff I've seen.

Next gen has introduced significantly increased poly and texture budgets and more shader complexity. So it is an expectation and a competitive advantage to make use of these possibilities.

The high res source assets are quite a lot more detailed nowadays. It's not uncommon to see gigabyte-sized ZBrush files for a single character, with poly counts of 20 to 40 million or so, distributed across various parts.
This is in part a consequence of adding more visual complexity - from belts and pouches through SF armor and gadgets to various ornamentations and such. Sure, Gears of War had a lot of this stuff, but most of that was copied or instanced many times - nowadays it's more common to have a lot of individual detail. So even the simple task of creating all this will take more time; and then there are the artistic aspects, fine tuning and balancing every little element until the complete result is pleasing enough.
The other side is that every single element can now be detailed even further, because the extra work will be visible in the final asset. This will also add to the workload.

The low res assets are also more detailed, so more time has to be spent on creating models, UVs, and then extracting various texture maps (normals, cavity, AO etc.) from the high res model. 100K polygons is pretty average nowadays; some games may go even further. Painting more textures at higher resolutions is also more costly, as is creating more shaders at increased complexity and crafting more realistic, more detailed hair and fur.
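To make the map-extraction step above a bit more concrete: real bakers ray-cast from the low-poly cage to the high-poly sculpt, but the encoding itself can be illustrated with the simpler case of deriving a tangent-space normal map from a height field. A minimal sketch (all parameters invented):

```python
# Minimal sketch of one texture-map extraction step: deriving tangent-space
# normals from a height field via finite differences. Real normal-map bakers
# ray-cast low-poly -> high-poly instead; this just shows the encoding.

import math

def height_to_normals(height, scale=1.0):
    """height: 2D list of floats; returns per-texel (x, y, z) unit normals."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # central differences, clamped at the borders
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * scale
            nx, ny, nz = -dx, -dy, 2.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

flat = [[0.0] * 4 for _ in range(4)]
n = height_to_normals(flat)
print(n[1][1])  # flat surface -> straight-up normal (0.0, 0.0, 1.0)
```

A real baker would then remap each component from [-1, 1] into [0, 255] before writing the texture.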

On the other end, there are also some gains thanks to more advanced tools. Some of this comes from off-the-shelf software, where a lot of effort is put into developing more capable apps and such.
Then there are the various custom tools developed at all the leading studios; probably the best known example is ND's procedural shader elements for stuff like stitches or dirt. These can seriously speed up asset creation by removing some of the texturing work; not to mention that it's probably a non-destructive step, so artists can keep on painting the textures and still keep the procedural elements on top intact.
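The non-destructive layering idea can be sketched very roughly: the painted base and the procedural layer stay separate, and the final texel is recomposited on demand, so repainting the base never bakes in the dirt. The noise function and all values below are invented for illustration:

```python
# Hedged sketch of non-destructive procedural layering: base texture and
# procedural dirt mask are kept as separate layers and combined per texel.

import math

def dirt_mask(u, v, frequency=8.0):
    """Cheap procedural mask in [0, 1]; stands in for a real noise function."""
    return 0.5 + 0.5 * math.sin(u * frequency) * math.cos(v * frequency)

def composite(base_color, dirt_color, u, v, dirt_strength=0.6):
    """Lerp base -> dirt by the procedural mask; nothing is baked down."""
    m = dirt_mask(u, v) * dirt_strength
    return tuple(b * (1 - m) + d * m for b, d in zip(base_color, dirt_color))

base = (0.8, 0.6, 0.5)   # painted skin/leather tone (example value)
dirt = (0.2, 0.15, 0.1)  # grime colour (example value)
print(composite(base, dirt, 0.25, 0.75))
```

Since `base` is only ever read, an artist can keep repainting it and the dirt layer stays intact on top, which is the whole point of the workflow described above.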

Still, I think that in the end, these newer assets are quite a lot more work intensive.
 
Is there an automatic path from a high poly model to a low poly model? Or do the tools just create approximations that have to be hand crafted for lower poly models?

This is actually a more complex question.

There are some quite advanced off-the-shelf tools available that can decimate a super high res ZBrush sculpt down to more manageable polygon counts, and also some pretty advanced auto-UV tools.
There are also apps to create textures with really finely controllable proceduralism, like Substance Designer.
One of the most interesting talks from last year's GDC was about integrating these tools into an asset production pipeline. The main point was to use them for rapid prototyping: you build a starting library of shaders, skeletons, parameter values and such, and you write some code (mostly scripts for the DCC apps and some exporters). Then you can take a work-in-progress ZBrush sculpt and literally publish it into the engine as a fully textured, shaded and rigged asset with a single click. Sure, it will probably not stand up to close scrutiny, but you will have a pretty clear idea of the final result; and for some background assets, the auto-generated version may even be sufficient.
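For a feel of what "automatic decimation" means at its very crudest, here is a toy vertex-clustering sketch; production tools use far smarter error-driven edge collapses, so treat this purely as an illustration of the concept:

```python
# Toy illustration of automatic decimation via vertex clustering, the
# crudest cousin of what real reduction tools do. Vertices are snapped to a
# 3D grid; each occupied cell becomes one vertex, and triangles whose
# corners collapse into the same cell are dropped as degenerate.

def cluster_decimate(vertices, triangles, cell=0.5):
    """vertices: list of (x, y, z); triangles: list of 3 vertex indices."""
    cell_of = {}        # grid cell -> new vertex index
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((x, y, z))  # first vertex represents the cell
        remap.append(cell_of[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:    # drop collapsed triangles
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles

# Two nearby vertices fall into the same cell, killing one of two triangles.
verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 3), (1, 2, 3)]
v2, t2 = cluster_decimate(verts, tris)
print(len(v2), len(t2))  # 3 vertices, 1 triangle left
```

Quadric-error edge collapse (the standard approach) picks which vertices to merge by measuring how much each collapse distorts the surface, instead of using a fixed grid.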
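The "single click publish" workflow from that talk could be sketched as a chain of stages; every name and stage below is invented, with stubs standing in for the real DCC scripts and exporters, since the point is the chaining rather than the individual steps:

```python
# Loose, hypothetical sketch of a one-click publish pipeline: each stage is
# a stub standing in for a real DCC script or exporter. All names invented.

def publish_asset(sculpt_path, out_dir="build"):
    """Run a WIP sculpt through an automated prototype pipeline."""
    steps = [
        ("decimate", lambda a: {**a, "polys": 100_000}),        # auto-reduce
        ("auto_uv", lambda a: {**a, "uvs": True}),              # unwrap
        ("bake_maps", lambda a: {**a, "maps": ["normal", "ao"]}),
        ("assign_shader", lambda a: {**a, "shader": "default_skin"}),
        ("bind_skeleton", lambda a: {**a, "skeleton": "biped_v1"}),
    ]
    asset = {"source": sculpt_path, "polys": 30_000_000}
    for name, step in steps:
        asset = step(asset)
        print(f"[{name}] done")
    asset["published_to"] = f"{out_dir}/{sculpt_path.rsplit('.', 1)[0]}.asset"
    return asset

result = publish_asset("hero_head.ztl")
print(result["published_to"])  # build/hero_head.asset
```

The library of default shaders, skeletons and parameter values mentioned in the post is what lets each stub-like stage produce a usable (if rough) result without artist input.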

However, if you need a final asset that's technically sound and fit for all the later stages of the production pipeline, a lot of these procedural tools will start to break down. Sure, it is still possible to utilize them to various extents, but we're not yet at a point where software can properly replace a talented artist. There is a lot of research and development into such automatic tools; but for now, the more user friendly ones are the better ones, those that can enable artists to realize their ideas faster. So yes, hero and even secondary level assets are still mostly hand built and manually textured; even though the tools can help a lot more.
 
Quite a few games are using Simplygon for that task, but I'm not aware of the specifics (for example, whether you have to create your LOD_0 for the game beforehand).

This is more suited for static background elements that do not require any rigging, IMHO.
 
This is more suited for static background elements that do not require any rigging, IMHO.

They have a tutorial for rigged characters on their site: https://www.simplygon.com/knowledge-base/tutorials/optimizing-a-rigged-character

And some excerpts from their documentation on features they offer:
If symmetry aware reduction is enabled, Simplygon will match vertices and texture coordinates that are mirrored over a specified plane and keep the symmetric features in the generated LOD.
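The symmetry-matching idea quoted above boils down to finding vertex pairs that mirror each other across a plane; a rough sketch (plane fixed at x = 0, tolerance made up), not Simplygon's actual algorithm:

```python
# Rough sketch of symmetry matching: pair up vertices that mirror each
# other across the plane x = 0 within a tolerance, so a reducer could
# constrain mirrored pairs to collapse identically. Not a real API.

def mirror_pairs(vertices, tolerance=1e-4):
    """vertices: list of (x, y, z). Returns index pairs (i, j) where
    vertex j is approximately vertex i mirrored over x = 0."""
    pairs = []
    for i, (x, y, z) in enumerate(vertices):
        if x <= 0:
            continue  # search only from the positive side to avoid duplicates
        for j, (mx, my, mz) in enumerate(vertices):
            if (abs(mx + x) < tolerance and abs(my - y) < tolerance
                    and abs(mz - z) < tolerance):
                pairs.append((i, j))
                break
    return pairs

verts = [(1.0, 2.0, 0.0), (-1.0, 2.0, 0.0), (0.0, 5.0, 0.0), (3.0, 1.0, 1.0)]
print(mirror_pairs(verts))  # [(0, 1)] -- only the first two mirror each other
```

A production tool would use a spatial hash instead of this O(n²) scan, and would let you pick the mirror plane.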

ProxyLOD now has support for character models. Simplygon is able to create light-weight LODs or low-poly models of animated characters while keeping skinning and animation data, thus allowing, for example, rendering of large crowds at high frame rates and automatic porting of animated high-res assets to mobile.

In addition to full skinning support, users can optionally reduce the skinning complexity of a rigged mesh by using BoneLOD. BoneLOD can be set to remove a specific number of bones for a LOD. Optionally, it can be run as a fully automated process that removes bones which do not influence the result much.
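The automated-BoneLOD idea can be sketched as: sum each bone's skin-weight influence across the mesh, drop bones below a threshold, and renormalise the surviving weights per vertex. The data layout and names below are invented for illustration, not Simplygon's actual API:

```python
# Hedged sketch of the BoneLOD concept: remove bones with negligible total
# skin-weight influence, then renormalise each vertex's remaining weights.

def bone_lod(weights, threshold=0.05):
    """weights: per-vertex dicts {bone_name: weight}. Removes bones whose
    summed influence over all vertices is below threshold, renormalising
    the remaining weights so each vertex still sums to 1.0."""
    totals = {}
    for vw in weights:
        for bone, w in vw.items():
            totals[bone] = totals.get(bone, 0.0) + w
    keep = {b for b, t in totals.items() if t >= threshold}
    reduced = []
    for vw in weights:
        kept = {b: w for b, w in vw.items() if b in keep}
        s = sum(kept.values())
        reduced.append({b: w / s for b, w in kept.items()})
    return reduced

skin = [
    {"spine": 0.7, "clavicle": 0.28, "twist_helper": 0.02},
    {"spine": 0.9, "clavicle": 0.09, "twist_helper": 0.01},
]
out = bone_lod(skin)
print(sorted(out[0]))                  # the weak twist_helper bone is gone
print(round(sum(out[0].values()), 6))  # weights still sum to 1.0
```

Helper and twist bones are the usual casualties here, which is fine at a distance where their subtle deformation contribution is invisible anyway.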

 