Speculation on the polycount or quality of next-gen character models.

Square Enix is using baked cloth animation in the Luminous Engine:

http://game.watch.impress.co.jp/docs/news/20111012_483045.html

This is the kind of stuff that would be wonderful to see. No longer static or barely-moving clothing that reminds one of the last decade. I wonder if something similar could be done with hair.

Skin quality already approaches photorealism in some stills (Crysis, Battlefield, NVIDIA's human head demo), and clothing texture quality is up there too. All that's needed is better animation (clothes and hair, as facial animation is already nearing photorealism in some demos), lighting, and image quality.

At least for racing we'll be right up there next gen; GT5 in photo mode shows what's possible if you improve IQ drastically. Hopefully all next-gen racers deal with in-game image quality issues, as they detract from the race.
 
But this could work, couldn't it? You'd just need to store a shit ton of point-level animation in memory, I suppose. Didn't Quake 3 Arena's animation work kind of like that too? I mean, you don't have to make the mesh quite as dense as in the video, right? Just how would you incorporate animation like that into a game like Uncharted, though? That game usually blends dozens of different animations together in real time.
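(For reference, a minimal sketch of what "storing point-level animation" amounts to, roughly the Quake 3 MD3 approach of keeping a full vertex position set per keyframe and lerping between the two nearest frames at playback. The names and layout are invented for illustration, not anyone's actual format.)

Code:
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One baked pose: a position for every vertex of the mesh.
using VertexFrame = std::vector<Vec3>;

// Sample a baked per-vertex animation by lerping between the two nearest
// stored frames, MD3-style. 'time' is in frames (e.g. 7.3 = 30% past frame 7).
void sampleBakedMesh(const std::vector<VertexFrame>& frames, float time,
                     VertexFrame& out)
{
    const std::size_t base = static_cast<std::size_t>(time);
    const std::size_t f0 = base % frames.size();
    const std::size_t f1 = (f0 + 1) % frames.size();   // loop the clip
    const float t = time - static_cast<float>(base);   // blend factor 0..1

    const VertexFrame& a = frames[f0];
    const VertexFrame& b = frames[f1];
    out.resize(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        out[i].x = a[i].x + (b[i].x - a[i].x) * t;
        out[i].y = a[i].y + (b[i].y - a[i].y) * t;
        out[i].z = a[i].z + (b[i].z - a[i].z) * t;
    }
}

The catch is memory: at 12 bytes per vertex, a 10,000-vertex mesh baked at 30 frames per second is roughly 3.5 MB per second of animation, which is why MD3 quantized positions to 16 bits per component and why you wouldn't want the mesh as dense as in the video.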
 
Er, where's the realtime part from that video?

The papers on the page say the process used for the sim is Verlet integration, and they cite NVIDIA and AMD papers, which suggests realtime use.
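(For reference, a bare-bones sketch of the Verlet-plus-distance-constraints scheme those papers describe, the classic Jakobsen approach; this is the generic technique, not Square's code, and it shows why a coarse cloth grid is cheap enough to run at interactive rates.)

Code:
#include <cmath>
#include <vector>

struct Particle {
    float x, y, z;        // current position
    float px, py, pz;     // previous position (velocity is implicit)
    bool  pinned;         // attachment points don't move
};

struct Constraint { int a, b; float rest; };  // keep two particles 'rest' apart

// One simulation step: Verlet integration followed by a few relaxation
// passes over the distance constraints.
void stepCloth(std::vector<Particle>& ps, const std::vector<Constraint>& cs,
               float dt, float gravity = -9.81f, int iterations = 4)
{
    for (Particle& p : ps) {
        if (p.pinned) continue;
        const float vx = p.x - p.px, vy = p.y - p.py, vz = p.z - p.pz;
        p.px = p.x; p.py = p.y; p.pz = p.z;
        p.x += vx; p.y += vy + gravity * dt * dt; p.z += vz;
    }
    for (int it = 0; it < iterations; ++it) {
        for (const Constraint& c : cs) {
            Particle& a = ps[c.a]; Particle& b = ps[c.b];
            const float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            const float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (len < 1e-6f) continue;
            const float corr = 0.5f * (len - c.rest) / len;  // split 50/50
            if (!a.pinned) { a.x += dx * corr; a.y += dy * corr; a.z += dz * corr; }
            if (!b.pinned) { b.x -= dx * corr; b.y -= dy * corr; b.z -= dz * corr; }
        }
    }
}

Each particle is a handful of adds per step and each constraint a square root, so a grid of a few thousand particles is a trivial amount of arithmetic per frame; the expensive part of offline cloth is accurate collision handling and material response, which a demo can cut down drastically.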

It also says it is running in the Luminous Engine: "All graphics [made with] Square Enix's next-generation game engine."

At least this simple cloth runs at 200+fps

The video of the full-body human says it is running at 60fps, which would be rather odd if this weren't related to realtime content at all. The YouTube channel consists of what seems like 13 videos related to the Luminous Engine (the above cloth video is from the same channel, and all the vids are very good), and they seem to be realtime technical demonstrations.

Considering the previous realtime demo from about 6 years ago (E3 2005, right?), I wouldn't consider this an unreasonable jump; we could assume it is running on a high-end SLI setup. Or is there something that would intrinsically make scaling realtime cloth animation quality up impossible?
 
The video of the full-body human says it is running at 60fps, which would be rather odd if this weren't related to realtime content at all.

I have a very hard time believing that. We work with maybe 1/4 of the poly count, or even less, and the simulations take a night to run. It can't be that fast even on dedicated hardware.
 
I would think the simulations you make are mostly physically correct, while the one in the video is just good enough and can thus run several orders of magnitude faster :)
 
The mesh might be tessellated; we subdivide for rendering too (it smooths out jagged folds as well).

Still, I can't see how it can run in real time. I think the video only shows how they replay some precalculated simulations in real time, not how fast the original calculations are.

Notice how the walking character runs a loop, too. They're probably streaming highly compressed mesh deformation data from some kind of intermediate cache.
Edit: we also cache cloth simulations to disk. Then the riggers do a cleanup pass on the geometry cache with simple mesh-editing tools (clusters mainly) to remove intersection problems and fine-tune complex movements, like Ezio removing/pulling up his hood.
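(A hedged sketch of what streaming a baked geometry cache could look like: per-frame vertex deltas quantized to 16 bits, applied on top of a running pose at playback. The format and names are made up for illustration; real caches such as Alembic or proprietary ones differ, but the principle is the same.)

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// One cached frame: per-vertex position deltas from the previous frame,
// quantized to 16 bits per component to keep the stream small.
struct CachedFrame { std::vector<int16_t> deltas; };  // 3 values per vertex

// Advance the running pose by one cached frame. Decoding is sequential;
// a real format would add periodic keyframes so playback can seek.
void applyCachedFrame(const CachedFrame& frame, float scale,
                      std::vector<Vec3>& pose)
{
    for (std::size_t v = 0; v < pose.size(); ++v) {
        pose[v].x += frame.deltas[3 * v + 0] * scale;
        pose[v].y += frame.deltas[3 * v + 1] * scale;
        pose[v].z += frame.deltas[3 * v + 2] * scale;
    }
}

Replaying that is just streaming bandwidth and one add per component, which is why a cache can trivially play back at 60fps even if the simulation that produced it took all night.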

This might work very well for a game that only uses canned animations, like an FF - in the interactive gameworld part you can only walk/run around, and in the battles all movement is turn-based and heavily controlled.
Whereas something like Uncharted or Alan Wake will require a lot more flexibility - so everyone in UC has tight clothing with normal-mapped folds appearing based on bone rotations, and AW has some very simple low-res simulation for Alan's jacket (which is another reason why I refuse to simply believe that Square can just increase simulation detail by several orders of magnitude compared to what Remedy did on the X360). Oh, and AC and many other games use simple bones and physics to deform the long pieces of clothing.
 
I have a very hard time believing that. We work with maybe 1/4 of the poly count, or even less, and the simulations take a night to run. It can't be that fast even on dedicated hardware.

They're not simulating at those poly counts. One of the slides in the linked presentation says that they're simulating around 40x40 particles, and then using HW tessellation to fill in the rest.
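(For scale, with assumed numbers since the slides don't give the tessellation factor: a 40x40 grid is 1,600 simulated particles and roughly 3,100 structural distance constraints, while tessellating its 39x39 quads by, say, a factor of 16 gives on the order of 25,000 rendered vertices. The simulation cost scales with the coarse grid; the visual density comes from GPU geometry amplification.)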
 
Hmm, that sounds a little more likely to happen in real time, but it will lose significant accuracy as well. It's probably not true for the dress or robe though (as they've mentioned that it's baked), only for the tablecloth.
 
Laa-Yosh, you know how for a console game the in-engine cutscenes usually display better lighting, higher-precision shaders, shadows and whatnot. Do you think a modern GPU like a GTX 580 could handle the in-engine cutscene quality of, say, Uncharted 3 in realtime?
 
Laa-Yosh, you know how for a console game the in-engine cutscenes usually display better lighting, higher-precision shaders, shadows and whatnot. Do you think a modern GPU like a GTX 580 could handle the in-engine cutscene quality of, say, Uncharted 3 in realtime?

I somehow doubt that GPU can handle them, but I hope next-gen consoles will. I feel that we don't get the same difference from generation to generation as we used to.
Uncharted's cutscenes are supposed to be using the same assets as the realtime scenes. Expecting them to look that good, with everything else improved as well, is probably the least we should expect from next-gen consoles.
 
The thing is, we have no idea about the quality settings, and those are very important. What is the number of samples per pixel? Is there some oversampling or other quality increase for motion blur or depth of field?

Only ND can tell for sure, for now.
 
So here on the left we have a high-res CG render of the character Sev from Killzone 3. If we compare it to the in-game model on the PS3, I personally think the leap in quality on the PS4 should get dangerously close to the CGI model, with the help of the tessellation, better skin shaders and higher-res textures that next-gen hardware should offer, of course.
http://i43.tinypic.com/n4cf93.jpg
 
Maybe if devs had infinite time to bring the entire level of quality to par for in-game.
 
So here on the left we have a high-res CG render of the character Sev from Killzone 3. If we compare it to the in-game model on the PS3, I personally think the leap in quality on the PS4 should get dangerously close to the CGI model, with the help of the tessellation, better skin shaders and higher-res textures that next-gen hardware should offer, of course.
http://i43.tinypic.com/n4cf93.jpg

There's a problem with this view. It's essentially saying, "Things would look this good if we had good enough hardware." The reality is, that's not true at all. All those uber-poly models, textures, shaders and effects need to be made by an artist, who probably has a BS and gets paid good money, probably at least 50-60k a year. The more polygons, textures, etc. you put on something, the more time it takes. Tools like tessellation can certainly help, but it's still mostly a time investment.

Better hardware isn't going to give anyone more time for free. It will help with certain things, and in cases where you're clearly hardware-limited, the right hardware lets you make things look better for the same time investment. However, IMO, that is not the majority of cases. Look at the Polycount thread comparing the GoW3 renders vs. the in-game models. Is cranking the poly count up 5 times going to make them look much better? Not really. 10 times, maybe? Not really. Are slight texture boosts going to make them better? One level isn't going to do much. A couple of levels might look better, but stepping a texture up two levels is a 16x investment, and I doubt anyone is eyeballing 16GB of RAM in the next console. Virtual texturing largely makes this irrelevant anyway.
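(To spell out that 16x figure, assuming the usual meaning of a texture "level": each step doubles the resolution in both dimensions, so it's 4x the texels and memory; two steps is 4 x 4 = 16x. A 2048x2048 map in the same format costs sixteen times what a 512x512 one does, and the authoring time grows accordingly.)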

I really think that this upcoming gen people are too focused on cool hardware, when it's going to be largely artist/programmer effort and investment that distinguishes it from last gen.
 
Yeah... You're only really going to appreciate the quality in cutscenes, and those can largely be done with offline-rendered videos and the ultra-high-poly models, where they can also jack the lighting, shadowing and other cinematography effects up the wazoo. Of course, that means you don't see customized characters (if the game even has such a feature), but there has to be a conscious trade-off somewhere.


-----

Gears' in-game models certainly do have their sharp edges, but I can't say it was awful whilst playing. That's the sort of stuff I only notice in screenshots, and also because of the angles being used.

Sure, maybe you can do crazy high quality in real time a la Samaritan, but I gather that the point of that little section was showing that the technology exists to do such morphology in the first place (DX11). What games will even be able to do it in real time? :p There is stuff like the Flood/Halo 3 and Lambent/Gears 3 morphing, but that's maybe more an issue of the flexibility of authoring the assets. At least I took the DX11 stuff to just make such things easier to do with fewer restrictions. They can pick a suitable tessellation factor for performance. Not sure how tessellation factors into facial animation...

There are going to be a whole lot of other things that need attention, e.g. the environment, and ultimately there is a bit of a ceiling on base triangle throughput (sans tessellation). Setup rate and core clock speed haven't seen gigantic leaps in the last 5 years; nothing comparable to ALUs.

-----

Anyways, there's such a wide variety in games where the quality of models is dictated by priorities and game performance. If you're just talking about capabilities, well, Fight Night 4 already showcases really impressive modeling on current gen due to the nature of the game, but who knows how much work went into it despite having real-world people to base the models on.

Is that economical for a full-on action game with larger and more detailed environments? How much work was it to make the 60K-poly in-game models for Resident Evil 5?

Too many considerations.
 
Tessellation isn't a good idea for character faces; you could make use of more polygons for more detailed deformations instead. The vertices created by tessellation can't be controlled directly, so they're basically wasted for animation.
Naughty Dog already has a head mesh that's very, very dense, but I'm sure they could use even more polygons there to get even better control. Also, moving to blendshapes instead of bones would be a good way to spend extra memory. A smoother nose or eyelid is only visible in close-ups, but better control over deformations is more visible.
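(A minimal sketch of the blendshape idea, with made-up names: each shape stores per-vertex offsets from the neutral head, and the animated face is the neutral mesh plus a weighted sum of those offsets. The memory cost is one full set of deltas per expression, but every vertex is under the artist's direct control, which is exactly what tessellation-generated vertices lack.)

Code:
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One blendshape target: per-vertex offsets from the neutral head mesh.
struct BlendShape { std::vector<Vec3> delta; };

// Evaluate the face as neutral + sum(weight_i * delta_i).
// 'weights' are the animated channels (e.g. "smile" = 0.7, "blink" = 1.0).
void evaluateFace(const std::vector<Vec3>& neutral,
                  const std::vector<BlendShape>& shapes,
                  const std::vector<float>& weights,
                  std::vector<Vec3>& out)
{
    out = neutral;
    for (std::size_t s = 0; s < shapes.size(); ++s) {
        const float w = weights[s];
        if (w == 0.0f) continue;                  // skip inactive shapes
        const std::vector<Vec3>& d = shapes[s].delta;
        for (std::size_t v = 0; v < out.size(); ++v) {
            out[v].x += w * d[v].x;
            out[v].y += w * d[v].y;
            out[v].z += w * d[v].z;
        }
    }
}

In practice, sparse storage (only the vertices a shape actually moves) keeps the memory hit manageable.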

Yeah, then again, KZ3 characters weren't as detailed as the ones in Uncharted or The Last of Us. Also, the LDR color depth is very, very evident and disturbing. They should fix those things first.

I mean this really is pretty damn close to as good as it can get with realtime hardware:

(attached image: Ellie.png)


The next big step in quality could come from implementing raytracing for the lighting and shadows, rather than from more polygons or textures... and of course from better hair (even if this is one of the best examples of realtime hair I've seen).
 
Still, I can't see how it can run in real time. I think the video only shows how they replay some precalculated simulations in real time, not how fast the original calculations are.

Did anyone even bother to read Scofield's original message? It's right in there: "Square Enix is using baked cloth animation in the Luminous Engine"
 