Next-gen graphical pet peeves

Like Ac...Joshua and others have said, bad animations - especially character animations.
Too bad this is a problem that isn't likely to be solved with a groundbreaking new algorithm, or so it seems to me at least.
My hope is that middlewares/engines will provide better and better character animation packages to take as much work from the animators as possible without restraining flexibility/power of expression too much. But many good animators (and thus a lot of money) will still be needed for a long time to come where animation is concerned, I'm afraid.

I disagree...

I think the best solution would be much more intelligent animation systems developed by programmers to aid animators in getting in-game animation to work realistically. The biggest problem with animation isn't specifically in the animations themselves (what could be more realistic than mocap?) but the systems used to blend animations smoothly or dynamically from one to the next.

The best solution would be context/AI-driven animation systems which could be integrated into the engine and the animation tools, letting the animators define the properties of the contexts/AI themselves in order to get them to work well enough.
That way it would just be a case of setting up the pre-canned anims and creating descriptions of their context, and then the engine could dynamically alter the animations in real time so they change and adapt depending on the situation.
This kind of technology is where I see the future of animation going.
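As a rough illustration of the blending the post above describes, here is a toy sketch (joint names and angles are made up, not from any real engine) of the weighted interpolation an engine performs when cross-fading between two pre-canned poses:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b."""
    return a + (b - a) * t

def blend_poses(pose_a, pose_b, t):
    """Blend two poses (joint name -> angle in radians) with weight t in [0, 1]."""
    return {joint: lerp(pose_a[joint], pose_b[joint], t) for joint in pose_a}

# Cross-fade from a 'walk' pose to a 'run' pose over a short transition window.
walk = {"hip": 0.10, "knee": 0.35, "ankle": -0.05}
run  = {"hip": 0.25, "knee": 0.60, "ankle": -0.15}

for frame in range(5):
    t = frame / 4                      # blend weight ramps 0 -> 1
    pose = blend_poses(walk, run, t)
    print(f"t={t:.2f} knee={pose['knee']:.3f}")
```

A real system would interpolate quaternions per joint rather than scalar angles, but the ramped blend weight is the same idea.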
 
I disagree...

I think the best solution would be much more intelligent animation systems developed by programmers to aid animators in getting in-game animation to work realistically. The biggest problem with animation isn't specifically in the animations themselves (what could be more realistic than mocap?) but the systems used to blend animations smoothly or dynamically from one to the next.

I'd love to agree with you (and hope it will work out eventually), but I actually see the biggest problem with the animations themselves. Mocap will only get you so far. There must be a reason why so much - especially regarding facial animation - in movie CGI is still keyframed, with mocap only used as a reference (if used at all). And I'm afraid that even with tools like Face Robot, good (facial) animation will still take a lot of work and thus a lot of budget. (And because of high costs and no perceived fundamental necessity for the overall quality of a game, it is likely to be the kind of budget that's cut first...)

But as I said - I hope I'm being too pessimistic and people will find a way to provide general, high quality facial/character animation at a reasonable cost...
 
There must be a reason why so much - especially regarding facial animation - in movie CGI is still keyframed, with mocap only used as a reference (if used at all).
Well, with mocap, you don't really have any choice but to even it out after you get it, or use it as a reference, because all mocap is noisy to start with and has the occasional blip or error where line-of-sight to a marker was obscured or something, so some extrapolated guesses turn out wrong. A lot of outsourced mocap studios might do some cleanup work before delivery, but even that is still only a starting point, because the cleanup they do aims to get you an error-free but otherwise direct mocap result, not really a "production quality" animation.
 
I disagree...

I think the best solution would be much more intelligent animation systems developed by programmers to aid animators in getting in-game animation to work realistically. The biggest problem with animation isn't specifically in the animations themselves (what could be more realistic than mocap?) but the systems used to blend animations smoothly or dynamically from one to the next.

The best solution would be context/AI-driven animation systems which could be integrated into the engine and the animation tools, letting the animators define the properties of the contexts/AI themselves in order to get them to work well enough.
That way it would just be a case of setting up the pre-canned anims and creating descriptions of their context, and then the engine could dynamically alter the animations in real time so they change and adapt depending on the situation.
This kind of technology is where I see the future of animation going.

What you're talking about already exists. Soul Calibur 2/3 uses it. I think the problem is most games just don't have enough mocap animations to connect different possible postures. For a fighting game it's feasible because the level is small and static, so a lot of the animations can be permanently stored in RAM. For a game with wide open areas like an FPS, storing a bunch of mocap animations for a bunch of on-screen characters uses up quite a bit of memory. Of course streaming the level will help free up some memory, but I don't think anyone has found a viable solution yet. Then there is animation instancing, which could be a good method since Project Offset will be doing it.
 
What you're talking about already exists. Soul Calibur 2/3 uses it.
Soul Calibur 2/3 used motion tweening to blend animations from one to the next using some form of linear or parabolic interpolation.

What I'm talking about is the kind of procedurally generated animation and animation variance present in upcoming games like Spore, Uncharted and Assassin's Creed.

I think the problem is most games just don't have enough mocap animations to connect different possible postures. For a fighting game it's feasible because the level is small and static, so a lot of the animations can be permanently stored in RAM. For a game with wide open areas like an FPS, storing a bunch of mocap animations for a bunch of on-screen characters uses up quite a bit of memory. Of course streaming the level will help free up some memory, but I don't think anyone has found a viable solution yet. Then there is animation instancing, which could be a good method since Project Offset will be doing it.
The problem is that more pre-canned animations won't help entities in games move more realistically; it will just give them more movement variation.
And as AI becomes more sophisticated, characters will need animation systems built to allow much more flexibility than what is offered in today's systems. Character movement will need to change and adapt in response to the environment, and the only way to do this successfully and comprehensively is to let the system drive a single, relevant animation (such as simply "run") through a context-driven mutator system which shifts and alters the animation on the fly (into something like "run hunched over to avoid the overhead fire, whilst weaving in and out of the obstacles coming your way").
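One way to picture the "mutator" idea: additively layer context-scaled offsets on top of a base animation. This is purely a toy sketch - the joint names, context keys, and weights are invented for illustration, not taken from any shipping system:

```python
def apply_context_mutators(base_pose, mutators, context):
    """Additively layer context-driven offsets onto a base animation pose.

    base_pose: joint -> angle (radians).
    mutators: list of (context_key, offsets), where offsets maps joint ->
    maximum additive angle, scaled by the context signal in [0, 1].
    """
    pose = dict(base_pose)
    for key, offsets in mutators:
        weight = context.get(key, 0.0)      # how strongly this context applies
        for joint, offset in offsets.items():
            pose[joint] = pose.get(joint, 0.0) + offset * weight
    return pose

# Base "run" cycle pose plus a "hunch under overhead fire" mutator.
run_pose = {"spine": 0.0, "hip": 0.25, "knee": 0.60}
hunch = ("overhead_fire", {"spine": -0.8, "hip": 0.1})

# AI reports the fire is half as intense as the worst case:
mutated = apply_context_mutators(run_pose, [hunch], {"overhead_fire": 0.5})
print(mutated["spine"])   # halfway hunched: -0.4
```

The appeal of the additive-layer design is that the base "run" stays a single authored (or mocapped) clip, and the context only perturbs it, rather than requiring a separate clip per situation.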
 
I guess my pet peeves are a lot more simplistic than others'.

Screen tearing and framerate drops - I can live with all sorts of flaws, but these break me out of immersion every time. If I really wanted to deal with stuff like this, I'd buy PC games. IMO, there is no good reason a console game should exhibit these except in corner cases. I've seen lots of games where framerate drops and tearing happen consistently, and it seems to be becoming more common.

None of my Atari 2600 games had screen tearing or framerate drops! ;)
 
So I guess you guys are talking about something else, because like I said, this is already being worked on in games currently in development.

Just check the videos to get an idea on http://www.naturalmotion.com/products.htm

But I think until the first gameplay videos of Uncharted are released people won't really understand what implications it has for games.
 
So I guess that you guys are talking about something else cause like I said this is already being worked on in games currently in development.
But I think until the first gameplay videos of Uncharted are released people won't really understand what implications it has for games.

Assassin's Creed and next-gen LucasArts games will also have procedural animation blending, and they have already been publicly demoed (AC and Indiana Jones). They weren't that mindblowing (especially Indy had a lot of glitches). Though the games were far from completion, so they will be more impressive once they're finished. Also this year's NCAA and NFL are confirmed to have this sort of system.:smile:
 
Ether_Snake said:
I don't know what changed with the PS3 architecture by then but I presume something did.
Nothing to do with hardware - I'm quite certain MGS4's first showing used soft-filtered shadow volumes - the artifacts were exactly the same as in the myriad of PS2 games that used the same technique in the last 6 years (including those I've worked on myself).

Konami most likely started the game on their existing codebase (the one that powered the Silent Hill games etc., and had a PC renderer as well) and then upgraded components to what they really wanted over time - hence the switch to shadow maps in the later demos.

And before you ask - the shadows were not flawless - the softening effect seen is mathematically wrong (though it looks decent enough most of the time). More importantly, while this is a debate without an exact answer, most people (myself included) will tell you that shadow maps, in spite of all their visual shortcomings, are just way more practical on the new generation of hardware.
 
shadow maps, in spite of all their visual shortcomings, are just way more practical on the new generation of hardware.

On Xbox 360 as well? Extrusion on one of the PPX cores, you already have to do a Z pre-pass anyway for predicated tiling, you have tons of vertex shader ALUs to quickly pass through the volumes' geometry, and you have tons of bandwidth and fill-rate (last but not least, when doing the Z pre-pass you do not even have to worry about tiling that portion of your rendering, as it will fit inside the EDRAM)... it almost seems like an architecture designed for high-speed shadow volumes.
 
Panajev said:
you have tons of bandwidth and fill-rate
I disagree.
Bandwidth is irrelevant in this case, and fillrate isn't even close to scaling with the resolution & scene complexity increases (both of which balloon the fillrate demands) compared to last gen.

It's much less practical than last gen (especially if you want to add accurate soft shadowing to volumes), and the alternatives are much more practical than they used to be (especially on PS2). Not a contest really.
 
Well, with mocap, you don't really have any choice but to even it out after you get it, or use it as a reference, because all mocap is noisy to start with and has the occasional blips or errors[..]. A lot of outsourced mocap studios might do some cleanup work before delivery, but even that is still only a starting point, because the cleanup they do aims to get you an error-free but otherwise direct mocap result, not really a "production quality" animation.

And exactly that step from "direct mocap result" to "production quality animation" is where I see the problem.

The stuff arch is talking about is definitely something to look forward to, but it's not the area of my concern. "Incomplete" movement animations are not something that bother me too much, and surely something that can be made better by more intelligent animation systems.
What I'm talking about are high quality character animations, i.e. gestures and facial expressions. Characters in dialogue, in close-up. That's where an intelligent animation system won't be the ultimate saviour, I'm afraid...
And if badly executed, it's quite often one of the main causes of breaking the immersion for me. (E.g. KotOR: Really great game - but boy, that would have rocked so much more with good character animations...)
 
And exactly that step from "direct mocap result" to "production quality animation" is where I see the problem.

The stuff arch is talking about is definitely something to look forward to, but it's not the area of my concern. "Incomplete" movement animations are not something that bother me too much, and surely something that can be made better by more intelligent animation systems.
What I'm talking about are high quality character animations, i.e. gestures and facial expressions. Characters in dialogue, in close-up. That's where an intelligent animation system won't be the ultimate saviour, I'm afraid...
And if badly executed, it's quite often one of the main causes of breaking the immersion for me. (E.g. KotOR: Really great game - but boy, that would have rocked so much more with good character animations...)

But this isn't a problem with the art.

It's a problem with the platform/engine not having the performance to support making the art look better.

Given your example, the best solution would be to increase the number of bones in the character models and the number of blend shapes used for the character animations. Then when the mocap is done, more of the information can be sampled and retained for use in the game.

Without sufficient information with regard to facial animation, it can't ever look "realistic". In the same vein, if the information is there but you don't have sufficient means to represent it, the quality will suffer equally at this end.
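To make the blend-shape point concrete, here is a minimal sketch of morph-target evaluation (the mesh, shape names, and weights are toy values, not from any actual game): each blend shape stores per-vertex deltas from the neutral face, and the final mesh is the neutral plus a weighted sum of active deltas. More shapes and more vertices mean more of the captured performance can be represented.

```python
# Toy 2D "face" mesh; a real face mesh would have thousands of 3D vertices.
neutral = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
blend_shapes = {
    "smile":   [(0.0, 0.0), (0.1, 0.1), (0.0, 0.0)],   # per-vertex deltas
    "brow_up": [(0.0, 0.0), (0.0, 0.0), (0.0, 0.2)],
}

def evaluate_face(neutral, blend_shapes, weights):
    """Final vertex = neutral + sum(weight_i * delta_i) over active shapes."""
    result = []
    for v, (x, y) in enumerate(neutral):
        for name, w in weights.items():
            dx, dy = blend_shapes[name][v]
            x += w * dx
            y += w * dy
        result.append((x, y))
    return result

face = evaluate_face(neutral, blend_shapes, {"smile": 1.0, "brow_up": 0.5})
```

The per-frame shape weights are exactly the "information" being discussed: a mocap or keyframe pipeline with too few shapes simply has nowhere to store the subtle motion.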
 
As far as shadow aliasing is concerned, does this technique proposed by Insomniac solve the problem (page 14)?

http://www.insomniacgames.com/tech/articles/1007/files/shadows_in_rcf.pdf

If so, is there a negative impact of using this technique (performance, quality, difficulty to implement, etc)? I would love to think that this pet peeve of mine might be resolved, as I find it the most irritating of all (it breaks the illusion). I hope that MGS4 uses such a technique, as I have seen a lot of aliasing in this otherwise perfect-looking game.
 
As far as shadow aliasing is concerned, does this technique proposed by Insomniac solve the problem (page 14)?

http://www.insomniacgames.com/tech/articles/1007/files/shadows_in_rcf.pdf

If so, is there a negative impact of using this technique (performance, quality, difficulty to implement, etc)? I would love to think that this pet peeve of mine might be resolved, as I find it the most irritating of all (it breaks the illusion). I hope that MGS4 uses such a technique, as I have seen a lot of aliasing in this otherwise perfect-looking game.
PCF (Percentage Closer Filtering) is used by a lot of games (HS, for example, takes up to 12 PCF samples per pixel). It doesn't fix aliasing problems, but it can improve quality a lot, as it replaces hard-edged low-res shadows with fuzzier/smoother edges.
edit: the clever thing they did is to project shadows only where needed; this reduces the volume filled by receivers a lot (in their game of course - in other games it would be pointless..) so that shadow map resolution can be distributed better.
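For readers unfamiliar with PCF, the core trick is small: instead of one binary depth comparison per pixel, you average the comparison result over a neighbourhood of shadow-map texels, turning a hard edge into a fractional penumbra. A CPU-side sketch with a tiny hand-made depth map (real implementations run in the pixel shader, often with hardware-assisted comparisons):

```python
def pcf_shadow(shadow_map, u, v, receiver_depth, kernel=1):
    """Percentage-closer filtering: average the binary in/out-of-shadow test
    over a (2*kernel+1)^2 texel neighbourhood instead of a single texel."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit = 0
    taps = 0
    for dy in range(-kernel, kernel + 1):
        for dx in range(-kernel, kernel + 1):
            x = min(max(u + dx, 0), w - 1)   # clamp taps to the map edge
            y = min(max(v + dy, 0), h - 1)
            lit += receiver_depth <= shadow_map[y][x]   # 1 if this tap is lit
            taps += 1
    return lit / taps      # fractional lighting -> soft shadow edge

# 4x4 depth map: left half covered by a near occluder (0.3), right half open (1.0).
depth_map = [[0.3, 0.3, 1.0, 1.0] for _ in range(4)]
print(pcf_shadow(depth_map, 1, 1, receiver_depth=0.8))   # partial value at the edge
```

Note this smooths the shadow's edge but cannot recover detail the shadow map never captured, which is why it improves quality without fixing aliasing outright.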
 
I disagree.
Bandwidth is irrelevant in this case, and fillrate isn't even close to scaling with the resolution & scene complexity increases (both of which balloon the fillrate demands) compared to last gen.

It's much less practical than last gen (especially if you want to add accurate soft shadowing to volumes), and the alternatives are much more practical than they used to be (especially on PS2). Not a contest really.

How do you deal with the memory hit? Getting good-looking shadows with shadow maps takes lots of memory, especially if you need all players on screen to have reasonably good-looking shadows, as well as realtime shadows cast by all the stadium geometry. With shadow volumes the memory hit is tiny.

I think bandwidth can become relevant, it just depends on how many items are casting/receiving shadows. In our case aside from game characters, we need realtime shadows for all stadium geometry since the sun moves during play. It's only a stencil pass but still, it can occupy large swaths of the entire screen depending on the time of day.

You can add a softness to shadow volumes as well. Accurate? Maybe not, but it can look reasonably good. In comparison shadow mapping techniques so far seem hit or miss, although I do hear that Uncharted has pretty good shadows.
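The memory trade-off in this exchange can be put in rough numbers. These are illustrative resolutions and counts, not figures from any particular game:

```python
def shadow_map_bytes(resolution, bytes_per_texel=4, cascades=1):
    """Rough memory footprint of square depth-only shadow maps."""
    return resolution * resolution * bytes_per_texel * cascades

# Shadow maps: one 2048x2048 32-bit depth map, four cascades for a stadium sun.
per_light = shadow_map_bytes(2048, bytes_per_texel=4, cascades=4)
print(per_light / 2**20, "MiB")      # 64.0 MiB of render targets

# Shadow volumes: only extruded silhouette geometry is stored transiently,
# e.g. 2000 silhouette edges * 2 triangles * 3 verts * 16 bytes per vert.
volume_bytes = 2000 * 2 * 3 * 16
print(volume_bytes / 2**10, "KiB")   # 187.5 KiB
```

So the memory argument for volumes is real; the counter-argument in the thread is that volumes pay instead in fill-rate, since the extruded geometry can cover large portions of the screen during the stencil pass.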
 