The Road to Next-Gen Animations: Changing the Philosophy

Status
Not open for further replies.

FouadVG

Newcomer
Unfortunately, the main goal of the normal mapping technique for the majority of developers is not to improve graphics and detail, but to decrease the number of polygons needed to create detailed graphics (using the minimum polygons for a given level of graphical detail).
With this philosophy in mind, we can expect next-gen games to use almost the same number of polygons for characters and protagonists as current-gen games (10,000 polygons in many cases).

There are several problems with this philosophy:

1/ You can't improve the animation of next-gen games much compared to current-gen games: you are using the same number of polygons, and you can't animate normal-mapped textures. The result is a robotic feel, because the details added by normal mapping are not animated.

2/ Even though developers get more detailed graphics from fewer polygons with normal mapping, there is still the problem of viewing angle and viewing distance. The detail looks best only within a certain range of angles and distances: if, for example, you get too close to a normal-mapped texture, you will see pixels with no real 3D depth, and the same happens if you view the texture from a grazing angle of around 160 degrees. This is a big problem when normal maps are overused to render not only tiny details but also large, visible details that developers could easily have modelled in polygons (clothing in Doom 3 or Quake 4, for example).
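To make the "no real 3D depth" point concrete, here is a minimal sketch (not from the thread; all vectors and values are illustrative): a normal map changes the lighting response of a surface while leaving the actual geometry, and therefore the silhouette and parallax, completely flat.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light):
    # Diffuse intensity: clamped dot product of unit vectors.
    return max(sum(n * l for n, l in zip(normal, light)), 0.0)

light = normalize((0.3, 0.3, 1.0))

flat_normal = (0.0, 0.0, 1.0)               # the quad's true orientation
bumped_normal = normalize((0.4, 0.0, 1.0))  # perturbed by the normal map

# The shading differs, so the "bump" shows up under lighting...
print(lambert(flat_normal, light), lambert(bumped_normal, light))

# ...but the surface itself is still a flat quad: every point has the
# same depth, so silhouette and parallax give the trick away up close
# or at grazing angles.
depth_flat, depth_bumped = 0.0, 0.0
print(depth_flat == depth_bumped)
```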


Some examples :

Gears Of War :
Each Gears of War character is composed of only 10,000 polygons, baked down from a source model of more than 3 million polygons using normal mapping.

gears-of-war-20050517002655380.jpg


The result is current-gen animation with next-gen graphics.


Lair :
The main dragon in Lair is composed of 150,000 polygons, baked down from a source model of more than 5 million polygons using normal mapping. This won't result in a dragon that looks far better or more detailed than the Gears of War monsters, but it allows the Factor 5 developers to create far better animations.

http://ps3media.ign.com/ps3/image/article/698/698558/lair-screen-20060327004958573.jpg




MGS4 :

Snake is composed of more than 120,000 polygons, with little use of normal mapping. Snake doesn't look better than the Gears of War characters, but he is far better animated.

http://ps3media.ign.com/ps3/image/article/651/651322/metal-gear-solid-4-20050915072838293.jpg



Of course, a lot more polygons won't automatically result in better animation; it's up to developers to actually animate those polygons.

I hope developers will change their philosophy over time. But when will we see this change of philosophy (using normal mapping to add details that couldn't be added with polygons, rather than using normal mapping to cut polygon counts to a minimum)? Maybe after MGS4's release? Or maybe after the release of the first games using real-time animation instead of pre-rendered, pre-programmed animations, by using Endorphin technology for example... Only time will tell.

N.B.:
For those who want to see real-time animation, look at these videos:
http://media.ps3.ign.com/articles/702/702423/vids_1.html


[Moderated: image size.]
 
At 10,000+ polygons, the granularity of the polygons has little if anything to do with animation quality. If a developer really has a problem with polygonal granularity, he can selectively increase the polygon count without increasing it by orders of magnitude as you suggest.

The real problem with robotic animation in games has little to do with polygon counts; it is primarily that games are interactive, and there are two things competing: the realism of the animation and its responsiveness to the controller.

The obvious example is jumping: to look realistic, there is a significant amount of time spent in preparation before you actually leave the ground, but as a player I want to be in the air when I push the button. Short of mind reading, or a computer being able to see the future, this can only be solved by compromise. The original Prince of Persia and Tomb Raider solved it by forcing the player to learn to anticipate, and given their gameplay that was acceptable. That type of compromise would not work in Mario, for example.

Using Ragdolls and IK etc to influence animation at some level will likely be something we start to see a lot of, and it will make things look a lot better, but it won't solve the fundamental problems with interactivity.
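One common shape of that compromise can be sketched in code. This is a toy example with invented pose values, joint names and frame counts (not from any real engine): launch the character the instant the button is pressed, then blend the anticipation crouch into the first few airborne frames instead of playing it before liftoff.

```python
def blend(pose_a, pose_b, t):
    """Linear pose blend over matching joint angles: t=0 -> pose_a, t=1 -> pose_b."""
    return {joint: (1 - t) * pose_a[joint] + t * pose_b[joint] for joint in pose_a}

crouch = {"hip": -0.4, "knee": 0.9}   # anticipation pose (illustrative radians)
stretch = {"hip": 0.1, "knee": 0.1}   # extended airborne pose

BLEND_FRAMES = 4
airborne = True  # the character leaves the ground on frame 0: responsive, not realistic
for frame in range(6):
    t = min(frame / BLEND_FRAMES, 1.0)
    pose = blend(crouch, stretch, t)
    print(frame, airborne, pose)
```

The trade-off is explicit: realism would play the crouch before liftoff; responsiveness plays it after, while the jump is already under way.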
 
Slight side-note, but the likes of Euphoria from Naturalmotion sound like they point the way forward - animation generated dynamically based on physics and behavioural models. I can't wait to see the new Indiana Jones games and any others that are using this tech.

Someone here might be able to answer, but is Euphoria now officially part of the console middleware programs?
 
"Snake is composed of 120 000 + polygons, with little use of normal mapping. Snake is not better looking than gears of war characters, but is far better animated."

You. Have. To. Be. Joking.

The bleary, overly normal-mapped, Doom 3-ish Gears of War characters:

1) All have helmets, bandanas, or no hair

2) Rigid body armor covering most of their bodies

The GoW character models are relics of an x86 PC engine. They aren't even in the same universe as the MGS4 character models, which are designed for a system with a vertex-processing behemoth like Cell. They are so primitive and rigid that I, an engineer with basic Maya skills, could do most of the animation for a game using them.

But hey, they sure are bumpy/shiny!

Not only that, the GoW models don't look anything like those marketing shots in the realtime footage of the game we've seen.
 
SubD said:
They aren't even in the same universe as the MGS4 character models, which are designed for a system with a vertex-processing behemoth like Cell. They are so primitive and rigid that I, an engineer with basic Maya skills, could do most of the animation for a game using them.

Ahhh the differences between perception and reality......
 
Someone here might be able to answer, but is Euphoria now officially part of the console middleware programs?
Euphoria is officially part of the game development vaporware toolsets.

And no, there will never be a tool that directly generates animations from physics alone; rather, you get the closest possible thing. For instance, blending between the aforementioned "robotic" animations and physics, as well as pre-generating a lot of potential combinations of reactions to all sorts of things and then performing an informed search and blend to get the best fit. You can't remove canned animations. Realtime variants like Euphoria do what ERP mentioned -- they use IK and ragdoll assets to influence existing animations, not utterly change the way animation is done.

Besides, if you tried to simulate a character physically, you'd never be able to get him to stand up. At least not on two legs.
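The "informed search and blend" idea described above can be sketched roughly like this. The pose library, joint-angle vectors and distance metric are all invented for illustration; this is not Euphoria's actual implementation.

```python
import math

def distance(pose_a, pose_b):
    # Euclidean distance between joint-angle vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)))

def best_fit(physics_pose, library):
    # "Informed search": pick the canned reaction closest to what physics says.
    return min(library, key=lambda name: distance(library[name], physics_pose))

def blend(pose_a, pose_b, t):
    return [(1 - t) * a + t * b for a, b in zip(pose_a, pose_b)]

library = {
    "stumble_left":  [0.5, -0.2, 0.1],
    "stumble_right": [-0.5, 0.2, 0.1],
    "fall_back":     [0.0, 0.0, -0.8],
}

ragdoll_pose = [0.4, -0.1, 0.0]  # what the ragdoll/physics predicts
chosen = best_fit(ragdoll_pose, library)
result = blend(ragdoll_pose, library[chosen], 0.5)
print(chosen, result)
```

The canned animation stays in charge; the physics only steers which clip gets picked and how far the pose leans toward it.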
 
ERP said:
At 10,000+ polygons, the granularity of the polygons has little if anything to do with animation quality. If a developer really has a problem with polygonal granularity, he can selectively increase the polygon count without increasing it by orders of magnitude as you suggest.

The real problem with robotic animation in games has little to do with polygon counts; it is primarily that games are interactive, and there are two things competing: the realism of the animation and its responsiveness to the controller.

The obvious example is jumping: to look realistic, there is a significant amount of time spent in preparation before you actually leave the ground, but as a player I want to be in the air when I push the button. Short of mind reading, or a computer being able to see the future, this can only be solved by compromise. The original Prince of Persia and Tomb Raider solved it by forcing the player to learn to anticipate, and given their gameplay that was acceptable. That type of compromise would not work in Mario, for example.

Using Ragdolls and IK etc to influence animation at some level will likely be something we start to see a lot of, and it will make things look a lot better, but it won't solve the fundamental problems with interactivity.

You are confusing realistic animation, fast animation, and good animation.

You can have good but unrealistic animation, as in Jak and Daxter, for example.
They are two different things.
Pixar and Disney CG movies all have unrealistic animation, yet great animation at the same time.

Simply put, better animation means more of a character's details are moving and animated, and this needs a lot of power.

If you have 10,000 polygons plus normal-mapped textures for clothes, for example, you can't animate the clothes (as in Doom 3 or Quake 4), because they are textures, not polygons. You need to render the clothes in real 3D (in polygons) before you can animate them.
The same goes for the face: if a lot of facial detail is rendered with normal mapping, you just can't create good facial-expression animation, because much of the detail in the face can't be animated; it is texture, not polygons.

Adding polygons to render more details allows developers to animate those details. If those details are textures, you just can't animate them.
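As a rough illustration of the difference (a generic sketch, assuming nothing about any particular engine): a vertex can be integrated forward in time, while a detail baked into a normal-map texture has no vertex to move. Here a single cloth particle falls under gravity using Verlet integration, the scheme commonly used for cloth.

```python
GRAVITY = -9.8
DT = 1.0 / 60.0  # one 60 Hz frame

def verlet_step(pos, prev_pos):
    """Advance one particle: x' = 2x - x_prev + a * dt^2."""
    new_pos = 2 * pos - prev_pos + GRAVITY * DT * DT
    return new_pos, pos

y, y_prev = 1.0, 1.0  # start at rest, 1 m up
for _ in range(10):
    y, y_prev = verlet_step(y, y_prev)

# The vertex has actually moved over the 10 frames; a detail that exists
# only in a normal-map texture has no position to update at all.
print(y)
```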
 
ShootMyMonkey said:
Euphoria is officially part of the game development vaporware toolsets.

Well, unless games like Indiana Jones aren't coming to consoles, one assumes moves must be afoot to make it happen, if it hasn't happened (unofficially) already.

It seems like something the platform holders should be all over, particularly Sony. It seems right in tune with the direction Sony has been harping on since last E3.
 
Well, unless games like Indiana Jones aren't coming to consoles, one assumes moves must be afoot to make it happen, if it hasn't happened (unofficially) already.
It means that the developers are probably involved in the beta program. Things will proliferate once it's really ready.
 
Titanio said:
Someone here might be able to answer, but is Euphoria now officially part of the console middleware programs?
Not that I know of, though it is available as third-party middleware. If it is a standard component of any middleware solutions, Natural Motion haven't bothered to say as much in their news announcements.

ERP brings up a good point about responsiveness. The original POP worked on unit distances, basically chaining commands for execution on the next step. In something like Mario you want to jump the instant you press the button. I think there's a fairly happy middle ground, though, in extra-animated unrealistic gameplay. Fleshing out the animations with stumbles, wobbles, arm movements and such can be coupled with unrealistic jumping that instantly propels you into the air with some arm wiggling. After all, most games have unrealistic jumping capabilities anyway. As an example, pushing forward on the stick moves your character forward. Advanced animation adjusts the steps over the terrain, hopping over fallen trunks and divots without the player needing to worry about that. Changing direction would see the player shift weight depending on the realism wanted. A jungle shooter could go for all-out realism, whereas a cutesy platformer could just embellish a few unnatural animations.

As for the original idea that poly count is needed, I don't go with that. Imagine a puppet made of box-section wood for each ragdoll segment (foot, shin, thigh, etc.). You could animate that with fantastic realism of motion, as if it were a living person, but you'd only need 104 vertices (if my maths adds up!). The only areas where more vertices would be important would be clothing perhaps, fingers, closeups and faces. Scrub fingers; you can use the existing box-section representation for them too, as far as animation is concerned. So apart from adding a few more vertices to the face, and, if you're feeling really ambitious, some cloth-like simulation on clothing, animation could be perfectly adequate on 10k models, or even less. You could make an argument for muscle flexing, but I think we ought to wait until natural character animation is achieved before worrying about that one!
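For what it's worth, the 104-vertex arithmetic checks out under one plausible reading. The exact 13-segment breakdown below is our guess; the post doesn't spell it out.

```python
# A box has 8 corner vertices; a plausible 13-segment ragdoll gives 13 boxes.
segments = (
    ["head", "torso", "pelvis"]
    + [f"{side}_{part}"
       for side in ("left", "right")
       for part in ("upper_arm", "forearm", "thigh", "shin", "foot")]
)
VERTICES_PER_BOX = 8
total = len(segments) * VERTICES_PER_BOX
print(len(segments), total)  # 13 segments, 104 vertices
```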
 
SubD said:
"Snake is composed of 120 000 + polygons, with little use of normal mapping. Snake is not better looking than gears of war characters, but is far better animated."

You. Have. To. Be. Joking.

The bleary, overly normal-mapped, Doom 3-ish Gears of War characters:

1) All have helmets, bandanas, or no hair

2) Rigid body armor covering most of their bodies

The GoW character models are relics of an x86 PC engine. They aren't even in the same universe as the MGS4 character models, which are designed for a system with a vertex-processing behemoth like Cell. They are so primitive and rigid that I, an engineer with basic Maya skills, could do most of the animation for a game using them.

But hey, they sure are bumpy/shiny!

Not only that, the GoW models don't look anything like those marketing shots in the realtime footage of the game we've seen.

I didn't notice this before, but what you are saying is true.

Maybe this could be a third drawback of normal-mapped rendering:

3/ There are some details (like hair? clothes?) that you just can't render using normal-mapped textures. You need to use more polygons. (Maybe this is the reason for your remark: no hair or clothes, or real skin, on the Gears of War characters.)

Maybe; I am not sure... Could anyone enlighten us as to whether my third point is true?
 
Shifty Geezer said:
Not that I know of, though it is available as third-party middleware. If it is a standard component of any middleware solutions, Natural Motion haven't bothered to say as much in their news announcements.

ERP brings up a good point about responsiveness. The original POP worked on unit distances, basically chaining commands for execution on the next step. In something like Mario you want to jump the instant you press the button. I think there's a fairly happy middle ground, though, in extra-animated unrealistic gameplay. Fleshing out the animations with stumbles, wobbles, arm movements and such can be coupled with unrealistic jumping that instantly propels you into the air with some arm wiggling. After all, most games have unrealistic jumping capabilities anyway. As an example, pushing forward on the stick moves your character forward. Advanced animation adjusts the steps over the terrain, hopping over fallen trunks and divots without the player needing to worry about that. Changing direction would see the player shift weight depending on the realism wanted. A jungle shooter could go for all-out realism, whereas a cutesy platformer could just embellish a few unnatural animations.

As for the original idea that poly count is needed, I don't go with that. Imagine a puppet made of box-section wood for each ragdoll segment (foot, shin, thigh, etc.). You could animate that with fantastic realism of motion, as if it were a living person, but you'd only need 104 vertices (if my maths adds up!). The only areas where more vertices would be important would be clothing perhaps, fingers, closeups and faces. Scrub fingers; you can use the existing box-section representation for them too, as far as animation is concerned. So apart from adding a few more vertices to the face, and, if you're feeling really ambitious, some cloth-like simulation on clothing, animation could be perfectly adequate on 10k models, or even less. You could make an argument for muscle flexing, but I think we ought to wait until natural character animation is achieved before worrying about that one!


Saying that you could create the same great, unbelievable animation of Snake in MGS4 (the E3 2006 trailer will be even better!) even after reducing Snake's polygons from 120,000 to only 10,000 is simply not true, and illogical.
How could you do muscle animation with only 10,000 polygons?!! No way...
The MGS4 and Lair teams aren't stupid enough to use 120,000+ polygons if they could achieve the same look and animation with fewer. No way...
 
Wasn't the animation in MGS4 so good because it was a continuous motion-captured cut-scene?

And I'm not sure if you've played GRAW, but the animations are excellent.
 
ShootMyMonkey said:
Besides, if you tried to simulate a character physically, you'd never be able to get him to stand up. At least not on two legs.
I thought that was the clever point of Endorphin (and thus Euphoria). I haven't used them myself, but as I understand it, perhaps wrongly, they combine AI with physics to get balance etc. working. Just as we have gymnastic robots that use sensors to detect motion and counteract as needed, I thought Natural Motion's method used such sensors to detect limb motion, balance, etc. as needed. A key point of it is supposed to be the motor-level AI. Without that it's, as you say, nothing more than a selection and blend of poses.
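The "sense and counteract" idea can be illustrated with its textbook feedback form: a PD controller reading the lean angle of an inverted pendulum and applying a restoring torque. This is a generic sketch of the principle with hand-picked constants, not Natural Motion's actual method.

```python
DT = 0.01                 # simulation step (s)
KP, KD = 40.0, 8.0        # proportional / derivative gains (hand-tuned)
GRAVITY_TERM = 9.8        # destabilising term of a linearised unit pendulum

angle, velocity = 0.3, 0.0  # start leaning 0.3 rad off vertical
for _ in range(500):
    torque = -KP * angle - KD * velocity   # "motor" response to the sensed lean
    accel = GRAVITY_TERM * angle + torque  # linearised inverted-pendulum dynamics
    velocity += accel * DT                 # semi-implicit Euler integration
    angle += velocity * DT

# The controller has pulled the pendulum back near upright.
print(angle, velocity)
```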
 
FouadVG said:
Saying that you could create the same great, unbelievable animation of Snake in MGS4 (the E3 2006 trailer will be even better!)
You need to explain what you mean by great animation. A wooden puppet, as I described, ballet dancing in realtime just like a real dancer would, to me, be great animation. If you're only talking about facial motion and low-level cloth and muscle flexing, I already mentioned that for those you'd need geometry. But you're also wrong in saying that normal mapping won't do clothes at all. The same deformations can be converted to normal maps. It won't do flowing cloth, but you could simulate creases and ruffles with normal maps. It's no different in principle from animating waves on a liquid, only instead of displacing vertices you'd calculate normals. Facial creases could be managed similarly, but the key parts of facial animation are mouth and eye shape, which need geometry.
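The wave analogy can be sketched directly: given a height function h(x, t), recompute the surface normal from its slope each frame instead of displacing any vertices. The sine wave and all constants below are invented for illustration.

```python
import math

AMPLITUDE, FREQ, SPEED = 0.05, 8.0, 2.0

def animated_normal(x, t):
    # h(x, t) = A * sin(F*x + S*t), so the slope is dh/dx = A*F*cos(F*x + S*t).
    slope = AMPLITUDE * FREQ * math.cos(FREQ * x + SPEED * t)
    # The 2D normal of a height field is (-dh/dx, 1), normalised.
    length = math.sqrt(slope * slope + 1.0)
    return (-slope / length, 1.0 / length)

# Same point on the surface at two moments in time: the normal (and hence
# the lighting) animates while the mesh itself never moves.
print(animated_normal(0.5, 0.0))
print(animated_normal(0.5, 1.0))
```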
 
Shifty Geezer said:
You need to explain what you mean by great animation. A wooden puppet, as I described, ballet dancing in realtime just like a real dancer would, to me, be great animation. If you're only talking about facial motion and low-level cloth and muscle flexing, I already mentioned that for those you'd need geometry. But you're also wrong in saying that normal mapping won't do clothes at all. The same deformations can be converted to normal maps. It won't do flowing cloth, but you could simulate creases and ruffles with normal maps. It's no different in principle from animating waves on a liquid, only instead of displacing vertices you'd calculate normals. Facial creases could be managed similarly, but the key parts of facial animation are mouth and eye shape, which need geometry.


You are saying that normal-mapped textures can be animated?!!

Could anyone please confirm this?
 
Shifty Geezer said:
Facial creases could be managed similarly, but the key parts of facial animation are mouth and eye shape, which need geometry.
Fight Night 3 already does this on the 360. You can see wrinkles form around the mouth, eyes, and forehead as your boxer breathes, and they look to be affected by swelling and whatnot.
 
3/ There are some details (like hair? clothes?) that you just can't render using normal-mapped textures. You need to use more polygons. (Maybe this is the reason for your remark: no hair or clothes, or real skin, on the Gears of War characters.)

Maybe; I am not sure... Could anyone enlighten us as to whether my third point is true?
Pretty much true... though you can put normal maps on hair polygons to get some further lighting detail and make individual hairs look a little more separate, even though the rendered geometry would most likely really be built up from patches of hair as polystrips.

I thought that was the clever point of Endorphin (and thus Euphoria). I haven't used them myself, but as I understand it, perhaps wrongly, they combine AI with physics to get balance etc. working. Just as we have gymnastic robots that use sensors to detect motion and counteract as needed, I thought Natural Motion's method used such sensors to detect limb motion, balance, etc. as needed. A key point of it is supposed to be the motor-level AI.
I'm not so sure about the "motor-level" AI per se... it's more accurately referred to, even in the academic literature, as "style-based kinematics." Which is to say that physics will actually play second banana. There are some cutesy little things about re-establishing a "balanced" position from one that is "unbalanced", but as far as the tool is concerned, the question of "balanced" is based on how you define it in the static animations. So if you build a library of canned animations where a guy's normal method of locomotion is standing and hopping on his head, I believe the system will assume that is his balance condition.
 