For example, let's say a developer wants to tell a specific story, and an NPC is important from beginning to end, both story-wise and in how it behaves during gameplay. The director will demand a restriction on your ability to hurt him, even if the technology allows more.
I would deal with this like the supermarket example: if the player kills an important NPC, the game kills the player quickly and he has to start over. I'm not saying this is how any game should be, but freedom of choice is desirable, and here we could at least give the player the illusion of being free.
But this alone would not be worth the effort, of course. It's always hard to predict the outcome of new technology - we will see...
There's no point talking about dynamic stories created in-engine in 2019.
Ahhm... there is already a game out with procedural dynamic stories, but I can't remember the name - it looked like some JRPG.
If it's really possible to simulate human movements, why do we bother with motion capture?
Because existing tech barely works and is difficult to use. It requires more work.
There are multiple ways to do it. If we simulate physics, the very first problem to solve is balancing. For bipeds that's very hard, because the center of mass is high while the support area from the feet is small. Typical game physics engines can't handle the joints and motors accurately enough, so you may need custom solvers to get started.
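To make the "joints and motors" point concrete, here is a minimal sketch of a PD motor driving a single joint toward a target angle. The struct, names and gains are made up for illustration, not taken from any engine; the accuracy problem appears once a whole chain of such motors has to be solved together every step without drifting.

    #include <cstdio>

    // Sketch of a PD joint motor: the kind of thing a physics-based
    // character needs at every joint. Gains are purely illustrative.
    struct JointMotor
    {
        float kp = 200.0f;  // proportional gain (acts like stiffness)
        float kd = 20.0f;   // derivative gain (acts like damping)

        // Torque that drives the joint from its current state toward a target angle.
        float ComputeTorque(float angle, float velocity, float targetAngle) const
        {
            return kp * (targetAngle - angle) - kd * velocity;
        }
    };

    int main()
    {
        JointMotor knee;
        // Joint at 0.1 rad, at rest, target 0.5 rad -> positive corrective torque.
        std::printf("torque: %f\n", knee.ComputeTorque(0.1f, 0.0f, 0.5f));
    }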
After that, the control problem is very hard too. Balancing an inverted pendulum efficiently gives a problem of position-dependent acceleration leading to complex equations, and converting the results back to a complex human body requires an IK solver that preserves the solution.
Approximations may not be good enough, because they reduce efficiency. A balancing human body constantly acts at the edge of what is physically possible, even to get simple things done in a short time.
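For the inverted pendulum part, a tiny toy sketch (unit mass, fixed pivot, made-up gains and step size) shows where the position-dependent acceleration comes from - the sin(theta) term makes the equations nonlinear. The real difficulty starts when you try to do this efficiently and map the result back through IK onto a full body.

    #include <cmath>
    #include <cstdio>

    // Toy inverted pendulum: theta = 0 is upright, unit mass on a rod of length L.
    // The acceleration depends on the current position (the sin(theta) term).
    int main()
    {
        const float g = 9.81f, L = 1.0f, dt = 0.001f;
        const float kp = 60.0f, kd = 15.0f;            // PD gains on the pivot torque
        float theta = 0.2f, omega = 0.0f;              // start slightly tipped over

        for (int i = 0; i < 3000; ++i)                 // simulate 3 seconds
        {
            float torque = -kp * theta - kd * omega;            // balance controller
            float accel  = (g / L) * std::sin(theta) + torque;  // position-dependent gravity term
            omega += accel * dt;
            theta += omega * dt;
        }
        std::printf("theta after 3s: %f\n", theta);    // ends up close to 0 (upright)
    }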
It's difficult; otherwise there would be more companies with biped robots walking under unstable balance. Actually, Boston Dynamics is the only one I'm aware of. (I guess I would need 2-3 years of work to catch up.)
An alternative is procedural animation, and recently there has been lots of work doing this with machine learning on real-world data (so motion capture is still required). We may see this first in games, I guess. It is not the real thing, but it looks very realistic and is much easier. *)
Both can be mixed. In the end, I'm convinced physics simulation is best suited to solve basic balancing problems like walking, running, carrying objects, reflexes, etc., and ML can be used to add everything else.
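As a rough illustration of how the mixing could work (just a sketch with invented structs and weights): physics owns the balance-critical joints, the animation/ML layer owns the rest, and a per-joint weight blends the two target poses before they go to the motors.

    #include <cstddef>
    #include <vector>

    struct PoseTarget { std::vector<float> jointAngles; };

    // Blend per joint: weight 1 = physics owns the joint (legs, spine),
    // weight 0 = animation/ML owns it (arms, head, fingers).
    PoseTarget BlendPose(const PoseTarget& physicsPose,
                         const PoseTarget& animationPose,
                         const std::vector<float>& physicsWeight)
    {
        PoseTarget out;
        out.jointAngles.resize(physicsPose.jointAngles.size());
        for (std::size_t i = 0; i < out.jointAngles.size(); ++i)
        {
            float w = physicsWeight[i];
            out.jointAngles[i] = w * physicsPose.jointAngles[i]
                               + (1.0f - w) * animationPose.jointAngles[i];
        }
        return out;
    }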
So when I say 'it's possible' I mean performance-wise, talking about myself. I don't know what Natural Motion can do exactly, but without doubt Boston Dynamics could offer a dynamic ragdoll middleware library to games in a very short time if they wanted.
Maybe the biggest reason we do not have this yet in games is this:
Why is it always so hard to convince people of the need to keep technological progress going? It's not that achieving my proposed goals would take anything away - it only adds.
hehe
One thing is for sure: after making your characters walk realistically, adding body language and facial expressions on top to play back cutscene data is very easy.
*) First random search result on YT; it does not look so different from actual games. TLoU 2 maybe already looks better, but it is also quite dynamic. So this is happening and not far-fetched.
edit: fixed AI <-> IK typo