Blaming the actors is b******, sorry.
Facial animation is an extremely complex issue, especially in games, where the sheer amount of animation required dwarfs even a full CGI movie. There are basically two possible approaches to solve this problem:
Use straight motion capture for all the facial animation. This requires processing a massive amount of mocap data - cleaning, tweaking, reworking etc. As far as I know both GTA4 and Heavy Rain took this approach, and probably Assassin's Creed 2 as well; what's more interesting is that the outsourcing company involved has been Image Metrics in all these cases.
So the biggest factor is probably the budget available for that cleanup work, which has an obvious effect on quality. We all know GTA4's budget was huge compared to the other two games. It's also worth noting that the quality of the facial rig IM receives from each client plays a role here.
The other approach is a procedural one, where you build a set of gestures and drive them by adding timed keys to the animation or the voice line in the game. Mass Effect does this, for example: the lip sync is generated automatically from the voice file, and the designers add gestures on top (not just facial animations, but also full body poses, short movements and so on). There's specialized off-the-shelf software for this that I've seen recently, called FaceFX.
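To make the timed-key idea concrete, here's a minimal C++ sketch (not FaceFX's actual API, just the general shape of it): a gesture track is a list of keys placed along the voice line, and the engine fires whichever keys the playback time has passed. The gesture names, timings and intensities are made up.

```cpp
// Minimal sketch of a gesture track: (time, gesture, intensity) keys laid
// over a voice line, triggered as playback passes them. All names are
// placeholders, not from any real engine or FaceFX.
#include <cstdio>
#include <string>
#include <vector>

struct GestureKey {
    float time;          // seconds into the voice file
    std::string gesture; // e.g. "brow_raise", "nod", "smile" (made-up names)
    float intensity;     // 0..1, how strongly to apply the gesture
};

int main() {
    // Hypothetical keys a designer might place on one line of dialogue.
    std::vector<GestureKey> track = {
        {0.2f, "brow_raise", 0.6f},
        {0.9f, "smile",      0.8f},
        {1.5f, "nod",        1.0f},
    };

    // Fake playback loop: advance time and fire keys as we pass them.
    size_t next = 0;
    for (float t = 0.0f; t <= 2.0f; t += 0.1f) {
        while (next < track.size() && track[next].time <= t) {
            std::printf("t=%.1fs  fire '%s' at %.0f%%\n",
                        t, track[next].gesture.c_str(),
                        track[next].intensity * 100.0f);
            ++next;
        }
    }
    return 0;
}
```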
This approach can also use mocap data for the facial gestures and expressions, but those are very basic building blocks that get combined at runtime into the full animation.
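And a similarly rough sketch of what "combined at runtime" can mean: each basic element (a mocapped or hand-made expression) is stored as per-vertex offsets from a neutral face, and the final pose is the neutral mesh plus a weighted sum of whichever elements are currently active. The data here is a tiny placeholder, not anything from an actual game.

```cpp
// Rough sketch of runtime blending of basic expression elements.
// The 3-vertex "mesh" and the expression names are purely illustrative.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Expression {
    const char* name;
    std::vector<Vec3> offsets; // per-vertex deltas from the neutral face
    float weight;              // driven by lip sync / gesture keys at runtime
};

int main() {
    std::vector<Vec3> neutral = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}; // stand-in mesh

    std::vector<Expression> active = {
        {"jaw_open", {{0, -0.2f, 0}, {0, -0.1f, 0}, {0, 0, 0}}, 0.7f},
        {"smile",    {{0.1f, 0.05f, 0}, {-0.1f, 0.05f, 0}, {0, 0, 0}}, 0.4f},
    };

    // Final face = neutral + sum(weight_i * offsets_i)
    std::vector<Vec3> face = neutral;
    for (const auto& e : active) {
        std::printf("blending '%s' at weight %.2f\n", e.name, e.weight);
        for (size_t v = 0; v < face.size(); ++v) {
            face[v].x += e.weight * e.offsets[v].x;
            face[v].y += e.weight * e.offsets[v].y;
            face[v].z += e.weight * e.offsets[v].z;
        }
    }

    for (size_t v = 0; v < face.size(); ++v)
        std::printf("vertex %zu: (%.2f, %.2f, %.2f)\n",
                    v, face[v].x, face[v].y, face[v].z);
    return 0;
}
```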
So, all in all, I'd say the issues with Heavy Rain are probably the result of combining less than perfect character rigging with mass-produced facial animation data.
Also note that GOW3, being mostly an action game, has a lot less facial animation, so they had less work to do and in turn more time to polish what they had. Then again, Uncharted 2, another game with high-quality facial animation, has a lot of cinematics too; I guess ND was simply able to fit the necessary work into their schedule.