No, the in-game heads are actually very rough and low-poly, and rely heavily on the normal maps to add detail. The characters don't have eyeballs or eyelids, for example, just a rough blob in that place; the normal map then adds the shape, and the color texture contains not only the iris but where it is looking, with the reflections from the recording studio baked in as well.
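To illustrate the trick being described, here is a minimal sketch of tangent-space normal mapping: a flat surface picks up lighting detail purely from a normal-map sample, without any extra geometry. All the values (the texel, the light direction, the TBN basis) are made up for the example and are not from any actual game asset.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical setup: a flat, low-poly surface whose geometric normal
# points straight up, plus its tangent and bitangent vectors.
geometric_normal = np.array([0.0, 0.0, 1.0])
tangent = np.array([1.0, 0.0, 0.0])
bitangent = np.array([0.0, 1.0, 0.0])

# A normal map stores per-texel normals in tangent space, encoded into
# [0, 1] per channel; (0.5, 0.5, 1.0) means "unperturbed".
texel = np.array([0.7, 0.5, 0.9])           # made-up sample, tilted toward +X
n_tangent = normalize(texel * 2.0 - 1.0)    # decode back to [-1, 1]

# The TBN matrix rotates the sample from tangent space into world space.
tbn = np.column_stack([tangent, bitangent, geometric_normal])
n_world = normalize(tbn @ n_tangent)

# Lambert shading: the perturbed normal changes the lighting even though
# the underlying triangle is flat -- this is how a rough head can read as
# having eyelid and eyeball detail that the mesh itself doesn't contain.
light_dir = normalize(np.array([0.0, 0.0, 1.0]))
flat_shade = max(geometric_normal @ light_dir, 0.0)
bumped_shade = max(n_world @ light_dir, 0.0)
print(flat_shade, bumped_shade)  # the tilted normal shades darker than the flat one
```

The same idea extends to the baked reflections mentioned above: since the highlight is painted into the color texture rather than computed from real geometry, it only looks right under conditions resembling the original recording setup.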
The problem with aliens and non-human characters is that you have to record the shape, the color, and the motion from a real-life head. The entire system is very closed because there are no clear separation lines between the mesh and the normal maps, and it's probably also very hard to manually animate anything. So you would have to use movie-VFX-level make-up and prosthetics on mostly humanoid creatures to get the proper data through the same kind of recording sessions.
Less humanoid creatures would then have to be produced some other way, probably via CG, at a much higher fidelity than is usual for games, in order to match the quality of the human faces. You'd have to build a movie-VFX-level digital character as a source to capture similar data from, reproducing the entire capture process in a fully digital pipeline.
All those small, lifelike details, the subtle movements, wrinkles, and folds that we need in order to accept the result as realistic, would have to be reproduced for aliens and fantasy creatures. Otherwise the discrepancies between the characters would break the immersion very quickly.
I'll watch the AMD presentation now.