For interfaces, if we want progress, something that hasn't happened yet but has long been on the cards as an option is emotional reading of the player. We don't have games that react to how the player is feeling, only to how they are moving or pressing buttons. I expect a lot of processing power could be taken up building intricate monitoring and AI responses, although whether any worthwhile games would actually appear as a result is questionable.
Regarding the second point, emotional reading, I don't think we will be anywhere near there in terms of AI and processing power for many, many years yet. More importantly, I don't think it's a direction any of the three console manufacturers, whose whole aim is to be 'inclusive', would feel comfortable going in.
Quite apart from all of the different face shapes, eye shapes, etc. within a single racial group, different racial groups also have different ways of expressing themselves facially (and/or bodily).
On top of that, you have millions of people out there with facial tics, facial deformities, conditions such as Asperger's and other autism spectrum disorders, etc. It is, I believe, an area fraught not just with technical difficulties but, for entertainment purveyors, with moral questions and issues of inclusiveness as well.
Reading "voice" for emotional involvement, on the other hand, is something I can see being furthered in experiences that warrant it. Firstly, because voice patterns will be more easily tailored during localisation. Secondly because, when one is speaking to a machine it is very similar to how one speaks to a child (or a dog), with exaggerated tones to indicate happiness, sadness, displeasure, etc. And, of course, there are also the words being said which can help with processing the correct emotional response.
But again, as you said, how useful something like that will be in games is questionable.