I'm not trying to say games should use the same techniques as film, but I do think we shouldn't be so eager to restrict tessellation to making big triangles.
More broadly, on a technology forum, why are so many people trying so hard to defend the status quo? Why are we so eager to ensure future hardware is limited in the same way as today's hardware?
Presumably, everyone here wants richer detail in games. Tessellation, and small triangles in general, are obviously the future, but they can't be shoehorned into a traditional GPU hardware pipeline, because tiny triangles overload the hi-z, clipping, perspective-correction, backface-culling and pixel shader hardware, which is carefully balanced for large triangles. There is also the issue that micropolygons would require "vertex quads," much as the pixel shader requires "pixel quads" (which exist so the hardware can estimate screen-space derivatives for texture filtering), and this in turn requires models to have a consistent UV parameterization. These changes would bubble up the graphics pipeline and out into everyone's game engines and tools pipelines.
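To make the overload concrete, here's a rough back-of-the-envelope model (an illustration, not any specific GPU's rasterizer) of why pixel quads punish tiny triangles: the hardware shades pixels in 2x2 blocks, and a quad is shaded if the triangle covers even one of its four pixels, so a pixel-sized triangle pays for pixels it never writes.

```python
def quad_shading_cost(covered_pixels):
    """Toy model of 2x2 pixel-quad shading.

    covered_pixels: set of (x, y) pixels the triangle actually covers.
    Returns (shader invocations, pixels actually covered).
    """
    # Every touched 2x2 quad is shaded in full, four invocations per quad.
    quads = {(x // 2, y // 2) for (x, y) in covered_pixels}
    return 4 * len(quads), len(covered_pixels)

# A 1-pixel "micropolygon": 4 shader invocations for 1 useful pixel.
shaded, covered = quad_shading_cost({(3, 5)})
print(shaded / covered)  # 4.0 -- 4x overshading

# An aligned 8x8 block of pixels, as from a large triangle: no waste.
big = {(x, y) for x in range(8) for y in range(8)}
shaded, covered = quad_shading_cost(big)
print(shaded / covered)  # 1.0
```

In the worst case a tiny triangle straddling a quad corner touches four quads, for 16 invocations per covered pixel, which is why per-pixel shading cost explodes as triangles shrink.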
There is no way to solve these problems, other than to remove the unnecessary hardware units from the GPU pipeline. And eventually they must go - but not yet, because it would make all existing 3D applications very slow and benefit only as-yet unwritten applications.
The eventual switch from vertex/pixel shading to vertex-only shading of micropolygons (as film renderers have always done it) is probably the single most disruptive change to the GPU pipeline in decades, and it will take a long time to reach mainstream products.
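The film-style model can be sketched in a few lines. This is a hypothetical toy, not a real renderer: dice a parametric patch until each micropolygon is roughly a pixel, run the surface shader once per vertex, and let rasterization merely interpolate the results - there is no per-pixel shading stage at all, so shading cost tracks screen area rather than triangle count.

```python
import math

def dice_and_shade(patch_screen_width, patch_screen_height, shade):
    """Dice a patch into ~pixel-sized micropolygons and shade each vertex.

    `shade` maps (u, v) in [0,1]^2 to a color; it stands in for the
    whole surface shader. Returns the grid of shaded vertices.
    """
    nu = max(1, math.ceil(patch_screen_width))   # micropolygon columns
    nv = max(1, math.ceil(patch_screen_height))  # micropolygon rows
    # One shader invocation per grid vertex, (nu+1) x (nv+1) in total.
    return [[shade(i / nu, j / nv) for i in range(nu + 1)]
            for j in range(nv + 1)]

# A patch covering ~16x16 pixels costs 17*17 = 289 vertex shades for
# ~256 pixels of screen area, regardless of how it is split into triangles.
grid = dice_and_shade(16, 16, lambda u, v: (u, v, 0.0))
print(len(grid) * len(grid[0]))  # 289
```

Note how the cost stays about one invocation per pixel even as micropolygon counts grow, which is exactly the property the quad-based pixel pipeline loses with tiny triangles.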
It's even possible that by the time micropolygons make it in there, there won't be a hardware pipeline left to change, because by then an architecture like Larrabee may have already moved everything into software. In that case, the switchover to micropolygons may be even more piecemeal, since it would no longer ride on the product release cycle of a GPU hardware manufacturer, or indeed of Microsoft.