LunchBox said:
I meant real geometry on where the industry is headed.
Well, I'm not sure about ongoing research, but the practical side is that we'd rather not get into multi-million polygon meshes yet.
First there's the hardware limit. A few upcoming releases of the traditional 3D apps will finally support 64-bit hardware and WinXP, which will finally make working with 5+ million polygon scenes actually possible. But loading all that data from disk, and especially moving it around the network when you send it to the renderfarm, would still be a nightmare. Large movie VFX studios like ILM are already close to the limits of storage technology, as far as I've heard.
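To put rough numbers on the data problem, here's a quick Python back-of-envelope. All the per-vertex figures and the network speed are my own illustrative assumptions, not measurements from any real pipeline:

```python
# Back-of-envelope: raw size of a 5M-polygon mesh, and the cost of
# shipping a baked animation to a renderfarm. Every constant here is
# an assumption for illustration.

POLYS = 5_000_000
VERTS_PER_POLY = 4            # quads; vertex sharing ignored for simplicity
FLOATS_PER_VERT = 8           # position, normal, one UV set
BYTES_PER_FLOAT = 4

frame_bytes = POLYS * VERTS_PER_POLY * FLOATS_PER_VERT * BYTES_PER_FLOAT
frame_mb = frame_bytes / 2**20
print(f"one frame of geometry: {frame_mb:.0f} MB")

FRAMES = 24 * 60              # one minute of animation at 24 fps
shot_bytes = frame_bytes * FRAMES
print(f"one minute, baked per frame: {shot_bytes / 2**30:.0f} GB")

NET_BYTES_PER_SEC = (100 / 8) * 2**20   # hypothetical 100 Mbit/s LAN
hours = shot_bytes / NET_BYTES_PER_SEC / 3600
print(f"transfer to the farm at 100 Mbit/s: ~{hours:.1f} hours")
```

Even with generous rounding, a minute of fully baked 5M-poly animation lands in the high hundreds of gigabytes, which is why nobody wants to move the real data around.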
And even with 64 bit, manipulating an order of magnitude (or more) more data will be slow, even for something as simple as moving objects around. Most high-end studios nowadays actually try to minimize working with the real data and use highly simplified representations instead. PRMan, for example, supports things like archiving out a pre-interpreted RIB file of your geometry (static or animated) to speed up scene parsing. It also offers several mechanisms to generate data at render time; even something as complex as an animated character can be built and articulated from a sort of "parts library". The hard truth is that artists tend to push the technology so far that the 3D application barely manages to handle one detailed character at a time, or about a dozen medium-detail ones. Crowd scenes require techniques like the ones mentioned above.
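Here's a toy Python sketch of that proxy / "parts library" idea — the class names and counts are invented for illustration (real PRMan does this with RIB archives and procedural calls), but it shows why the scene itself stays tiny:

```python
# Sketch: the scene stores only lightweight proxies (an archive reference
# plus a transform); the heavy vertex data is built on demand when the
# renderer asks for it. All names and numbers are hypothetical.

class GeometryArchive:
    """Stands in for a pre-baked geometry file on disk."""
    def __init__(self, name, vertex_count):
        self.name = name
        self.vertex_count = vertex_count

    def load(self):
        # In production this would parse the archive file; here we just
        # fabricate placeholder vertex data.
        return [(0.0, 0.0, 0.0)] * self.vertex_count

class Proxy:
    """What the scene actually holds: a reference, not the data."""
    def __init__(self, archive, transform):
        self.archive = archive
        self.transform = transform   # e.g. a position offset per instance

def render(proxies):
    """Heavy data comes into existence only inside this loop."""
    touched = 0
    for p in proxies:
        touched += len(p.archive.load())
    return touched

# A crowd of 1000 characters sharing one 50k-vertex archive: the scene
# holds 1000 tiny proxies, not 1000 full copies of the mesh.
hero = GeometryArchive("soldier_archive", 50_000)
crowd = [Proxy(hero, (i, 0, 0)) for i in range(1000)]
print(len(crowd), "proxies in the scene")
print(render(crowd), "vertices touched only at render time")
```

The animation package only ever manipulates the proxies; the 50 million vertices exist solely while the renderer walks the crowd.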
So the tough stuff comes with the workflow. You want to animate those models: bind them to a skeleton and paint the weights for each vertex, apply simulations, morph between blendshapes, etc. This would be close to impossible with a model that has millions of vertices... the current tools were not developed with such complexity in mind. Just think about how much disk space a facial blendshape library would take up - a hundred shapes is pretty common for a single character, even in everyday movie VFX!
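Quick back-of-envelope on that blendshape library, assuming each shape stores a full copy of the head's vertex positions (the vertex counts are my guesses, just to show the scaling):

```python
# Disk-space estimate for a facial blendshape library. Each shape is a
# full copy of the head mesh's vertex positions; counts are assumptions.

SHAPES = 100                  # per the post: common for one character
BYTES_PER_VERTEX = 3 * 4      # x, y, z as 32-bit floats

HEAD_VERTICES = 20_000        # a reasonably detailed film-res face today
library_mb = HEAD_VERTICES * SHAPES * BYTES_PER_VERTEX / 2**20
print(f"20k-vertex head, 100 shapes: ~{library_mb:.0f} MB")

DENSE_VERTICES = 2_000_000    # a hypothetical multi-million vertex head
dense_gb = DENSE_VERTICES * SHAPES * BYTES_PER_VERTEX / 2**30
print(f"2M-vertex head, 100 shapes: ~{dense_gb:.1f} GB")
```

A manageable ~23 MB library balloons past 2 GB per character once the mesh goes dense — and that's before animation curves, simulations, or per-frame caches.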
The trend nowadays is to work with a relatively simple mesh, somewhere between 5,000 and 200,000 polys, that gets subdivided at render time, with static detail added through displacement and normal maps. Research goes into applications like ZBrush, which can replace the time-consuming process of modeling something from clay, scanning it, rebuilding an animation-friendly mesh from the point-cloud data, and extracting displacement textures for it. The whole workflow could still use a lot of streamlining, even if it can now be done on a single PC-based workstation with off-the-shelf software.
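The arithmetic behind render-time subdivision is simple: each uniform Catmull-Clark step roughly quadruples the quad count, so a modest base mesh reaches film density in a handful of levels. The base counts below are just examples:

```python
# Each uniform Catmull-Clark subdivision step roughly quadruples the
# quad count, so render-time subdivision turns a modest base mesh into
# a film-density one. Base counts are illustrative.

def subdivided_polys(base_quads, levels):
    """Quad count after `levels` uniform subdivision steps."""
    return base_quads * 4 ** levels

for base in (5_000, 20_000, 200_000):
    # how many levels until we pass ~20 million polys?
    levels = 0
    while subdivided_polys(base, levels) < 20_000_000:
        levels += 1
    print(f"{base:>7,} base quads -> "
          f"{subdivided_polys(base, levels):>12,} quads at level {levels}")
```

So even a 5,000-quad base mesh crosses the 20 million mark at six levels of subdivision — the artist only ever touches the light version.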
Of course, using a 20-25 million poly mesh (that seems to be the sweet spot for a highly realistic character) and having the ability to manipulate it directly would be very cool. But the hardware isn't strong enough for it, nor are there any clues on how it would be possible, AFAIK. We'll probably see a lot more displacement stuff in the upcoming years instead - so PS4/X3 will probably concentrate on HOS and subpixel tessellation/displacement, too. But that's like 5-6 years away.