About movie 2K resolution:
It's 2048 * 1536 or so. Pixels aren't square (because it's an anamorphic resolution). The top and bottom of the image also get cut off in the theater - thus the effects have to be rendered at the full resolution, but no detail needs to be taken into account in the obscured areas.
About bump mapping:
Let's make a distinction between grayscale heightmaps (traditional bumps) and normal maps.
Bumps are used in the artistic sense to break up shading and specular highlights on an otherwise smooth surface. The reasons to go with a texture are that 1. detailed geometry is more expensive to render and 2. textures are painted, which is faster and more intuitive than modeling.
Bumps are used in the technical sense to fake real-world geometric detail that's usually quite small on the final rendered image.
Bumps are used in the economical sense because fast and robust micropolygon displacement mapping is not available in most renderers today. There are also a few possible problems with displacement (usually with high displacement bounds, i.e. vertices are moved too far) that can result in rendering artifacts.
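To illustrate the displacement bound issue: a micropolygon renderer has to pad each object's bounding box by the maximum possible displacement before dicing, and a loose bound hurts both memory and robustness. Here's a minimal sketch of deriving such a bound, assuming a simple displace-along-the-normal shader (the function and parameter names are just illustrative):

```python
import numpy as np

def displacement_bound(heightmap, amplitude):
    """Conservative bound on how far the shader can move any vertex.

    heightmap -- 2D float array, assumed remapped to [-1, 1]
    amplitude -- scene-space scale applied during displacement
    """
    # The renderer grows bounding boxes by this much before dicing;
    # overestimate it and memory use explodes, underestimate it and
    # geometry gets clipped away - the artifacts mentioned above.
    return amplitude * np.abs(heightmap).max()
```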
Bump maps can be generated relatively fast by a skilled texture artist, as they've been used for more than a decade in offline rendering. Some possible methods:
- If the texture is fully painted, start with the bump map and work with layers in Photoshop. Copy individual bump layers to the color map as masks for different details like skin pores, wrinkles, rust, dirt etc.
- If using a photo texture, a possible shortcut is to desaturate the color map, run a high pass filter on it to remove low to mid frequency details, then adjust levels and brightness/contrast. This can get quite convincing results; see the sketch below.
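For that photo-texture shortcut, here's a minimal sketch of the desaturate / high-pass / contrast chain in Python, assuming scipy's Gaussian blur as the low-frequency reference (the filter radius and contrast values are only illustrative starting points):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bump_from_photo(color, blur_sigma=8.0, contrast=2.0):
    """Approximate a bump map from an RGB photo texture.

    color -- float array of shape (H, W, 3) in [0, 1]
    """
    # 1. Desaturate: luminance-weighted average of the channels.
    gray = color @ np.array([0.299, 0.587, 0.114])
    # 2. High pass: subtract a blurred copy to remove low to mid
    #    frequency content (lighting gradients, large color shifts).
    high = gray - gaussian_filter(gray, blur_sigma)
    # 3. Levels / brightness-contrast: recenter around mid-gray.
    return np.clip(0.5 + contrast * high, 0.0, 1.0)
```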
In our experience with prerendered cinematics, one of these two methods has always sufficed. For example, I was able to paint a 2K set of bump/color/spec textures for a high-res Tiger tank in about 2 days (modeling and UV mapping took considerably more time, BTW).
This might be new ground for most game artists though, who haven't got an extensive background in content creation for cinematics and the like. So they just need to be trained.
Normal mapping is a different beast. Id is using it not just for high-frequency details (the small stuff), but also for med-freq things like facial features, muscle groups, and small bits of equipment and accessories. Their problem is that all of these details have to be manually modeled in 3D, which takes a lot of time. Especially for Raven with their cyber-stuff in Quake 4 - see the leaked material, the detail is insane and close to the level of movie VFX content.
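For reference on the heightmap/normal map distinction made earlier: a normal map stores a direction per texel instead of a height. The generic way to build one from a grayscale bump (this is the standard finite-difference construction, not id's hi-poly-to-lo-poly baking) looks roughly like this:

```python
import numpy as np

def normal_map_from_height(height, strength=1.0):
    """Convert a grayscale heightmap to a tangent-space normal map.

    height -- 2D float array in [0, 1]; strength scales the slopes.
    Note: id instead ray-casts from the low-poly surface against the
    hi-poly model; this version only captures detail the heightmap
    already holds.
    """
    # Per-texel slopes in u and v via central differences.
    dz_du = np.gradient(height, axis=1) * strength
    dz_dv = np.gradient(height, axis=0) * strength
    # Normal of the surface z = h(u, v) is (-dh/du, -dh/dv, 1).
    n = np.dstack([-dz_du, -dz_dv, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Remap [-1, 1] to [0, 1] for storage in an RGB texture.
    return n * 0.5 + 0.5
```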
Id is in part forced to use this method because of the polygon count limits that stencil shadows impose on them. However, the issue will remain relevant even if their next engine uses a different shadowing system and higher-resolution models - particularly once video cards support displacement mapping.
Weta Digital, the VFX studio working on The Lord of the Rings, has made extensive use of a similar technique: laser scan a very detailed clay maquette (over 7 feet high for the 4-foot Gollum!), create an optimized mesh with less than 1/100 of the detail, and generate displacement maps from the highres scanned geometry. They are doing this to speed up the skinning (weighting or muscle simulation) of the characters, and also to keep scene sizes small. Both are perfectly reasonable for games as well, where the vertex shader would only have to transform and skin the low-res meshes, which would then be diced up and displaced at a later stage.
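As a sketch of that dice-and-displace step (the inputs here are hypothetical, and a real pipeline would tessellate the mesh and filter the map rather than displace raw vertices):

```python
import numpy as np

def displace(verts, normals, uvs, disp_map, amplitude):
    """Push skinned low-res vertices out along their normals.

    verts, normals -- (N, 3) arrays after skinning/transform
    uvs            -- (N, 2) array in [0, 1]
    disp_map       -- 2D float array where 0.5 means no displacement
    """
    h, w = disp_map.shape
    # Nearest-texel lookup; real renderers dice into micropolygons
    # and filter the map before sampling.
    tx = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    ty = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    d = (disp_map[ty, tx] - 0.5) * amplitude
    return verts + normals * d[:, None]
```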
I think it is a safe bet that real-time hardware displacement mapping is on the horizon. Thus the content creation problem id has run into with Doom 3 will remain.
I'd like to add that the VFX industry has not found a real solution for modeling very highly detailed objects yet either. Their only advantage is that they usually have to do just a few detailed models per VFX production - no one expects 40-50 monsters, bosses, vehicles, and items, plus levels. Practically every full-CGI feature has had a very stylized look that allowed the studios to simplify their models; the only exception was Final Fantasy, and we all know how big its budget was (over $100 million).
For most of the detailed models, hand-painted displacement maps are used. Here's another exception: Draco, the dragon in Dragonheart, was perhaps the most detailed CG character yet (because of all the scales), and he was modeled in 5 months by 5 modelers - more than 2 man-years of work.
The solution to this problem obviously lies in the content creation phase rather than in the rendering technology. New and better tools are needed, and there is quite some research on this topic. See this forum (run by one of the Weta guys):
http://cube.phlatt.net/forums/spiraloid/viewtopic.php?TopicID=9
For those who don't want to read through, one of the 'new' tools is ZBrush, which is basically a 3D painting app; it lets you directly manipulate the polygonal surface with brushes. The new version can handle up to 1-2 million polygons; thus a modeler can create a relatively simple 3D model, subdivide it to get enough polys, and paint the details instead of modeling them. It is more practical for organics though; painting all the cyber stuff in Doom/Quake doesn't seem to be possible yet.
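On the 'subdivide to get enough polys' point: each subdivision level quadruples the face count, so the 1-2 million polygon range is only a few levels away from quite a light base mesh. A quick back-of-the-envelope check (the base face count is made up):

```python
# Each Catmull-Clark style subdivision level multiplies faces by 4.
base_faces = 5000  # a relatively simple base mesh
for level in range(5):
    print(level, base_faces * 4 ** level)
# Level 4 already gives 1,280,000 faces - ZBrush territory.
```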
Just my 2 cents, anyway...
