BenSkywalker
Regular
As in, you have a base texture map, a specular/diffuse map, a lightmap, etc. All of these are going to be combined (multi-textured) before the final pixel values arrive. Why don't they combine these textures into one and bake them into ONE texture beforehand, instead of doing the combining at the rendering stage?
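To make concrete what that render-stage combining is, here is a minimal sketch in plain Python (the texel values are made up for illustration; real hardware does this per pixel in the texture pipeline):

```python
# Minimal sketch of multi-texture "modulate" combining, done per texel
# at render time. Values are normalized RGB in [0.0, 1.0]; the names
# and numbers are illustrative, not from any real engine.

def modulate(base, lightmap):
    """Combine a base texel with a lightmap texel by per-channel multiply."""
    return tuple(b * l for b, l in zip(base, lightmap))

brick_texel    = (0.70, 0.35, 0.25)  # reddish brick from the base texture
lightmap_texel = (0.40, 0.40, 0.50)  # dim, slightly blue night lighting

final = modulate(brick_texel, lightmap_texel)
print(final)  # ~(0.28, 0.14, 0.125) -- the value baking would have to precompute
```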
Think of a floor with a texture on it. Let's say it is a brick floor. Taking a look at your typical game, if you see one brick floor you are likely to see quite a few of them. Not only that, but a repeating texture will likely be used to tile even a single instance of a brick floor.
Now think of the lighting model. Say it is night time and there are lights dispersed throughout the area. Using the current standard, you simply apply the differing lightmaps over the areas they are used in. If you wanted to revert to a single baked texture, you would need a unique texture for each section of the floor, exploding the memory used by what is currently a relatively small handful of textures.
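To put rough numbers on that blow-up, here is a back-of-the-envelope sketch (the texture sizes and section count are assumptions for illustration, not figures from the post):

```python
# Back-of-the-envelope memory comparison: tiled brick + lightmaps versus
# unique pre-baked textures. All sizes here are illustrative assumptions.

BYTES_PER_TEXEL = 4          # 32-bit RGBA
tex_size        = 256 * 256  # one 256x256 brick texture

# Current approach: one shared brick texture plus a handful of lightmaps.
brick_bytes    = tex_size * BYTES_PER_TEXEL
lightmap_bytes = 8 * (64 * 64) * BYTES_PER_TEXEL   # say 8 small lightmaps
shared_total   = brick_bytes + lightmap_bytes

# Pre-baked approach: every differently lit floor section needs its own
# full-resolution copy of the combined result.
sections    = 200                                  # brick sections in a level
baked_total = sections * tex_size * BYTES_PER_TEXEL

print(f"shared: {shared_total / 1024:.0f} KB")     # 384 KB
print(f"baked:  {baked_total / 2**20:.0f} MB")     # 50 MB, before any dynamics
```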
Once you use the trick for things like dynamic lighting, damage, etc., the number of baked variants needed for a single texture could reach into the millions, and that is for each and every texture in the game. Think of firing a rocket across the area with the brick floor. For each frame you would need to load a different texture for every area affected by the light given off by the rocket. Then, whichever areas it hit would need to load another set of textures to show the damage, and yet another set still would be needed the next time you fired off a rocket, since you would need the differing lighting model from the rocket while leaving the damage visible. For a single room you could chew up TBs worth of data trying to precompute all of the possible changes a small handful of textures goes through.
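A rough sketch of that state explosion (the counts are invented for illustration, but the multiplication is the point):

```python
# Rough count of the distinct pre-baked textures one room would need once
# dynamic lighting and damage enter the picture. Counts are illustrative.

sections      = 200    # floor sections in the room
light_states  = 1000   # distinct rocket-light positions/intensities per flight
damage_states = 50     # distinct damage patterns a section can accumulate

BYTES_PER_TEX = 256 * 256 * 4   # one 256x256 RGBA texture

combinations = sections * light_states * damage_states
total_bytes  = combinations * BYTES_PER_TEX

print(f"{combinations:,} baked textures")              # 10,000,000
print(f"{total_bytes / 2**40:.1f} TB of texture data")  # ~2.4 TB
```

Multiplying the lightmap and damage states at render time keeps each factor as a separate, small asset; baking forces you to store their Cartesian product.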