I always wondered why games don't mark some assets as applicable to all game types. E.g., couldn't COD/BF load all of the core game systems (rendering engine, animations, physics) into memory once, so that every subsequent load of a new MP map takes less time? It would then only need to load the map and its textures.
Yes, it sounds like a very good idea to keep the already loaded data in memory and only load the difference. However, in reality that poses many problems.
1. To get the best performance out of a disc (BR or DVD), you need to order the data linearly to minimize seek time. For example, a 2x BR drive (PS3) loads 9 MB per second, while a disc (BR/DVD) seek can take up to 100 milliseconds. During the seek the drive doesn't transfer anything, so every seek effectively costs 0.9 MB (~1 MB) of throughput. If you loaded one-megabyte textures randomly scattered across the disc, your loading speed would be roughly halved (half of the time spent seeking); see the first sketch after this list. This is why games tend to replicate the content for every level and put it in one linear area on the disc. Every level thus has completely separate content. If levels shared content, there would be lots of seeking involved (and that slows down the loading time).
2. Consoles have a limited amount of memory and do not page memory to the hard drive. If you run out of memory, you will freeze the console. Games tend to leave as little memory free as possible, because that allows the highest quality assets. If data is loaded in big chunks, memory management is easy: just free the last level's data and load the new data into the same memory addresses (the second sketch below shows this pattern). However, if you only partially free the old data and partially load new data, memory management becomes much more painful. You end up with holes in the address space (where the old content was freed) and you need to load the new content into those holes (because there's no other memory available). Each asset has a different size (big mesh vs. small, big texture vs. small), so it's very hard to fit the new meshes into the same memory addresses as the old ones that were freed (a single mesh or texture needs a big contiguous area of address space; it cannot be split). This problem is commonly known as memory fragmentation. The more dynamic memory allocation you do, the bigger the problem becomes. On PC it's not a big problem, since the OS can page memory out to the hard drive when physical memory runs out (so it never crashes).
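To put a number on the seek cost in point 1, here is a back-of-the-envelope calculation. This is just arithmetic on the figures quoted above (9 MB/s sequential throughput, 100 ms worst-case seek, 1 MB reads); nothing else is assumed:

#include <cstdio>

int main() {
    // Figures quoted above: 2x BR drive throughput and worst-case seek time.
    const double throughput_mb_per_s = 9.0;   // sequential read speed
    const double seek_time_s         = 0.1;   // up to 100 ms per seek
    const double chunk_mb            = 1.0;   // one randomly placed 1 MB texture

    // Time to fetch one chunk = seek + transfer.
    double transfer_s = chunk_mb / throughput_mb_per_s;   // ~0.111 s
    double total_s    = seek_time_s + transfer_s;         // ~0.211 s
    double effective  = chunk_mb / total_s;               // ~4.7 MB/s

    printf("Effective throughput: %.1f MB/s (%.0f%% of sequential)\n",
           effective, 100.0 * effective / throughput_mb_per_s);
    return 0;
}

With 1 MB reads the effective rate drops to roughly 4.7 MB/s, about half of the sequential 9 MB/s, which is where the "roughly halved" figure comes from.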
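Point 2's "load in big chunks, free the last data, load the new data into the same addresses" pattern is essentially a linear (arena) allocator that is reset between levels. Here is a minimal sketch of the idea; the name LevelArena and the details are my own, not from any particular engine:

#include <cstddef>
#include <cstdint>
#include <cstdlib>

// One big block reserved up front; assets for the current level are packed
// into it back to back. Unloading the level is a single reset -- no holes,
// no fragmentation -- but you cannot free individual assets in the middle.
class LevelArena {
public:
    explicit LevelArena(size_t capacity)
        : base_(static_cast<uint8_t*>(std::malloc(capacity))),
          capacity_(capacity), offset_(0) {}
    ~LevelArena() { std::free(base_); }

    void* Allocate(size_t size, size_t alignment = 16) {
        size_t aligned = (offset_ + alignment - 1) & ~(alignment - 1);
        if (aligned + size > capacity_) return nullptr;  // out of memory: no paging on console
        offset_ = aligned + size;
        return base_ + aligned;
    }

    // "Free the last data": forget everything, the next level reuses the same addresses.
    void Reset() { offset_ = 0; }

private:
    uint8_t* base_;
    size_t   capacity_;
    size_t   offset_;
};

Sharing assets between levels breaks this model: as soon as you want to keep some allocations and drop others, Reset() is no longer an option and you are back to general-purpose allocation and fragmentation.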
Fortunately both next gen consoles have hard drives as standard and games are always installed to the hard drive, so the disc seek problem (1) is now much smaller. Memory fragmentation is still a problem, but techniques such as virtual texturing and virtual geometry help combat it, since these systems load data in fixed-size chunks (all data is split, for example, into 64 KB chunks). That completely eliminates fragmentation for this kind of data. Also, both next gen consoles are 64 bit machines, and 64 bit PCs are becoming common. A 64 bit address space allows many nice virtual addressing tricks that eliminate the fragmentation issues as long as you load/manage the data in 4 KB chunks: you can reserve a big unmapped address range for your assets and map/load only the pages that are currently resident (sketched below). The big downside of this technique is that your code becomes almost impossible to port to 32 bit devices. That might be an issue right now, but in a few years even the mobile devices will all be based on 64 bit architectures.
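As an illustration of the reserve-then-map trick, here is a sketch using the Win32 VirtualAlloc API (what you would typically use on PC; console APIs differ but the idea is the same). The sizes and offsets are hypothetical:

#include <windows.h>
#include <cstdint>
#include <cstdio>

int main() {
    const SIZE_T kPage        = 4096;                        // manage data in 4 KB pages
    const SIZE_T kReserveSize = 4ULL * 1024 * 1024 * 1024;   // 4 GB of address space, no memory yet

    // Reserve a large contiguous range of virtual addresses. This consumes
    // address space only, not physical memory -- cheap on a 64 bit machine,
    // hopeless on a 32 bit one.
    uint8_t* base = static_cast<uint8_t*>(
        VirtualAlloc(nullptr, kReserveSize, MEM_RESERVE, PAGE_NOACCESS));
    if (!base) return 1;

    // When a piece of an asset becomes resident, commit (map) just the pages
    // it needs at its fixed position inside the reserved range.
    SIZE_T assetOffset = 128 * kPage;   // hypothetical placement of one asset
    SIZE_T assetSize   = 16 * kPage;    // 64 KB worth of pages
    void* assetMem = VirtualAlloc(base + assetOffset, assetSize,
                                  MEM_COMMIT, PAGE_READWRITE);
    if (!assetMem) return 1;

    // ... stream the asset's data into assetMem ...

    // When the asset is evicted, decommit its pages. The address range stays
    // reserved, so there is no hole to fill later.
    VirtualFree(base + assetOffset, assetSize, MEM_DECOMMIT);

    VirtualFree(base, 0, MEM_RELEASE);
    printf("done\n");
    return 0;
}

Because each asset gets its own fixed slot in the huge reserved range, the allocator never has to hunt for a contiguous hole; it only commits and decommits fixed-size pages, which is exactly the property that kills fragmentation.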