Regarding PC games, you shouldn't even expect to be external-storage limited much of the time. I recently optimized the load times of a DX7-era 2002 game engine (by a fair amount, though I didn't change the system drastically, so it's still FAR from perfect), and I was surprised to discover that only about 25% of the load time was spent actually reading data. And this is for an online game, with little preprocessing...
Things that took a fair bit of time:
- Creating mipmaps and scaling up the non-power-of-two GUI textures with gluScaleImage (30%+ of the load time). GOSH is that function slow; a handwritten special-purpose function is at least 10x faster.
- Memory allocation (5% of the load time or so): but honestly, a good engine would do less *freeing* of memory during load time, and would probably reuse it better.
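One common way to get that reuse (my assumption, not necessarily what this engine does) is a bump allocator for load-time scratch memory: carve allocations out of one preallocated block, then "free" everything at once by resetting an offset. A minimal sketch:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: a bump allocator for load-time scratch data.
// Instead of thousands of malloc/free pairs while loading, allocations
// come out of one block; freeing the lot is a single pointer reset.
class LoadArena {
public:
    explicit LoadArena(size_t bytes) : buffer_(bytes), offset_(0) {}

    void* alloc(size_t bytes) {
        size_t aligned = (bytes + 15) & ~size_t(15); // 16-byte alignment
        if (offset_ + aligned > buffer_.size()) return nullptr;
        void* p = buffer_.data() + offset_;
        offset_ += aligned;
        return p;
    }

    // Reuse the whole arena for the next load: no per-object frees.
    void reset() { offset_ = 0; }

private:
    std::vector<unsigned char> buffer_;
    size_t offset_;
};
```

The same block then gets reused level after level, which also helps fragmentation.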
- Initializing miscellaneous systems, like the sound API (FMOD), etc.
- Decompressing.
What I'm trying to say here is that, most likely, if a game's loading time sucks, you shouldn't blame the I/O and external storage (too much) - blame the programmer for not profiling, and for having no clue how the overall loading system should work before programming it. Deciding what to load, and when, counts too.
I would at least hope that, next-gen, multithreading is used to its full potential for loading (9 threads, baby!) so that, if done properly, loading actually is I/O limited. And then redundancy can come into play.
Once again, though, I would assume (or maybe hope) that even current-gen titles are much more I/O limited than PC games, loading-wise, considering the closed environment and the much greater need to optimize.
Uttar