[Off topic, but I think it's important and useful tech information]
ERP, I get what you're saying here and it makes sense, but how much of the old code base is still in modern games, engines, libraries? I'd think by now, most of the bigger developers/games would be optimized for Xenon, Cell.
Is that not the case?
It's not a question of legacy code.
The traditional viewpoint on optimization is: don't optimize prematurely; measure, find the hot spots, and address them. In practice, on large applications, this approach doesn't actually work.
Hypothetically, I have a team of, say, 20 people. Three or four of those (the team "stars") may work on what's considered performance-sensitive code, and in general they will optimize as they write it. The rest will work on general systems or gameplay code; none of the individual pieces are expensive, and the vast majority of the programmers are more concerned with readability and maintainability (and they should be) than with overall performance. They also tend to take the fast route to a solution rather than the best route, because so few pieces of gameplay code survive any sort of gameplay review, and there is pressure to get things in front of the designers in game as quickly as possible.
The last point is why performance travesties like Havok's character control stuff make it into final builds: it's quick to get in and working, and by the time you measure the cost, the cost of change is too high to justify.
There is too much churn in the codebase for anyone to look at, or be familiar with, all of the code, and there are a lot of reasons to believe that performance is currently below par and will improve. Then you get to four weeks before E3 and have to show a demo. At this point you start looking at performance, which is usually horrible; some of that is assets that are way over budget, and some of it is poorly optimized code.
You sit down with the performance analyser, run it over the code, and you cry, because it's not a single hotspot: it's literally thousands of pieces of code wasting 0.1 ms here or 0.01 ms there, and it's impossible to "fix" them all for the demo, or even by the time you ship.
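To make that "death by a thousand cuts" concrete, here's a hypothetical gameplay-style snippet (the names and the costs are invented for illustration, not taken from any real codebase). Nothing in it would ever top a profile, but multiply patterns like this across thousands of call sites, running every frame, and the frame budget is gone:

#include <map>
#include <string>
#include <vector>

struct Entity { float x, y, z; };

// Global registry keyed by string: convenient to write, slow to query.
std::map<std::string, Entity*> g_entities;

void UpdateHomingMissile(Entity& missile)
{
    // Builds a temporary std::string and walks a tree, every single
    // frame: a fraction of a millisecond, invisible in isolation.
    Entity* target = g_entities["player_character"];

    // Heap-allocates and destroys a temporary vector, every single frame.
    std::vector<float> offset = { target->x - missile.x,
                                  target->y - missile.y,
                                  target->z - missile.z };

    missile.x += offset[0] * 0.1f;
    missile.y += offset[1] * 0.1f;
    missile.z += offset[2] * 0.1f;
}

Each of these is trivially fixable in isolation (cache the pointer, use a numeric ID, avoid the temporary); the problem is that the profiler shows you thousands of them, each too small to be worth a day of anyone's time four weeks before a demo.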
20 years ago, on SNES or Genesis, everything was assembler, and every line of code was likely vetted by a single individual.
15 years ago, circa PS1, when games were a few hundred thousand lines of code, every line of code was looked at and evaluated; if I didn't write it, I saw it go into the codebase and had it fixed as it went in.
10 years ago, circa PS2, it started to get difficult to do that; plus, you started to see legacy code and large blocks of external code that were poorly optimized for the platform.
FWIW, I didn't do my first dynamic allocation in a piece of game code until PS2, and only then because we inherited the codebase.
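For context, the old-school alternative was to reserve everything up front in fixed-size pools, so nothing touched the heap at runtime. A minimal sketch of the idea, written in modern C++ for brevity (the class and its interface are illustrative, not from any shipped codebase):

#include <cassert>
#include <cstddef>
#include <new>

// Fixed-capacity object pool: all storage is reserved at startup, so
// there is no malloc/free cost and no fragmentation during gameplay.
template <typename T, std::size_t N>
class FixedPool
{
public:
    FixedPool() : m_freeHead(0)
    {
        // Thread a free list through the slot indices.
        for (std::size_t i = 0; i < N; ++i)
            m_next[i] = i + 1;
    }

    T* Alloc()
    {
        // Running out is a bug to fix by resizing N, not a cue to malloc.
        assert(m_freeHead < N && "pool exhausted");
        std::size_t slot = m_freeHead;
        m_freeHead = m_next[slot];
        return new (&m_storage[slot]) T();  // placement-new into the slot
    }

    void Free(T* obj)
    {
        obj->~T();
        std::size_t slot = static_cast<std::size_t>(
            obj - reinterpret_cast<T*>(m_storage));
        m_next[slot] = m_freeHead;
        m_freeHead = slot;
    }

private:
    alignas(T) unsigned char m_storage[N][sizeof(T)];
    std::size_t m_next[N];
    std::size_t m_freeHead;
};

// Usage: pools are sized per system at build time, e.g.
// static FixedPool<Particle, 4096> g_particlePool;

Entities, particles, sounds and so on each came out of a pool like that, with the budgets decided up front; it's more work and less flexible than calling new, which is exactly why gameplay code under schedule pressure drifts away from it.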
The concept that game teams somehow lovingly craft every line of code with an eye to performance, and are somehow much better than their software engineering counterparts in other disciplines, is utter crap; it hasn't been that way for over a decade.
Building a game today is all about software engineering and project management and much less about programming.
There are still a very few teams that are very "old school" in their development approach, but even there they use 3rd-party libraries, and not every engineer is a star.
IMO, console hardware should be designed around whatever lets teams produce the best games.