No amount of market evaluation will stop research into future technologies all by itself.
At most, it can influence the CPU, GPU, storage and gameplay aspects of a project, but altering something as basic as consistency/compatibility with older software (that's the keyword right there) calls for a very careful approach.
The cost of scrapping hardware R&D altogether is often much higher than that of ditching software code-bases, but with tens of millions spent on R&D per game it has come to the point of walking a fine line between success and failure.
The wide margin of error of the '80s is long gone.
For instance, I don't think the rumored *full compatibility break* of the future Windows, codename "Vienna", would be even remotely possible without the aid of multi-core development; but it would also not be possible without hardware and software support for robust, near-native-speed emulation (at least comparable to running current apps natively on contemporary hardware).
This would almost certainly imply moving towards a native x86-64 codebase, dropping the old x86 FPU and legacy register files, and "RISC-ing" the x86 architecture even further.
For all those things, there's a degree of control/influence that a software-driven company cannot expect to have.
I'm sure IBM would not be willing to hand over PPC IP control to Microsoft for the X360 unless Microsoft settled for a simple design (compared to the "big iron" server chips).
Cost would not be a factor for MS, nor would power requirements, judging by IBM's recent power-efficiency claims.
The prize would be the ultimate one: living-room domination, in much the same way the iPod rules the MP3 player and legal music download markets.
Sony sacrificed the PS3 to Blu-ray for that alone.
Control over Blu-ray royalties (if the format wins the "war", that is) could pay for two or three generations of PlayStations all by itself.
Having said this, the cleanliness of non-x86 designs may provide further flexibility to achieve true hardware independence for the software layer, especially in a typical console environment.
Hence the managed code push, hence the hardware-assisted virtualization.
Maybe the diminishing returns of new technology advances can be offset by greater flexibility in what happens inside a specific thread, and in how that relates to the overall task (gaming, in this case).
Intel, IBM and AMD would have liked it to be that simple, but when reality sets in, innovation can stall under the burden of adapting to these new conditions, and we get the result in the form of relatively simple hardware concepts like the Wii and NDS beating the other systems, whose high costs and less appealing software hold them back.
If I were the one choosing, I would have told MS to go for a highly clocked, branch-friendly, single-core, dual-threaded CPU, instead of three simple cores with two simple threads each and a tiny amount of L2 cache (partially shared with the GPU, on top of that).
The transistor cost would not have been that big, but games could conceivably have arrived with advanced features earlier, because developers would not have needed to focus first on dividing their work into 6 tasks just to get acceptable performance; they could have followed a more conservative, time-tested routine and then evolved with the number of threads/cores.
Desktop PC game developers have been working with one, two, or four threads and lots of RAM for the last few years, but console developers are coming from a limited amount of RAM and a single thread at SD resolutions in the previous generation to a still comparably small amount of RAM and 6 to 9 threads at a minimum HD resolution of 720p (at least internally).
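To illustrate the "start conservative, then scale with the thread count" idea, here's a minimal C++ sketch of my own (not taken from any actual console SDK): the per-frame work is written as a list of jobs, and the same list gets spread over however many hardware threads the machine reports, whether that's one or six. The job names and the simple interleaved scheduling are purely illustrative; a real engine would have dependencies between these stages.

```cpp
// Minimal sketch (illustrative only): per-frame work expressed as independent
// tasks, spread over however many hardware threads the platform exposes.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

int main() {
    // The same set of per-frame jobs, whether the CPU offers 1, 2 or 6 hardware threads.
    std::vector<std::function<void()>> frame_tasks = {
        [] { std::puts("animation"); },
        [] { std::puts("physics"); },
        [] { std::puts("AI"); },
        [] { std::puts("audio"); },
        [] { std::puts("particles"); },
        [] { std::puts("streaming"); },
    };

    // Ask the platform how many hardware threads it has; fall back to 1.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 1;

    // Each worker takes every 'workers'-th task from the list for this frame.
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            for (std::size_t i = w; i < frame_tasks.size(); i += workers)
                frame_tasks[i]();   // run this worker's share of the frame
        });
    }
    for (auto& t : pool) t.join();  // frame barrier: wait before presenting
}
```

The point is only that the gameplay code is written against the task list, not against a fixed thread count, so the same codebase could ride one fast core first and several simpler cores later.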