Deleted member 11852
Guest
I'm not an expert, but I have done some crude emulation work, and I think it depends very much on the target system and the approach taken. The funny thing is that before Xenia, some emulation developers were emphatic and absolutely certain that the hardware to emulate Xbox 360 games was 20 years away, insisting you'd need 3-8 cores running at 15-20GHz or more.
If you look back at how 8-bit computers and consoles were initially emulated, it was often a brute-force approach: emulate the entire system at a per-cycle level, driven by the fastest clock in that system. This is basically as accurate as it gets, as long as you have detailed documentation on how the CPU, all of the support chips, the RAM and the buses operate. And it's necessary if you need to emulate something like a Commodore 64, because so many commercial games relied on "unsupported hardware features" that if you don't emulate the VIC-II (graphics) chip exactly as it behaved at the cycle, scanline and register level, a bunch of software will break spectacularly.
More recently, emulation has largely moved away from this approach, particularly for complicated target systems, which often have innate variances built into both the hardware and the software. If that's the case, you may be able to avoid cycle-level emulation (or at least avoid it for some subsystems) and employ smarter methods. For example, you wouldn't want to emulate the tri-core 3.2GHz Xenon CPU at the per-cycle level in real time if you can instead use a variety of JIT recompilation techniques to convert PowerPC code to x86 for the purpose of emulation.