High end quad core PC CPUs aren't the problem, since a single 3+ GHz Haswell core can run the tasks of two lower clocked Jaguar cores in the same allocated time slot (16.6 ms). However, older quad core CPUs such as the Core 2 Quad have similar IPC to Jaguar and are usually clocked in the 2 GHz - 3 GHz range. A single core thus doesn't have enough oomph to run two Jaguar cores' worth of tasks in the allocated time. This is a real problem for static threading, since 6 is not divisible by 4: in the worst case, two cores get twice as much work as the others (the 6 cores' work distributed to the quad core as 2-2-1-1). In this scenario a lower clocked 6 core CPU would be preferable.

I wonder what changes they had to make for 4 thread PC CPUs then, and whether we're likely to see significantly better performance on 6+ thread CPUs in the future, on account of most future games being optimised for that number of x86 (albeit significantly weaker) threads.
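To make the budget math concrete, here is a small sketch of the worst-case 2-2-1-1 static split. The clock speeds and the Haswell IPC uplift are illustrative assumptions (Jaguar at ~1.6 GHz as in the consoles, a 2.4 GHz Core 2 with Jaguar-like IPC, a 3.5 GHz Haswell at roughly 1.75x Jaguar IPC), not measured figures:

```python
# Why a static 6-thread split hurts on quad cores: the frame time is
# set by the slowest core, i.e. the ones that received 2 threads.
# All clocks and IPC ratios below are illustrative assumptions.
FRAME_BUDGET_MS = 1000.0 / 60.0   # ~16.6 ms per 60 fps frame
JAGUAR_GHZ = 1.6                  # assumed console Jaguar clock

def time_on_core(jaguar_core_loads, core_ghz, ipc_vs_jaguar=1.0):
    """Time for one PC core to run N Jaguar cores' worth of frame work."""
    work_ms = jaguar_core_loads * FRAME_BUDGET_MS  # work in Jaguar-core ms
    return work_ms * JAGUAR_GHZ / (core_ghz * ipc_vs_jaguar)

# Static 2-2-1-1 split: the worst-off cores run 2 Jaguar cores' work.
core2_frame = time_on_core(2, core_ghz=2.4)                        # IPC ~ Jaguar
haswell_frame = time_on_core(2, core_ghz=3.5, ipc_vs_jaguar=1.75)  # assumed uplift

print(f"Core 2 Quad core, 2 threads: {core2_frame:.1f} ms")   # over the 16.6 ms budget
print(f"Haswell core, 2 threads:     {haswell_frame:.1f} ms") # fits the budget
```

Under these assumptions the Core 2 core needs roughly 22 ms for its two threads, blowing the 16.6 ms frame budget, while the Haswell core finishes well inside it.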
This is one of the reasons why most developers are switching (or have already switched) to a task based system (all work split into small tasks) that dynamically distributes the work items (jobs) across all available CPU cores. A modern task based system automatically splits the work evenly on any kind of CPU core setup (from a lowly dual core to a 15 core / 30 thread Ivy Bridge EX). Obviously a game designed for consoles wouldn't automatically fully utilize a 30 thread Ivy Bridge EX unless the developer added some PC specific "ultra" settings and scaled the environment and AI complexity to match (more NPCs, higher draw distance, more complex environments). The code would scale automatically, but the extra work of building a more complex environment for PC wouldn't likely be easy (unless the content is mostly based on procedural creation). And the extra testing needed for dynamically scaling content complexity would likely be too much work (unless you only scaled up the visual quality, not the gameplay elements).
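The core idea of such a task based system can be sketched in a few lines: split the frame into many small jobs on a shared queue and let worker threads pull the next job whenever they finish one, so the load balances itself on any core count. This is a minimal Python illustration, not any engine's real API (production job systems are typically lock-free C++; the names `run_frame` and the job list here are invented for the example):

```python
# Minimal task-based job system sketch: workers dynamically pull small
# jobs from a shared queue, so 2, 4, 6, or 30 threads all stay busy
# without any static per-core assignment of work.
import queue
import threading

def run_frame(jobs, worker_count):
    """Run all jobs on worker_count threads; return jobs done per worker."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)
    done_per_worker = [0] * worker_count

    def worker(index):
        while True:
            try:
                job = q.get_nowait()   # grab the next available work item
            except queue.Empty:
                return                 # queue drained: this worker is done
            job()                      # execute the small work item
            done_per_worker[index] += 1

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done_per_worker

# 600 tiny jobs distribute over any worker count with no code changes.
jobs = [lambda: None] * 600
totals = []
for workers in (2, 4, 6):
    counts = run_frame(jobs, workers)
    totals.append(sum(counts))
    print(workers, "workers ->", sum(counts), "jobs completed")
```

The same job list runs unmodified on each worker count, which is the point of the post: the scheduling adapts to the hardware, and only the content (how many jobs a richer scene generates) would need PC-specific scaling.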