The point had nothing to do with latency or framerate.
I was simply saying that having a large number of FLOPS does not a good CPU for games make.
A good CPU for games should run game code efficiently, without placing an undue burden on development. Anything that makes it difficult to produce good code cuts down on the amount of iteration that can take place in a game, and that indirectly impacts quality. If a processor is esoteric in design, there had better be a big payoff to justify it.
The meta point is more that you can't look at parts of a game in isolation: a game isn't a collection of technologies and assets, it's a whole. And you have to understand the development process when you're looking at designing "good" hardware, as much as the pure technology. Increasing the burden on engineering has to have a payoff or it's simply not worthwhile. You have to be somewhat pragmatic when it comes to development on large teams.
... (More interesting points) ...
I see what you're getting at; however, I'd argue that the hardware's architectural design in many ways runs orthogonal to the development work done by the vast majority of your programming team.
In your average game company you'll be looking at, say, ~10-30% of your code team being the super-elite low-level-minded coders, with the rest made up of a range of talented high-level systems/gameplay authors.
Now if you're licensing your engine (UE3, CryENGINE 3, etc.), your low-level guys will be gutting & refitting the engine code: optimising for the specifics of your game, laying the foundations of any additional systems required, SPU'ifying them, writing core library modules for the heavy-lifting processes, and so on, all of which needs to be fast. These are the areas of your code where bottlenecks are expected to appear, and they're all handled by the kind of coder who, whether given an esoteric Cell or a generic Xenon, will be able to tame the beast & make it sing.
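To make that gutting & refitting concrete, here's a minimal sketch of the sort of transformation involved (my own illustration, with hypothetical names and data layout, not lifted from any shipping engine): taking an author-friendly layout and re-batching it so the hot loop streams in contiguous blocks through an SPU's local store, or vectorises cleanly under VMX on Xenon.

// Purely illustrative sketch of the kind of "refit" work described above;
// names and layout are hypothetical, not from any particular engine.

// Typical high-level layout: array-of-structs, easy to author against...
struct Particle { float px, py, pz; float vx, vy, vz; bool alive; };

// ...versus the structure-of-arrays layout a low-level coder would batch the
// hot update into, so it can be DMA'd/streamed and processed in SIMD-friendly runs.
struct ParticleBatch {
    float* px; float* py; float* pz;
    float* vx; float* vy; float* vz;
    int    count;
};

// Branch-free, contiguous update: the kind of inner loop that ends up owning
// most of the frame time, and the only part that really cares which CPU it runs on.
void integrate(ParticleBatch& b, float dt, float gravity)
{
    for (int i = 0; i < b.count; ++i) {
        b.vz[i] += gravity * dt;
        b.px[i] += b.vx[i] * dt;
        b.py[i] += b.vy[i] * dt;
        b.pz[i] += b.vz[i] * dt;
    }
}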
Granted, iteration in games development is a massively important factor in overall game quality. However, I'd argue that it's generally the flexibility and expressive power of your high-level tools and technology that make up the lion's share of priority in this area, and that iteration on the high-level systems (Kismet/Flow Graph-style visual scripting, for example) has very little relevance, IMO, to the specifics of the lower-level engine system/module implementations with respect to the choice of target hardware.
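For contrast, the code the rest of the team iterates on daily looks more like the sketch below (entirely made up, written against a hypothetical engine binding layer); its productivity lives or dies on the tools and the engine API, not on the CPU architecture underneath.

#include <string>

struct Entity { bool hasKey = false; };   // stand-in for an engine entity handle

// Trivial stubs standing in for the engine's scripting/gameplay bindings.
static bool HasItem(Entity& e, const std::string&)      { return e.hasKey; }
static void ShowHint(Entity&, const std::string&)       { /* engine UI call */ }
static void PlaySound(Entity&, const std::string&)      { /* engine audio call */ }
static void StartAnimation(Entity&, const std::string&) { /* engine anim call */ }
static void CompleteQuest(Entity&, const std::string&)  { /* engine quest call */ }

// The sort of logic that gets iterated on constantly, whether typed out like this
// or wired up in a Kismet/Flow Graph-style visual script. Nothing here knows or
// cares whether the CPU underneath is a Cell, a Xenon, or anything else.
void OnDoorTriggerEntered(Entity& player, Entity& door)
{
    if (!HasItem(player, "rusty_key")) {
        ShowHint(player, "The door is locked.");
        return;
    }
    PlaySound(door, "door_creak");
    StartAnimation(door, "open");
    CompleteQuest(player, "find_the_cellar");
}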
This will be even more evident next-gen, IMO, as I fully expect the proliferation of middleware engine technology to explode and the traditional focus of many development studios on proprietary solutions to all but disappear.
So, in this light, I don't particularly agree that there will be an impetus on hardware vendors to target best-case performance for worst-case code in their designs, as ultimately it'll probably be that 10-30% of your run-time code, written by your core engine & rendering guys, that makes up ~70-90% of your processing load at run-time.
And with companies like Epic & Crytek having large-scale teams of dedicated veteran low-level coders serving your needs in this area, they'll be looking to maximise the potential of the hardware provided in order to supply the most competitive engine solutions to market, not to specifically target average-case performance trade-offs driven by the kinds of production restrictions imposed on development teams with time-limited schedules and finance-limited budgets.
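To put rough numbers on that split (mine, purely illustrative, via Amdahl's law: overall speedup = 1 / ((1 - p) + p/s)): if the engine/rendering code written by that minority accounts for, say, 80% of frame time, then doubling its speed gives 1 / (0.2 + 0.8/2) ≈ 1.67x on the whole frame, whereas doubling the speed of everything else only gives 1 / (0.8 + 0.2/2) ≈ 1.11x. So a design that rewards that hot minority of code is worth far more than one that flatters the average case.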
This is how I see things at least...