Perhaps you could then expand on what exactly the "point" is?
The point had nothing to do with latency or framerate.
I was simply saying that having a large number of FLOPS does not a good CPU for games make.
A good CPU for games should efficiently run game code without placing an undue burden on development, because anything that makes it difficult to produce good code cuts down on the amount of iteration that can take place in a game and indirectly impacts quality. If a processor is esoteric in design, there had better be a big payoff to justify it.
The meta point is that you can't look at parts of a game in isolation: a game isn't a collection of technologies and assets, it's a whole. And you have to understand the development process when you're looking at designing "good" hardware, as much as the pure technology. Increasing the burden on engineering has to have a payoff, or it's simply not worthwhile. You have to be somewhat pragmatic when it comes to development on large teams.
We used to discuss whether using a GC'd language for all of the non-critical code would result in better or worse performance overall. On the face of it, looking at the pure technology side, you would say worse, but you have to understand that on a team of >20 engineers, perhaps 3-6 are experienced hardcore low-level coders, and if you can free them from having to fix the random memory writes and null pointer dereferences that get introduced, they can spend that time optimizing or improving other areas of the game. It could end up being a net performance win.
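To be concrete about the class of bug I mean (a made-up illustration, not code from any shipped game): a dangling-pointer write like the one below compiles cleanly, may appear to work, and then corrupts memory somewhere far from the real cause. A GC'd language rules this whole category out, which is exactly the time I'd rather the hardcore coders spend elsewhere.

    #include <vector>

    struct Entity { int health = 100; };

    int main() {
        std::vector<Entity> entities(4);
        Entity* player = &entities[0];   // cache a raw pointer into the vector's storage

        entities.push_back(Entity{});    // growth can reallocate and move that storage...

        player->health -= 10;            // ...making this a write through a dangling pointer:
                                         // silent corruption, or a crash nowhere near the cause
        return 0;
    }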
When game teams were 1 engineer or 5 engineers, and games were a couple of hundred thousand lines of code, it wasn't an issue.
A lot of game technology today is about data driving and allowing artists/designers to iterate; it's widely accepted that more iterations in those fields produce higher quality. We do a piss-poor job of doing the same for engineers. In the '80s I could assemble a game (all of it) in under 10 seconds; I worked on one game not so long ago that had a worst-case link time of 20 minutes.
I personally like banging bits, and I like playing with new and obscure processor designs; on my personal projects I still look at the disassembly the compiler produces for "critical" code sections, and if I don't like it I rewrite the offending functions in assembler. On large game teams I'm more pragmatic: I'm looking at what produces the biggest win for the game given deadlines and personnel.
If you're designing processors for games, you should look at game code and figure out what needs improvement. I have no proof of this, because I haven't seen the majority of game code written across PS3/X360, but I don't think what most game code is crying out for is increased computation density. Certainly there are aspects of code where computation density is a big win, but I think most of those lend themselves to GPGPU-like computation models.
As a total aside, I'm very old-school when it comes to framerate: to me, if it's not 60Hz it's the wrong set of compromises. However, I've seen a couple of studies recently on the 60/30 debate and how the choice affects review scores and sales. The short version is it doesn't; for the most part, people buying/reviewing games don't care.