My understanding of the 10x rumour was that it was talking about performance, not paper flops. Which makes perfect sense, as comparing only on flops would both undersell the console (from a marketing point of view) and misrepresent it (from an informative point of view, e.g. when telling developers what to target).
We can be certain that whatever goes into these consoles will be vastly more efficient than Xenos/RSX, so it's a fairly safe assumption that a GPU with 2.4 TFLOPS of throughput would not equate to "only" 10x RSX performance.
Ok, I'll play the game. Re: performance. Is your situation texture bound? Fill rate bound? Shader bound? Or are they seeing a 10x improvement on their current code just by dropping it in (and is that in select parts of the code or across the board in every aspect)? Is this a best case scenario, where the biggest architectural change shores up a weak point of Xenos/RSX, or a worst case scenario, where RSX/Xenos were already good and this is the least improved aspect of the new design? Or, by golly, is 10x just a nice round number to express the approximate memory increase, or how many polygons they can throw on screen at any one time?
You don't know. Hence dismissing other possibilities, even unlikely ones, while constantly *insisting* it must be one thing or another, is just mind boggling. You cannot even categorize what "performance" is, as different pressures change it.
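To make that concrete, here is a minimal Amdahl-style sketch in Python, with made-up numbers: treat a frame as time spent limited by each resource, speed one resource up 10x, and watch the "overall" figure swing with the workload mix.

```python
# Minimal Amdahl-style sketch (made-up numbers): model a frame as time
# spent limited by each resource, then speed one resource up 10x.

def frame_speedup(fractions, speedups):
    # fractions: share of frame time bound by each resource (sums to 1)
    # speedups:  per-resource improvement factor on the new hardware
    new_time = sum(f / s for f, s in zip(fractions, speedups))
    return 1.0 / new_time

# Frame A: mostly shader bound; a 10x shader uplift helps a lot.
print(frame_speedup([0.7, 0.2, 0.1], [10, 1, 1]))  # ~2.7x overall

# Frame B: mostly fill-rate bound; the same "10x" GPU barely moves it.
print(frame_speedup([0.1, 0.8, 0.1], [10, 1, 1]))  # ~1.1x overall
```

Same hardware, same "10x", wildly different results depending on where the pressure is.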
I don't think it takes much imagination to picture a design aimed at addressing a specific situation. Take fill rate: the Xbox1 had 6.4GB/s of total system bandwidth, and fill rate (especially with transparencies) was a big issue. The 360 went a long way to address this with the ROPs on eDRAM, so software that was previously fill rate bound no longer is (the peak speedup over the worst case scenario was 40x). So it could be legitimate to have software situations where you were fill rate bound and saw gargantuan performance increases, but to call the whole machine 40x would be inaccurate, unless the person passing along the rumor was specifically looking at such a case.
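For what it's worth, that 40x falls straight out of the bandwidth figures, assuming the commonly cited 256GB/s for the 360's ROP-to-eDRAM link:

```python
# Peak ratio only; a real frame spends time on more than fill.
xbox1_total_bw = 6.4   # GB/s, Xbox1 shared system bandwidth
x360_edram_bw = 256.0  # GB/s, ROP<->eDRAM (commonly cited figure)
print(x360_edram_bw / xbox1_total_bw)  # 40.0
```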
That is why "10x performance" doesn't mean squat without proper context, which the rumor lacks. Even if the rumor is legit, we have no clue how many people the data has been filtered through, how they are deriving their metrics, or what they mean.
If you knew for a fact they were talking very specifically about the end result, i.e. running today's code 10x faster, you would obviously know more than the person passing along the rumor!
I just don't understand why you continue to insist that it must be "10x" (whatever that means) after architectural efficiencies. Pray tell: what GPU is 10x faster than, say, an X1800?
(Hint: are you going to run to some architectural metrics and guess at architectural increases, or are you going to look at gaming benchmarks? Which ones? With what features enabled?)
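And even once you pick benchmarks, the headline multiplier depends on how you aggregate them. A quick sketch, with FPS numbers invented purely for illustration:

```python
from math import prod

# Hypothetical per-game FPS, invented purely to show the aggregation
# problem; these are not real benchmark results.
old = {"GameA": 30, "GameB": 45, "GameC": 12}   # GameC hits a bandwidth wall
new = {"GameA": 95, "GameB": 180, "GameC": 140}

ratios = [new[g] / old[g] for g in old]
geomean = prod(ratios) ** (1 / len(ratios))

print([round(r, 1) for r in ratios])  # [3.2, 4.0, 11.7] per-game spread
print(round(geomean, 1))              # 5.3, the "overall" multiplier
```

Quote the bandwidth-starved title and you can claim "nearly 12x"; take the mean across the suite and it's barely over 5x. Same two cards, two very different headlines.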
And that really gets to the end point: take a product we know, e.g. ATI's X1800, and show me the GPU that is 10x faster.
This should be child's play, after all: you know the starting HW, and you have nearly 8 years of GPU releases from which to find the product that fits the mold.
I will be quite interested to see what you choose that is 10x the performance.