Jawed
Legend
I'm simply agreeing with Arun's interpretation: each cluster contains 3 multiprocessors and 2 quad TMUs. This increases the ALU:TEX ratio, something that NVidia has signalled (weakly, to be honest) will happen.

So it's your opinion that Nvidia did indeed break up their clusters and decouple the TMUs from the ALU blocks? No more "one Quad-TMU per 16xSIMD"?
Overall, I have to say the quality of this 240 SPs rumour is a bit thin - NVidia can blind analysts pretty easily, e.g. G80 can be described as 160 SPs (8 MAD lanes + 2 MI lanes per multiprocessor). If you use that counting as the basis of "240 SPs", then GT200 is a 12-cluster design with 96 TMUs.
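If it helps, here's a quick back-of-the-envelope sketch of that counting (Python, purely illustrative). It assumes a G80-style layout of 2 multiprocessors and 8 filtering TMUs per cluster, plus the inflated 8 MAD + 2 MI lane count - those per-cluster figures are my assumptions, not anything confirmed for GT200:

[code]
# Back-of-the-envelope check of the "240 SPs" counting above.
# Assumes a G80-style layout (2 multiprocessors and 8 filtering TMUs
# per cluster) - these are assumptions, not confirmed GT200 specs.

MAD_LANES_PER_MP = 8    # scalar MAD lanes per multiprocessor
MI_LANES_PER_MP = 2     # MI/SFU lanes per multiprocessor
LANES_PER_MP = MAD_LANES_PER_MP + MI_LANES_PER_MP   # 10 "SPs" per MP
MPS_PER_CLUSTER = 2     # as on G80
TMUS_PER_CLUSTER = 8    # filtering units per cluster, as on G80

# G80 sanity check: 8 clusters -> 16 MPs -> 160 "SPs" under this counting
assert 8 * MPS_PER_CLUSTER * LANES_PER_MP == 160

# Apply the same counting to the 240 SPs rumour
mps = 240 // LANES_PER_MP            # 24 multiprocessors
clusters = mps // MPS_PER_CLUSTER    # 12 clusters
tmus = clusters * TMUS_PER_CLUSTER   # 96 TMUs
print(clusters, tmus)                # -> 12 96
[/code]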
Jawed