While we're throwing out random numbers, I'd say 640 ALUs, 160 TMUs & 64 ROPs. Of course they would then have to get rid of the other silicon dedicated to GPGPU and heavily optimise their ALUs/TMUs. However, as DemoCoder mentioned, this was probably not the best time for them to do it, so I'll still be pinning my hopes on such optimisation in the refresh.
We're basically saying the same thing then; it wouldn't fit the 3+B transistor budget unless they excluded the added computing functionality. But how much of that would they actually remove when, from the get-go, they are building ALUs capable of both SP and DP, and not at a mediocre ratio but at 2:1? While that's just one example, I have a hard time imagining that things like that come for free. Instead of developing two different architectures for two different target markets (which would add quite a bit in resources, and the R&D cost for HPC would hardly ever get amortised), they obviously tried to get the best possible result for "both worlds".
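To make the 2:1 point concrete, here's a back-of-the-envelope peak-FLOP sketch. The ALU count and shader clock below are placeholder assumptions for illustration, not confirmed GF100 specs; the only claim being illustrated is that a 2:1 design delivers DP at exactly half the SP rate.

```python
# Hypothetical peak-throughput arithmetic for an ALU design with a 2:1 SP:DP ratio.
# Unit count and clock are assumptions, not real specs.

def peak_gflops(alus, clock_ghz, flops_per_alu_per_clock):
    """Theoretical peak in GFLOPS: units * clock * FLOPs issued per unit per clock."""
    return alus * clock_ghz * flops_per_alu_per_clock

ALUS = 512        # assumed ALU count
HOT_CLOCK = 1.4   # assumed shader clock in GHz

sp = peak_gflops(ALUS, HOT_CLOCK, 2)  # FMA counts as 2 FLOPs/clock in single precision
dp = peak_gflops(ALUS, HOT_CLOCK, 1)  # half rate in double precision -> 2:1 ratio

print(sp, dp)  # DP lands at exactly half the SP figure
```

The point of the sketch is simply that hitting a 2:1 ratio means every ALU (or every pair of ALU lanes) must carry full DP datapaths, which is exactly the kind of silicon that doesn't come for free.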
If they have managed to reach up to 2×GT200 gaming performance (which remains to be seen), they haven't really missed the typical doubling of performance IHVs target with each generation. If they haven't, then of course the picture changes dramatically; and while we're at it, GT200 wasn't really a new generation now, was it?
Not sure what you are implying there.
I should have been clearer, then: there was a news blurb from Fudo where someone from AMD admitted that you can't change much on short notice. One of the messages many read into it was that NV made deeper architectural changes with GF100 compared to AMD, and the latter couldn't do anything about it.
What I am implying is that by the time NVIDIA realised that AMD was not going to miss its target despite the 40nm problems, and that NVIDIA would inevitably arrive later than AMD, it was simply too late to pull out a performance part before the high-end part, even in theory.
The usual kind? Hard launch.
If it's a hard launch and the gap between the two is in the two-month range, then it's truly not such a dramatic delay. But it has to be a real hard launch, which remains to be seen.
Fermi needs to be better for graphics than the competition not just Nvidia's previous generation.
Although it wasn't in reply to my post, allow me to bounce back to the first paragraph (in keeping with my reasoning in my previous posts): in order to achieve that, do they absolutely have to have twice the number of TMUs or ROPs, as an example, or would it have been wiser to increase the efficiency of the existing ones? Because if it's the latter, they wouldn't need, say, 160 TMUs as you're suggesting while revamping them at the same time.
To avoid misunderstandings, look at RV770's ROPs vs. RV670/600 as an example. There are 16 in all of them, and the "but" I leave for you to fill in.
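The "but" can be sketched with some toy arithmetic: same ROP count, very different effective throughput once per-unit efficiency changes. The clocks and efficiency factors below are made-up placeholders, not real RV670/RV770 figures; they only illustrate how a revamp can roughly double output without adding units.

```python
# Hypothetical illustration: identical ROP count, different per-unit efficiency.
# All numbers are placeholders, not actual chip specs.

def pixel_fillrate_gpix(rops, clock_mhz, efficiency):
    """Effective fill rate in Gpixels/s: units * clock * achieved efficiency."""
    return rops * clock_mhz * efficiency / 1000.0

older = pixel_fillrate_gpix(rops=16, clock_mhz=775, efficiency=0.5)   # units stalling (e.g. on blending)
newer = pixel_fillrate_gpix(rops=16, clock_mhz=750, efficiency=0.95)  # reworked, better-fed units

print(older, newer)  # same 16 ROPs, roughly double the effective throughput
```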
NVIDIA has to fill in our blanks and question marks as to what exactly they've done in terms of 3D. Frankly, when I see an IHV going to this much trouble to revamp (let me call it) every computing transistor, it sounds pretty idiotic that each and every 3D-related transistor would have remained unchanged. I'm not saying or implying it's one way or the other; for the time being that space is left blank in my mind, that's all.