trinibwoy: It's certainly not what he started out by saying, but I think he made it pretty clear that he changed his position once he understood the distinctions better, so you can hardly blame him for that, I think.
I think everyone decided to pick on you specifically because you made the most clear-cut statement on the subject - and apparently that was mostly due to a misunderstanding between gross and operating margins. Now that you've retreated to a much more defensible position, you're still the one being picked on by everyone who disagrees with it, even though quite a few others would say the same thing.
So don't take it personally - at this point I don't think most people want to attack you personally, it's just a matter of convenience to target you. I realise that's maybe not the most polite option, but this is the Internet, so I think it's about par for the course heh
Anyway: of course, if you tried to dissociate the incremental R&D of the various products, it is possible that some have not had/will not have sufficient gross profits to pay for their R&D. However, IMO, given the likely gross margins throughout the product line, the probability that this is true for any current NVIDIA product is quite low. Keep in mind that the incremental R&D for, say, GF106 is small compared to the total R&D required for GF1xx in general - a lot of it is shared.
Now, if you wanted to 'share' the total architecture R&D between the various chips, you'd very slightly increase the probability that one product wasn't worth its R&D. But that's also a completely useless metric, because it implies there was an option in the first place not to work on a new architecture at all. Of course that was not an option, so only the incremental R&D for any given chip should be considered when estimating its return on investment. And even that is very hard to estimate, even with all the numbers (which we don't have), and quite subjective.
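To make the incremental-vs-allocated distinction concrete, here's a toy sketch of the arithmetic. Every figure in it is invented purely for illustration (none of these are real NVIDIA numbers, and the chip costs and profits are hypothetical placeholders):

```python
# Toy illustration of incremental vs. allocated R&D ROI.
# All dollar figures are made up for the sake of the example.

def roi(gross_profit, rd_cost):
    """Return on R&D investment as a simple profit/cost ratio."""
    return gross_profit / rd_cost

shared_arch_rd = 400.0  # shared GF1xx architecture R&D, $M (hypothetical)

chips = {
    # chip: (incremental R&D $M, lifetime gross profit $M) -- hypothetical
    "GF100": (150.0, 900.0),
    "GF104": (60.0, 500.0),
    "GF106": (40.0, 180.0),
}

for name, (inc_rd, profit) in chips.items():
    # Incremental view: only the chip-specific R&D counts, since the
    # shared architecture work would have happened regardless.
    incremental_roi = roi(profit, inc_rd)
    # Allocated view: spread the shared architecture cost evenly across
    # the chips, which can make a small chip look unprofitable even
    # though skipping it would not have saved the shared cost.
    allocated_rd = inc_rd + shared_arch_rd / len(chips)
    allocated_roi = roi(profit, allocated_rd)
    print(f"{name}: incremental ROI {incremental_roi:.2f}, "
          f"allocated ROI {allocated_roi:.2f}")
```

With these made-up numbers, the small chip clears its incremental R&D comfortably but looks marginal once shared costs are smeared across it, which is exactly why the allocated view is misleading for a go/no-go decision on any single chip.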