Jerry Cornelius
Newcomer
Chalnoth said: No, but nVidia and ATI have such teams, and have not gone for deferred rendering. nVidia even owns some deferred rendering IP. There's a reason for this.
Yeah, they don't want to be associated with a company that's never produced a high-end offering in the PC market, or throw away all their years of investment in IMR, or risk infringing on a zillion patents (which sucks).
Chalnoth said: If you have an idea of how to get around the massive performance drop that would be incurred from a buffer overflow, please, post it. Otherwise...
We don't know it's massive. Any application that stresses a TBDR this much will likely be stressing any card you can put in your PC. All that's needed is appropriate performance in the worst case. I don't know what the bandwidth requirements are for scene and list data once they've been transformed, but you could probably get away with another bank of low-cost memory in a high-end card to really have it covered. There are going to be limits to what the system can throw at the card.
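For what it's worth, here's a rough C sketch of the kind of graceful fallback I have in mind: when the scene/parameter buffer fills up mid-frame, render the tiles with whatever geometry has been binned so far, spill the per-tile depth and color out to memory, reset the buffer, and keep binning. Every name and number below is made up for illustration; this isn't any real driver's API, just the general shape of a partial-render fallback.

```c
/* Illustrative sketch of handling parameter-buffer overflow in a TBDR
 * by doing a partial render instead of failing. All names, sizes, and
 * costs are hypothetical placeholders, not any shipping hardware. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct { float v[9]; } Triangle;   /* placeholder vertex data */

typedef struct {
    size_t used;       /* bytes of scene/list data binned so far */
    size_t capacity;   /* size of the on-card parameter buffer   */
} ParamBuffer;

/* Stubs standing in for what the hardware/driver would actually do. */
static bool bin_triangle(ParamBuffer *pb, const Triangle *t) {
    (void)t;
    size_t cost = 64;                      /* assumed bytes per binned triangle */
    if (pb->used + cost > pb->capacity)
        return false;                      /* buffer full: signal overflow */
    pb->used += cost;
    return true;
}
static void render_binned_tiles(ParamBuffer *pb) {
    printf("rendering %zu bytes of binned scene data\n", pb->used);
}
static void spill_tile_state_to_memory(void) {
    puts("spilling per-tile depth/color to memory");
}
static void reset_param_buffer(ParamBuffer *pb) { pb->used = 0; }

static void submit_scene(ParamBuffer *pb, const Triangle *tris, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (!bin_triangle(pb, &tris[i])) {
            /* Overflow: flush a partial render rather than dying.
             * Costs extra bandwidth (tile state round-trips through
             * memory), but degrades gradually instead of cliffing. */
            render_binned_tiles(pb);
            spill_tile_state_to_memory();
            reset_param_buffer(pb);
            bin_triangle(pb, &tris[i]);    /* re-bin the overflowing triangle */
        }
    }
    render_binned_tiles(pb);               /* final pass over the last chunk */
}

int main(void) {
    Triangle tris[100] = {0};
    ParamBuffer pb = { .used = 0, .capacity = 2048 };  /* tiny buffer to force overflow */
    submit_scene(&pb, tris, 100);
    return 0;
}
```

The point is what the worst case looks like: the overflow path spends extra bandwidth shuffling tile state through memory, which is a gradual slowdown, not the massive drop being assumed. And a bigger parameter buffer (that extra bank of cheap memory) just pushes the overflow point further out.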