This thought is basically rooted in the project I'm working on right now. Since virtual worlds in computer games are getting more complex every day, runtime visibility determination is becoming more important as well. There are many software-based algorithms that could easily be accelerated by the right hardware (hierarchical occlusion maps, hierarchical z-buffering, frustum culling, etc.). I'm wondering: are people at the graphics giants thinking about implementing some of these in next-gen hardware?

I don't know much about VLSI design, but the math behind the algorithms listed above isn't that complex. The only major change I can think of is that such hardware would need an input buffer holding the raw data sent from the application, and would forward whatever survives the visibility test to the rendering pipeline. The hardware might also take over space partitioning and bounding volume generation if those are easy to implement, but that isn't strictly necessary, since you can do such things in a preprocessing stage.
Now someone please tell me, is this a plausible approach?