From DemoCoder in another thread:
2) ability to handle geometry amplification issues on the chip, making TruForm (n-patches), HOS, displacement mapping, etc. more useful. Most new games are going to want to do shadow volumes, and all current geometry amplification schemes hurt collision detection and shadow generation. This "per-primitive processor" on the NV30 sounds like it could do the trick by enabling you to create and destroy vertices and send them to the vertex shader. Being able to query for collisions would be good too.
I'm quite puzzled about how HW geometry generation will relate to simulation in the future.
For collision detection, one can work with general bounding volumes and hierarchical bounding volumes first; when there's a possibility of collision, the mesh fragment can be run through the HW "primitive processor" and VS, which will be doing the HOS tessellation and displacement, and the vertices can be read back for collision detection / physics calculations on the host CPU.
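A minimal sketch of that two-stage idea: walk a bounding-volume hierarchy cheaply on the CPU, and only the leaf fragments whose boxes overlap the query would be sent through the HW tessellator and read back for exact tests. All names here are hypothetical illustration, not any real engine's API.

```python
class AABB:
    """Axis-aligned bounding box given by its min and max corners."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # (x, y, z) tuples

    def overlaps(self, other):
        # Boxes overlap iff their intervals overlap on every axis.
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

class BVHNode:
    def __init__(self, box, children=(), mesh_fragment=None):
        self.box = box                      # bounding box of this subtree
        self.children = children            # inner nodes
        self.mesh_fragment = mesh_fragment  # leaf: coarse patch to tessellate

def collect_candidates(node, query_box, out):
    """Broad phase: gather only the leaf fragments whose bounds overlap
    the query; these are the candidates for exact (tessellated) testing."""
    if not node.box.overlaps(query_box):
        return
    if node.mesh_fragment is not None:
        out.append(node.mesh_fragment)
    for child in node.children:
        collect_candidates(child, query_box, out)
```

With two well-separated leaves, a query box near the first one yields only that fragment as a candidate, so only it would need the expensive tessellate-and-read-back step.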
That could work because in most situations exact collision calculations can be kept to a minimum with various caching schemes etc. And for HOS, exact collision detection can be performed on the CPU even without tessellation, although the algorithms are quite complicated.
But for shadows... precise calculation would mean that basically all the geometry that casts shadows needs to be processed twice on the HW.
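The double processing is easiest to see with stencil shadow volumes: after the mesh has already been transformed once for display, the silhouette (edges shared by a light-facing and a light-averted triangle) has to be found from the same geometry again. A small sketch of the standard silhouette-edge search, purely illustrative:

```python
def facing(tri, light):
    """True if the triangle's face normal points toward the light position."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    # Cross product gives the face normal.
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    lx, ly, lz = light[0] - ax, light[1] - ay, light[2] - az
    return nx * lx + ny * ly + nz * lz > 0

def silhouette_edges(vertices, triangles, light):
    """Edges on the boundary between light-facing and light-averted faces;
    these are the edges extruded away from the light to form the volume."""
    edge_count = {}
    for tri in triangles:
        if not facing([vertices[i] for i in tri], light):
            continue
        for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = tuple(sorted(e))
            edge_count[key] = edge_count.get(key, 0) + 1
    # An edge used by exactly one front-facing triangle lies on the
    # silhouette (its neighbour is back-facing or missing).
    return [e for e, n in edge_count.items() if n == 1]
```

Every vertex referenced here is a post-tessellation vertex, which is exactly why geometry amplified on the chip would have to come back (or be processed a second time) before the volume can be built.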
In the real world, most likely nobody will take displacement maps into account for shadow generation, but HOSs will still need to be addressed.
In any case, I'm often wondering whether the whole T&L thing on graphics chips makes much sense... post-transform vertices are often useful for more than just displaying on the screen, and reading them back to the host CPU and then retransforming them again doesn't sound like a very efficient solution. Wouldn't a T&L coprocessor make much more sense?
I know this was argued over when HW T&L first appeared... but now the whole issue is getting even more complicated because of the "programmable primitive processing" on the horizon. (BTW, I still haven't found any info on how this PPP will fit into the OGL2 specification.)
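To make the amplification problem from DemoCoder's point concrete: even the simplest one-level subdivision (the idea behind TruForm/n-patches, minus the curved-surface fit) turns every triangle into four. If that happens on the chip, the CPU never sees the new vertices, so collision and shadow code end up working from a mesh that no longer matches what is drawn. Purely illustrative code:

```python
def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangle):
    """Split one triangle into four by inserting the three edge midpoints."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def amplify(triangles, levels):
    """Triangle count grows by 4x per level, so two levels already
    produce 16 triangles per input triangle."""
    for _ in range(levels):
        triangles = [t for tri in triangles for t in subdivide(tri)]
    return triangles
```

A real n-patch/HOS scheme also moves the new vertices off the base plane, which is what makes the divergence from the CPU-side collision mesh worse than this flat example suggests.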