BTW, does the R300 really do displacement mapping 100% in hardware, or is this a case where the feature is really 90% done by the drivers, with a small amount of setup by the HW? I'm having trouble visualizing the R300 creating actual vertices on-chip. It's certainly possible, but it seems like a big step to put the entire tessellation engine 100% into HW. I can more easily imagine the driver massaging the mesh and inserting placeholder data into it (i.e. doing the actual tessellation), and then having the HW operate on those vertices to move them to the correct positions with the correct normals, et al.
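To make the split-of-labor idea concrete, here's a minimal sketch of that hypothetical driver-assisted scheme. All names and the structure are illustrative assumptions, not ATI's actual pipeline: the "driver" step pre-tessellates a triangle into placeholder vertices, and the "hardware" step then displaces each one along a normal by a sampled height value.

```python
def tessellate(v0, v1, v2, level):
    """'Driver' side: subdivide a triangle into (level+1)(level+2)/2
    barycentric sample points -- the placeholder vertices."""
    verts = []
    for i in range(level + 1):
        for j in range(level + 1 - i):
            k = level - i - j
            u, v, w = i / level, j / level, k / level
            p = tuple(u * v0[c] + v * v1[c] + w * v2[c] for c in range(3))
            verts.append(p)
    return verts

def displace(verts, normal, height_fn, scale=1.0):
    """'Hardware' side: move each placeholder vertex along the (here
    constant) face normal by a height sampled at its x,y position."""
    out = []
    for (x, y, z) in verts:
        h = height_fn(x, y) * scale
        out.append((x + normal[0] * h,
                    y + normal[1] * h,
                    z + normal[2] * h))
    return out

# Flat triangle in the z=0 plane, displaced along +z by a toy height field.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
placeholders = tessellate(*tri, level=4)
final = displace(placeholders, (0.0, 0.0, 1.0), lambda x, y: x + y)
print(len(placeholders))  # → 15 placeholder vertices
```

The point of the split: the tessellation loop is pure bookkeeping (easy in a driver), while the displace step is a uniform per-vertex operation, exactly the kind of thing vertex hardware is good at.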
Any ATI engineer care to comment on the actual process? I see it kind of like how MPEG decoding evolved. First motion compensation was hardware assisted. Then iDCT was added. Others added pull-down and scaling features. Gradually, over time, more and more of the software codec was replaced by hardware acceleration, but it didn't start out like that.