AFAIK, no modern GPU is capable of handling NURBS in hardware (even though OpenGL has supported them for ages) - they are tessellated into triangles in software at the driver level and only then passed on to the GPU.
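For reference, the "for ages" support is the GLU NURBS interface, and that's exactly where the software tessellation happens: GLU evaluates the surface on the CPU and hands the GPU plain triangles. A rough sketch (placeholder knots/control points, error handling omitted):

```c
/* Minimal GLU NURBS surface setup. GLU tessellates this on the CPU
 * and feeds the GPU ordinary triangles; values are illustrative. */
#include <GL/glu.h>

static GLfloat knots[8] = { 0, 0, 0, 0, 1, 1, 1, 1 }; /* cubic, one patch */
static GLfloat ctl[4][4][3];                          /* 4x4 net, filled elsewhere */

void drawNurbsSurface(void)
{
    GLUnurbs *nurb = gluNewNurbsRenderer();

    /* screen-space tessellation tolerance, in pixels */
    gluNurbsProperty(nurb, GLU_SAMPLING_TOLERANCE, 25.0f);
    gluNurbsProperty(nurb, GLU_DISPLAY_MODE, GLU_FILL);

    gluBeginSurface(nurb);
    gluNurbsSurface(nurb,
                    8, knots,        /* u knots */
                    8, knots,        /* v knots */
                    4 * 3, 3,        /* u/v strides in floats */
                    &ctl[0][0][0],
                    4, 4,            /* u/v order (cubic = 4) */
                    GL_MAP2_VERTEX_3);
    gluEndSurface(nurb);

    gluDeleteNurbsRenderer(nurb);
}
```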
The NVTeaPot demo (which I had on my hard drive for a while but deleted in a fit of tidying-up-itis) ran faster with the hardware NURBS option on my GF3 than in software mode. So some kind of support must exist.
It's possible that NVidia converts the NURBS into Beziers and uses their hardware to tessellate the result.
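If that's what the driver does, the non-rational, uniform-knot case at least is cheap, since each cubic B-spline span is just a fixed change of basis away from a cubic Bezier. A sketch of that special case (a real NURBS path would also need knot insertion and rational weights, neither of which is handled here):

```c
/* Convert one uniform cubic B-spline span (P0..P3) into the control
 * points of the equivalent cubic Bezier (B0..B3). Non-rational,
 * uniform-knot case only. */
typedef struct { float x, y, z; } Vec3;

static Vec3 mix3(Vec3 a, float wa, Vec3 b, float wb, Vec3 c, float wc)
{
    Vec3 r = { wa*a.x + wb*b.x + wc*c.x,
               wa*a.y + wb*b.y + wc*c.y,
               wa*a.z + wb*b.z + wc*c.z };
    return r;
}

void bsplineSpanToBezier(const Vec3 P[4], Vec3 B[4])
{
    B[0] = mix3(P[0], 1.0f/6, P[1], 4.0f/6, P[2], 1.0f/6);
    B[1] = mix3(P[1], 2.0f/3, P[2], 1.0f/3, P[2], 0.0f); /* third term unused */
    B[2] = mix3(P[1], 1.0f/3, P[2], 2.0f/3, P[2], 0.0f); /* third term unused */
    B[3] = mix3(P[1], 1.0f/6, P[2], 4.0f/6, P[3], 1.0f/6);
}
```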
Given the amount of CPU setup required to draw a Bezier on NV2x, I wouldn't expect it to be much faster than doing the tessellation with the CPU unless the curve density was low and the polygon density high.
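For a sense of what that setup looks like, here's a bicubic Bezier patch through the standard evaluator path; whether the driver maps this onto patch hardware or falls back to the CPU is its business, but the map/grid calls are per-patch CPU work either way (values are illustrative):

```c
/* Draw one bicubic Bezier patch via OpenGL evaluators. Everything
 * before glEvalMesh2 is per-patch setup on the CPU. */
#include <GL/gl.h>

void drawBezierPatch(const GLfloat ctl[4][4][3], int subdiv)
{
    glMap2f(GL_MAP2_VERTEX_3,
            0.0f, 1.0f, 3, 4,    /* u range, stride, order */
            0.0f, 1.0f, 12, 4,   /* v range, stride, order */
            &ctl[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);

    glMapGrid2f(subdiv, 0.0f, 1.0f, subdiv, 0.0f, 1.0f);
    glEvalMesh2(GL_FILL, 0, subdiv, 0, subdiv);
}
```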
FWIW this matches what the hardware can do.
But it requires significant CPU setup to get the data into a format suitable for hardware acceleration. Unless the number of rendered tris is much larger than the patch count, I really can't see it being much of a win performance-wise.
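To put a rough number on the amortization argument: if per-patch setup costs S and hardware evaluation saves s per triangle over CPU tessellation, a patch only wins once it produces more than S/s triangles. A toy calculation with entirely made-up costs:

```c
/* Break-even triangle count per patch: per-patch setup cost amortized
 * against a per-triangle saving. Both numbers are hypothetical. */
#include <stdio.h>

int main(void)
{
    double setup_per_patch = 500.0; /* hypothetical cycles of CPU setup */
    double saving_per_tri  = 2.0;   /* hypothetical cycles saved per tri */

    /* Patch only pays for itself past this many triangles. */
    printf("break-even: %.0f tris/patch\n", setup_per_patch / saving_per_tri);
    return 0;
}
```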