NURBS on current top end generation video cards

K.I.L.E.R

I've been doing a lot of reading these past 2 days on curves and surfaces, and I noticed a lot of cloth and physics simulation demos use some form of surface subdivision on the models themselves in order to simulate effects such as wind. These surfaces were generated with B-Splines, or NURBS in some cases.
I remember NURBS being a popular topic back in the Quake 3 days, when the statues in the "Oh-so popular" demo map used B-Splines the artist had generated on the statue.

Since then, video cards have become hugely more powerful, and with that, more powerful mathematical models are now being used in real time to do things that were previously unthinkable.
It's been around 5-6 years since that time.

With the introduction of vertex shaders on the GeForce 3 and the current power of hardware, why aren't game developers using curves and surfaces such as NURBS more often?
Why aren't video cards touting them as a feature?
Is it possible to use NURBS in an AAA title and still maintain decent performance?
 
Because until now, hardware has not been able to create geometry, with a few hard-wired exceptions (NV20 RT-patches, R200 and Parhelia N-patches).
Geometry shaders in D3D10 will change that.
 
Curved surfaces are a problem to work with. Polygons are nice and simple, and to get a more exact representation it's easy to just throw more polygons at the problem. Curved surfaces aren't like that.

I was a long-time fan of bicubic patches, until someone pointed out to me that to get decent surface continuity you end up using so many control points that you might as well have used polygons in the first place. NURBS aren't as bad in that respect (if anything they have the opposite problem, where you have to add extra control points to add discontinuities to the surface), but it's still non-trivial to exactly model a surface with them.
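
For anyone who hasn't pushed control points around: a minimal sketch (in C++, with an assumed Vec3 type and helpers, nothing engine-specific) of evaluating a single bicubic Bézier patch, just to make the data cost concrete. One patch already needs a 4x4 grid of 16 control points, and covering a model with continuity between patches multiplies that quickly.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Cubic Bernstein basis: blending weights for the four control points at t.
std::array<float, 4> bernstein3(float t)
{
    float s = 1.0f - t;
    return { s * s * s, 3.0f * s * s * t, 3.0f * s * t * t, t * t * t };
}

// Evaluate a bicubic Bézier patch (a 4x4 grid of 16 control points) at (u, v).
Vec3 evalBicubicPatch(const Vec3 cp[4][4], float u, float v)
{
    std::array<float, 4> bu = bernstein3(u);
    std::array<float, 4> bv = bernstein3(v);
    Vec3 p = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p = p + (bu[i] * bv[j]) * cp[i][j];
    return p;
}
```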

To look at it another way, if you need to represent an arbitrary surface, you actually need about the same amount of source data in both systems in the limit (that's pretty much implied by entropy).

For some things they may turn out to be a win, but I suspect that the long term future of graphics is largely polygonal. It's a guess, though.
 
K.I.L.E.R said:
.... These surfaces were generated with B-Splines, or NURBS in some cases....
I know I'm being a bit pedantic, but NURBS are B-Splines. The acronym stands for Non-Uniform Rational B-Splines.
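
For reference, a minimal sketch of what the non-uniform and rational parts mean in code: the basis functions are evaluated over an arbitrary knot vector (the Cox–de Boor recursion) and each control point carries a weight, so the curve point is a weighted average of the control points. The Vec3 type and function names here are only illustrative, not from any particular library.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Cox–de Boor recursion: value of the i-th degree-p B-spline basis function
// at parameter u over a (possibly non-uniform) knot vector.  The knot vector
// needs at least i + p + 2 entries; u is assumed to lie strictly inside the
// knot range (the half-open convention below returns 0 at the very last knot).
float basis(int i, int p, float u, const std::vector<float>& knots)
{
    if (p == 0)
        return (u >= knots[i] && u < knots[i + 1]) ? 1.0f : 0.0f;

    float result = 0.0f;
    float d1 = knots[i + p] - knots[i];
    float d2 = knots[i + p + 1] - knots[i + 1];
    if (d1 > 0.0f) result += (u - knots[i]) / d1 * basis(i, p - 1, u, knots);
    if (d2 > 0.0f) result += (knots[i + p + 1] - u) / d2 * basis(i + 1, p - 1, u, knots);
    return result;
}

// The "rational" part: every control point has a weight, and the curve point
// is the weighted average of the control points.
Vec3 evalNurbsCurve(const std::vector<Vec3>& cp, const std::vector<float>& weights,
                    const std::vector<float>& knots, int degree, float u)
{
    Vec3 num = {0.0f, 0.0f, 0.0f};
    float den = 0.0f;
    for (int i = 0; i < static_cast<int>(cp.size()); ++i) {
        float nw = basis(i, degree, u, knots) * weights[i];
        num.x += nw * cp[i].x;
        num.y += nw * cp[i].y;
        num.z += nw * cp[i].z;
        den += nw;
    }
    return {num.x / den, num.y / den, num.z / den};
}
```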

...why aren't game developers using curves and surfaces such as NURBS more often?
One possible reason is that NURBS can be tricky to model with for some surfaces. In a sense, they've been superseded by subdivision surfaces, which are perhaps easier to use when modelling objects with tricky topology.

Also, from what I recall, NURBS would typically be converted into a simpler representation (e.g. Beziers) before rendering. That might be a better starting point for the games engine.
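
To make that conversion concrete, the classic change of basis for one segment of a uniform cubic B-spline into Bézier form looks like the sketch below. The general non-uniform/rational case is normally handled by knot insertion instead, and the Vec3 helpers are assumed.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Change of basis for ONE segment of a uniform cubic B-spline (control points
// p0..p3) into the equivalent cubic Bézier control points.  Both forms
// describe exactly the same cubic piece, just with different handles.
std::array<Vec3, 4> bsplineSegmentToBezier(const Vec3& p0, const Vec3& p1,
                                           const Vec3& p2, const Vec3& p3)
{
    return {{
        (1.0f / 6.0f) * (p0 + 4.0f * p1 + p2),   // b0: segment start point
        (1.0f / 3.0f) * (2.0f * p1 + p2),        // b1
        (1.0f / 3.0f) * (p1 + 2.0f * p2),        // b2
        (1.0f / 6.0f) * (p1 + 4.0f * p2 + p3)    // b3: segment end point
    }};
}
```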
 
IMHO, I would use NURBS to model the general shape of an object, then use relief mapping (specifically the cone step mapping variant) to fill in the rest. I don't think that modelling the general shape with NURBS, then the details with heightmaps, would be too difficult. It would look great in the game (you would never see ugly polygon edges again), should take minimally more work to create, and you could use the basic shapes for physics.
 
I agree with Dio. Especially on continuity - that can be a real bugger to get right :LOL:

Stencil shadows killed hardware HOS. If you make decent use of HOS on a mesh that also has an attached (CPU-generated) shadow volume, then the silhouette is likely to be wrong. Given that the control mesh for HOS is likely to be relatively low-resolution, this shows up quite badly.
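
To make the mismatch concrete, silhouette extraction for a stencil shadow volume is typically done per edge on one specific triangle mesh, roughly like the sketch below (assuming a closed mesh where every edge is shared by exactly two faces). A volume built this way from the low-resolution control mesh can never line up with the tessellated surface the hardware actually draws.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

Vec3  sub(const Vec3& a, const Vec3& b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  cross(const Vec3& a, const Vec3& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Face { int a, b, c; };               // triangle, counter-clockwise winding
struct Edge { int v0, v1, face0, face1; };  // edge shared by exactly two faces

// Does this face point towards the light?
bool facesLight(const Face& f, const std::vector<Vec3>& verts, const Vec3& lightPos)
{
    Vec3 n = cross(sub(verts[f.b], verts[f.a]), sub(verts[f.c], verts[f.a]));
    return dot(n, sub(lightPos, verts[f.a])) > 0.0f;
}

// An edge is on the shadow-volume silhouette when exactly one of its two
// adjacent faces points towards the light.  Everything here is tied to one
// specific triangle mesh -- which is why a CPU volume built from the low-res
// control mesh won't match the HOS-tessellated surface.
std::vector<Edge> findSilhouetteEdges(const std::vector<Edge>& edges,
                                      const std::vector<Face>& faces,
                                      const std::vector<Vec3>& verts,
                                      const Vec3& lightPos)
{
    std::vector<Edge> out;
    for (const Edge& e : edges)
        if (facesLight(faces[e.face0], verts, lightPos) !=
            facesLight(faces[e.face1], verts, lightPos))
            out.push_back(e);
    return out;
}
```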

Call it another graphics fashion, but Stencil Shadows > HOS. ATI and NV realised no one was really making much use of the technology so they stopped pushing it, and it just seems to have disappeared these days...

As for the GS being a programmable tessellator - yes and no. From a technical standpoint it can be used to implement HOS, but in practice the noises I'm hearing indicate that won't be such a good idea. General performance and implementation issues mean that emitting large numbers of triangles from relatively complex equations doesn't appear to be a realistic target for first-gen D3D10 hardware.

I've not tried it yet, but even with adjacency information in the GS I'd imagine there are a few possibilities for getting nasty continuity artifacts.

Cheers,
Jack
 
A 1-ring is not enough for decent tessellation, but multiple passes can probably help here...
 
As far as I know, it's not actually possible to render directly from a NURBS surface; you need to export a mesh of some sort.
Some non-realtime raytrace renderers export a whole surface before rendering and allow the tessellation system to be tweaked for best effect, but very high-poly tessellation hurts.
Others generate sub-pixel polys on the fly for only the section being worked on, so there is never a visible poly.
The latter would be the best if we could get graphics hardware to do it; the tessellator on Xenos seems like the kind of thing needed to get at least part way there.
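
Roughly speaking, the "export a mesh" step amounts to something like the uniform tessellation sketch below, where the grid resolution is the knob that gets tweaked; push it far enough and you approach the sub-pixel polygons mentioned above. The evaluator f is just a stand-in for whatever patch evaluation code you have.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// Uniformly tessellate a parametric surface f(u, v), (u, v) in [0,1]^2, into
// an indexed triangle mesh.  'resolution' is the tweakable knob: higher values
// mean smaller triangles and many more vertices.
void tessellate(const std::function<Vec3(float, float)>& f, int resolution,
                std::vector<Vec3>& vertices, std::vector<uint32_t>& indices)
{
    // One vertex per grid point.
    for (int j = 0; j <= resolution; ++j)
        for (int i = 0; i <= resolution; ++i)
            vertices.push_back(f(float(i) / resolution, float(j) / resolution));

    // Two triangles per grid cell.
    const int stride = resolution + 1;
    for (int j = 0; j < resolution; ++j) {
        for (int i = 0; i < resolution; ++i) {
            uint32_t a = j * stride + i;
            uint32_t b = a + 1;
            uint32_t c = a + stride;
            uint32_t d = c + 1;
            indices.insert(indices.end(), {a, c, b});
            indices.insert(indices.end(), {b, c, d});
        }
    }
}
```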
 
You can use R2VB to create geometry. In fact, one of the samples in the latest ATI SDK does N-patches using R2VB. Geometry shaders will be more flexible, but you can already do many forms of tessellation today.
 
DudeMiester said:
IMHO, I would use NURBS to model the general shape of an object, then use relief mapping (specifically the cone step mapping variant) to fill in the rest. I don't think that modelling the general shape with NURBS, then the details with heightmaps, would be too difficult. It would look great in the game (you would never see ugly polygon edges again), should take minimally more work to create, and you could use the basic shapes for physics.

There are some very serious practical problems with NURBS, as it's a geometry format more suited to the design and manufacturing industries. Building organic models, characters and such is very problematic, and for these reasons NURBS have been gradually phased out of the movie VFX industry over the past 5 years in favour of subdivision surfaces.

- NURBS patch topology is strictly rectangular; you can't have triangles or 5-6 sided shapes. Thus modelling with NURBS patches requires more control points and results in problematic surfaces at branching geometry like a Y-shape or the fingers of a hand.
Also, the patches cannot be 'welded' together into a single surface, only 'stitched' together. Animating a surface through deformation (skeletal skinning) can rip the patches apart and create tiny cracks and holes in the model. Also, the stitching methods require a lot of processing time, which is not affordable in a realtime environment.

- NURBS has a built-in UV parametrization for texture mapping that follows the control points of the patch. It cannot be changed in any way, and if your patch is not a nice rectangular surface then the mapping will become distorted.
Also, each individual patch has its own 0-1 UV space, so it requires its own texture or textures (color, bump, specular etc.). A typical character may require hundreds of patches and thus several hundred textures, which would kill efficiency on every piece of hardware.


Other kinds of HOS might be more practical, though. As far as I know, MotoGP uses some custom stuff for the bikes and even the riders (but note that they have a spherical helmet instead of facial features). In general, the preferred HOS would be subdivision surfaces, as there's already a lot of experience and tools in the CG industry.
 
Laa-Yosh said:
- NURBS has a built-in UV parametrization for texture mapping that follows the control points of the patch. It cannot be changed in any way, and if your patch is not a nice rectangular surface then the mapping will become distorted.
While the rest of the post is good, I don't see why you are forced to use the surface parameterisation for the texture coordinates. You could add extra dimensions to your control points or, alternatively, use a solid texture mapping scheme.
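
Something like the sketch below for the extra-dimensions idea, with a Bézier basis used for brevity (a rational basis would just add the usual division by the summed weights); the ControlPoint layout and function names are only illustrative. Because (s, t) are blended by the same basis functions as the positions, the mapping no longer has to follow the patch's own (u, v) parametrisation.

```cpp
#include <array>

// A control point carrying position plus an independently authored texture
// coordinate.
struct ControlPoint {
    float x, y, z;   // position
    float s, t;      // texture coordinate, assigned per control point
};

// Cubic Bernstein weights (Bézier basis used for brevity).
std::array<float, 4> bernstein3(float t)
{
    float r = 1.0f - t;
    return { r * r * r, 3.0f * r * r * t, 3.0f * r * t * t, t * t * t };
}

// Evaluate position and texture coordinate of a bicubic patch at (u, v); the
// texture coordinate is interpolated by the same weights as the position.
ControlPoint evalPatch(const ControlPoint cp[4][4], float u, float v)
{
    std::array<float, 4> bu = bernstein3(u);
    std::array<float, 4> bv = bernstein3(v);
    ControlPoint out = {0, 0, 0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            float w = bu[i] * bv[j];
            out.x += w * cp[i][j].x;  out.y += w * cp[i][j].y;  out.z += w * cp[i][j].z;
            out.s += w * cp[i][j].s;  out.t += w * cp[i][j].t;
        }
    }
    return out;
}
```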
 