Bezier Curves Instead Of Polygons

I have a soft spot for games that had high expectations but didn't quite (or just didn't) live up to them, for example Trespasser from 1998 and Enter the Matrix. Somehow I can overlook flaws in a game because I understand or empathize with its creators' intentions. Enter the Matrix is one such enigma that I just had to know more about. So I set out searching for information on its underlying framework and, disappointingly, found almost nothing. I did find one tiny little thing, however. On Enter the Matrix's website, under the "Behind the Scenes" tab, was an article on the game by the lead programmer, Michael "Saxs" Persson. It gave me one single clue as to what's different (or what I assume is different) in EtM compared with other games; Shiny engines usually have some interesting quirk.

It was very important that we get a realistic Niobe or Ghost on screen, characters that the actors could be proud to compare themselves to. We feel we achieved that goal. The key to the realistic models was the decision to use patches instead of static polygon models. It enabled us to generate realistic models with 25,000+ triangles for the close-up scenes, while during fighting we can downgrade them to a more manageable 3000-10,000

On Wikipedia I learned that "patches" is another word for Bézier surfaces. Obviously this spawned more questions in my psyche than answers. How come I've never heard of this approach before? Since I haven't, are there some unwelcome downsides to using patches? As it sounds like a form of tessellation, does it work in real time, or is it a set of parameters predefined between scenes? I'm inquisitive by nature, and most other questions I have surrounding this particular game and its engine are more general. Would anyone here care to explain to a curious bystander and enthusiast how Bézier patches work?
 
I think I saw something with regard to Direct3D 11 and its tessellation function; they described something like that as well, but I don't recall enough to be sure I'm right on this.
 
If you search for "subdivision surfaces" in the console fora, you'll get a list of existing threads dealing with higher-order surfaces. These probably tell you what you want to know. Basically HOS (mathematical curves instead of polygons, tessellated down to triangles for the GPU to draw) have issues that prevent them from being commonplace. There are lots of papers and examples of using HOS, but they just don't make it into games at the moment. It's quite interesting to see optimistic discussion from a few years back about how the new consoles would be able to handle HOS, only to find: nope, same old triangles as ever!
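To make "tessellated down to triangles" concrete, here is a minimal sketch (my own illustration, not from any engine discussed here) of evaluating a bicubic Bézier patch on a uniform grid and connecting the samples into triangles:

```python
from math import comb

def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t)."""
    return comb(3, i) * (t ** i) * ((1 - t) ** (3 - i))

def eval_patch(cp, u, v):
    """Evaluate a 4x4 control-point grid cp at parameters (u, v)."""
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bernstein3(i, u) * bernstein3(j, v)
            px, py, pz = cp[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return (x, y, z)

def tessellate(cp, n):
    """Sample an n x n grid of surface points and join them into triangles."""
    verts = [eval_patch(cp, i / (n - 1), j / (n - 1))
             for i in range(n) for j in range(n)]
    tris = []
    for i in range(n - 1):
        for j in range(n - 1):
            a = i * n + j       # indices of one grid cell's corners
            b = a + 1
            c = a + n
            d = c + 1
            tris.append((a, b, c))  # each cell becomes two triangles
            tris.append((b, d, c))
    return verts, tris
```

Raising `n` is exactly the dynamic-LOD knob the quote describes: the same 16 control points can yield a coarse in-fight mesh or a dense close-up mesh.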
 
I think there are two major problems:

First, (arguably) artists lose fine control over the presentation of the mesh.
And secondly, actually computing the mesh costs resources. Given a choice between slower, dynamic LOD and faster, fixed LOD, the answer will probably always be the faster one. You can (in theory) always get more detail for the same expense.

For nostalgia: does anyone else remember that back in the day, one of Team Fortress 2's big selling points was going to be its use of Intel's MRM (Multi-Resolution Mesh) tech? :mrgreen:
 
But based on what he said, hasn't the technique been used in Enter the Matrix? Which ran on weaker hardware, including the PS2?
 
I will do as you say, Shifty. While I do, though, I won't leave this thread hanging, hehe; I'm much too curious. Aside from using them for tessellation, are there no benefits over polygons at all? The quote also leaves me wondering whether Bézier patches are selectively used on characters in EtM. I don't know whether you can mix two approaches like polygons and patches, but since you (Graham) pointed out that there are some drawbacks to patches, I wonder if they could have mixed them to avoid some of those.

Edit: I did search for subdivision surfaces, but judging from the results, do subdivision surfaces equal Bézier patches, Shifty? There seem to be many different techniques in use, such as NURBS and splines. Though I do not know how these work exactly, I guess they may amount to the same result.
 
Yes, you can mix them. The benefits are tessellation and model size: you can create a smaller model that consumes less RAM yet preserves the detail. However, that's normally coarse detail. E.g. a sphere is tiny represented as a HOS, whereas a high-resolution polygon approximation of a sphere has a lot of vertices. However, when you start adding in detail with creases, the HOS starts to need more control points, and the memory advantage is reduced. Laa-Yosh is the best person to ask, but I'm pretty sure he's explained as much in the previous threads.

So really, the main benefit is object quality with intelligent tessellation, which has a processing overhead and some art issues. Some games dabble but in most cases the tradeoffs aren't deemed worth it.
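The sphere memory argument can be put in rough numbers. This is a back-of-envelope sketch with illustrative figures (a NURBS sphere is commonly built from 9-control-point rational circles, giving a 9 x 5 control grid; the mesh resolution is my own pick):

```python
BYTES_PER_FLOAT = 4

def mesh_bytes(rings, segments):
    """Vertex storage for a UV-sphere triangle mesh, positions only (3 floats each)."""
    n_verts = (rings + 1) * segments
    return n_verts * 3 * BYTES_PER_FLOAT

def nurbs_sphere_bytes():
    """Control-point storage for a 9 x 5 rational control grid (x, y, z, weight)."""
    return 9 * 5 * 4 * BYTES_PER_FLOAT

print(mesh_bytes(64, 64))     # dense mesh: tens of kilobytes
print(nurbs_sphere_bytes())   # control grid: well under 1 KB
```

A crease-heavy character erodes that gap, as the post says, because every extra crease means more control points in the grid.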
 
Yes you can mix them.

I wonder if this is why the characters seem to jump around a little in the game, jumping between points? Is it common to use this mix of techniques, or is Enter the Matrix something of a one-off? I'm reading this thread regarding subdivision surfaces; it's very, very interesting.

With regard to Enter the Matrix, does it use real-time tessellation on the characters based on processor load, or would this technique mostly be used between scenes, using set parameters? For (a bad) example: "In-game -> use x value for characters", "Cutscene -> use y value for characters". I guess the latter approach would be useful in a multiplatform environment: just ship different parameters tailored to each platform to offset any performance differences without having to redo all the models.
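The "set parameters per context" idea above could be as simple as a lookup table of fixed tessellation levels. This is a purely hypothetical sketch (platform names and numbers are made up, not taken from any shipped engine):

```python
# Hypothetical per-platform, per-context tessellation levels: pick the
# subdivision density at content-build or scene-load time rather than
# measuring processor load every frame.
TESS_LEVELS = {
    ("ps2",  "gameplay"): 6,
    ("ps2",  "cutscene"): 12,
    ("xbox", "gameplay"): 8,
    ("xbox", "cutscene"): 16,
}

def tess_level(platform, context):
    """Return the fixed grid resolution to tessellate character patches at."""
    return TESS_LEVELS[(platform, context)]
```

The appeal is exactly what the post suggests: the same patch models ship on every platform, and only this table changes.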
 
As mentioned, Béziers are tessellated into triangles anyway for drawing, so the mixing of modelling techniques isn't an issue.

Games have been using Bézier modelling on consoles since at least the Dreamcast, usually in select, limited applications, like describing the terrain of a mountain slope for a snowboarding game, for example.
 
They have to be, for the GPUs to draw them, because GPUs work in triangles. A software renderer could render them without tessellation, but then you lose all your GPU acceleration, and the maths would be much more complex and expensive.
 
Actually, there is no analytical way to rasterize cubic Bézier patches ... you are always going to be doing some form of subdivision.

PS. Edited because I said curves; I should have said patches.
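The classic "form of subdivision" here is de Casteljau's algorithm. A minimal sketch for a cubic Bézier curve (patches apply the same split along both parameter directions); this is my own illustration of the general technique:

```python
def lerp(a, b, t):
    """Linear interpolation between two points given as tuples."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def split_cubic(p0, p1, p2, p3, t=0.5):
    """De Casteljau: split one cubic Bezier into two cubic halves at t."""
    q0 = lerp(p0, p1, t)
    q1 = lerp(p1, p2, t)
    q2 = lerp(p2, p3, t)
    r0 = lerp(q0, q1, t)
    r1 = lerp(q1, q2, t)
    s = lerp(r0, r1, t)          # this point lies exactly on the curve
    left = (p0, q0, r0, s)
    right = (s, r1, q2, p3)
    return left, right
```

Recursively splitting until each piece is flat enough (or pixel-sized) is the subdivision the post is talking about; no closed-form inversion of the cubic is ever needed.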
 
So, correct me if I'm wrong: GPUs can only render triangles efficiently? Therefore any mathematical formula used to form an object has to be broken down into triangles for rendering. I know nothing of maths, it's a mystery to me, but splines and Bézier curves look very elegant compared to triangles. Would hardware support for these solutions be of interest to developers if it existed, or is the triangle approach so versatile that it's not necessary?

Regarding Enter the Matrix, does anyone know if Path of Neo is based on the same (or evolved) engine and therefore also uses bezier patches for characters?
 
Actually there is no analytical way to rasterize cubic bezier curves ... you are always going to be doing some form of subdivision.
:???: Not sure what you mean here. A Bézier rendered in Flash or CorelDRAW is being subdivided? The raytracer Realsoft 3D renders everything in pure curves without tessellating down to triangles too: NURBS, SDS and particles, including 3D hairs/strands. A software renderer may have to take an unconventional approach, but it's certainly possible to take a surface defined by mathematical curves and rasterize it into a pixel-based frontbuffer without having to subdivide it into triangles/lines. Heck, even the Sinclair Spectrum could rasterize a sine wave without a tessellation step! :p
 
So correct me if I'm wrong, GPUs can only render triangles efficiently?
Yes, that's the avenue realtime rendering took. GPUs fit in huge amounts of processing power by being fairly restrictive in what they do. Well, that used to be the case; now we can cram in so many transistors, it's not such an issue any more ;) Still, back in the day GPUs were fast because they limited the scope of graphics rendering to a method that was simple and readily scalable. That gives us a legacy system that's full of issues but, at the same time, still more than competitive with alternative methods like ray tracing, so it remains the method of choice.

Therefore any mathematical formula used to form an object has to be broken down into triangles for rendering.
For GPUs, yes.

I know nothing of maths, it's a mystery to me, but Splines and Bezier curves look very elegant compared to triangles.
They are, but slow and very hard to work with. I like Realsoft 3D precisely because of this elegance, working in a true mathematical representation of a scene. But at the same time, it's dog slow compared to other 'hack' renderers that take shortcuts to get the visual job done, generally with no visual penalty for using them.

Would hardware support for these solutions be of interest to developers if it existed or is the triangle approach so versatile that it's not necessary?
There does exist some hardware support for NURBS, but at the hardware tessellation level. Xenos has it, of course, as does the PSP (although I've never seen its curve support talked about in more than passing). True rendering of NURBS would probably be too hard to integrate into a triangle rasterizer, and comparatively slow to boot, such that you'd be better off tessellating and using the triangle method.

Also this is all talk of rasterization. Incorporating HOS into a game also requires other aspects like texture projection (shadows) and collisions.
 
If there was a choice between slower, dynamic LOD and faster fixed LOD, the answer will probably always be the faster one. You can (in theory) always get more detail for the same expense.
I'm not so sure about that theory, since dynamic LOD can (in theory :) ) be based on viewing angle as well as distance.
 
:???: Not sure what you mean here. A beizer rendered in Flash or Corel draw is being subdivided?
Curves can in theory be done analytically; dunno if they actually are ... the formulas for solving cubic polynomial equations are relatively nasty.
A software renderer may have to take an unconventional approach, but it's certainly possible to take a surface defined by mathematical curves and rasterize it into a pixel-based frontbuffer without having to subdivide it into triangles/lines. Heck, even the Sinclair Spectrum could rasterize a sine wave without a tessellation step! :p
There is an analytical way to rasterize a sine. Cones, spheres etc too ... but not cubic bezier patches.

You can either iteratively step through the parameter range to find points on the patch which fall "near" pixel centres, or you can use bisection to find the point on the patch corresponding to a pixel to an arbitrary precision ... or some hybrid between the two (adaptive subdivision). Either way, you are not calculating the exact parameters corresponding to the pixel centre ... because there is no analytical way to do that; all you can do is get close using numerical methods, which one way or another will always work by subdividing the parameter range.
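The bisection option above can be sketched in a few lines. This toy assumes the curve's x-coordinate is monotonically increasing over the parameter range (real rasterizers must first split the curve into monotone pieces); names are illustrative:

```python
def cubic_x(cx, t):
    """x-coordinate of a cubic Bezier given its four control xs (cx[0..3])."""
    u = 1.0 - t
    return u*u*u*cx[0] + 3*u*u*t*cx[1] + 3*u*t*t*cx[2] + t*t*t*cx[3]

def param_for_pixel(cx, x_target, eps=1e-6):
    """Bisect the parameter range until x(t) brackets the pixel centre to eps."""
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if cubic_x(cx, mid) < x_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Note the loop only ever halves the parameter interval: exactly the "subdividing the parameter range" the post describes, never an exact closed-form answer.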
 
Okay, but in the scope of this topic, if you're not tessellating to triangles to push it through a GPU, I'd say it was not 'subdivided'. Or even: subdivision at the pixel level can be considered non-tessellated. Thus, if rasterizing a NURBS teapot, each pixel has an associated evaluation of the NURBS parameters, which is distinct from subdividing the NURBS into a triangle mesh and determining the pixel value by evaluating the vertices. I think that makes an applicable sense of 'subdivision' for Color me Dan's discussion.
 
Color me Dan said:
but Splines and Bezier curves look very elegant compared to triangles.
I think it's pretty common to feel that way when you first learn about them (IIRC, I did too).
But to paraphrase what others have kinda said already, they are only elegant in select, limited applications.
 
OK, I think I understand Bézier patches a little better now. Vaguely, but still much better than before. Having read your responses and searched the net, I've come to the conclusion you've all been telling me: Bézier patches are mostly suited for tessellation and little else. It seems to me, though, that this is not a limitation of Bézier patches themselves but rather of the need to transform them into triangles before rendering.

Despite having read about downsides such as interpolation issues across boundaries, if these could be fixed, wouldn't there be a performance boost if hardware were tuned towards Béziers rather than triangles? Complex shapes coming down to a few control points instead of a bunch of triangles (again) sounds much more elegant to me. I'm sure there are a lot of other issues with such a proposal, and I'd love to hear them. Most of all, though: do I make sense, or have I still not understood this?
 