3D Glossary Update

DaveBaumann said:
I think in some cases there has to be the 'most commonly used' definition.
The way I see it, there are two ways we can find 'most commonly used' definitions:
  • Agree on using the terms as defined by an authoritative source where such a source exists. The OpenGL Specification may be useful as such a source (which would resolve "frame buffer" and "rasterization" at least).
  • Arrange polls on the "right" usage of terms. ("Tile-based rendering is ...?")
 
arjan de lumens said:
This is the 4th definition conflict I have collected so far. So now I have:
  • Frame buffer - RGB(A) color buffer only, or all actual per-pixel data (RGB, Z, stencil, etc), or all memory that can potentially be allocated to per-pixel data?

that's a tough one. i, personally, can't stick to either one myself - on different occasions i use either of the two. i think a definition along the lines of: "frame buffer - a buffer storing per-frame raster data, which in different contexts could mean either just color/pixel information, or complete fragment information" would be apt /shrug/. at least we can agree on one thing: frame buffer refers either to something which stores just color values, or to something which stores complete per-fragment info. i, for one, haven't come across usage of the term which would mean 'color + depth data' but with, say, 'stencil data' excluded.

  • Rasterization - Scan-conversion only, or per-pixel color determination also?

well, for me rasterization is the engineering term for what mathematicians would probably call a 'uniform discretization' -- scan-conversion is just one of the methods for achieving it; inverse-mapping (ray-casting) is rasterization, too. so, i'd say 'per-pixel color determination' sounds spot on.

  • Tile-based rendering - does this include immediate-mode rendering techniques that merely use a tiled framebuffer also?

i think once and for all we should drop the term 'tile-based rendering' - it carries very little information about the process. what the PVR architecture does is 'scene capturing', or 'scene batching', whether it's organized in tiles or not is of lesser importance, and is actually a quantitative matter - it could do full-screen 'tiles', or pixel-sized 'tiles' just as well.

  • Immediate mode - different definitions for hardware and software. (This could be handled by re-writing my previous definition attempt to point out the difference.)

i believe that just batching the two meanings for the two different occasions would suffice.
 
arjan de lumens said:
DaveBaumann said:
I think in some cases there has to be the 'most commonly used' definition.
The way I see it, there are two ways we can find 'most commonly used' definitions:
  • Agree on using the terms as defined by an authoritative source where such a source exists. The OpenGL Specification may be useful as such a source (which would resolve "frame buffer" and "rasterization" at least).

Err... how about the "graphics bible", Computer Graphics: Principles and Practice?
 
Simon F said:
arjan de lumens said:
DaveBaumann said:
I think in some cases there has to be the 'most commonly used' definition.
The way I see it, there are two ways we can find 'most commonly used' definitions:
  • Agree on using the terms as defined by an authoritative source where such a source exists. The OpenGL Specification may be useful as such a source (which would resolve "frame buffer" and "rasterization" at least).

Err... how about the "graphics bible", Computer Graphics: Principles and Practice?

bibles are for believers, coders use API specs ;)
 
Some terms relating to vectors and normals:

Vector: Mathematical object frequently encountered during 3d calculations. Can be thought of as an arrow with a direction and a length. Usually represented as a set of 3 numbers, defining an arrow with direction and length in a Cartesian space. [need some help to make this more understandable to non-techie readers]

Normal/Normal vector: A vector specifying a direction perpendicular to a surface at a given point. In 3d graphics, primarily used for lighting and reflection calculations. Normal vectors may be specified on a per-vertex basis or on a per-texel basis in a texture map - the latter method is necessary for certain bump-mapping techniques (in particular DOT3 and bump-reflection mapping).
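The standard way to obtain a per-face normal is the cross product of two triangle edges; a minimal sketch (the vertex positions are made-up example data, and the helper names are my own):

```python
# Illustrative sketch: computing a per-face normal with a cross product.
# Vertex positions below are hypothetical example data, not from any real mesh.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/length, v[1]/length, v[2]/length)

def face_normal(p0, p1, p2):
    # The cross product of two triangle edges is perpendicular to the face.
    return normalize(cross(sub(p1, p0), sub(p2, p0)))

# A triangle lying in the XY plane has a normal along +Z.
n = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))   # (0.0, 0.0, 1.0)
```

Per-vertex normals are then usually averaged from the face normals of the adjacent triangles.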

Normal map: A texture map whose contents are used as per-pixel normal vectors rather than RGBA colors. Normal maps are a class of bump maps.

[edit: Normal vectors can be used for reflections in addition to just lighting.]
 
Some random terms:

Diffuse lighting models
A lighting model in which the reflected light is scattered evenly in all directions. This means that the perceived color of the surface is independent of the viewing direction.

Specular lighting models
A lighting model in which the reflected light is distributed unevenly in different directions. This means that the perceived color of the surface depends on the viewing direction.
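The contrast between the two can be sketched with the classic Lambert (diffuse) and Phong (specular) terms - one example of each class of model, not the only choice. All vectors are assumed normalized, and the data values are made up:

```python
# Minimal sketch: Lambert diffuse depends only on the light direction,
# while Phong specular also depends on the viewer position.
# All vectors are assumed to be unit length; values are illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_diffuse(normal, to_light):
    # View-independent: same result from any camera position.
    return max(0.0, dot(normal, to_light))

def phong_specular(normal, to_light, to_viewer, shininess=32):
    # Reflect the light direction about the normal, then compare with the view.
    d = dot(normal, to_light)
    reflected = tuple(2 * d * n - l for n, l in zip(normal, to_light))
    return max(0.0, dot(reflected, to_viewer)) ** shininess

n = (0.0, 0.0, 1.0)
light = (0.0, 0.0, 1.0)
# Looking straight along the reflection gives full specular; off-axis gives none.
head_on = phong_specular(n, light, (0.0, 0.0, 1.0))   # 1.0
off_axis = phong_specular(n, light, (1.0, 0.0, 0.0))  # 0.0
```

Moving the viewer changes `phong_specular` but never `lambert_diffuse`, which is exactly the distinction the two definitions above draw.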

Raycasting
A method of finding the intersection of a straight line and 3D objects. The task is to determine the intersection closest to a point, or to test whether an intersection occurs at all. This is the basis of raytracing and photon mapping.

Raytracing
A rendering method that is fundamentally different from the method used by most hardware 3D accelerators (rasterization).
It casts a ray from the camera (view point) through each screen pixel to determine what part of which object is visible at that point.
A reflected ray can be started from the hit point to find what is reflected at that point, and that can be repeated to get multiple reflections.
Refractions and sharp shadows can also be handled easily with raytracing.
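The raycasting step at the heart of this can be sketched with the textbook ray-sphere intersection test; the scene values are invented for illustration:

```python
# Sketch of the basic raycasting query a raytracer performs per pixel:
# find the nearest intersection of a ray with a sphere (or report a miss).
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance to the closest hit, or None if the ray misses.
    `direction` is assumed to be a unit vector."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c      # discriminant of the quadratic in t
    if disc < 0:
        return None             # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None   # ignore hits behind the camera

# Camera at the origin looking down +Z at a unit sphere centered at z=5.
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)   # hits at t = 4.0
```

A full raytracer repeats this query for every object in the scene, keeps the smallest `t`, and then spawns reflection, refraction and shadow rays from the hit point.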

Point light
An abstracted light source type that is extremely small in size and radiates equally in all directions. Due to the divergent nature of the emitted light, it is attenuated with distance from the light source.
A point light casts extremely sharp shadows.

Directional light
An abstracted light source type that is infinitely distant and extremely small in size. Due to the convergent nature of the light there is no attenuation.
A directional light casts extremely sharp shadows.
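The attenuation difference between the two types can be shown numerically; the inverse-square falloff follows directly from the rays diverging over a growing sphere (the intensity numbers are made up):

```python
# Sketch contrasting point-light attenuation with a directional light.
# Constants are illustrative, not from any particular API.

def point_light_intensity(source_intensity, distance):
    # Energy spreads over a sphere of area 4*pi*r^2, hence 1/r^2 falloff.
    return source_intensity / (distance * distance)

def directional_light_intensity(source_intensity, distance):
    # Parallel rays neither diverge nor converge: no attenuation at all.
    return source_intensity

# Doubling the distance quarters the received point-light intensity.
near = point_light_intensity(100.0, 2.0)   # 25.0
far = point_light_intensity(100.0, 4.0)    # 6.25
```

Real-time APIs often generalize the point-light case to a constant/linear/quadratic attenuation polynomial, but the physically motivated term is the quadratic one.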

Area light
A kind of light source that has extent, unlike point or directional lights. Area lights are expensive to calculate and are usually approximated with many point or directional lights. Area lights can be partially occluded and therefore cast soft shadows.

Bounced light
A surface that reflects light acts as a light source itself.
Dealing with bounced light requires recursive algorithms in the general case.

Bounced diffuse light
Bounced light using a diffuse lighting equation.
Can be calculated with radiosity or distributive raytracing.

Bounced specular light
Bounced light using a specular lighting equation.
Can be calculated with distributive raytracing or photon mapping.
In the special case of a planar mirror, a virtual light source can be placed "behind" the mirror.

Caustics
Reflection effects caused by bounced specular light.

Distributive raytracing
Standard raytracing cannot handle area lights, soft shadows, blurry reflections or diffuse bounced light.
Distributive raytracing sends multiple rays from each surface point to test the amount of light received from each direction.

Photon mapping
Like distributive raytracing, it works with multiple rays, but in the opposite direction: it starts from the light source and follows the light rays forward, including bounces off surfaces.
 
Colourless said:
Caustics are also caused by refraction.

Quite true.

Actually, the "bounced specular light" section mostly applies to refracted light as well.

I was actually going to write up terms about light scattering as well, but I'll leave that to someone else.
 
Some rather basic terms:

Vertex: Point in space, with a set of XYZ coordinates, and associated data. The associated data typically include one or more RGBA colors, a normal vector, and one or more sets of texture coordinates (and, if matrix palette skinning is used, a set of matrix indices and their weights). To define a line, 2 vertices corresponding to the line's endpoints are needed; to define a triangle, 3 vertices corresponding to the corners of the triangle are needed.

Backface culling: Each polygon in 3d space has 2 sides, a front side and a back side, referred to as front face and back face respectively. At any given time, at most one of these sides is visible from the current camera position. Frequently, we do not wish to draw back faces, usually because they correspond to the 'inside' of a solid object. Backface culling refers to the process of removing such polygons from the rendering process. It is normally done as part of triangle setup.
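The culling test done at triangle setup can be sketched in 2D screen space: the sign of the projected triangle's area tells which face the camera sees. The counter-clockwise-equals-front winding convention below is an assumption (APIs let you pick either):

```python
# Sketch of a screen-space backface test: the signed area of the projected
# triangle reveals its winding, and hence which face we are looking at.
# Convention assumed here: counter-clockwise winding = front face.

def signed_area(p0, p1, p2):
    # 2D cross product of the two edge vectors (twice the triangle's area).
    return ((p1[0] - p0[0]) * (p2[1] - p0[1]) -
            (p1[1] - p0[1]) * (p2[0] - p0[0]))

def is_backfacing(p0, p1, p2):
    # Negative area means clockwise winding, i.e. we see the back face.
    return signed_area(p0, p1, p2) < 0

front = is_backfacing((0, 0), (1, 0), (0, 1))   # False: CCW winding
back = is_backfacing((0, 0), (0, 1), (1, 0))    # True: CW winding
```

Note that swapping any two vertices flips the winding, which is why vertex order in a mesh matters for culling.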

Degenerate: In 3d graphics, used to describe a primitive that, due to its geometrical properties, ends up not getting drawn. Examples of degenerate primitives are lines where both endpoints have the same coordinates, triangles where two of the vertices have the same coordinates, and polygons whose vertices all lie on a straight line. (Degenerate triangles normally appear when triangle meshes are folded into triangle strips; each 'fold' in the triangle strip produces 2 degenerate triangles.)
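All of the degenerate-triangle cases above (coincident vertices, colinear vertices) collapse into one test: the triangle's area is zero. A sketch, with an illustrative epsilon threshold:

```python
# Sketch of a degeneracy test: a triangle whose vertices are coincident
# or colinear has zero area and would not be drawn. The epsilon is an
# illustrative choice, not a standard value.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def is_degenerate(p0, p1, p2, eps=1e-12):
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    c = cross(e1, e2)
    # |cross| is twice the triangle's area; (near) zero means degenerate.
    return sum(x * x for x in c) < eps

assert is_degenerate((0, 0, 0), (1, 1, 1), (2, 2, 2))       # colinear
assert not is_degenerate((0, 0, 0), (1, 0, 0), (0, 1, 0))   # proper triangle
```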

Polygon Offset: When rendering a polygon on top of another polygon, we often wish to have one of the polygons visibly in front of or behind the other polygon, even if they lie in the same plane. This is normally done by adding or subtracting a small value to the depth or Z value of each pixel in one of the polygons, causing the Z-buffer test to treat the polygon as slightly in front of or behind the other polygon, giving the intended effect. The value that is actually added to or subtracted from the depth value this way is called 'polygon offset'.
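The offset computation can be sketched in the factor/units form that OpenGL's glPolygonOffset uses (the function and parameter names below are my own, and the minimum resolvable depth difference shown assumes a 24-bit Z-buffer):

```python
# Sketch of applying a polygon offset to a depth value. The factor/units
# split mirrors glPolygonOffset's semantics; all numbers are illustrative.

def offset_depth(z, dzdx, dzdy, factor, units, min_resolvable=1.0 / (1 << 24)):
    # m is the polygon's maximum depth slope; steeper polygons need a
    # larger bias to reliably win (or lose) the Z-buffer comparison.
    m = max(abs(dzdx), abs(dzdy))
    return z + factor * m + units * min_resolvable

# A screen-facing polygon (zero slope) is biased by whole depth-buffer steps.
biased = offset_depth(0.5, 0.0, 0.0, 0.0, 2.0)
```

The slope-dependent term is the important part in practice: a fixed bias that works for screen-facing polygons is far too small for polygons viewed nearly edge-on.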
 
Some more terms, this time mostly T&L stuff:

Transform: The process of mapping 3d objects from one coordinate system to another. Examples of transforms that can be performed on 3d objects are: rotation, scaling (size change), and translation (moving it around without rotating it). Normally, each transform is represented by a 4x4 matrix - multiple transforms can be concatenated into one by multiplying together the matrices in the appropriate order. The resulting matrix is then used to transform the 3d object to its appropriate size and placement. Mathematically, the transform is performed by taking each vertex in the 3d object, treating its XYZ(W) coordinate as a vector, and multiplying that vector by the transform matrix.
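The matrix-times-vertex step described above can be sketched directly; the translation matrix is a made-up example in the usual homogeneous-coordinate layout:

```python
# Sketch of transforming a vertex by a 4x4 matrix, as described above.
# Row-major layout with the vertex as a column vector; values illustrative.

def transform(matrix, vertex):
    """Multiply a 4x4 row-major matrix by an (x, y, z, w) column vector."""
    return tuple(sum(matrix[row][col] * vertex[col] for col in range(4))
                 for row in range(4))

# A translation by (2, 3, 4): the offsets live in the last column, and the
# vertex's w = 1 is what makes the translation take effect.
translate = [[1, 0, 0, 2],
             [0, 1, 0, 3],
             [0, 0, 1, 4],
             [0, 0, 0, 1]]

moved = transform(translate, (1, 1, 1, 1))   # (3, 4, 5, 1)
```

Concatenating transforms is then just matrix multiplication of the 4x4s before any vertex is touched, which is why the representation is so convenient.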

Matrix palette skinning [edited according to Simon F's suggestions below]: Transform method where we, for a given body, first compute a set of different matrices, each corresponding to the transform/placement in 3d space of one rigid body part (like e.g. a bone). This set of matrices is referred to as a 'matrix palette'. Then, for each vertex, we specify a set of indices into this matrix palette and a weight for each index. The matrices that are pointed to by the indices are then blended together using the weights for each index. The resulting matrix is then used to transform the vertex's coordinate. The result of using matrix palette skinning on a 3d model is that the parts of the model flow together naturally rather than making the model look like it is composed of a bunch of rigid parts that just happen to be stuck together - the result is a more realistic look on moving organic figures, such as animals and people.
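The per-vertex blend described above can be sketched as a weighted sum of the selected palette matrices; the two-bone palette, indices and weights are invented example data:

```python
# Sketch of matrix palette skinning: blend the palette matrices selected by
# a vertex's indices, weighted per index, then transform the vertex.
# The two-matrix palette and the 50/50 weights are made-up example data.

def blend_matrices(palette, indices, weights):
    """Weighted sum of the selected 4x4 matrices."""
    blended = [[0.0] * 4 for _ in range(4)]
    for idx, w in zip(indices, weights):
        for r in range(4):
            for c in range(4):
                blended[r][c] += w * palette[idx][r][c]
    return blended

def transform(matrix, vertex):
    return tuple(sum(matrix[r][c] * vertex[c] for c in range(4))
                 for r in range(4))

identity = [[float(r == c) for c in range(4)] for r in range(4)]
shift_x  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
palette = [identity, shift_x]   # two 'bones': one at rest, one shifted by 2

# A vertex influenced 50/50 by both bones moves half the shift distance.
skinned = transform(blend_matrices(palette, (0, 1), (0.5, 0.5)), (0, 0, 0, 1))
```

Vertices near a joint get partial weights from both bones, which is what produces the smooth flow between rigid parts that the definition describes.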

T&L: Transform & Lighting - a set of calculations that are performed on a per-vertex basis, basically applying transforms and lighting calculations to each vertex. Also, computation of texture coordinates for each vertex is counted as part of T&L. 'Hardware T&L' refers to the capability of 3d graphics hardware to perform these calculations - if such functionality is not present in the hardware, the alternative is 'software T&L'. 'Fixed-function' or 'static' T&L refer to T&L implementations that are restricted in what types of calculations they can perform during T&L - typically, the transform is limited to plain 4x4 matrix transform and matrix palette skinning, and the lighting model used for the lighting cannot be changed (typically a variant of Phong lighting).

Vertex shader: Program that is run once for each vertex that is processed. Also used for 3d graphics hardware capable of running such programs. Vertex shading functionality replaces standard 'static' T&L in the 3d pipeline. Vertex shaders can be used to perform a range of effects on the geometry and the lighting of the model, such as non-linear transforms/procedural geometry (e.g. putting waves on water, putting all of the scene into a fisheye view), lighting models other than Phong (simpler models if higher performance is desired, or BRDF models for added realism etc), and setup for certain texturing techniques (like bump-reflection mapping). Under the DirectX API, version 8 and up, vertex shaders are given version numbers to indicate their exact capabilities.
 
hyp-x said:
Directional light
An abstracted light source type that is infinitely distant and extremely small in size. Due to the convergent nature of the light there is no attenuation.
Convergent? Only at infinity! The rays are parallel ;-)

I suggest that the term "parallel light" is better. "Directional" could imply something like a spot light, which could be a point light with a principal light direction.
arjan said:
Matrix palette skinning: Transform method where we first compute a set of different matrices, each corresponding to e.g. a bone or otherwise rigid part in a body.
That sounds good, but it might be worth saying that each matrix corresponds to the rotations and translations needed to position a particular bone.

Degenerate: In 3d graphics, used to describe a primitive that, due to its geometrical properties, ends up not getting drawn.
It can also mean a case where the primitive might be drawable but the system can't handle it...
Examples of degenerate primitives ... triangles where two of the vertices have the same coordinates, and polygons whose vertices all lie on a straight line.
You could just say triangles where all the points are colinear - it gets all cases in one fell swoop.
An example of the other case I was mentioning is with, say, Bezier patches. It's quite easy to make a perfectly valid patch but have the system fail to draw it properly. There's a good description in Watt and Watt's "Advanced Animation and Rendering Techniques".


While on the subject of definitions, perhaps we could put in proper references to the inventors/papers that initially described each term.
 
Simon F said:
hyp-x said:
Directional light
An abstracted light source type that is infinitely distant and extremely small in size. Due to the convergent nature of the light there is no attenuation.
Convergent? Only at infinity The rays are parallel ;-)

I suggest that using the term "parallel light" is better. Directional could imply something like a spot light which could be a point light with a principal light direction.

Ok, so the definition is: :)

Directional light
An abstracted light source type that is infinitely distant and extremely small in size. Because the light rays are parallel, there is no attenuation.


You can't call it Parallel light instead of Directional light, because ... that's what it's called (in D3D at least).
 
Hyp-X said:
You can't call it Parallel light instead of Directional light, because ... thats what it's called (in D3D at least).
Well, I wouldn't take D3D as the oracle for definitions, but it seems that "Computer Graphics: Principles and Practice" also uses the term 'directional', so I give in ;).
What confuses me is where I got the term "parallel light" from. I've been using it for around 0xF years. <shrug>

[UPDATE] Ahh, the joy of Google. A quick search does turn up "parallel light" as a synonym for "directional light". I wasn't going mad after all!
 
Vector: Mathematical object frequently encountered during 3d calculations. Can be thought of as an arrow with a direction and a length. Usually represented as a set of 3 numbers, defining an arrow with direction and length in a Cartesian space. [need some help to make this more understandable to non-techie readers]
Since 4d vectors are very common, "Can be thought of as an arrow with a direction and a length" is not suitable imho.
How about
"Vector: Mathematical object frequently encountered during 3d calculations. Fundamentally they are just a set of numbers grouped together. They are often used to represent more tangible objects however, for example a vector with 3 components can be visualized as an arrow describing a particular direction in 3d, and is commonly used for that purpose."

You could just say triangles where all the points are colinear - it gets all cases with one fell swoop.
"colinear" could perhaps be unnecessarily technical. Can't think of a concise good replacement though, just the kind of thing that is sometimes difficult for a non-native English speaker.
 
colinear is a widely accepted term. yet, to make it simple, one could say:
We speak about a degenerate triangle when all three vertices lie on the same point or line. So, basically, it is not a triangle anymore.
I request once more: Please, keep it as simple as you can.
 
tobbe said:
"Vector: Mathematical object frequently encountered during 3d calculations. Fundamentally they are just a set of numbers grouped together. They are often used to represent more tangible objects however, for example a vector with 3 components can be visualized as an arrow describing a particular direction in 3d, and is commonly used for that purpose."

Scalar vs vector
Of course there are also bit vectors, interrupt vectors, STL vectors and whatnot
 