Softening edges automatically

Frank

As I understand it, you can make really nice geometry by using a very large number of vertices. But your framerate will drop through the floor. So bump-maps are used. But they have to be created as textures.

Could a vertex shader be used to soften the edges by looking at the lighting used and the angle at which the vertice connects to its neighbours? That way, you could automatically soften a low-poly model. Or is this done already?

And if it would work, would it be easier than making bump-maps?
 
As I understand it, you can make really nice geometry by using a very large number of vertices. But your framerate will drop through the floor. So bump-maps are used. But they have to be created as textures.
Vertices are really cheap nowadays. So, having (relatively) hi-poly objects doesn't hurt much. Of course, bump-maps are used on top of that to increase detail even more :)
Could a vertex shader be used to soften the edges by looking at the lighting used and the angle at which the vertice connects to its neighbours?
A vertex shader operates on a single vertex. It doesn't know what triangle it is in, nor what its neighbors are. Of course, you can supply that info as part of the vertex data, but that's probably not worth it.
That way, you could automatically soften a low-poly model. Or is this done already?
Yes, and it's called "N-Patches" :) It tessellates a triangle into smaller triangles, but based on vertex normals only, not connectivity info.
And if it would work, would it be easier than making bump-maps?
These are different things. Tessellating a triangle into smaller triangles doesn't reduce your triangle count (it just moves triangle generation to another place). Plus, N-Patched models are, well, "smooth" - you don't always want that. Displacement maps can be used here, though...
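
For a rough feel of what this kind of tessellation does, here is a minimal C sketch of a single edge split, assuming the simplest possible scheme: the new vertex sits at the edge midpoint, nudged along the averaged vertex normals. This is not the real N-Patch math (that fits a cubic Bezier patch per triangle); the "bulge" factor here is an invented tuning knob, purely for illustration.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 add(Vec3 a, Vec3 b) { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static Vec3 scale(Vec3 v, float s) { Vec3 r = { v.x * s, v.y * s, v.z * s }; return r; }
static Vec3 normalize(Vec3 v) { return scale(v, 1.0f / sqrtf(v.x * v.x + v.y * v.y + v.z * v.z)); }

/* Split one edge: place the new vertex at the midpoint, displaced
 * along the averaged normals so it bulges toward the curved surface
 * the normals imply. Applying this to all three edges of a triangle
 * gives a 1-to-4 subdivision using vertex data only, no connectivity. */
static void split_edge(Vec3 p0, Vec3 n0, Vec3 p1, Vec3 n1,
                       float bulge, Vec3 *mid_pos, Vec3 *mid_nrm)
{
    *mid_nrm = normalize(add(n0, n1));
    *mid_pos = add(scale(add(p0, p1), 0.5f), scale(*mid_nrm, bulge));
}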
 
DiGuru said:
As I understand it, you can make really nice geometry by using a very large number of vertices. But your framerate will drop through the floor. So bump-maps are used. But they have to be created as textures.

Could a vertex shader be used to soften the edges by looking at the lighting used and the angle at which the vertice connects to its neighbours? That way, you could automatically soften a low-poly model. Or is this done already?

And if it would work, would it be easier than making bump-maps?
I'm not sure if I understand you correctly, but do you mean smoothing of geometry based on vertex normals and neighboring faces?
Yes, this is done already; one variant is called PN Triangles (based on vertex normals), and ATI implemented it as "TruForm". This tessellation takes place before the vertex shader.

But you shouldn't compare it to bump maps. Bump maps add detail while curved surfaces add smoothness.
 
I think I don't understand some things. I was under the impression that you have polygons, which break up into triangles, which are called vertices. Or is a vertice a pixel within a triangle?

Tessellation = breaking polygons/triangles (made out of vectors or splines) into smaller triangles (made out of vectors), right?

Bump-maps add detail, yes, but they're used to soften models as well. True?
 
DiGuru said:
I think I don't understand some things. I was under the impression that you have polygons, which break up into triangles, which are called vertices. Or is a vertice a pixel within a triangle?
Vertices are the points at the corners of a polygon.

Tessellation = breaking polygons/triangles (made out of vectors or splines) into smaller triangles (made out of vectors), right?
Yes.

Bump-maps add detail, yes, but they're used to soften models as well. True?
Very rarely. Gouraud shading is usually enough to give objects soft lighting (bump maps only affect lighting, not geometry). Bump maps are used to simulate relatively flat surface structures: scratches, brick walls and the like.
 
Xmas said:
DiGuru said:
I think I don't understand some things. I was under the impression that you have polygons, which break up into triangles, which are called vertices. Or is a vertice a pixel within a triangle?
Vertices are the points at the corners of a polygon.

Ah. So, how does a vertex shader generate the pixels that go into the pixel pipeline?

Tessellation = breaking polygons/triangles (made out of vectors or splines) into smaller triangles (made out of vectors), right?
Yes.

Bump-maps add detail, yes, but they're used to soften models as well. True?
Very rarely. Gouraud shading is usually enough to give objects soft lighting (bump maps only affect lighting, not geometry). Bump maps are used to simulate relatively flat surface structures: scratches, brick walls and the like.

Yes, I know bump-maps are textures that add/subtract lighting info. But I have seen some demos that do reflection based on the 'bumps'. So, a 'smart' shader could use that to smooth edges, surely?
 
DiGuru said:
Or is a vertice a pixel within a triangle?

I'll assume that English is not your primary language (not mine either, so don't worry :)). "Vertices" is the plural form of "vertex", so it's "a vertex" and "many vertices". "Vertex" is synonymous with "corner", so three vertices make a triangle.

Edit: Xmas beat me to it...
 
DiGuru said:
Yes, I know bump-maps are textures that add/subtract lighting info. But I have seen some demos that do reflection based on the 'bumps'. So, a 'smart' shader could use that to smooth edges, surely?

A shader can't smooth silhouette edges, but it can smooth the appearance of a character. For instance, look at the DPS demo on my site.
 
Humus said:
DiGuru said:
Or is a vertice a pixel within a triangle?

I'll assume that English is not your primary language (not mine either, so don't worry :)). "Vertices" is the plural form of "vertex", so it's "a vertex" and "many vertices". "Vertex" is synonymous with "corner", so three vertices make a triangle.

Edit: Xmas beat me to it...

I'm from the Netherlands :) Mostly I do quite all right in English, I think, but some words 'grow' on you. From the context used, vertices seemed to be those triangles.

Learning all the time :D
 
Humus said:
DiGuru said:
Yes, I know bump-maps are textures that add/subtract lighting info. But I have seen some demos that do reflection based on the 'bumps'. So, a 'smart' shader could use that to smooth edges, surely?

A shader can't smooth silhouette edges, but it can smooth the appearance of a character. For instance, look at the DPS demo on my site.

Yes, you make great technical demos! That demo does exactly what I was visualizing. But why can't edges be done the same way?

EDIT: I don't mean silhouette edges. I mean the edges between polygons.

And could edges be smoothed by calculating the edge of the object, smoothing that, and using it as an inverse bump-map on the object? Like superimposed AA on a larger scale?
 
DiGuru said:
Humus said:
DiGuru said:
Yes, I know bump-maps are textures that add/subtract lighting info. But I have seen some demos that do reflection based on the 'bumps'. So, a 'smart' shader could use that to smooth edges, surely?
A shader can't smooth silhouette edges, but it can smooth the appearance of a character. For instance, look at the DPS demo on my site.
Yes, you make great technical demos! That demo does exactly what I was visualizing. But why can't edges be done the same way?
The vertex shader only works on a single vertex at a time, so it can't be used since it doesn't know anything about edges.

The pixel shader works on a single pixel at a time and also has no concept of edge.

The only way to do what you propose is to do something like supersampling or multipass rendering. With multipass rendering, you could run an edge-detect shader and do some processing on the pixels in the shader.
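
To make the multipass idea concrete, here is a minimal C sketch of an edge-detect pass over a luminance buffer, the kind of thing that shader would do per pixel. The buffer layout and the 0.1 threshold are made up for illustration.

/* Flag a pixel as an edge when its luminance differs strongly from
 * its right and bottom neighbors (a crude gradient test). */
static void edge_detect(const float *lum, int w, int h, unsigned char *edges)
{
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            float c  = lum[y * w + x];
            float dx = c - lum[y * w + x + 1];
            float dy = c - lum[(y + 1) * w + x];
            edges[y * w + x] = (dx * dx + dy * dy > 0.1f * 0.1f);
        }
}

A second pass could then blur or otherwise process only the flagged pixels.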
 
So, how are the pixels of the triangles generated, shaded and passed to the pixel pipeline/shader? I don't understand that part, given that the vertex shader only handles coordinates.
 
DiGuru, I'm not entirely sure what you are trying to say, but I assume you are talking about how normal vertex lighting paired with Gouraud shading makes polygon edges stand out.

One solution to this is using Phong shading, but without all the frills. You can do this by normalizing the normals (using a cube map or math) and then lighting with an ordinary dot product in the pixel shader. This helps a lot with making the lighting smoother, much like you see in a tessellated model.

You could say this is easier than making bump maps for the sole reason that you don't have to make them.
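
As a sketch of that per-pixel lighting, in C rather than shader code (names are illustrative): renormalize the interpolated normal, then take the dot product with the light direction, clamped at zero.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 normalize(Vec3 v)
{
    float inv = 1.0f / sqrtf(dot(v, v));
    Vec3 r = { v.x * inv, v.y * inv, v.z * inv };
    return r;
}

/* Per-pixel diffuse term. The interpolated normal shortens between
 * vertices, so it must be renormalized before the dot product -
 * that renormalization is exactly what the cube map trick does. */
static float diffuse(Vec3 interp_normal, Vec3 light_dir)
{
    float d = dot(normalize(interp_normal), normalize(light_dir));
    return d > 0.0f ? d : 0.0f;   /* clamp: no negative light */
}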
 
DiGuru said:
So, how are the pixels of the triangles generated, shaded and passed to the pixel pipeline/shader? I don't understand that part, given that the vertex shader only handles coordinates.
Between the vertex shader and the pixel shader, there's the triangle setup and rasterizer unit. This is basically a non-programmable part that forms triangles from three vertices and generates quads, 2x2 pixel sets, to be passed on to the pixel pipeline.
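
As a toy illustration of what that fixed-function unit does (not actual hardware code), here is a C sketch that walks a triangle's bounding box in 2x2 steps and tests each pixel center against the three edge functions; covered pixels would then have their attributes interpolated and be handed to the pixel shader as part of the quad.

typedef struct { float x, y; } Vec2;

/* Edge function: positive on one side of the directed edge a->b.
 * For a counter-clockwise triangle, all three are positive inside. */
static float edge(Vec2 a, Vec2 b, float px, float py)
{
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

static void rasterize(Vec2 v0, Vec2 v1, Vec2 v2,
                      int xmin, int ymin, int xmax, int ymax)
{
    for (int y = ymin; y < ymax; y += 2)          /* step in 2x2 quads */
        for (int x = xmin; x < xmax; x += 2)
            for (int qy = 0; qy < 2; ++qy)
                for (int qx = 0; qx < 2; ++qx) {
                    float px = x + qx + 0.5f, py = y + qy + 0.5f;
                    if (edge(v0, v1, px, py) >= 0.0f &&
                        edge(v1, v2, px, py) >= 0.0f &&
                        edge(v2, v0, px, py) >= 0.0f) {
                        /* covered: interpolate attributes, pass the
                           quad on to the pixel pipeline */
                    }
                }
}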
 
Mintmaster said:
DiGuru, I'm not entirely sure what you are trying to say, but I assume you are talking about how normal vertex lighting paired with Gouraud shading makes polygon edges stand out.

One solution to this is using Phong shading, but without all the frills. You can do this by normalizing the normals (using a cube map or math) and then lighting with an ordinary dot product in the pixel shader. This helps a lot with making the lighting smoother, much like you see in a tessellated model.

You could say this is easier than making bump maps for the sole reason that you don't have to make them.

Thanks. You are right, that was it. But a stupid question: what is a dot product? The mean between two coordinates?
 
Xmas said:
DiGuru said:
So, how are the pixels of the triangles generated, shaded and passed to the pixel pipeline/shader? I don't understand that part, given that the vertex shader only handles coordinates.
Between the vertex shader and the pixel shader, there's the triangle setup and rasterizer unit. This is basically a non-programmable part that forms triangles from three vertices and generates quads, 2x2 pixel sets, to be passed on to the pixel pipeline.

Thanks, that clears that up! So, the vertex shaders transform the coordinates and attach some info to them about how the lighting changes at that point? And then the rasterizer attaches a lighting offset to each pixel generated?
 
DiGuru said:
Thanks. You are right, that was it. But a stupid question: what is a dot product? The mean between two coordinates?
The dot product of two 3D vectors x = (x1, x2, x3) and y = (y1, y2, y3) is
x1 * y1 + x2 * y2 + x3 * y3, which happens to be |x| * |y| * cos(theta),
where theta is the angle between the two vectors. So if both vectors are normalized, i.e. of unit length, the result of x dot y is cos(theta), solely dependent on the angle between the two vectors. When doing lighting calculations it is important to know the angle between the surface normal (a vector perpendicular to the surface) and the light incident vector (diffuse lighting), or between the surface normal and a half-vector between the light incident vector and the eye vector (specular lighting). Simply put, it helps calculate how a surface is exposed to light, and how it reflects light.

This lighting can either be done per vertex (the resulting colors are interpolated across a triangle) or per pixel.
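
A tiny worked example in C, just to make the cos(theta) relationship concrete: two perpendicular unit vectors give a dot product of 0, i.e. an angle of 90 degrees.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* x = (1,0,0), y = (0,1,0): both already unit length */
    float d = 1.0f * 0.0f + 0.0f * 1.0f + 0.0f * 0.0f;
    printf("dot = %f, angle = %f degrees\n",
           d, acosf(d) * 180.0f / 3.14159265f);   /* prints 0 and 90 */
    return 0;
}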
 
Xmas said:
DiGuru said:
Thanks. You are right, that was it. But a stupid question: what is a dot product? The mean between two coordinates?
The dot product of two 3D vectors x = (x1, x2, x3) and y = (y1, y2, y3) is
x1 * y1 + x2 * y2 + x3 * y3, which happens to be |x| * |y| * cos(theta),
where theta is the angle between the two vectors. So if both vectors are normalized, i.e. of unit length, the result of x dot y is cos(theta), solely dependent on the angle between the two vectors. When doing lighting calculations it is important to know the angle between the surface normal (a vector perpendicular to the surface) and the light incident vector (diffuse lighting), or between the surface normal and a half-vector between the light incident vector and the eye vector (specular lighting). Simply put, it helps calculate how a surface is exposed to light, and how it reflects light.

This lighting can either be done per vertex (the resulting colors are interpolated across a triangle) or per pixel.

So, if I understand it correctly, it is like diffusion (or refraction): the vector that specifies the amount of light going in a certain direction, calculated from the vector of the source, by the reflecting surface (filter?) and on to the intended target?
 
Put simply:

As you have no doubt seen with real-life curved objects, the more a surface faces away from the light source, the darker it is. When the surface is facing directly toward the light, it is at its brightest; when the angle approaches perpendicular to the light vector, the surface is at its darkest. The dot3 product of the light vector and the normal vector provides a good approximation of this. This is not only how Gouraud shading works, but it is also how bump-mapping and normal-mapping work.
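
A minimal C sketch of that dot3 idea as bump-mapping uses it (names and layout are illustrative): the per-pixel normal comes out of a texture, stored as bytes, so it is first expanded from [0,255] back to [-1,1] and then dotted with the light vector.

typedef struct { unsigned char r, g, b; } Texel;

/* dot3 lighting from a normal map: expand the stored normal and
 * dot it with the (tangent-space) light direction. */
static float dot3_light(Texel n, float lx, float ly, float lz)
{
    float nx = n.r / 127.5f - 1.0f;   /* [0,255] -> [-1,1] */
    float ny = n.g / 127.5f - 1.0f;
    float nz = n.b / 127.5f - 1.0f;
    float d  = nx * lx + ny * ly + nz * lz;
    return d > 0.0f ? d : 0.0f;       /* facing away -> dark */
}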
 
That way, you could automatically soften a low-poly model. Or is this done already?

Have a search for "Subdivision Surfaces" such as "Catmull-Clark" or "Loop". These take poly meshes and produce more refined (i.e. higher poly count) versions.
NeARAZ said:
Yes, and it's called "N-Patches" :)
N-Patches are a bit of a kludgy way of doing something like subdivision surfaces.
 