Illumination "sticking" to the object?

All:

I'm in RenderMonkey and I noticed that I've assigned the light positions to texture coordinates. Does that mean the illumination is being applied per-pixel, with the intensity stored in a texture map? The reason I ask is that when I rotate the object in the 3D viewer, the light intensity doesn't change as the object moves. So if I place one light down the positive z axis and illuminate one side of an object, then rotate that object, the other side never gets lit.

-M
 
You're not very clear, but as I understand it you're seeing world-space lighting when you expected eye-space lighting. The lighting model is of course decided by the shader, so if it doesn't work the way you want, you should change the shader.
 
Humus said:
You're not very clear, but as I understand it you're seeing world-space lighting when you expected eye-space lighting. The lighting model is of course decided by the shader, so if it doesn't work the way you want, you should change the shader.

I'll fiddle with it some more. I assumed that the light vectors declared in the vertex shader would be in world space, and that when I'm viewing the model in the window I'm seeing it in camera space (with the camera itself positioned in world space). I guess I'll try transforming the light vector into camera space and see if that changes anything.

-M
 
I still haven't been able to get real-time lighting to work. It seems that the light(n) positions are all in local space (as is the input position), so the lighting just gets mapped onto the teapot through its texture coordinates. I need the light positions to be in world space so that when I rotate the teapot, whichever side faces the light gets illuminated. Here's my vertex shader code:

// Shader globals bound by the host application (RenderMonkey variable editor)
float4x4 view_proj_matrix;
float4   light0;
float4   light1;

struct VS_OUTPUT
{
    float4 pnt_cam : POSITION;
    float4 normal  : TEXCOORD0;
    float3 L_V0    : TEXCOORD1;
    float3 L_V1    : TEXCOORD2;
};

VS_OUTPUT vs_main(float4 pnt_world : POSITION,
                  float4 in_norm   : NORMAL)
{
    VS_OUTPUT Out;
    Out.pnt_cam = mul(view_proj_matrix, pnt_world);
    Out.normal  = in_norm;
    Out.L_V0    = normalize(light0 - pnt_world);
    Out.L_V1    = normalize(light1 - pnt_world);

    return Out;
}


What am I doing wrong here? Perhaps the point I *think* is in world space is really in object space?

-M
 
Mr. Blue said:
I still haven't been able to get real-time lighting to work. It seems that the light(n) positions are all in local space (as is the input position), so the lighting just gets mapped onto the teapot through its texture coordinates. I need the light positions to be in world space so that when I rotate the teapot, whichever side faces the light gets illuminated. Here's my vertex shader code:

// Shader globals bound by the host application (RenderMonkey variable editor)
float4x4 view_proj_matrix;
float4   light0;
float4   light1;

struct VS_OUTPUT
{
    float4 pnt_cam : POSITION;
    float4 normal  : TEXCOORD0;
    float3 L_V0    : TEXCOORD1;
    float3 L_V1    : TEXCOORD2;
};

VS_OUTPUT vs_main(float4 pnt_world : POSITION,
                  float4 in_norm   : NORMAL)
{
    VS_OUTPUT Out;
    Out.pnt_cam = mul(view_proj_matrix, pnt_world);
    Out.normal  = in_norm;
    Out.L_V0    = normalize(light0 - pnt_world);
    Out.L_V1    = normalize(light1 - pnt_world);

    return Out;
}


What am I doing wrong here? Perhaps the point I *think* is in world space is really in object space?

-M

perhaps you may want to transform your normals as well?
 
That code looks fine to me, except I wouldn't normalize the light vectors in the vertex shader. The length is useful for attenuation calculations in the fragment shader, and you'll have to normalize in the fragment shader anyway since interpolation won't give you unit-length vectors.

Anyway, a good way to see if things are right or wrong is to output the light too. Write a small pass that outputs a billboarded quad where the light is. It will help you debug.
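
For what it's worth, here's a minimal sketch of the attenuation idea in the pixel shader. It's not code from this thread; atten_scale is just an illustrative constant you'd set yourself:

Code:
// Keep the interpolated light vector unnormalized: its squared length
// drives a simple falloff, and normalization happens here per pixel.
float4 light0_color;
float  atten_scale;   // illustrative falloff factor set by the app

float4 ps_main(float3 normal  : TEXCOORD0,
               float3 light_v : TEXCOORD1) : COLOR0
{
    float dist2   = dot(light_v, light_v);             // squared distance to the light
    float atten   = 1.0 / (1.0 + atten_scale * dist2);
    float n_dot_l = saturate(dot(normalize(normal), normalize(light_v)));
    return float4(n_dot_l * atten * light0_color.rgb, 1.0);
}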
 
Humus said:
That code looks fine to me, except I wouldn't normalize the light vectors in the vertex shader. The length is useful for attenuation calculations in the fragment shader, and you'll have to normalize in the fragment shader anyway since interpolation won't give you unit-length vectors.

Anyway, a good way to see if things are right or wrong is to output the light too. Write a small pass that outputs a billboarded quad where the light is. It will help you debug.

I think the problem may be that the light vectors are being passed through texture coordinates, using the TEXCOORDn semantics. It's as if the shading is being mapped onto the UV coordinates of the teapot (as if it were a texture map), so it isn't really calculating illumination on the fly (like OpenGL does).

-M
 
I'm confused. :?
Passing information from the vertex shader to the fragment shader through texture coordinates is the standard way of doing things. It need not actually be related to textures in any way. It should really be called "interpolators", or "varying" as in glslang; they're generic interpolated attributes these days. Could you post the fragment shader too?
 
Humus said:
I'm confused. :?
Passing information from the vertex shader to the fragment shader through texture coordinates is the standard way of doing things. It need not actually be related to textures in any way. It should really be called "interpolators", or "varying" as in glslang; they're generic interpolated attributes these days. Could you post the fragment shader too?

Will do when I get home.

What I'm trying to do is simply implement a per-pixel Lambertian shader, and while I get the appearance that I want, when I rotate the teapot with the mouse the light illuminating the surface stays on one side of the teapot, so I see a dark side if I rotate it 180 degrees.

-M
 
It sounds as if you're treating your light position/direction as if it's defined in world space. That wouldn't alter it when the viewing transformation changes (probably what happens in RenderMonkey when the teapot rotates). You need it in view space if you want it to stay at a fixed position relative to the camera.
 
GameCat said:
It sounds as if you're treating your light position/direction as if it's defined in world space. That wouldn't alter it when the viewing transformation changes (probably what happens in RenderMonkey when the teapot rotates). You need it in view space if you want it to stay at a fixed position relative to the camera.

Are you saying then, that the code should look like this?

VS_OUTPUT vs_main(float4 pnt_world : POSITION,
                  float4 in_norm   : NORMAL)
{
    VS_OUTPUT Out;
    Out.pnt_cam = mul(view_proj_matrix, pnt_world);
    Out.normal  = in_norm;
    Out.L_V0    = mul(view_proj_matrix, light0) - Out.pnt_cam;
    Out.L_V1    = mul(view_proj_matrix, light1) - Out.pnt_cam;

    return Out;
}

I think I did this, but I'll try it again.

-M
 
If you need to get to eye space you need to transform with the view matrix, not the concatenation of the view and projection matrices. You also need your normals and light vectors in the same space, so if you transform the light into eye space you should transform the normals from object space into eye space as well.

The alternative is doing all lighting in object space which requires transforming just the lights, not the normals.

A simple test to see whether what you have is working is to use the view vector as the lighting direction. This gives you simple "miner's lamp" lighting and lets you check that everything looks correct.
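
Here's a rough HLSL sketch of the eye-space route. The variable names (view_matrix, light0_world) are illustrative rather than RenderMonkey built-ins, and it assumes a rigid transform so normals can be rotated by the same 3x3 as positions:

Code:
float4x4 view_matrix;        // object/world -> eye
float4x4 view_proj_matrix;   // object/world -> clip
float4   light0_world;       // light position, assumed given in world space

struct VS_OUTPUT
{
    float4 pos_clip : POSITION;
    float3 normal   : TEXCOORD0;   // eye-space normal
    float3 L_V0     : TEXCOORD1;   // eye-space, unnormalized vertex-to-light vector
};

VS_OUTPUT vs_main(float4 in_pos : POSITION, float3 in_norm : NORMAL)
{
    VS_OUTPUT Out;
    Out.pos_clip = mul(view_proj_matrix, in_pos);

    // Bring position, normal and light into one space (eye space) before lighting.
    float3 pos_eye   = mul(view_matrix, in_pos).xyz;
    float3 light_eye = mul(view_matrix, light0_world).xyz;

    Out.normal = mul((float3x3)view_matrix, in_norm);
    Out.L_V0   = light_eye - pos_eye;
    return Out;
}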
 
Also, there are plenty of sample workspaces in RenderMonkey you can take a look at. For instance illumination.rfx; activate the per-pixel illumination technique. I think that's roughly what you want.
 
Problem solved: HLSL not very intuitive.

All:

Thanks for the advice on getting my illumination shader to work. I'm somewhat taken aback, though, by the samples ATI has provided for the HLSL tutorial, as well as by this book I'm reading called Shader X2: Tutorials...

First, I was able to get the expected results by using the inverse view matrix to transform the light positions into world space (or object space). This gives me the illumination I expect: the lights sit somewhere out in world space, and I can move or rotate the teapot and the side facing a light is illuminated from that light's direction. Here is the source code for the shader. (Btw, Humus, there are a lot of examples in RenderMonkey that calculate the illumination incorrectly, but it's hidden because they use the view vector to change only the specular highlighting and not the diffuse component as well... i.e. the anisotropic, illumination, and NPR shaders come to mind.)

VERTEX SHADER:

float4x4 view_proj_matrix : register(c0);
float4   view_position;
float4   light0;
float4   light1;
float4x4 inv_view_matrix;

struct VS_OUTPUT
{
    float4 pnt_cam : POSITION;
    float4 normal  : TEXCOORD0;
    float4 L_V0    : TEXCOORD1;
    float4 L_V1    : TEXCOORD2;
};

VS_OUTPUT vs_main(float4 pnt_world : POSITION,
                  float4 in_norm   : NORMAL)
{
    VS_OUTPUT Out;
    Out.pnt_cam = mul(view_proj_matrix, pnt_world);
    Out.normal  = in_norm;

    // We need to transform the light positions from
    // camera space to world space so that the lights will
    // be static when the camera or the object moves
    Out.L_V0 = mul(inv_view_matrix, light0) - pnt_world;
    Out.L_V1 = mul(inv_view_matrix, light1) - pnt_world;

    return Out;
}

PIXEL SHADER:

float4 Material;
float4 light0_color;
float4 light1_color;
float  kd;

float4 ps_main(float3 normal   : TEXCOORD0,
               float3 light_v0 : TEXCOORD1,
               float3 light_v1 : TEXCOORD2) : COLOR0
{
    // declare final color
    float4 final_color;

    // Light loop
    // N dot L : Lambertian equation
    float n_dot_l_0 = kd * clamp(dot(normalize(light_v0), normal), 0, 1);
    float n_dot_l_1 = kd * clamp(dot(normalize(light_v1), normal), 0, 1);

    final_color[0] = (n_dot_l_0 * light0_color[0] + n_dot_l_1 * light1_color[0]) * Material[0];
    final_color[1] = (n_dot_l_0 * light0_color[1] + n_dot_l_1 * light1_color[1]) * Material[1];
    final_color[2] = (n_dot_l_0 * light0_color[2] + n_dot_l_1 * light1_color[2]) * Material[2];
    final_color[3] = 1.0;

    return final_color;
}


Some notes:

- A LOT of the shaders that come with RenderMonkey remap the dot product into the 0 - 1 range. The problem with this is that for lit surfaces the dot product is already between 0 and 1, only dipping below zero at grazing and back-facing angles. Instead of remapping the range, they should clamp. For example, suppose we calculate a dot product of 0.25, then halve it and add 0.5: we now have 0.625, which is not what we want, since the illumination comes out much brighter than we'd expect.

- I STILL don't know what space these global variables are initialized to!!! I need to know this (as does every other developer out there)!! Some of the variables in question are: view_position, light0 (is this in camera space initially?), inPos, and normal.

Btw, this code is an accurate implementation of a true per-pixel Lambertian lighting model, taking a diffuse scale factor and the light's color into account. The main limitation is that I can't loop through an array of lights (or can I?)
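
For a fixed light count, something along these lines might work (an untested sketch; NUM_LIGHTS and light_color[] are illustrative names, and the constant loop bound lets the compiler unroll the loop for older shader models):

Code:
#define NUM_LIGHTS 2

float4 Material;
float  kd;
float4 light_color[NUM_LIGHTS];

float4 ps_main(float3 normal   : TEXCOORD0,
               float3 light_v0 : TEXCOORD1,
               float3 light_v1 : TEXCOORD2) : COLOR0
{
    // Gather the interpolated light vectors into an array we can loop over.
    float3 L[NUM_LIGHTS];
    L[0] = light_v0;
    L[1] = light_v1;

    float3 N   = normalize(normal);
    float3 rgb = 0;

    for (int i = 0; i < NUM_LIGHTS; i++)
    {
        float n_dot_l = kd * saturate(dot(normalize(L[i]), N));
        rgb += n_dot_l * light_color[i].rgb;
    }

    return float4(rgb * Material.rgb, 1.0);
}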


Cheers,

-M
 
Re: Problem solved: HLSL not very intuitive.

Mr. Blue said:
First, I was able to get the expected results by using the inverse view matrix to transform the light positions into world space (or object space).
I was too lazy to read through the whole thread and you probably already know this, but based on my experience writing the old PowerVR SGL...
  • To do lighting in world space you need to transform the normals by the transpose of the inverse of the obj->world transformation matrix (see the sketch after this list). Vertex positions are transformed in the normal manner. This scheme works even if you are unevenly stretching/skewing your object, but is a bit more mathematically expensive than local lighting.
  • To do the lighting in local space, to be safe you should guarantee that your local->world matrix doesn't have uneven scaling (and better still, no scaling at all); you can then transform your lights back into local space and you don't have to worry about the light fall-off and spread etc. being distorted.
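
A hedged HLSL sketch of the first bullet (world_matrix, world_inv_transpose and light0_world are illustrative names, not RenderMonkey built-ins):

Code:
float4x4 world_matrix;          // object -> world
float4x4 world_inv_transpose;   // transpose of the inverse of world_matrix
float4   light0_world;

void to_world_space(float4 pos_obj, float3 norm_obj,
                    out float3 N_world, out float3 L_world)
{
    float3 pos_world = mul(world_matrix, pos_obj).xyz;
    // The inverse-transpose keeps normals perpendicular even under uneven scaling.
    N_world = normalize(mul((float3x3)world_inv_transpose, norm_obj));
    L_world = light0_world.xyz - pos_world;
}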
 
Thanks for the advice, Simon. It would be much easier if I knew what spaces I was working in and what each matrix means (i.e. what the hell is view_proj_matrix?).

The naming conventions here are atrociously different from the film industry's. Arrgghh! :(

-M
 
Mr. Blue said:
Thanks for the advice, Simon. It would be much easier if I knew what spaces I was working in and what each matrix means (i.e. what the hell is view_proj_matrix?).

The naming conventions here are atrociously different from the film industry's. Arrgghh! :(

-M
See http://www.opengl.org/developers/documentation/version1_4/glspec14.pdf

For OpenGL fixed function, see Chapter 2, Section 10.

Object coordinates are transformed by the modelview matrix to eye coordinates. Eye coordinates are transformed by the projection matrix to clip coordinates. You can transform directly from object coordinates to clip coordinates with the concatenation of the modelview and projection matrices, the modelviewprojection matrix.

(For DirectX, it's the view matrix, the projection matrix, and the viewprojection matrix.)
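
Spelled out in HLSL terms (just a sketch, matrix names for illustration only):

Code:
float4 to_clip(float4x4 modelview, float4x4 projection, float4 pos_obj)
{
    float4 pos_eye  = mul(modelview, pos_obj);    // object -> eye
    float4 pos_clip = mul(projection, pos_eye);   // eye -> clip
    // identical result with the pre-concatenated modelviewprojection matrix:
    // float4 pos_clip = mul(mul(projection, modelview), pos_obj);
    return pos_clip;
}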

Note, the modelview matrix should be invertible so that normals and directions can be transformed. Singular or near-singular modelview matrices result in undefined normals and directions. The projection matrix can be singular or not.


All lighting is documented in OpenGL fixed function to take place in eye coordinates. But when you specify a light in the API its position is transformed by the current modelview matrix, and its directions are transformed by the inverse transpose of the modelview matrix. That implies that if you want to define your lights in object coordinates, you need to load the same modelview matrix, but if you want to specify your lights in eye coordinates, just load an identity modelview matrix.


For shader programming, you get to pick and choose what you want to do, which is why Humus keeps answering it's whatever space you want it in.

BTW, pay particular attention to the frequency of calculations (something you don't have to worry much about in the film industry).

Example:

Code:
final_color[0] = (n_dot_l_0 * light0_color[0] + n_dot_l_1 * light1_color[0]) * Material[0]; 
final_color[1] = (n_dot_l_0 * light0_color[1] + n_dot_l_1 * light1_color[1]) * Material[1]; 
final_color[2] = (n_dot_l_0 * light0_color[2] + n_dot_l_1 * light1_color[2]) * Material[2]; 
final_color[3] = 1.0;

First, you can change this to:

Code:
final_color.rgb = (n_dot_l_0 * light0_color + n_dot_l_1 * light1_color) * Material;
final_color.a = 1.0;

BUT, since light0_color and light1_color and material are all constant across the surface, why not do this instead?

Code:
final_color.rgb = (n_dot_l_0 * light0_colorxMaterial + n_dot_l_1 * light1_colorxMaterial);
final_color.a = 1.0;

-mr. bill
 
mrbill said:
See http://www.opengl.org/developers/documentation/version1_4/glspec14.pdf

For OpenGL fixed function, see Chapter 2, Section 10.

Object coordinates are transformed by the modelview matrix to eye coordinates. Eye coordinates are transformed by the projection matrix to clip coordinates. You can transform directly from object coordinatates to clip coordinates with the concatentation of the modelview and projection matrices, the modelviewprojection matrix.

This makes sense to me, but this doesn't necessarily mirror HLSL's API.

According to this, all coordinates and vectors are automatically expressed in object space (or model space). But it seems that HLSL has different defaults for different variables.


All lighting is documented in OpenGL fixed function to take place in eye coordinates. But when you specify a light in the API its position is transformed by the current modelview matrix, and its directions are transformed by the inverse transpose of the modelview matrix. That implies that if you want to define your lights in object coordinates, you need to load the same modelview matrix, but if you want to specify your lights in eye coordinates, just load an identity modelview matrix.

If the lights are expressed in eye coordinates and the camera moves, then the lights will move with it. This is not the behavior you want. Lights should stay where they are (like in the real world) and things should move around them.

Can anyone come up with a solution to my problem other than the one I'm using? I've tried every combination to get the lights to stay still and the object to be lit according to the lights' direction, and only my solution seems to work, but I don't quite understand it (because I don't know what space "light0" is in initially). That's driving me nuts, actually.


For shader programming, you get to pick and choose what you want to do, which is why Humus keeps answering it's whatever space you want it in.

Fair enough, but I still need to know what space I'm dealing with initially, and there is no definition (which sucks). RenderMan, Mental Ray, and our own custom renderer all define the initial space for the shading point 'P' and for the light position 'L' you derive a vector from. You can then transform them to any space you want, but the default is specified. This doesn't seem to be well defined with HLSL.


BTW, pay particular attention to the frequency of calculations (something you don't have to worry much about in the film industry).

Thanks, but I'm not really trying to make a game here.:) This is good advice:


Code:
final_color.rgb = (n_dot_l_0 * light0_color + n_dot_l_1 * light1_color) * Material;
final_color.a = 1.0;

But this code is not intuitive at all and I probably wouldn't put this in a production shader:


BUT, since light0_color and light1_color and material are all constant across the surface, why not do this instead?


Code:
final_color.rgb = (n_dot_l_0 * light0_colorxMaterial + n_dot_l_1 * light1_colorxMaterial);
final_color.a = 1.0;

Be well,

-M
 
Re: Problem solved: HLSL not very intuitive.

Mr. Blue said:
(Btw, Humus, there are a lot of examples in RenderMonkey that calculate the illumination incorrectly, but it's hidden because they use the view vector to change only the specular highlighting and not the diffuse component as well... i.e. the anisotropic, illumination, and NPR shaders come to mind.)

Could you explain what you mean with that?
 
Re: Problem solved: HLSL not very intuitive.

Mr. Blue said:
- A LOT of the shaders that come with RenderMonkey remap the dot product into the 0 - 1 range. The problem with this is that for lit surfaces the dot product is already between 0 and 1, only dipping below zero at grazing and back-facing angles. Instead of remapping the range, they should clamp. For example, suppose we calculate a dot product of 0.25, then halve it and add 0.5: we now have 0.625, which is not what we want, since the illumination comes out much brighter than we'd expect.

This is called Half-Lambert diffuse. It makes a softer diffuse which doesn't turn the whole backside black. Fake? Yup, like every other lighting model :)
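
A small illustration of the two diffuse terms being discussed (not RenderMonkey code; N and L are assumed normalized):

Code:
float diffuse_lambert(float3 N, float3 L)
{
    return saturate(dot(N, L));     // clamp: the back side goes to black
}

float diffuse_half_lambert(float3 N, float3 L)
{
    return dot(N, L) * 0.5 + 0.5;   // remap [-1,1] to [0,1]: softer, never fully black
}
// e.g. dot(N, L) = 0.25 gives 0.25 with the clamp and 0.625 with the remap.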

Mr. Blue said:
- I STILL don't know what space these global variables are initialized to!!! I need to know this (as does every other developer out there)!! Some of the variables in question are: view_position, light0 (is this in camera space initially?), inPos, and normal.

view_position is the position of the camera in world space. light0 isn't a standard variable AFAIK, so it's whatever you set it to be and in whatever space you imagine it is. The value you set it to in the variable editor is the exact value you get in the shader. Your inPos and normal are in world space or object space (typically); they're whatever vertices you send to the API. In the RenderMonkey case, it's the raw model vertices, which I suppose you could take to mean world space, or object space if you so prefer.
 