Shading Instructions and Rasterizer!

Thanks, I think I can settle for this explanation.


Advice appreciated :smile:. I am willing to do just that, but I am under the impression that being good at software (designing and writing code) doesn't necessarily mean understanding things, and by that I mean the deep-gut understanding which gives you the ability to visualize what is going on in your mind.

Sure, you can understand things reasonably well without writing code. But writing, running, and experimenting with lots of non-toy code is essential for a deep-gut understanding.
 
Personally, I would not describe screen space as being the same as global coordinates. To me, global coordinates are used to assemble all the models etc into the same coordinate space and for doing, say, lighting calculations and perhaps collision detection.

Screen coordinates are in pixel dimensions and are used for rasterisation.
Agreed, I could have answered a bit better outside of the context of the question.
 
Personally, I would not describe screen space as being the same as global coordinates. To me, global coordinates are used to assemble all the models etc into the same coordinate space and for doing, say, lighting calculations and perhaps collision detection.

Screen coordinates are in pixel dimensions and are used for rasterisation.
Ok, I get it. What I don't get, however, is the hardware location for storing those global coordinates. Where is it exactly? In memory?

Agreed, I could have answered a bit better outside of the context of the question.
Mr. Rys, would you be kind enough to answer this question:
In your article http://www.beyond3d.com/content/reviews/51/2 , in which you reviewed the GT200 architecture, you mentioned that each cluster contains 3x8 FP32 scalar ALUs and 3x8 FP32 scalar interpolators. Are those the Special Function Units?
 
Yeah, the SFU is responsible for interpolation for the rest of the SM in that generation of their hardware.
 
Throughput is 1/4 rate :)
Aha! I see :smile:.
So these units are not pipelined?
To further elaborate on my question: for example, the shader cores in GT200 (8 units per block) might take 4 clocks to complete a multiply instruction, but because they are pipelined, they can still effectively deliver one instruction per cycle. Does the same apply to those SFUs?
 
The units are pipelined, but they do take four clocks to compute the result too. If you're keen on talking more about a specific vendor's hardware, we should probably set up a new thread so this fairly pure one stays that way :smile:
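To make the latency-versus-throughput distinction concrete, here is a tiny Python sketch of a hypothetical pipelined unit; the 4-cycle figure simply echoes the discussion above and is not taken from any vendor documentation.

```python
# Toy model of a pipelined execution unit (hypothetical numbers).
LATENCY = 4      # cycles from issuing an instruction to getting its result
ISSUE_RATE = 1   # a new independent instruction can enter the pipe every cycle

def cycles_pipelined(n):
    """Cycles to finish n independent back-to-back instructions when pipelined."""
    # First result after LATENCY cycles, then one more result per issue slot.
    return LATENCY + (n - 1) * ISSUE_RATE

def cycles_unpipelined(n):
    """Same unit if each instruction had to wait for the previous one to finish."""
    return n * LATENCY

for n in (1, 8, 1000):
    print(n, cycles_pipelined(n), cycles_unpipelined(n))
# For large n the pipelined unit approaches one instruction per cycle,
# even though every individual instruction still takes 4 cycles.
```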
 
The units are pipelined, but they do take four clocks to compute the result too. If you're keen on talking more about a specific vendor's hardware, we should probably set up a new thread so this fairly pure one stays that way :smile:
Okay :D .. let's keep this thread as it is. I will be back with more "pure" questions once I nourish my 3D knowledge with some reading, but I will put a pin in that question; it will be brought back to light in a new thread once this one is done :cool:.

Of course I can't express my gratitude enough, thank you very much.
 
Ok, I came across another thing that I don't understand:
It's about texture coordinates. Each texel has a 4-way coordinate system. Now, I realize that each texel is in fact a 2D color, and its color is represented using the usual RGBA system, so the question is: why use a 4-way coordinate system on a 2D point?
 
Personally, I would not describe screen space as being the same as global coordinates. To me, global coordinates are used to assemble all the models etc into the same coordinate space and for doing, say, lighting calculations and perhaps collision detection.

Screen coordinates are in pixel dimensions and are used for rasterisation.
Ok, I get it. What I don't get, however, is the hardware location for storing those global coordinates. Where is it exactly? In memory?
Global coordinates might not be "stored" anywhere. They might only exist temporarily in a shader program.
 
Global coordinates might not be "stored" anywhere. They might only exist temporarily in a shader program.
I see, but that would still count as being in "memory", whether it is system RAM or registers, right?
What about this question, then:
It's about texture coordinates. Each texel has a 4-way coordinate system. Now, I realize that each texel is in fact a 2D color, and its color is represented using the usual RGBA system, so the question is: why use a 4-way coordinate system on a 2D point?
 
I see, but that would still count as being in "memory", whether it is system RAM or registers, right?
What exactly do you mean by "global coordinates"? If you talk about transform matrices then they will usually be stored in "constant" buffers which reside somewhere in memory (either CPU or GPU) and get uploaded to GPU cache when used.
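As a rough illustration of that point, here is a plain-Python stand-in for a vertex shader; the matrix values, positions, and the module-level "constant buffer" are made-up examples, not real API usage.

```python
# Stand-in for a "constant buffer": a world (model) matrix set up per object.
# The translation here (+5 on X) is an arbitrary example value.
WORLD_MATRIX = [
    [1.0, 0.0, 0.0, 5.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

def transform(matrix, position):
    """Multiply a 4x4 matrix by a position given as (x, y, z, 1)."""
    return [sum(matrix[row][col] * position[col] for col in range(4))
            for row in range(4)]

def vertex_shader(object_space_position):
    """The world-space ("global") position exists only here, as a temporary
    value, unless the program explicitly writes it out somewhere."""
    p = list(object_space_position) + [1.0]      # homogeneous coordinate
    world_space = transform(WORLD_MATRIX, p)     # object space -> world space
    # ...lighting or the view/projection transform would use world_space here...
    return world_space[:3]

print(vertex_shader((1.0, 2.0, 3.0)))   # -> [6.0, 2.0, 3.0]
```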

It's about texture coordinates. Each texel has a 4-way coordinate system. Now, I realize that each texel is in fact a 2D color, and its color is represented using the usual RGBA system, so the question is: why use a 4-way coordinate system on a 2D point?
You don't always deal with just 2D coordinates. You can have 3D textures, projected 2D textures, cube maps,... Not to mention that this is the only way for vertex shaders to talk to pixel shaders, and any data you want to pass over has to go through here. These are also quite limited resources: SM 3.0 can pass 10 4D vectors, SM 4.0 can pass 16, and SM 4.1/5.0 can pass 32. Of course hardware might deal faster with 2D vectors than it does with 4D vectors.
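To illustrate why a texture coordinate can need more than two components, here is a small Python sketch; the nested-list "texture" and the nearest-neighbour lookup are simplified stand-ins, not a real graphics API.

```python
# A tiny 2x2 "texture" of RGBA texels, purely for illustration.
TEXTURE_2D = [
    [(255, 0, 0, 255), (0, 255, 0, 255)],
    [(0, 0, 255, 255), (255, 255, 255, 255)],
]

def sample_2d(tex, u, v):
    """Ordinary 2D lookup: only the first two components (u, v) are used."""
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

def sample_projective(tex, u, v, q):
    """Projective lookup (e.g. a projected spotlight texture): the fourth
    component q divides the others before the fetch, so even a "2D" texture
    can genuinely consume four coordinate components."""
    return sample_2d(tex, u / q, v / q)

print(sample_2d(TEXTURE_2D, 0.25, 0.75))         # -> (0, 0, 255, 255)
print(sample_projective(TEXTURE_2D, 1.0, 0.0, 2.0))  # -> (0, 255, 0, 255)
```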
 
What exactly do you mean by "global coordinates"? If you talk about transform matrices then they will usually be stored in "constant" buffers which reside somewhere in memory (either CPU or GPU) and get uploaded to GPU cache when used.
Yes, (sigh) I meant the transform matrix. Thanks for the fulfilling answer. :runaway:

You don't always deal with just 2D coordinates. You can have 3D textures, projected 2D textures, cube maps,... Not to mention that this is the only way for vertex shaders to talk to pixel shaders, and any data you want to pass over has to go through here. These are also quite limited resources: SM 3.0 can pass 10 4D vectors, SM 4.0 can pass 16, and SM 4.1/5.0 can pass 32. Of course hardware might deal faster with 2D vectors than it does with 4D vectors.
Thanks again for the excellent answer.
 
Alright, I am back!

I have been reading about lighting for a while now. The problem is that I gathered most of my knowledge from different sporadic sources, without a consistent, organized supply of info, so my understanding of the concept still has massive holes. I plan to fill them by asking you guys.

First, as I understand it, objects that need to be lit have some pre-defined characteristics, like ambient, emissive, specular, etc. These factors are like colors; they are used to alter the interaction between the light and the object.

To put it simply, they are like color modifiers: if an object has a value of 4 in its ambient parameter, then it behaves in a certain way when exposed to light; for example, when exposed to a green light, the object would always take on a dark color.

Q1: Is my understanding of that concept correct?

Secondly, the process of calculating lighting boils down to two basic operations: determining distances and angles.

It starts with creating a normal for the triangle that needs to be lit, then the distance between that triangle and the light source is calculated, and since lighting occurs after rasterization, it is basically done in 2D mode, so the distance is calculated simply by subtraction; for example, the distance between point 9 and point 3 is 6: 9-3=6.

The value of the distance is then treated as a vector, and the angle between that vector and the surface normal is calculated by the cosine law.

Kinda like this picture: [image: normals_02.gif]


Finally, the color of the triangle is calculated from the equation: L (luminance of the light source) × angle.

Q2: Is my understanding of this concept correct?

Q3: If it is correct, then where do the (emissive, specular, ambient) criteria fit in? Before or after the luminance equation?

Thanks in advance.
 
That's a bit of a complex subject, but here it goes:
What you are describing is called Lambertian reflectance. It comes from the observation that a ray of light hitting an object will be a lot more spread out if it hits the surface at a shallow angle (so the angle between N and L is high => N.L is close to zero) compared to that same ray hitting the surface at a right angle (the angle between N and L is near 0 => N.L is close to one). This is also independent of observer position. This is called the diffuse term.

This of course is just an approximation and actual lighting is far, far more complicated than that. The first obvious problem is that some materials (plastics, metallic paint on cars,...) reflect far more light in one direction, so the material tends to shine when you look at it from the right angle. This is called the specular term and is obviously dependent on observer location (so it's view dependent). This is again just a simplification, and far more complicated materials exist (such as CD and DVD media, for example).

Probably the most complicated (or evil, if you will) of all is the ambient term. This simulates lighting that comes from the environment. If you park your car in a garage without any windows or lights on a sunny day, you'll still see your car just fine even though none of the photons coming from the sun hit your car directly. They hit your car indirectly (hence the name indirect lighting): they might have bounced off your neighbour's house, then off your car and into your eye. This is what global illumination algorithms are trying to solve. Generally this is too complicated for real time, so it is just replaced by a constant color.

So your pixel will be something like: color = ambientcolor + diffusecolor + specularcolor
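A minimal sketch of that sum in plain Python; the vectors, material colours, and the Blinn-Phong-style specular term are illustrative choices on my part, not something pulled from any particular engine.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade(normal, light_dir, view_dir, material):
    """color = ambientcolor + diffusecolor + specularcolor, as described above."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)

    # Lambertian (diffuse) term: proportional to the cosine of the angle between N and L.
    ndotl = max(dot(n, l), 0.0)

    # Simple view-dependent specular term (Blinn-Phong half-vector, one common approximation).
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    spec = max(dot(n, h), 0.0) ** material["shininess"]

    return tuple(
        material["ambient"][i]
        + material["diffuse"][i] * ndotl
        + material["specular"][i] * spec
        for i in range(3)
    )

mat = {"ambient": (0.1, 0.1, 0.1), "diffuse": (0.8, 0.2, 0.2),
       "specular": (1.0, 1.0, 1.0), "shininess": 32}
print(shade((0, 0, 1), (0.5, 0.5, 1.0), (0, 0, 1), mat))
```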

It starts with creating a normal for the triangle that needs to be lit, then the distance between that triangle and the light source is calculated, and since lighting occurs after rasterization, it is basically done in 2D mode, so the distance is calculated simply by subtraction; for example, the distance between point 9 and point 3 is 6: 9-3=6.
Not true. Lighting can be done per vertex (and interpolated) or per pixel. In both cases you need the distance from your 3D sample position to your 3D light position (the light direction vector). You will calculate this light direction vector + distance (sqrt(lightdir dot lightdir)) per vertex and pass it on through interpolators to the pixel shader. Normals are again either per vertex or per texel (packed in a texture called a normal map).
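In code terms, the per-vertex part described above might look roughly like this; the positions are arbitrary example values.

```python
import math

def light_vector_and_distance(vertex_pos, light_pos):
    """Per-vertex: build the 3D vector from the surface point to the light,
    and get the distance as sqrt(lightdir . lightdir)."""
    lightdir = tuple(lp - vp for lp, vp in zip(light_pos, vertex_pos))
    distance = math.sqrt(sum(c * c for c in lightdir))
    # The normalised direction is what the N.L term actually uses.
    direction = tuple(c / distance for c in lightdir)
    return direction, distance

direction, distance = light_vector_and_distance((1.0, 0.0, 0.0), (1.0, 3.0, 4.0))
print(direction, distance)   # (0.0, 0.6, 0.8) 5.0
```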
 
First of all, thanks for your time and your highly detailed answer.

This of course is just an approximation and actual lighting is far, far more complicated than that
Of course I understand that actual lighting in real life is far more complicated. I also understand that the values of ambient+diffuse+specular are just approximations. However, I presume that these values are pre-defined by the game developer; they might be stored in a database or something, right?

So your pixel will be something like: color = ambientcolor + diffusecolor + specularcolor
In this case, are you talking about the color before lighting calculations are performed, or after?

Not true. Lighting can be done per vertex (and interpolated) or per pixel.
Yes, that is correct; I chose triangles for simplicity's sake.

You will calculate this light direction vector + distance (sqrt(lightdir dot lightdir)) per vertex
And by distance per vertex you mean the angle?

Normals are again either per vertex or per texel (packed in a texture called a normal map)
You mean that the normals of texels are pre-calculated offline and stored in something called a normal map, right? I don't suppose the same applies to vertices, does it?

Again, I can't thank you enough for your detailed insights.
 
I presume that these values are pre-defined by the game developer; they might be stored in a database or something, right?

It can be done both ways, i.e. precomputed offline and stored, or computed on the fly.

In this case, are you talking about the color before lighting calculations are performed, or after?

He's saying that lighting calculations can be done either before (per vertex lighting) or after (per pixel lighting) rasterization.

Color of a pixel is always determined after lighting calculations have been done.

And by distance per vertex you mean the angle?

No, distance means just that. Distance. As in the distance between LA and NY.
 
It can be done both ways, i.e. precomputed offline and stored, or computed on the fly.
The way I understand it, these values are just values that alter the outcome of the lighting calculations; they don't need to be calculated themselves, they only need to be determined.

It's like saying that the frequency factor alters the outcome of a sound sample: by setting the frequency higher, the sound becomes high-pitched, and by tuning the value down, the sound changes accordingly.


He's saying that lighting calculations can be done either before (per vertex lighting) or after (per pixel lighting) rasterization.
I know that; I am asking whether the values of ambientcolor + diffusecolor + specularcolor are incorporated before or after the lighting calculations?

No, distance means just that. Distance. As in the distance between LA and NY.
Please elaborate on this point: what is the use of the distance per vertex?
 
I know that; I am asking whether the values of ambientcolor + diffusecolor + specularcolor are incorporated before or after the lighting calculations?
Calculating ambientcolor + diffusecolor + specularcolor is the lighting calculation.

Please elaborate on this point: what is the use of the distance per vertex?

The farther you go from a light source, the weaker it becomes. So for an accurate lighting calculation, the strength of each light source at that point is also considered.
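One common way that falloff is modelled, for example in fixed-function-style lighting, is a constant/linear/quadratic attenuation factor; the coefficients below are arbitrary examples, not values from any specific API.

```python
def attenuation(distance, constant=1.0, linear=0.1, quadratic=0.01):
    """Classic attenuation factor: the light's contribution is scaled by
    1 / (c + l*d + q*d^2), so it fades with distance from the source."""
    return 1.0 / (constant + linear * distance + quadratic * distance * distance)

for d in (1.0, 5.0, 20.0):
    print(d, attenuation(d))
# The lit colour at a point would then be roughly:
# colour = ambient + attenuation(d) * (diffuse + specular)
```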
 