How to perform lighting after rasterization?

vrmm

Newcomer
I have read about some optimizations for OpenGL ES. One of them is to perform the lighting after rasterization. But I don't know how to implement it, because we only get the fragments after rasterization. How could I perform the lighting after rasterization? I also wonder whether we can do per-vertex or per-pixel lighting after rasterization.
 
Link? I agree that this is a confusing choice of words.

This may be a matter of terminology. Rasterization can be used to refer to just the coverage part of the process, i.e. finding out which fragments are inside a surface and getting their parameters ready.

And lighting may, in this case, refer to computing the color of individual fragments.

And then maybe I got it all wrong, and they talk specifically about a deferred rendering system, which would compute visibility first, and color later.
 
vrmm said:
I have read about some optimizations for OpenGL ES. One of them is to perform the lighting after rasterization. But I don't know how to implement it, because we only get the fragments after rasterization. How could I perform the lighting after rasterization? I also wonder whether we can do per-vertex or per-pixel lighting after rasterization.

I believe you are referring to a technique known as Deferred Shading. Deferred shading uses several render targets into which per-fragment data such as position, surface normal, color, etc. are rendered/rasterized. A second rendering pass then performs the final lighting on a per-fragment basis by rendering a full-screen quad, using the render targets from the previous pass as input textures. In many situations (such as when your app is pixel-shader bound) this can be an optimization, since the complex material shaders only run on visible fragments.

Here's a presentation on deferred shading that was given at GDC:
http://www.gdconf.com/archives/2004/pritchard_matt.ppt
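
In case it helps to see the idea concretely, here is a minimal CPU-side sketch of the two passes. It is not real OpenGL ES code; names like GBufferTexel, geometryPass and lightingPass are made up for illustration. On hardware the G-buffer would live in render targets and the lighting pass would be a full-screen quad running a fragment shader.

```cpp
// Minimal CPU-side sketch of deferred shading (not real OpenGL ES code).
#include <array>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// One texel of the G-buffer: everything the lighting pass needs per pixel.
struct GBufferTexel {
    Vec3 position;  // world-space position
    Vec3 normal;    // world-space normal
    Vec3 albedo;    // material colour
    bool covered;   // was any geometry rasterized here?
};

constexpr int W = 4, H = 4;
using GBuffer = std::array<GBufferTexel, W * H>;

// Pass 1 ("geometry pass"): rasterize the scene and store per-fragment data.
// Here we just fill the buffer with a dummy flat surface facing +Z.
void geometryPass(GBuffer& gbuf) {
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            gbuf[y * W + x] = {{float(x), float(y), 0.0f},
                               {0.0f, 0.0f, 1.0f},
                               {1.0f, 0.5f, 0.2f},
                               true};
}

// Pass 2 ("lighting pass"): shade only the pixels that survived pass 1.
void lightingPass(const GBuffer& gbuf, Vec3 lightPos) {
    for (int i = 0; i < W * H; ++i) {
        const GBufferTexel& t = gbuf[i];
        if (!t.covered) continue;  // nothing was rasterized here
        Vec3 L = normalize(sub(lightPos, t.position));
        float diffuse = std::fmax(dot(t.normal, L), 0.0f);
        std::printf("pixel %d: colour = (%.2f, %.2f, %.2f)\n", i,
                    t.albedo.x * diffuse, t.albedo.y * diffuse,
                    t.albedo.z * diffuse);
    }
}

int main() {
    GBuffer gbuf{};
    geometryPass(gbuf);
    lightingPass(gbuf, {2.0f, 2.0f, 5.0f});
}
```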


--Chris
 
Thanks a lot !

That's for per-pixel shading. Does anyone know a technique for doing per-vertex shading after rasterization?
 
vrmm said:
Thanks a lot !

That's for per-pixel shading. Does anyone know a technique for doing per-vertex shading after rasterization?

Errrrr.... What are you hoping to achieve with that? You will get nothing whatsoever with a HW system.

AFAICS, if you are doing a purely software renderer, then you are going to have to transform and rasterise your triangles up to three times. The first pass would be just to set up the Z depths, and the second would be to detect whether ANY of the pixels of a particular triangle are visible. That would then determine whether you need to do the additional lighting calculations on its vertices, and then allow you to do the final pass with lighting.
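
Roughly, the flow I mean would look something like this sketch (the rasteriser calls are just stubs made up to show the control flow, not a real implementation):

```cpp
// Rough sketch of the three-pass software-renderer flow described above.
#include <vector>

struct Vertex { float x, y, z; float lit; };
struct Triangle { Vertex v[3]; bool visible = false; };

using ZBuffer = std::vector<float>;

// Pass 1 stub: write the triangle's depths into the Z-buffer.
void rasterizeDepth(const Triangle&, ZBuffer&) { /* ... */ }

// Pass 2 stub: re-rasterise and report whether any pixel passes the Z-test.
bool anyPixelVisible(const Triangle&, const ZBuffer&) { return true; }

// Vertex lighting, done only for triangles that survived pass 2.
void lightVertices(Triangle& t) {
    for (Vertex& v : t.v) v.lit = 1.0f;  // placeholder lighting result
}

// Pass 3 stub: final rasterisation with interpolated vertex lighting.
void rasterizeShaded(const Triangle&, const ZBuffer&) { /* ... */ }

void render(std::vector<Triangle>& tris, int w, int h) {
    ZBuffer zbuf(w * h, 1.0f);
    for (const Triangle& t : tris) rasterizeDepth(t, zbuf);        // pass 1
    for (Triangle& t : tris) t.visible = anyPixelVisible(t, zbuf); // pass 2
    for (Triangle& t : tris)
        if (t.visible) { lightVertices(t); rasterizeShaded(t, zbuf); } // pass 3
}

int main() {
    std::vector<Triangle> tris(1);
    render(tris, 320, 240);
}
```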

Sounds like a complete waste of time and effort.
 
Simon F said:
Errrrr.... What are you hoping to achieve with that? You will get nothing whatsoever with a HW system.
You'll gain in the number of cycles you end up executing per pixel. When you are doing deferred shading you do lighting only for visible pixels which are within the range of the light, whereas with regular multi-pass lighting you end up processing loads of pixels outside the range as well. It's also elegant from a pipeline-architecture point of view to first lay down the material properties and then add the lighting on top of that. You'll also gain in batching, since there is no lighting breaking it up. Anyway, I'm not totally convinced that deferred shading is a good approach (at least for now), since you end up fetching a huge amount of data per pixel for lighting.
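
For what it's worth, the range cull I mean is just a per-pixel distance check in the lighting pass, something along these lines (the types and names are made up, in the same spirit as the G-buffer sketch earlier in the thread):

```cpp
// Sketch of culling lighting work to pixels inside a point light's radius.
#include <vector>

struct Vec3 { float x, y, z; };
struct PointLight { Vec3 pos; float radius; };
struct Texel { Vec3 position; Vec3 normal; bool covered; };

static float distSq(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

void accumulateLight(std::vector<Texel>& gbuf, const PointLight& light) {
    for (Texel& t : gbuf) {
        if (!t.covered) continue;  // nothing visible at this pixel
        if (distSq(t.position, light.pos) > light.radius * light.radius)
            continue;              // pixel is outside the light's range
        // ... evaluate the (potentially expensive) lighting model here ...
    }
}

int main() {
    std::vector<Texel> gbuf(4, {{0, 0, 0}, {0, 0, 1}, true});
    accumulateLight(gbuf, {{0, 0, 2}, 5.0f});
}
```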
 
Altair said:
Simon F said:
Errrrr.... What are you hoping to achieve with that? You will get nothing whatsoever with a HW system.
You'll gain in the number of cycles you end up executing per pixel. When you are doing deferred shading you do lighting only for visible pixels which are within the range of the light,
..

Yes, I know what deferred shading is all about (and for those who know where I work, :rolleyes:); it's just that the O.P. seemed to want to reduce the amount of vertex processing work as well. I presumed he wanted to achieve this by performing the vertex lighting calculations only on the polygons that were visible. That would be rather tricky to do and probably of little benefit.
 
I presumed he wanted to achieve this by performing the vertex lighting calculations only on the polygons that were visible.

Yes, you are right. I want to perform the vertex lighting calculations only on the visible polygons. I have read Hybrid's presentation and found that they do something like that, but I am also confused by it. So I posted this question here in the hope that somebody could help me. :)
That would be rather tricky to do and probably of little benefit.

But why would it be of little benefit? I think it could be good for performance, because we can avoid the computations for invisible polygons, so in theory we can save a lot of time. :)
 
Well, you can do it in just the same way even if you do plain vertex lighting. There just isn't necessarily as much benefit, depending on what kind of lighting function you have per vertex and what your polygon density is.
 