Will we get OpenGL per-pixel lighting and fog in consumer cards?

bloodbob

Just wondering: do you think the major graphics vendors (ATI, Nvidia) will do per-pixel lighting and fog in OpenGL? (No, I'm not talking about fragment shaders; I'm talking about using the quality hint.) I know a card jointly produced by NV and SGI once had per-pixel specular lighting in OpenGL.
 
I was pretty certain that this has been around since the GeForce 256.

Besides, why would you want to do basic specular, when you can do much more advanced forms of lighting through fragment shaders?
 
So, you mean automatically doing it per-pixel when using the NICEST hint?

Well, that seems like a waste... I mean, if, as you seem to imply, all current consumer cards do it per-vertex, then a card doing it per-pixel would suffer a significant performance disadvantage in many games and benchmarks, considering many programmers would have specified NICEST for per-vertex cases.

Current GPUs would most likely emulate that through fragment programs anyway, so I suggest anyone interested in doing that stuff per-pixel write the fragment program themselves.


Uttar
 
Re: Will we get OpenGL per-pixel lighting and fog in consumer cards?

bloodbob said:
Just wondering: do you think the major graphics vendors (ATI, Nvidia) will do per-pixel lighting and fog in OpenGL? (No, I'm not talking about fragment shaders; I'm talking about using the quality hint.) I know a card jointly produced by NV and SGI once had per-pixel specular lighting in OpenGL.
As long as the video card supports cubemaps and some of the advanced texture combining functions (GL_ARB_texture_env_combine, GL_ARB_texture_env_dot3), per-pixel lighting is easily possible. Fog has also been in video cards for a heck of a long time. . .
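For illustration (this is editor-added, not part of the extension text), here is a minimal Python sketch of the arithmetic the GL_ARB_texture_env_dot3 combiner performs: both the normal-map texel and the light vector are packed into [0,1] RGB, and the combiner expands them back to [-1,1] and takes the dot product.

```python
def pack(v):
    """Pack a unit vector's components from [-1,1] into [0,1] RGB."""
    return tuple(0.5 * c + 0.5 for c in v)

def dot3_combine(texel, light_rgb):
    """GL_DOT3_RGB as per the extension spec:
    4 * ((t.r-0.5)(l.r-0.5) + (t.g-0.5)(l.g-0.5) + (t.b-0.5)(l.b-0.5))"""
    return 4.0 * sum((a - 0.5) * (b - 0.5) for a, b in zip(texel, light_rgb))

# A surface normal pointing straight out, lit head-on, gives N.L = 1:
n = pack((0.0, 0.0, 1.0))
l = pack((0.0, 0.0, 1.0))
print(dot3_combine(n, l))  # 1.0
```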
 
No, all current PC cards do per-vertex lighting, and then interpolate across pixels. So, if there's a specular highlight in the middle of your triangle, you won't see it.

It certainly could be done on a per pixel basis, but you would take a performance hit. You need to compute the light vector per pixel, normalize, compute specular and then add everything and combine with texture. Very easy to do with a pixel shader, but it's probably 6~12 instructions, depending on equation (full attenuation, etc...).
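To make the mid-triangle highlight point concrete, here is a minimal Python sketch (plain math, not shader code; the light position and shininess are made-up values): Blinn-Phong specular evaluated at the two end vertices of a wide triangle versus at the point directly under the light.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def specular(p, light, view, normal, shininess=64):
    """Blinn-Phong specular term at surface point p."""
    l = normalize(tuple(lc - pc for lc, pc in zip(light, p)))
    h = normalize(tuple(lc + vc for lc, vc in zip(l, view)))
    ndoth = max(0.0, sum(nc * hc for nc, hc in zip(normal, h)))
    return ndoth ** shininess

light = (0.0, 0.0, 1.0)    # light directly above the triangle's midpoint
view = (0.0, 0.0, 1.0)
normal = (0.0, 0.0, 1.0)
v0, v1 = (-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)  # two ends of a wide triangle
mid = (0.0, 0.0, 0.0)

per_vertex = 0.5 * (specular(v0, light, view, normal) +
                    specular(v1, light, view, normal))
per_pixel = specular(mid, light, view, normal)
print(per_vertex, per_pixel)  # interpolating vertex values misses the highlight
```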

Because of that performance hit, no, nobody will replace the standard path with this. However, by using the fragment shader extensions, any application can do it itself.

As for fog, it can be done per vertex or per pixel now.
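For reference, the three fixed-function fog factors as the OpenGL spec defines them, sketched in Python (the density/start/end defaults below are arbitrary):

```python
import math

def fog_factor(mode, z, density=1.0, start=0.0, end=1.0):
    """Fixed-function fog factors as defined in the OpenGL spec.
    z is the eye-space distance to the fragment (or vertex)."""
    if mode == "linear":
        f = (end - z) / (end - start)
    elif mode == "exp":
        f = math.exp(-density * z)
    elif mode == "exp2":
        f = math.exp(-((density * z) ** 2))
    else:
        raise ValueError(mode)
    # clamp to [0,1]; the final color is then C = f*Cin + (1-f)*Cfog
    return min(1.0, max(0.0, f))

print(fog_factor("linear", 0.25))  # 0.75
print(fog_factor("exp", 1.0))      # ~0.368
```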
 
I put the hint in the GPLed engine code, i.e. glquake. Doing exponential fog over large triangles, you could quite easily see the linear interpolation between the vertices. Well, at the time I could; maybe it was supported in hardware and the drivers just didn't listen to the hint. If it is supported at zero cost and it's not being used in games, maybe some developers should be bugged to include an extra line of code.
 
K.I.L.E.R said:
I haven't seen any game doing fog per pixel.
Unreal and Unreal Tournament did.

Anyway, in order to do fog on a per-pixel basis, you need to have enough data for it. Since fog is a 3D effect, one would need to use a 3D texture to store all fog data to do a really varying fog effect. Barring that, procedural techniques could also produce decent results.

In the end, it all comes back to the fact that modern 3D hardware can do it all, and has been able to do it all for quite some time. The reason we haven't seen it all yet is performance.
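A hypothetical sketch of the 3D-texture lookup such a fog effect would rely on: a software trilinear sample of a tiny density grid (the grid values are invented for illustration; real hardware does this filtering for you on a 3D texture).

```python
import math

def trilinear(grid, x, y, z):
    """Sample a 3D density grid (indexed grid[z][y][x]) with trilinear
    filtering, the way a hardware 3D texture lookup would.
    Coordinates are in voxel units."""
    x0, y0, z0 = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    def g(i, j, k):
        return grid[k][j][i]
    # interpolate along x, then y, then z
    c00 = g(x0, y0, z0) * (1 - fx) + g(x0 + 1, y0, z0) * fx
    c10 = g(x0, y0 + 1, z0) * (1 - fx) + g(x0 + 1, y0 + 1, z0) * fx
    c01 = g(x0, y0, z0 + 1) * (1 - fx) + g(x0 + 1, y0, z0 + 1) * fx
    c11 = g(x0, y0 + 1, z0 + 1) * (1 - fx) + g(x0 + 1, y0 + 1, z0 + 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

# a tiny 2x2x2 fog-density "texture": fog thickens along +z
grid = [[[0.0, 0.0], [0.0, 0.0]],
        [[1.0, 1.0], [1.0, 1.0]]]
print(trilinear(grid, 0.5, 0.5, 0.5))  # 0.5, halfway between the slices
```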
 
That explains why the lighting looked mad house compared to all the other games. :)

Thanks for the info. I can't believe I never knew U/UT had PPF. :|

Chalnoth said:
K.I.L.E.R said:
I haven't seen any game doing fog per pixel.
Unreal and Unreal Tournament did.

Anyway, in order to do fog on a per-pixel basis, you need to have enough data for it. Since fog is a 3D effect, one would need to use a 3D texture to store all fog data to do a really varying fog effect. Barring that, procedural techniques could also produce decent results.

In the end, it all comes back to the fact that modern 3D hardware can do it all, and has been able to do it all for quite some time. The reason we haven't seen it all yet is performance.
 
Actually, Chalnoth, I was talking more about simple fog, not volumetric fog, so all the card would need is the depth per pixel (z value), which is why I thought it was pretty sad that my TNT2 didn't do it.
 
The fog in Unreal/UT was an extension of the light mapping approach to lighting. To add fog to an area, you added a fog "light" to that area. The engine called its fog algorithm "volumetric," which basically meant that it then added up the fog between you and any surface in the game in order to calculate the final color. I'm not exactly sure how this was accomplished, but I think it was based upon recalculating the lightmaps for volumetric fog each and every frame. This made the approach very slow, except for software rendering.
 
bloodbob said:
Actually, Chalnoth, I was talking more about simple fog, not volumetric fog, so all the card would need is the depth per pixel (z value), which is why I thought it was pretty sad that my TNT2 didn't do it.
Simple depth fog? Why would you care if it's interpolated or not?
 
Because when you do exponential fog on large triangles that are near but at different depths to each other, you can see that the fog doesn't get applied with the same density, and it looks rather crappy. Linear fog is fine, but as soon as you go non-linear it's not nice. I'll chuck up a screenshot next week if my 9500 does per-triangle fog at all.
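The artifact is easy to quantify. A minimal Python sketch (the depth and density values are made up): compare the linearly interpolated per-vertex exponential fog factor at the middle of a large triangle against the true per-pixel value.

```python
import math

def exp_fog(z, density=2.0):
    """GL_EXP fog factor at eye-space depth z."""
    return math.exp(-density * z)

z_near, z_far = 1.0, 10.0        # depths at the two ends of a large triangle
mid_z = 0.5 * (z_near + z_far)   # depth at the triangle's midpoint

interpolated = 0.5 * (exp_fog(z_near) + exp_fog(z_far))  # per-vertex result
exact = exp_fog(mid_z)                                   # per-pixel result
# the interpolated value leaves the midpoint far too un-fogged
print(interpolated, exact)
```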
 
Anyway, exponential fog can be handled just fine by the pixel shader if the developer so chooses.
 
From the Halo performance FAQ, on forcing PS 1.1 rather than 1.4 or 2.0:
Pixel shaders 1.1 (DirectX 8.0)
PS1.1 is probably the most widespread pixel shader version currently. When running in the PS1.1 rendering code path, the visual compromises are (in addition to the PS1.4 compromises):
- No model self-illumination (excluding some specific environmental models)
- No animated lightmaps
- Fog calculations are triangle based, not pixel based
- No specular lights
which indicates that the 1.4 and 2.0 paths indeed use per-pixel fog.
 
sireric said:
It certainly could be done on a per pixel basis, but you would take a performance hit. You need to compute the light vector per pixel, normalize, compute specular and then add everything and combine with texture. Very easy to do with a pixel shader, but it's probably 6~12 instructions, depending on equation (full attenuation, etc...).
I actually managed to do this in a very half-assed fashion on my Radeon 8500 a while ago. In the vertex shader, I calculated the light position in tangent space and in units of the distance the texture spanned over the surface (this was to remain within the [-8,8] range allowed by ATI_fragment_shader). The remaining calculations were all done in the fragment shader. It actually looked pretty good. . .
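As a rough illustration of the tangent-space step (not Chalnoth's actual code; the basis vectors below are a hypothetical axis-aligned example), expressing a vector in the (T, B, N) basis is just three dot products when the basis is orthonormal:

```python
def to_tangent_space(v, tangent, bitangent, normal):
    """Express vector v in the tangent-space basis (T, B, N):
    three dot products, since the basis is orthonormal."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# axis-aligned basis for an upward-facing surface (illustrative values)
t, b, n = (1, 0, 0), (0, 1, 0), (0, 0, 1)
light_dir = (0.0, 0.0, 1.0)   # light straight along the surface normal
print(to_tangent_space(light_dir, t, b, n))  # (0.0, 0.0, 1.0)
```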
 
Colourless said:
Ye olde Voodoo 1 always did per-pixel, table-based W fog.
I believe fog has always been applied at the pixel level.

But was the exact color/amount of the fog to be applied calculated per pixel, or just extrapolated from vertex data?
 
Well, yeah, the TNT2 possibly did apply it per-pixel, but the calculations were interpolated linearly from vertex data, which is what bugged me.

Off topic, but no animated lightmaps :/ Even Quake 1 in software rendering had animated lightmaps.

I just find it rather academic for OpenGL to have backwards-compatible support in the upcoming OpenGL 2.0 (at least I think it is going to be backwards compatible; it's been a few months since I read the draft documentation) if hardware manufacturers aren't going to make the old functions more accurate. But alas, my views aren't shared by many in this regard. That is one good thing I like about the R3xx: it uses FP pipelines for OpenGL, whereas Nvidia uses INT pipelines where FP isn't required.
 