Realistic Lighting: Can it be done in realtime?

All:

I've just finished coding up some things I needed to brush up on. I've been studying the realistic lighting models by Blinn, Cook-Torrance, Ward, etc. I've enclosed some screenshot links for your viewing. Here are my questions:

(1) Is the hardware to the point that we can do realistic microfacet distribution of the slopes of rough surfaces, shadowing and masking attenuation of light, and Fresnel without using specular maps?

In this pic here:

http://userpages.umbc.edu/~mrowle1/shiny_golf_ball.tif

You see a shiny ball using a very tight slope for the microfacets (0.2) with a very high Fresnel value (2.74). Because the ball is bump-mapped, the highlight is spread about and falls off eventually. I can adjust these parameters at will and get an accurate highlight. One concern of mine with specular maps is how you could build one around the geometry so that it accurately represents the highlight when mapped back on.

Here is a rendering with the specular pass only to get a feel for the spread of the specular calculation:

http://userpages.umbc.edu/~mrowle1/specular_only_ball.tif

Here is the diffuse pass with no specular calculation at all:

http://userpages.umbc.edu/~mrowle1/diffuse_only_ball.tif

If I wanted to make a realtime scene with this ball would that have to be done by multitexturing the specular, bump, diffuse, base color, and 3d noise texture? How many passes would it take on a GF3?

The noise is based on Ken Perlin's new noise paper (SIGGRAPH 2002), and the noise drives the bump gradient calculation. Could this noise be calculated in realtime and a bump texture map generated on-the-fly from it?
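To pin down what that pipeline involves, here is a toy 2D version in Python: gradient noise using the improved quintic interpolant from the 2002 paper, plus a central-difference gradient of the height field for the bump perturbation. The permutation table and gradient set here are illustrative placeholders, not the fixed tables from the paper.

```python
import math
import random

# Placeholder permutation table and gradients; the paper uses a fixed
# 256-entry permutation and gradients on the edge midpoints of a cube.
random.seed(1)
PERM = list(range(256))
random.shuffle(PERM)
PERM += PERM  # doubled so PERM[PERM[i] + j] never overruns
GRADS = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
         for i in range(8)]

def fade(t):
    # Improved quintic interpolant 6t^5 - 15t^4 + 10t^3: first AND second
    # derivatives vanish at t = 0 and t = 1, which removes the derivative
    # discontinuities that would show up in a bump-map gradient.
    return t * t * t * (t * (t * 6 - 15) + 10)

def grad_dot(ix, iy, x, y):
    g = GRADS[PERM[PERM[ix & 255] + (iy & 255)] % 8]
    return g[0] * (x - ix) + g[1] * (y - iy)

def noise2(x, y):
    x0, y0 = math.floor(x), math.floor(y)
    u, v = fade(x - x0), fade(y - y0)
    n00 = grad_dot(x0,     y0,     x, y)
    n10 = grad_dot(x0 + 1, y0,     x, y)
    n01 = grad_dot(x0,     y0 + 1, x, y)
    n11 = grad_dot(x0 + 1, y0 + 1, x, y)
    nx0 = n00 + u * (n10 - n00)
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)

def bump_gradient(x, y, eps=1e-3):
    # Central differences over the noise height field; this gradient is
    # what perturbs the surface normal in bump mapping.
    dx = (noise2(x + eps, y) - noise2(x - eps, y)) / (2 * eps)
    dy = (noise2(x, y + eps) - noise2(x, y - eps)) / (2 * eps)
    return dx, dy
```

Nothing here is GPU-specific; the question is whether the table lookups and the fade/lerp chain fit the per-pixel stages of current hardware.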

I'm looking for Cg (or any other language) to allow me to go in and code up things the *REAL* way and then see it in realtime. I believe this is John Carmack's goal too.

Comments?

-M
_________________
Marlin Rowley
Color & Lighting TD
ESC Entertainment, marlin@escfx.com
 
BoddoZerg said:
I think we'll find this out when JC unveils his post-DOOM3 engine.

Find what out? There are several techniques in that one scene. Which one are you referring to? The realistic lighting model? The realtime on-the-fly bump-mapping? The 3d texture noise in realtime?

-M
 
Mr. Blue said:
Comments?

Why is this topic titled "Realistic lighting"?
The content of these images is very far from being "realistic"...

The contour of the ball is smooth (the bumps don't show along the silhouette), destroying the illusion of having real bumps on it.
Same goes for the shadow contour.
Also the bumps do not cast shadow to each other.


OTOH, if you wanted to talk about "Is bumpmapping possible in real-time", the answer is most certainly yes.
Look at Humus's demos, or Tenebrae, for some examples that were posted here.

Basically, Blinn is Dcolor * (N.L) + Scolor * (N.H)^e.
The dot products are possible since the GF1.
N should be read from a normal map, which can easily be calculated from the bump map (the one containing the heights).
Getting L and H might get tricky, depending on whether you have a directional, point or spot light and whether you expect polygons being relatively small.
The simplest way is calculating L & H per vertex and putting them in one of the interpolated regs (texture or color).
With PS1.4 (or in the above special case with PS1.1) you can lookup the result of the power operation from a lookup texture.
Or you can hardcode a fixed exponent (Tenebrae does that).
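The equation above, sketched in plain Python rather than shader code just to make the math concrete (the color and vector values in any call are made up):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn(n, l, v, d_color, s_color, e):
    # Dcolor * (N.L) + Scolor * (N.H)^e, with both dot products clamped.
    # H is the half-vector between the light and view directions -- the
    # quantity the interpolated L/H registers feed to the pixel stage.
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    ndl = max(dot(n, l), 0.0)
    ndh = max(dot(n, h), 0.0)
    return tuple(dc * ndl + sc * ndh ** e
                 for dc, sc in zip(d_color, s_color))
```

On hardware, the two dot products are combiner ops and the power either comes from a lookup texture or a hardcoded exponent, as described above.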
 
For a given material (constant microfacet slope and index of refraction), Cook-Torrance specular reflection can be calculated and stored in 2 textures (a 2D distribution*fresnel texture indexed by (N.H, L.H) and a 3D geometric attenuation/V.H texture indexed by (N.V, N.L, and V.H)).

Then, in the color blend, you just multiply DF_tex * G_tex * Cs * scale. The rescale at the end is necessary since the G/V.H texture must be in the [0..1] range, which wouldn't give you much range without rescaling after lookup.

Doing a good job of calculating Cook-Torrance per-pixel without lookup textures pretty much requires next-gen hardware with per-pixel exponentiation and division capabilities (and lots of math instructions, to calculate acos and tan).
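A CPU-side sketch of the precompute for the 2D D*F texture, using a Beckmann distribution and Schlick's Fresnel approximation (the slope, reflectance, resolution, and scale values below are arbitrary, and real hardware would filter the lookup rather than take the nearest texel):

```python
import math

def beckmann_D(ndh, m):
    # Beckmann microfacet slope distribution with RMS slope m.
    c2 = ndh * ndh
    return math.exp((c2 - 1.0) / (c2 * m * m)) / (math.pi * m * m * c2 * c2)

def schlick_F(ldh, f0):
    # Schlick's approximation to Fresnel reflectance; f0 is the
    # reflectance at normal incidence (fixed per material, like the ior).
    return f0 + (1.0 - f0) * (1.0 - ldh) ** 5

# Bake D*F into a 2D "texture" indexed by (N.H, L.H).  Texture values must
# live in [0..1], so they are divided by SCALE here and multiplied back
# after lookup -- the rescale step described above.
SIZE, M, F0, SCALE = 64, 0.2, 0.8, 8.0
DF_TEX = [[min(beckmann_D((i + 0.5) / SIZE, M)
               * schlick_F((j + 0.5) / SIZE, F0) / SCALE, 1.0)
           for j in range(SIZE)]
          for i in range(SIZE)]

def sample_df(ndh, ldh):
    # Nearest-neighbour lookup standing in for the hardware's filtering.
    i = min(int(ndh * SIZE), SIZE - 1)
    j = min(int(ldh * SIZE), SIZE - 1)
    return DF_TEX[i][j] * SCALE
```

The 3D geometric-attenuation texture is built the same way, just over three indices, and the per-pixel work reduces to two lookups and a multiply.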
 
Hyp-X said:
Why is this topic titled "Realistic lighting"?
The content of these images is very far from being "realistic"...

The content isn't realistic but the lighting model is mostly based on a real physics model.

The contour of the ball is smooth (without the roughs), destroying the illusion of having real bumps on it.

Certainly, I'm not using displacement mapping.:)

Also the bumps do not cast shadow to each other.

You are correct, but that's because of Blinn's illusion of perturbed geometry. Again, displacement mapping will solve this.

OTOH, if you wanted to talk about "Is bumpmapping possible in real-time", the answer is most certainly yes.
Look at Humus's demos, or Tenebrae, for some examples that were posted here.

That was not the question. Humus's demos don't simulate realistic lighting models.

Basically, Blinn is Dcolor * (N.L) + Scolor * (N.H)^e.

Wrong. That is Blinn's extension to the Phong lighting model (i.e., the half-vector as a shortcut for R * V).


-M
 
gking said:
For a given material (constant microfacet slope and index of refraction), Cook-Torrance specular reflection can be calculated and stored in 2 textures (a 2D distribution*fresnel texture indexed by (N.H, L.H) and a 3D geometric attenuation/V.H texture indexed by (N.V, N.L, and V.H)).

Then, in the color blend, you just multiply DF_tex * G_tex * Cs * scale. The rescale at the end is necessary since the G/V.H texture must be in the [0..1] range, which wouldn't give you much range without rescaling after lookup.

Doing a good job of calculating Cook-Torrance per-pixel without lookup textures pretty much requires next-gen hardware with per-pixel exponentiation and division capabilities (and lots of math instructions, to calculate acos and tan).

Much appreciated for someone who knows his stuff! Thanks!:)

I still see some problems (correct me if I'm wrong). It appears that this still doesn't solve the problem of mapping the 2d textures to a 3d object. To me, both the Fresnel and Distribution textures would have to be 3d to get a one-to-one mapping and an accurate specular spread.

Would these Fresnel and Distribution textures be calculated on-the-fly? If so, that would be awesome! If not, then it is very limited. In fact, it would be like taking a snapshot of a particular Cook-Torrance model at certain constant settings and making the whole scene based on this since I wouldn't be able to change the ior or microfacet slope at will.

Yea, that tan and acos is a problem too..:(

-M
 
Ok. I'm a little lost between lighting models.
Sorry.

Still.
Can you tell me what to look at in a picture to differentiate it from Phong shading?
 
Hyp-X said:
Ok. I'm a little lost between lighting models.
Sorry.

Still.
Can you tell me what to look at in a picture to differentiate it from Phong shading?

You are forgiven.:)

Here is a pic of the true Phong lighting model, (R * V)^50. I chose a lower exponent to get a wider spread on the specular highlight... Notice it doesn't even come close to spreading the reflected energy across the surface like the Blinn model does.

http://userpages.umbc.edu/~mrowle1/phong_shiny_golf.tif
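The difference in the two pictures comes straight out of the math. A toy Python calculation (made-up vectors, not the renderer that produced these images): true Phong reflects L about N and raises R.V to the exponent, while Blinn uses the half-vector N.H, which falls off far more slowly at the same exponent.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_spec(n, l, v, e):
    # True Phong: R = 2(N.L)N - L, specular = (R.V)^e.
    ndl = dot(n, l)
    r = tuple(2 * ndl * nc - lc for nc, lc in zip(n, l))
    return max(dot(r, v), 0.0) ** e

def blinn_spec(n, l, v, e):
    # Blinn's half-vector form: specular = (N.H)^e.
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    return max(dot(n, h), 0.0) ** e

# Light 45 degrees off the view direction: at the same exponent, the Blinn
# highlight is orders of magnitude brighter off-peak, i.e. much wider.
n, v = (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)
l = normalize((1.0, 0.0, 1.0))
```

That widening is why a Blinn exponent has to be much larger than a Phong exponent to produce a comparably tight highlight.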

-M
 
OpenGL guy said:
Mr. Blue said:
Yea, that tan and acos is a problem too..:(
Nah... just use a Taylor series with enough precision to suit your needs :)
It's well known that Taylor expansion sucks for trigonometric functions.
There are plenty of other, better ways to do it ;)

ciao,
Marco
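Marco's point is easy to demonstrate with acos, one of the functions needed above. A truncated Maclaurin series (the series below is standard; the term count is arbitrary) is fine mid-range but falls apart near |x| = 1, which is exactly where the interesting dot-product values live, because acos has a square-root singularity in its derivative there:

```python
import math

def acos_taylor(x, terms=8):
    # acos(x) = pi/2 - sum_{k>=0} C(2k,k) / (4^k (2k+1)) * x^(2k+1)
    s = 0.0
    for k in range(terms):
        s += math.comb(2 * k, k) / (4 ** k * (2 * k + 1)) * x ** (2 * k + 1)
    return math.pi / 2 - s

# Mid-range the truncation error is negligible; near the edge of the
# domain it is enormous for the same number of terms.
err_mid  = abs(acos_taylor(0.3)  - math.acos(0.3))
err_edge = abs(acos_taylor(0.99) - math.acos(0.99))
```

Minimax (Chebyshev-style) polynomial fits spread the error evenly over the interval, and a 1D lookup texture sidesteps the problem entirely; those are the usual alternatives Marco is alluding to.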
 
Mr. Blue said:
OpenGL Guy! How's the Radeon going?:)
Looks good to me and most of the reviews agree :)
Do you use the 3d hardware for the Taylor series or the CPU? Just curious.
You can't use the CPU because it's not part of the 3D pipeline. If you start using the CPU for pixel shader ops, you might as well do a software rasterizer for everything, since you could then optimize a lot of stuff in ways you wouldn't on an immediate-mode rasterizer.
 
OpenGL guy said:
Do you use the 3d hardware for the Taylor series or the CPU? Just curious.
You can't use the CPU because it's not part of the 3D pipeline. If you start using the CPU for pixel shader ops, you might as well do a software rasterizer for everything, since you could then optimize a lot of stuff in ways you wouldn't on an immediate-mode rasterizer.

Hmm.. I'll go a step further then. How is the Taylor series approximated on the GPU? Do you have to store the values from an Nth-order Taylor series in a texture? Or does the GPU integrate by summation? How many passes? What's the error?

Thanks,

-M
 
Here's a hot new topic: precomputed radiance transfer with spherical harmonics.

Video here.

Realistic lighting, multiple light sources, fixed frame rate, executed in real time on a Radeon 8500. Microsoft's Research Department brews up some scary stuff. Basically what you do is calculate highly accurate lighting across an object and convert it into spherical harmonics, which means that you can evaluate it in real time as a series of dot products. All the lights you want, with shadows and interreflections. Some of the advanced permutations of this algorithm are a little bit too slow still, but in its simplest form this should be fine for real time work.

Tragically, this hasn't been extended to deformable surfaces yet. I suppose you could keyframe objects, but that would require keyframing the precomputed radiance transfer as well, and this would probably take about ten weeks for a model (ugh.) Stay tuned...
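The core of the trick fits in a few lines. A toy sketch (real PRT projects the transfer function per vertex with ray-traced visibility, which is the expensive offline part, and uses more SH bands than shown here):

```python
import math

def sh_basis_2band(d):
    # The first four real spherical-harmonic basis functions (bands 0-1)
    # at unit direction d = (x, y, z); the PRT work typically uses 25
    # coefficients (5 bands).
    x, y, z = d
    c = math.sqrt(3.0 / (4.0 * math.pi))
    return [0.5 * math.sqrt(1.0 / math.pi), c * y, c * z, c * x]

def sh_shade(light_coeffs, transfer_coeffs):
    # All the runtime lighting work: one dot product of coefficient
    # vectors per vertex (per color channel).  Shadows and
    # interreflections are baked into the transfer coefficients offline.
    return sum(L * T for L, T in zip(light_coeffs, transfer_coeffs))
```

Because the runtime cost is just that dot product, adding more lights only changes the light coefficient vector, not the per-vertex work.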
 
Mordred said:
Here's a hot new topic: precomputed radiance transfer with spherical harmonics.

Video here.

Realistic lighting, multiple light sources, fixed frame rate, executed in real time on a Radeon 8500. Microsoft's Research Department brews up some scary stuff. Basically what you do is calculate highly accurate lighting across an object and convert it into spherical harmonics, which means that you can evaluate it in real time as a series of dot products. All the lights you want, with shadows and interreflections. Some of the advanced permutations of this algorithm are a little bit too slow still, but in its simplest form this should be fine for real time work.

Tragically, this hasn't been extended to deformable surfaces yet. I suppose you could keyframe objects, but that would require keyframing the precomputed radiance transfer as well, and this would probably take about ten weeks for a model (ugh.) Stay tuned...


This is nice!:)

One thing though: the specular component costs frame rate like crazy! :(

This is VERY similar to the "heated plate" problem in Numerical Computations.:) This is the next step toward Final Gathering and Radiosity.

I can't wait till this type of stuff will need no pre-computed tables, matrices, etc..

-M
 