Global Illumination: (56k modem warning)

I always look at geometry as something that should be dynamically changeable. If I can open a door, I want to see the light from the sun outside flooding into the room, filling it up with light.

(I explicitly stated that in an additional post: dynamic geometry. Everyone wants that; physics is hip today.)
 
Images rendered using global illumination algorithms are more photorealistic than images rendered using local illumination algorithms.

I disagree with that - a talented lighting artist can create images that are just as good looking, if not better, than the result of a GI calculation. The obvious difference is that the work can take days for the lighting artist; but the resulting scene will be at least an order of magnitude faster to render.
 
Laa-Yosh said:
I disagree with that - a talented lighting artist can create images that are just as good looking, if not better, than the result of a GI calculation. The obvious difference is that the work can take days for the lighting artist; but the resulting scene will be at least an order of magnitude faster to render.

Well, you can't disagree with that. You can just say artists can create more beautiful images with the (arbitrarily restrictable) feature set they have.

But photorealistic is a term built from two simple words: photo and realistic. If you create algorithms that simulate physics to a certain degree, you get a certain degree of photorealism automatically. And any GI implementation is far closer to the real physics than a local one, and thus much more realistic. How good that looks is up to the artist.
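
(For reference, the physics being simulated here is the rendering equation; GI algorithms differ mainly in how they estimate the integral, while a local model only evaluates direct light sources instead of the full recursive integral:)

Code:
L_o(x, \omega_o) = L_e(x, \omega_o)
    + \int_\Omega f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i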
 
Photorealism is a totally subjective term, thus you cannot state that an algorithmic approach is automatically closer to it. There's no definition of photorealistic in physics, and what we consider to be physically correct is probably a very rough approximation of the real world anyway (especially if you ask a Buddhist ;).

And the original statement was that using global vs. local illumination will automatically get you a better result, which is even further from the truth. Even mildly undersampled GI will get you noise and other artifacts that aren't really common in photos... :p

I'll get back to various illumination techniques used in CGI later today, but I gotta go now.
 
SlmDnk said:
Getting closer: http://graphics.ucsd.edu/papers/plrt/

Watch the real-time video demonstration.

Ah, you just beat me to it!

This is actually the area of research that I'm trying to get into. Photon Mapping is just neat. :D I've been following Henrik Jensen's work since he used to post on the Radiance mailing list ages ago. He, Greg Ward Larson, and Paul Debevec are some of my heroes. ;)
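
For anyone who hasn't dug into it: the core of photon mapping is just two passes, photon tracing and density estimation. Here's a deliberately tiny Python sketch; the made-up 1D "floor" scene and function names stand in for real geometry and a real kd-tree, they're not Jensen's actual code:

Code:
import math
import random

def emit_photons(light_pos, light_power, n_photons):
    """Pass 1: shoot photons from a point light, store where they land."""
    photon_map = []  # list of (hit_x, photon_power) pairs
    for _ in range(n_photons):
        # sample a downward direction (avoiding grazing angles)
        angle = random.uniform(1.1 * math.pi, 1.9 * math.pi)
        dx, dy = math.cos(angle), math.sin(angle)
        t = -light_pos[1] / dy          # intersect ray with the floor y = 0
        hit_x = light_pos[0] + t * dx
        # each stored photon carries an equal share of the light's power
        photon_map.append((hit_x, light_power / n_photons))
    return photon_map

def estimate_irradiance(x, photon_map, k=50):
    """Pass 2: density estimation -- gather the k nearest photons and
    divide their summed power by the length they cover (in 3D this is
    a disc, power / (pi * r^2)).  A real renderer uses a kd-tree here
    instead of sorting the whole map per query."""
    nearest = sorted(photon_map, key=lambda p: abs(p[0] - x))[:k]
    radius = abs(nearest[-1][0] - x)
    return sum(power for _, power in nearest) / (2.0 * radius)

photons = emit_photons(light_pos=(0.0, 5.0), light_power=100.0, n_photons=10_000)
print(estimate_irradiance(0.0, photons))   # brightest directly under the light
print(estimate_irradiance(10.0, photons))  # dimmer off to the side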

Nite_Hawk
 
Conker's Live and Reloaded on Xbox already uses PRT for its lighting engine.
It looks damn nice, really close to what you'd expect from a CG movie (to reference another thread we had).
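
For anyone wondering what PRT actually buys you: the per-vertex light transport (visibility times BRDF times cosine) is projected into spherical harmonics offline, so runtime shading collapses to a dot product. A bare-bones sketch, with made-up coefficient values:

Code:
# Offline, per vertex: project the transport function into a few SH
# coefficients -- the expensive part, done once, hence "precomputed".
vertex_transfer = [0.8, 0.1, -0.05, 0.02]  # fake 2-band SH transfer vector

# Per frame: project the distant environment light into the same basis.
light = [1.0, 0.3, 0.2, -0.1]              # fake SH light coefficients

def shade(transfer_coeffs, light_coeffs):
    """Runtime cost per vertex: one dot product."""
    return sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))

print(shade(vertex_transfer, light))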
 
Vysez said:
Conker's Live and Reloaded on Xbox already uses PRT for its lighting engine.
It looks damn nice, really close to what you'd expect from a CG movie (to reference another thread we had).

PLRT seems more interesting to me because we can actually start talking about local lighting rather than distant lighting. Still, I think what everyone really wants is a non-precomputed solution. Some of the papers at GPGPU are pretty interesting in this regard. I'm especially interested in techniques that only sample areas of the scene where dramatic changes in ambient lighting occur (and then use gradients to fill in between the sample points, as current techniques do). It'd be interesting if you could have something like irradiance caching, but enhanced to handle things like caustics more accurately.
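
For reference, the irradiance caching idea mentioned above (Ward's technique) boils down to: reuse a cached hemisphere sample wherever its error-based weight is good enough, and only pay for a new one otherwise. A rough Python sketch; sample_hemisphere is a hypothetical stub standing in for the expensive GI computation:

Code:
import math

class CacheRecord:
    def __init__(self, pos, normal, irradiance, harmonic_dist):
        self.pos, self.normal = pos, normal
        self.irradiance = irradiance
        self.harmonic_dist = harmonic_dist  # mean distance to nearby geometry

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def weight(rec, pos, normal):
    """Weight falls off with distance and with normal disagreement."""
    d = math.dist(pos, rec.pos) / rec.harmonic_dist
    n = math.sqrt(max(0.0, 1.0 - dot(normal, rec.normal)))
    return 1.0 / (d + n + 1e-6)

def sample_hemisphere(pos, normal):
    """Stub: a real renderer shoots many rays over the hemisphere here."""
    return 1.0, 1.0  # (irradiance, harmonic mean distance)

def irradiance_at(pos, normal, cache, tolerance=0.1):
    usable = [(weight(r, pos, normal), r) for r in cache]
    usable = [(w, r) for w, r in usable if w > 1.0 / tolerance]
    if usable:  # interpolate nearby records instead of sampling again
        total = sum(w for w, _ in usable)
        return sum(w * r.irradiance for w, r in usable) / total
    e, hd = sample_hemisphere(pos, normal)  # the expensive path
    cache.append(CacheRecord(pos, normal, e, hd))
    return e

cache = []
p1 = irradiance_at((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), cache)   # samples
p2 = irradiance_at((0.05, 0.0, 0.0), (0.0, 0.0, 1.0), cache)  # interpolates
print(p1, p2)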

Granted, perhaps this has already been done and I'm behind the times. I can't wait to get back into reading papers again and catching up on the newer research. Too much to read, too little time.

Nite_Hawk
 
Maxwell Render is a fine piece of software indeed. However, it can easily require 24 hours, or even several days, on a 4x Xeon rig before reaching a sufficiently clean solution for more complex scenes. So we're a couple of generations away from doing that kind of stuff in realtime.
 
The speedup needed from a 24 hour render time to 30 frames per second is about 2 and a half million...
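
The arithmetic behind that figure, for the curious: one frame every 24 hours versus 30 frames per second.

Code:
print(24 * 3600 * 30)  # 2,592,000 -- call it 2.6 million times faster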
 
Just curious

Laa-Yosh said:
The speedup needed from a 24 hour render time to 30 frames per second is about 2 and a half million...

Well, not really 30 fps, but how much time do you think could be saved using the Cell processor?

Do you think the render time could go down to 1 hour, 10 hours, 30 minutes?
 
Laa-Yosh said:
The speedup needed from a 24 hour render time to 30 frames per second is about 2 and a half million...
A 24-hour render time is pretty gross unless you are talking about a really complex scene or have extremely high quality requirements. I used to render semi-complex scenes in Radiance on a dual 400 MHz Celeron in several hours with two ambient light bounces. Sure, there was some inaccuracy in places, but it was close enough for non-critical work.

Maxwell Render's whole selling point is that it "always converges to the correct solution" given enough time due to it being an "unbiased renderer". This basically just means they are not using the interpolations/extrapolations that other renderers use to speed up render times. They make a big deal about it on their webpage and everything.
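
To illustrate what "unbiased" buys (and costs) you: the estimator's expected value is the true answer, so throwing more samples at it always converges, but the error only shrinks as 1/sqrt(N). A toy Monte Carlo example in Python, estimating the integral of x^2 over [0, 1] (true value: 1/3):

Code:
import random

def estimate(n_samples):
    # average of f(x) at uniform random x -- unbiased for the integral
    return sum(random.random() ** 2 for _ in range(n_samples)) / n_samples

for n in (100, 10_000, 1_000_000):
    print(n, abs(estimate(n) - 1.0 / 3.0))  # ~10x less error per 100x samples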

For realtime work you would be a lot more interested in whatever method gives you the best bang for your buck, which certainly isn't Maxwell's technique.

Nite_Hawk
 