Visibility directional ambient occlusion (VDAO)

Hello, I just registered because some people recommended I come here and show some of my stuff.
I present my own variation of SSAO. I called it that because of the things mentioned below. Read and enjoy!

A simple video showing the technique can be seen here:

Why not a typical SSAO implementation?

Many people think SSAO doesn't have enough data to produce pretty results. That is both true and false. SSAO is a screen-space effect, which means it doesn't know the geometry well.
Old SSAO implementations took depth and the surface normal to calculate an approximate result. That approach is flawed at its roots; it's a hack and shouldn't be used today.
HBAO is almost the same, but the lighting calculation is different: it estimates the free horizon of a fragment to approximate soft shadowing. That is a hack too and shouldn't be used today either!

Okay, so what's so good about VDAO?

VDAO uses a camera-space position texture, the position of the surface producing the ambient light, and a visibility test in screen space. Basically, it produces nearly physically correct results in screen space.

What do I need to get it working?

A deferred pipeline, which also requires a camera-space position texture. Camera-space position is just the world position after applying the view matrix to it, or even simpler, after subtracting the camera position from the world position. This way, no matter how far the camera is from the (0,0,0) origin, lighting will always be very accurate, because it operates on small floating-point numbers instead of huge ones, and that preserves precision.
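As a minimal sketch, this is how such a position could be written into the G-buffer (the names worldPosition and cameraPosition are illustrative, not taken from VEngine):

```glsl
#version 330 core

// G-buffer pass: store the "fake camera space" position (translation only, no rotation).
in vec3 worldPosition;        // interpolated world-space position of the fragment
uniform vec3 cameraPosition;  // world-space position of the camera

layout(location = 0) out vec4 outCameraSpacePosition;

void main()
{
    // Keeping the stored values small preserves floating-point precision,
    // no matter how far the camera travels from the world origin.
    outCameraSpacePosition = vec4(worldPosition - cameraPosition, 1.0);
}
```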

You will also need a texture that holds the fragment normals, but that is standard in a deferred pipeline anyway. VDAO can be simplified to ignore the diffuse component, and then you can omit the normals texture, but I recommend using it.

How does it work?

It samples a number of hemispheres of different sizes, centered at the fragment position and rotated towards the ambient light source (the sky, for example). For each sample it checks visibility between the fragment and the sampled position, and if the visibility test passes, it calculates the diffuse component between the sample direction and the fragment normal and adds the result to a buffer value. After all sampling is done, the buffer is divided by the sample count and that is the final result.
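A simplified, single-hemisphere sketch of that loop in GLSL (the helper functions stand in for the pieces described further down, and none of the names come from the actual VEngine code):

```glsl
// Simplified single-hemisphere version of the loop described above (illustrative names only).
vec3 hemisphereSample(int i);                      // i-th direction in the hemisphere, sketched later
bool testVisibility(vec3 fragPos, vec3 samplePos); // screen-space visibility test, sketched later

vec3 vdao(vec3 fragPos, vec3 fragNormal, vec3 fragColor, float radius)
{
    const int SAMPLE_COUNT = 16;
    vec3 accum = vec3(0.0);
    for (int i = 0; i < SAMPLE_COUNT; i++)
    {
        // Direction inside the hemisphere oriented towards the ambient source.
        vec3 dir = hemisphereSample(i);
        vec3 samplePos = fragPos + dir * radius;

        // Lambertian (diffuse) term between the sample direction and the surface normal.
        float diffuse = max(dot(dir, fragNormal), 0.0);

        // Count the sample only if nothing blocks it in screen space.
        if (testVisibility(fragPos, samplePos))
            accum += fragColor * diffuse;
    }
    // Average over all samples - this is the final VDAO value for the fragment.
    return accum / float(SAMPLE_COUNT);
}
```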

A hemisphere is a sphere cut in half.

Some details about the implementation:

I sample three hemispheres of different sizes, but the sample direction from the fragment is calculated only once. I start with a radius derived from the fragment's distance to the camera, clamped between 0.1 and 2.0, and halve it twice. So when the first hemisphere has radius = 2, the next will be 1 and the last will have radius 0.5. This limits the sample count and recalculations, and makes it possible for an object to cast very long-distance soft shadows while preserving high-resolution self-shadowing.
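As a sketch, assuming fragPos is the camera-space position from the G-buffer (so the camera sits at the origin):

```glsl
// Base radius from the fragment's distance to the camera, clamped as described,
// then halved twice to get the three hemisphere sizes (r, r/2, r/4).
float baseRadius = clamp(length(fragPos), 0.1, 2.0);
float radii[3] = float[3](baseRadius, baseRadius * 0.5, baseRadius * 0.25);
```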
The hemisphere is primarily computed towards the Y+ direction and can be rotated afterwards by any rotation matrix. I don't do it, but it's a one-liner.
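One possible way to build such a Y+ sample direction (the two random inputs would come from a noise texture or a precomputed kernel; this is just an assumption, the original code may distribute its samples differently):

```glsl
// Direction in the hemisphere around +Y, built from two random numbers in [0, 1).
vec3 hemisphereSampleY(float rand1, float rand2)
{
    float phi = rand1 * 6.28318530718;                  // azimuth around the Y axis
    float cosTheta = rand2;                             // keep Y >= 0, i.e. above the plane
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    return vec3(cos(phi) * sinTheta, cosTheta, sin(phi) * sinTheta);
}

// The rotation one-liner, if the hemisphere should face something other than Y+:
// vec3 rotated = rotationMatrix * hemisphereSampleY(r1, r2);
```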
When a sample position in 3D space is known, it's time to test visibility. The fragment position in 3D space is known, and the sample position in 3D space is known too, so both need to be projected into screen-space coordinates. That is done by moving them from my fake camera space (my camera space doesn't use rotation) back into world space and then transforming them from world space into screen space with the projection * view matrix. This way you get 2D screen coordinates for those two points. Then run a loop, I use 10 iterations, to mix between them and find out the visibility. Visibility is tested this way (a code sketch follows the list):

– Find the distance of the sample and of the fragment to the camera.
– Get the 2D screen coordinates of both points.
– Run a loop with some number of samples, like 10, from 0.0 to 1.0 (10 samples gives a step of 0.1) and mix between the two 2D coordinates using the loop's current value. Mix the two distances the same way.
– Sample the camera-space position texture and check whether the fragment at the current mixed position is closer to the camera than the mixed distance of the sample and the processed fragment.
– If it's closer, the visibility test failed, so return false immediately.
– After the loop, return true. The visibility test passed.
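A minimal GLSL sketch of that test, assuming the texture, matrix and uniform names below (they are illustrative, not the gist's actual identifiers):

```glsl
uniform sampler2D cameraSpacePositionTex; // G-buffer position texture (assumed name)
uniform mat4 projectionViewMatrix;        // projection * view (assumed name)
uniform vec3 cameraPosition;              // world-space camera position (assumed name)

// Project a fake-camera-space position (world minus camera, no rotation)
// onto the screen, returning UV coordinates in [0, 1].
vec2 projectToScreen(vec3 camSpacePos)
{
    vec4 clip = projectionViewMatrix * vec4(camSpacePos + cameraPosition, 1.0);
    return (clip.xy / clip.w) * 0.5 + 0.5;
}

bool testVisibility(vec3 fragPos, vec3 samplePos)
{
    // In this camera space the camera sits at the origin, so distance is just length().
    float fragDist = length(fragPos);
    float sampleDist = length(samplePos);
    vec2 fragUV = projectToScreen(fragPos);
    vec2 sampleUV = projectToScreen(samplePos);

    for (int i = 0; i < 10; i++)
    {
        float t = float(i) / 10.0;                              // 0.0, 0.1, ..., 0.9
        vec2 uv = mix(fragUV, sampleUV, t);
        float expectedDist = mix(fragDist, sampleDist, t);
        float sceneDist = length(texture(cameraSpacePositionTex, uv).xyz);
        if (sceneDist < expectedDist)
            return false;                                       // something blocks the path
    }
    return true;                                                // nothing closer found, visible
}
```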

Since the sample direction from the fragment does not change, the diffuse component can be computed only once per sample and reused as the hemisphere gets smaller.
If the sample and the fragment see each other, i.e. the visibility test passed, add the original fragment color multiplied by the diffuse component to the buffer value.
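Putting that together with the radii from earlier, a fragment-style sketch of the reuse (it builds on the snippets above; accum and totalSamples are the running buffer value and sample counter, and all names are illustrative):

```glsl
// One direction and one diffuse term, reused for all three hemisphere radii.
vec3 dir = hemisphereSampleY(rand1, rand2);
float diffuse = max(dot(dir, fragNormal), 0.0);        // computed once per direction

for (int r = 0; r < 3; r++)
{
    vec3 samplePos = fragPos + dir * radii[r];         // only the radius changes
    if (testVisibility(fragPos, samplePos))
        accum += fragColor * diffuse;                  // visible: add lit fragment color
    totalSamples += 1;
}
// After all directions and radii: accum / float(totalSamples) is the result.
```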

After sampling, divide by the sample count and that's all!

Cheers
Find me on GitHub: https://github.com/achlubek

The project where I use this and many other algorithms is VEngine, MIT licensed.
The code is available here:
https://gist.github.com/achlubek/deed8f68e9e45a6c57e2#file-vdao


Sorry about not setting tags for this topic; it seems opengl and ssao aren't allowed and I can't create new ones.
 
Hey. Great to see your enthusiasm for graphics. The visual results you are getting are pretty good (but rather noisy).

I took a look at your code and it seems to be extremely inefficient. Best I can tell, in the worst-case scenario (which looks to be the common case - not occluded) it'll be doing ~1300 texture samples per pixel (I only had a quick look, but it appeared to do 65 iterations of the outer loop, with two iterations of a middle loop that finally has an inner loop with 10 texture samples - so 65×2×10). That'll make even ultra-high-end GPUs struggle. I'd be curious what the millisecond time for 1080p would be - at that resolution you are asking the GPU to do ~2.7 billion texture lookups, or roughly 11 GB of sampled data per frame (assuming 32-bit depth).

The reason SSAO is popular is that, despite the fairly poor visual quality, it's really cheap. Most forms of SSAO actually do what you describe (occlusion of a hemisphere in the normal direction); the trick is doing it in a way that balances quality with cost, noise with blur, etc. It's a difficult problem.
 