Depth of Field SmartShader

Is it possible to modify the following ATI-developed blur SmartShader, which applies a blur to the final output frame of the renderer, to produce a depth of field effect? Does the final rendered frame contain enough information to produce such an effect? And if the blur filter below can be modified for it, would the change be trivial?
shader convolutionPixelShader =
"!!ARBfp1.0

# This is a general purpose 3x3 convolution filter. Unused instructions
# and constants will get culled out by the driver so no need to remove them here.

PARAM texCoord00 = program.local[0];
PARAM texCoord01 = program.local[1];
PARAM texCoord02 = program.local[2];
PARAM texCoord10 = program.local[3];
PARAM texCoord12 = program.local[4];
PARAM texCoord20 = program.local[5];
PARAM texCoord21 = program.local[6];
PARAM texCoord22 = program.local[7];

# These constants are set up to do a blur filter (weights of ~1/13 and ~2/13, summing to ~1).
PARAM const00 = {0.0769, 0.0769, 0.0769, 0.0};
PARAM const01 = {0.1538, 0.1538, 0.1538, 0.0};
PARAM const02 = {0.0769, 0.0769, 0.0769, 0.0};
PARAM const10 = {0.1538, 0.1538, 0.1538, 0.0};
PARAM const11 = {0.0769, 0.0769, 0.0769, 0.0};
PARAM const12 = {0.1538, 0.1538, 0.1538, 0.0};
PARAM const20 = {0.0769, 0.0769, 0.0769, 0.0};
PARAM const21 = {0.1538, 0.1538, 0.1538, 0.0};
PARAM const22 = {0.0769, 0.0769, 0.0769, 0.0};

TEMP finalPixel;
TEMP coord00;
TEMP coord01;
TEMP coord02;
TEMP coord10;
TEMP coord11;
TEMP coord12;
TEMP coord20;
TEMP coord21;
TEMP coord22;

OUTPUT oColor = result.color;

# Generate all the texture coordinates for the 9 texture lookups
ADD coord00, texCoord00, fragment.texcoord[0];
ADD coord01, texCoord01, fragment.texcoord[0];
ADD coord02, texCoord02, fragment.texcoord[0];
ADD coord10, texCoord10, fragment.texcoord[0];
ADD coord12, texCoord12, fragment.texcoord[0];
ADD coord20, texCoord20, fragment.texcoord[0];
ADD coord21, texCoord21, fragment.texcoord[0];
ADD coord22, texCoord22, fragment.texcoord[0];

# Do the texture lookups for the 3x3 kernel
TEX coord00, coord00, texture[0], 2D;
TEX coord01, coord01, texture[0], 2D;
TEX coord02, coord02, texture[0], 2D;
TEX coord10, coord10, texture[0], 2D;
TEX coord11, fragment.texcoord[0], texture[0], 2D;
TEX coord12, coord12, texture[0], 2D;
TEX coord20, coord20, texture[0], 2D;
TEX coord21, coord21, texture[0], 2D;
TEX coord22, coord22, texture[0], 2D;

# Multiply all texture lookups by their weights and sum them up
MUL finalPixel, coord00, const00;
MAD finalPixel, coord10, const10, finalPixel;
MAD finalPixel, coord20, const20, finalPixel;
MAD finalPixel, coord01, const01, finalPixel;
MAD finalPixel, coord11, const11, finalPixel;
MAD finalPixel, coord21, const21, finalPixel;
MAD finalPixel, coord02, const02, finalPixel;
MAD finalPixel, coord12, const12, finalPixel;
MAD oColor, coord22, const22, finalPixel;
END";

shader copyPixelShader =
"!!ARBfp1.0
OUTPUT oColor = result.color;
TEMP pixel;
TEX pixel, fragment.texcoord[0], texture[0], 2D;
MOV oColor, pixel;
END";

surface temp = allocsurf(width, height);

convolutionPixelShader.constant[0] = {-ds_dx, -dt_dy, 0, 0};
convolutionPixelShader.constant[1] = {0, -dt_dy, 0, 0};
convolutionPixelShader.constant[2] = {ds_dx, -dt_dy, 0, 0};
convolutionPixelShader.constant[3] = {-ds_dx, 0, 0, 0};
convolutionPixelShader.constant[4] = {ds_dx, 0, 0, 0};
convolutionPixelShader.constant[5] = {-ds_dx, dt_dy, 0, 0};
convolutionPixelShader.constant[6] = {0, dt_dy, 0, 0};
convolutionPixelShader.constant[7] = {ds_dx, dt_dy, 0, 0};

texture[0].source = backbuffer;
destination temp;
apply convolutionPixelShader;


texture[0].source = temp;
destination backbuffer;
apply copyPixelShader;
Is anyone familiar with SmartShader programming willing to try modifying the shader, or to write a new one, for depth of field? If not, at least a hint on how to do it, or whether it is even possible, would be appreciated. I have no experience in graphics programming, otherwise I would have tried to write the shader myself. I would really enjoy experiencing the effect in some of my OpenGL games.
 
To generate a "depth of field" effect you need the depth of every pixel. Unfortunately SmartShader gives you no access to this information, as the Z-buffer is not readable from the pixel shader at all.

To force DOF into a game you need a more complex infrastructure than SmartShader offers.
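Just to illustrate what is missing: a DOF post-process has to blend the sharp frame with a blurred copy of it, using a per-pixel factor derived from depth. Here is a minimal sketch of that blend in Python/NumPy; the array names and the linear falloff are my own illustrative assumptions, and SmartShader cannot do any of this because it never sees the depth buffer:

import numpy as np

def depth_of_field(sharp, blurred, depth, focus_depth, focus_range):
    # sharp, blurred: (H, W, 3) float arrays holding the original frame and a pre-blurred copy
    # depth: (H, W) float array of per-pixel eye-space distance (exactly what SmartShader lacks)
    # Blend factor: 0 at the focal plane, 1 once the pixel is focus_range away from it
    blend = np.clip(np.abs(depth - focus_depth) / focus_range, 0.0, 1.0)[..., None]
    # Per-pixel linear interpolation between the sharp and blurred frames
    return sharp * (1.0 - blend) + blurred * blend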
 
I had a feeling z information would be required, but I wasn't sure whether there was some other trick that could be used for the depth sorting.

Can I assume, then, that any effect requiring positional pixel information is out of the question? Is there no way of discarding pixels based on positional information?
 
SmartShaders only work as post filters on the final content of the frame buffer before it becomes the front buffer. If you need positional information you are out of luck: you have no access to it, and SmartShaders offer no way to write additional information to a second buffer during rendering.
 
IMO, because of that, the SmartShader idea just seems pointless.

There are obvious reasons why the pixel shader only runs on the final frame (cheating via wall hacks, anyone?)... but...
 
Unless you are specifically trying to emulate photography, depth of field makes no sense.
It depends on a number of optical factors, and photographers develop a reasonable sense for what the result should look like. If you don't do a decent job of it, your image will look odd (for instance, applying DOF as if the lens had a field of view of 10 degrees when the rest of the scene implies that your FOV is actually around 90 degrees).

This is the formula. The near and far distance values of depth of field can be calculated as
d1 = s / [1 + ac(s - f)/f^2]
d2 = s / [1 - ac(s - f)/f^2]
with the plus in the denominator used for the near value (d1) and the minus for the far value (d2). The notation is:
d1, d2: the minimum and maximum subject distance in acceptable focus (measured from the lens, or more exactly, from its entrance pupil, see below).
Please note my definition of d. I keep getting emails from people who compare this formula with other sources and report an "error" without noticing that the other expression may compute distances measured from the plane at which ideal focus is achieved, not from the lens (in other words, they compute d - s). I'm tired of responding to these emails.
s: the focused subject distance (this is what is set on the lens focus scale)
f: lens focal length
a: aperture (or F-stop), e.g. 2.8
c: the diameter of the acceptable circle of confusion
In other words, if your camera is focused at s, an acceptable circle of confusion will be achieved for subjects ranging in distance from d1 to d2.
A negative result for the far limit (i.e., with a '-' in the denominator) means that it reaches infinity.
Of course I don't have to remind you that the formula only works as long as you express all lengths in the same units (whatever they are: millimeters, inches, or nautical miles).

If you want to dig into this, just put the formulae into a spreadsheet and play around with the variables.
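A few lines of Python do the same job as a spreadsheet; the function below is just the formula above transcribed, and the function name and example numbers are mine:

def dof_limits(s, f, a, c):
    # Near/far limits of acceptable focus, per the formula above.
    # s: focused subject distance, f: focal length, a: F-stop,
    # c: circle of confusion diameter. All lengths in the same units.
    k = a * c * (s - f) / f**2
    d1 = s / (1 + k)                              # near limit
    d2 = s / (1 - k) if k < 1 else float('inf')   # far limit; reaches infinity at or past the hyperfocal distance
    return d1, d2

# Example: 50 mm lens at f/2.8 focused at 3 m, with c = 0.03 mm
print(dof_limits(3000, 50, 2.8, 0.03))   # roughly (2729, 3330) mm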

David Jacobson's lens tutorial is a good starting resource if you want to learn a bit more.
http://www.photo.net/learn/optics/lensTutorial

But if you want to emulate someone looking, rather than fake photography, DOF effects just don't make sense, even when done properly.

Edit: For your purposes, it might be better to rewrite the equations so that they show the circle of confusion (c) as a function of the other variables, and plot c vs. d for a bunch of different cases.
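Solving the same formula for c gives the circle of confusion as a function of the subject distance d, which is easy to plot. A quick sketch with the same notation (the function name is mine):

def coc_at_distance(d, s, f, a):
    # Circle of confusion diameter for a subject at distance d when the lens is
    # focused at s (same notation and units as above; inverse of the d1/d2 formula).
    return f**2 * abs(s - d) / (a * d * (s - f))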
 
Entropy said:
Unless you are specifically trying to emulate photography, depth of field makes no sense.
I'm pretty sure that's the only reason anyone implements DOF. If you want to make a "cinematic" experience, then you need the things that make a visual experience look like a movie.

Of all our photorealistic experiences of sports, racing, fantasy, action, etc., >90% come from the TV or the movie theatre. This is the "reality" that games must strive for to feel real and enhance immersion, especially on a 2D display.
 
I guess one of the main reasons I like depth of field is that it reproduces the loss of sharp detail on distant objects, which might allow more detail/rendering resources to be allocated to objects closer to the viewer, or at least reduce the need for as much pixel/model detail on objects far from the viewer. Perhaps, as in my situation, the software has already been shipped and the game engine in question has a bad texture/model LOD algorithm or produces too much aliasing at a distance, and the end user wants to mask this with a DOF SmartShader (which doesn't seem like it's going to happen).

Although I haven't done enough research to determine whether the DOF algorithm is more ALU intensive, requiring relatively less bandwidth/ROP/texture work, or whether it is relatively more dependent on sampling and bandwidth, I was thinking that on an architecture that is more ROP/texture-sampling/bandwidth limited, the DOF algorithm might make relatively better use of resources than applying plain old AA/aniso or full LOD on distant objects, while maintaining or enhancing visual fidelity (detailed distant objects/pixels might end up wasted due to the natural fall-off of perceivable detail at a distance). Thus, if the reference platform has ALU cycles to burn, assuming it's texture-sampling/bandwidth limited, the technique would make for a good trade-off of rendering resources. But then again, this is all just off the top of my head, and it might be that the DOF algorithm is very sampling intensive or intensive in some other area (I haven't had time to look at the equations), or simply not good at imitating the natural fall-off of perceivable detail on distant objects.
 
While DOF is cool when used appropriately, I think the players would be annoyed if they were constantly running around in a game that made them feel nearsighted.
 
Well, DOF in combination with an eye-tracking system would be nice. The player could look at the content on screen and focus on individual objects with his/her eyes. The program would blur everything else and give a cinematic/realistic feeling - at least in theory ;-)
 
I guess the problem would lie in using the proper method for recognizing when the player wants to implicitly or explicitly focus on something.
 