You don't need an offset (bias), although numeric precision issues will still cause slight problems (which are unavoidable without exact rational arithmetic). The easiest solution is to clamp the variance to a small value before evaluating Chebyshev's inequality, something like:
float Variance = E_x2 - E_x * E_x;
Variance = max(Variance, 0.000001);
// Chebyshev's upper bound on the probability that the receiver
// depth t is in shadow (only meaningful when t > E_x)
float d = t - E_x;
float p_max = Variance / (Variance + d * d);
The value that you clamp to is not really related to the scene, and a very small value will usually remove all self-shadowing artifacts without affecting the "real" shadows at all. The value also does not need tweaking: it really can be set once and forgotten about. You can also do some fancier math related to the surface normal (or depth derivatives), but that's probably overkill since clamping the variance works so well in my experience.
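Putting the clamp and the Chebyshev test together, a minimal VSM lookup in the pixel shader might look like the sketch below (ShadowSampler, Input.ShadowTexCoord, and Input.Depth are assumed names, not from the original snippet):

// Sample the two moments (E[x], E[x^2]) stored in the shadow map
float2 Moments = tex2D(ShadowSampler, Input.ShadowTexCoord).xy;
float Variance = Moments.y - Moments.x * Moments.x;
Variance = max(Variance, 0.000001);
float d = Input.Depth - Moments.x;
// Fully lit if the receiver is at or in front of the mean occluder depth,
// otherwise use Chebyshev's upper bound
float p_max = (Input.Depth <= Moments.x) ? 1.0 : Variance / (Variance + d * d);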
Regarding depth metrics: if you're using a directional light, you can use the light-space Z value (BEFORE the perspective divide!) as the depth metric, and that will work well. The easiest way to implement this is something like:
// Vertex shader
float4 LightSpacePos = mul(Position, LightViewMatrix);
Output.Position = mul(LightSpacePos, LightProjectionMatrix);
// Rescale light-space Z to [0, 1] for better precision
Output.Depth = (LightSpacePos.z - NearClipPlane) / (FarClipPlane - NearClipPlane);
Alternatively (if you're using a "normal" orthographic projection matrix) you can just divide Output.Position by FarClipPlane to save a few instructions, but the above makes it clearer what's going on.
For a spot or point light, the best depth metric in my experience is "distance to light". This must actually be computed per pixel: interpolate the light-space position (a float3) across the triangle and take its length in the pixel shader (the NVIDIA demo actually does it incorrectly, I believe, by interpolating the distance itself).
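A sketch of that, assuming the light sits at the origin of light space and LightFarPlane is a hypothetical normalization constant for mapping the distance into [0, 1]:

// Vertex shader: pass the light-space position through
Output.LightSpacePos = mul(Position, LightViewMatrix).xyz;

// Pixel shader: compute the actual distance per pixel
float Depth = length(Input.LightSpacePos) / LightFarPlane;

Interpolating the float3 and taking the length per pixel is correct because distance is not linear across a triangle, whereas the light-space position is.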