John Carmack starts a blog, and retires his now (in)famous .plan files.
His first entry contains some thoughts about 3D that some people here might want to read:
Random Graphics Thoughts
Years ago, when I first heard about the inclusion of derivative instructions in fragment programs, I couldn’t think of anything off hand that I wanted them for. As I start working on a new generation of rendering code, uses for them come up a lot more often than I expected.
I can’t actually use them in our production code because it is an Nvidia-only feature at the moment, but it is convenient to do experimental code with the nv_fragment_program extension before figuring out various ways to build funny texture mip maps so that the built in texture filtering hardware calculates a value somewhat like the derivative I wanted.
If you are basically just looking for plane information, as you would for modifying things with texture magnification or stretching shadow buffer filter kernels, the derivatives work out pretty well. However, if you are looking at a derived value, like a normal read from a texture, the results are almost useless because of the way they are calculated. In an ideal world, all of the samples to be differenced would be calculated at once, then the derivatives calculated from there, but the hardware only calculates 2x2 blocks at a time. Each of the four pixels in the block is given the same derivative, and there is no influence from neighboring pixels. This gives derivative information that is basically half the resolution of the screen and sort of point sampled. You can often see this effect with bump mapped environment mapping into a mip-mapped cube map, where the texture LOD changes discretely along the 2x2 blocks. Explicitly coloring based on the derivatives of a normal map really shows how nasty the calculated value is.
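To make that 2x2-block behavior concrete, here is a minimal C sketch (the value() callback and all names are made up for illustration, not anything from the actual hardware or drivers) of derivatives computed the way the quads work: one difference per 2x2 block, shared by all four pixels, with no influence from neighboring blocks:

    /* Minimal illustrative sketch: hardware-style ddx/ddy where each 2x2
     * pixel quad is differenced internally and all four pixels receive the
     * same derivative, with no contribution from neighboring quads. */
    #include <stdio.h>

    /* Some per-pixel quantity, e.g. a texture coordinate or a value read
     * from a normal map (made up for this example). */
    static float value(int x, int y) { return (float)(x * x + y); }

    static void quad_derivatives(int x, int y, float *ddx, float *ddy)
    {
        /* Snap to the top-left pixel of the 2x2 quad containing (x, y). */
        int qx = x & ~1;
        int qy = y & ~1;

        /* One horizontal and one vertical difference per quad; every pixel
         * in the quad gets this same result, so the derivative field is
         * effectively half-resolution and point sampled. */
        *ddx = value(qx + 1, qy) - value(qx, qy);
        *ddy = value(qx, qy + 1) - value(qx, qy);
    }

    int main(void)
    {
        for (int y = 0; y < 4; ++y)
            for (int x = 0; x < 4; ++x) {
                float dx, dy;
                quad_derivatives(x, y, &dx, &dy);
                printf("pixel (%d,%d): ddx=%.1f ddy=%.1f\n", x, y, dx, dy);
            }
        return 0;
    }

Printing the 4x4 grid shows the derivative values repeating in 2x2 blocks, which is exactly the half-resolution, point-sampled behavior described above.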
Speaking of bump mapped environment sampling… I spent a little while tracking down a highlight that I thought was misplaced. In retrospect it is obvious, but I never considered the artifact before: With a bump mapped surface, some of the on-screen normals will actually be facing away from the viewer. This causes minor problems with lighting, but when you are making a reflection vector from it, the vector starts reflecting into the opposite hemisphere, resulting in some sky-looking pixels near bottom edges on the model. Clamping the surface normal to not face away isn’t a good solution, because you get areas that “see right through” to the environment map, because a reflection past a clamped perpendicular vector doesn’t change the viewing vector. I could probably ramp things based on the geometric normal somewhat, and possibly pre-calculate some data into the normal maps, but I decided it wasn’t a significant enough issue to be worth any more development effort or speed hit.
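For a concrete picture of that flip, here is a small C sketch (the specific vectors are made up, not taken from any real model) showing a bumped normal near a bottom edge tipped just past perpendicular to the view direction, which sends the reflection vector into the upper (sky) hemisphere:

    /* Illustrative sketch of the bump-mapped reflection artifact: a normal
     * tipped just past perpendicular to the view direction reflects into
     * the opposite hemisphere of the environment map. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static vec3 normalize3(vec3 v)
    {
        float len = sqrtf(dot3(v, v));
        vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* Standard reflection of incident direction i about normal n. */
    static vec3 reflect3(vec3 i, vec3 n)
    {
        float d = 2.0f * dot3(i, n);
        vec3 r = { i.x - d*n.x, i.y - d*n.y, i.z - d*n.z };
        return r;
    }

    int main(void)
    {
        vec3 eye_dir  = { 0.0f, 0.0f, -1.0f };  /* viewer looks down -Z */

        /* Bumped normals near a bottom silhouette edge: one still barely
         * facing the viewer, one tipped just past perpendicular. */
        vec3 n_facing = normalize3((vec3){ 0.0f, -0.95f,  0.31f });
        vec3 n_away   = normalize3((vec3){ 0.0f, -0.95f, -0.31f });

        vec3 r_ok  = reflect3(eye_dir, n_facing);
        vec3 r_bad = reflect3(eye_dir, n_away);

        /* r_ok samples the lower environment hemisphere (y < 0) as expected;
         * r_bad flips upward (y > 0), fetching sky texels on a bottom edge. */
        printf("facing normal: R = (%.2f %.2f %.2f)\n", r_ok.x,  r_ok.y,  r_ok.z);
        printf("away normal:   R = (%.2f %.2f %.2f)\n", r_bad.x, r_bad.y, r_bad.z);
        return 0;
    }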
Speaking of cube maps… The edge filtering on cube maps is showing up as an issue for some algorithms. The hardware basically picks a face, then treats it just like a 2D texture. This is fine in the middle of the texture, but at the edges (which are a larger and larger fraction as size decreases) the filter kernel just clamps instead of being able to sample the neighbors in an adjacent cube face. This is generally a non-issue for classic environment mapping, but when you start using cube map lookups with explicit LOD bias inputs (say, to simulate variable specular powers into an environment map) you can wind up with a surface covered with six constant color patches instead of the smoothly filtered coloration you want. The classic solution would be to implement border texels, but that is pretty nasty for the hardware and API, and would require either the application or the driver to actually copy the border texels from all the other faces. Last I heard, upcoming hardware was going to start actually fetching from the other side textures directly. A second-tier chip company claimed to do this correctly a while ago, but I never actually tested it.
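A rough C sketch of the per-face lookup involved (following the standard OpenGL cube map face convention; the test directions are arbitrary) shows where the seams come from: two directions straddling the +X/+Z edge land on different faces, and a filter kernel on either face only clamps at its own edge rather than reaching texels on the neighboring face:

    /* Illustrative sketch of cube map face selection: pick the dominant
     * axis, map the direction to 2D coordinates on that face, then filter
     * with plain clamp-to-edge, never touching the adjacent face. */
    #include <math.h>
    #include <stdio.h>

    /* Select the cube face for direction (x, y, z) and return face-local
     * texture coordinates in [0, 1]. Faces: 0..5 = +X,-X,+Y,-Y,+Z,-Z. */
    static int cube_face_coords(float x, float y, float z, float *s, float *t)
    {
        float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
        float ma, sc, tc;
        int face;

        if (ax >= ay && ax >= az) {        /* +X / -X faces */
            ma = ax; face = x > 0.0f ? 0 : 1;
            sc = x > 0.0f ? -z : z; tc = -y;
        } else if (ay >= az) {             /* +Y / -Y faces */
            ma = ay; face = y > 0.0f ? 2 : 3;
            sc = x; tc = y > 0.0f ? z : -z;
        } else {                           /* +Z / -Z faces */
            ma = az; face = z > 0.0f ? 4 : 5;
            sc = z > 0.0f ? x : -x; tc = -y;
        }

        *s = 0.5f * (sc / ma + 1.0f);
        *t = 0.5f * (tc / ma + 1.0f);
        return face;
    }

    int main(void)
    {
        /* Two directions straddling the +X / +Z edge: they resolve to
         * different faces, and a bilinear kernel on either face just clamps
         * at s = 0 or s = 1 instead of sampling the neighboring face. */
        float s, t;
        int f;

        f = cube_face_coords(1.0f, 0.0f, 0.99f, &s, &t);
        printf("dir (1, 0, 0.99):  face %d, s=%.3f t=%.3f\n", f, s, t);

        f = cube_face_coords(0.99f, 0.0f, 1.0f, &s, &t);
        printf("dir (0.99, 0, 1):  face %d, s=%.3f t=%.3f\n", f, s, t);
        return 0;
    }

At small mip levels those edge regions dominate the face, which is why a biased lookup can degenerate into six nearly constant-colored patches.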