John Carmack opens a blog and has random graphics thoughts

Farid

Artist formerly known as Vysez
Veteran
Supporter
John Carmack starts a blog and gets rid of his now (in)famous .plans.
His first entry contains some thoughts about 3D that some people here might want to read:

Random Graphics Thoughts


Years ago, when I first heard about the inclusion of derivative instructions in fragment programs, I couldn’t think of anything off hand that I wanted them for. As I start working on a new generation of rendering code, uses for them come up a lot more often than I expected.


I can’t actually use them in our production code because it is an Nvidia-only feature at the moment, but it is convenient to do experimental code with the nv_fragment_program extension before figuring out various ways to build funny texture mip maps so that the built in texture filtering hardware calculates a value somewhat like the derivative I wanted.
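
For what it's worth, one plausible reading of the mip map trick he alludes to (a guess at the idea, not code from the blog): fill every level of a small probe texture with its own level index, so an ordinary trilinear fetch returns roughly the LOD the hardware picked, which tracks the magnitude of the screen-space derivative. A minimal C++/OpenGL sketch:

    #include <GL/gl.h>
    #include <vector>

    // Sketch only: build a mip chain in which every texel of level N stores N
    // (scaled into 0..255). Sampled with GL_LINEAR_MIPMAP_LINEAR and the same
    // texture coordinates as the real texture, the fetched value comes back
    // roughly proportional to the selected mip level, i.e. to log2 of the
    // screen-space derivative magnitude.
    void uploadLodProbeTexture(int baseSize)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // tight rows for the tiny levels
        for (int level = 0, size = baseSize; size >= 1; ++level, size /= 2) {
            std::vector<unsigned char> texels(size * size,
                                              static_cast<unsigned char>(level * 16));
            glTexImage2D(GL_TEXTURE_2D, level, GL_LUMINANCE, size, size, 0,
                         GL_LUMINANCE, GL_UNSIGNED_BYTE, texels.data());
        }
        // Texture object creation, filter state and the fragment program that
        // reads the probe are omitted; the point is only how the levels are filled.
    }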


If you are basically just looking for plane information, as you would for modifying things with texture magnification or stretching shadow buffer filter kernels, the derivatives work out pretty well. However, if you are looking at a derived value, like a normal read from a texture, the results are almost useless because of the way they are calculated. In an ideal world, all of the samples to be differenced would be calculated at once, then the derivatives calculated from there, but the hardware only calculates 2x2 blocks at a time. Each of the four pixels in the block is given the same derivative, and there is no influence from neighboring pixels. This gives derivative information that is basically half the resolution of the screen and sort of point sampled. You can often see this effect with bump mapped environment mapping into a mip-mapped cube map, where the texture LOD changes discretely along the 2x2 blocks. Explicitly coloring based on the derivatives of a normal map really shows how nasty the calculated value is.
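
To make the 2x2-block behaviour concrete, here is a rough C++ model of the derivative calculation (purely illustrative; the buffer layout and names are made up, not how any particular chip is wired):

    #include <vector>

    // Illustrative model of 2x2-quad derivative instructions: the value being
    // differentiated is evaluated per pixel, but ddx/ddy are computed once per
    // quad and broadcast to all four pixels, with no contribution from
    // neighbouring quads. Assumes an even width and height.
    void quadDerivatives(const std::vector<float>& value,   // one value per pixel
                         int width, int height,
                         std::vector<float>& ddx, std::vector<float>& ddy)
    {
        ddx.assign(value.size(), 0.0f);
        ddy.assign(value.size(), 0.0f);
        for (int y = 0; y < height; y += 2) {
            for (int x = 0; x < width; x += 2) {
                int i00 = y * width + x;                      // top-left of the quad
                float dx = value[i00 + 1]     - value[i00];   // horizontal difference
                float dy = value[i00 + width] - value[i00];   // vertical difference
                // Every pixel in the quad gets the same result, which is why
                // derivative-driven effects step along 2x2 blocks on screen.
                for (int qy = 0; qy < 2; ++qy)
                    for (int qx = 0; qx < 2; ++qx) {
                        ddx[i00 + qy * width + qx] = dx;
                        ddy[i00 + qy * width + qx] = dy;
                    }
            }
        }
    }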


Speaking of bump mapped environment sampling… I spent a little while tracking down a highlight that I thought was misplaced. In retrospect it is obvious, but I never considered the artifact before: With a bump mapped surface, some of the on-screen normals will actually be facing away from the viewer. This causes minor problems with lighting, but when you are making a reflection vector from it, the vector starts reflecting into the opposite hemisphere, resulting in some sky-looking pixels near bottom edges on the model. Clamping the surface normal to not face away isn’t a good solution, because you get areas that “see right through” to the environment map, because a reflection past a clamped perpendicular vector doesn’t change the viewing vector. I could probably ramp things based on the geometric normal somewhat, and possibly pre-calculate some data into the normal maps, but I decided it wasn’t a significant enough issue to be worth any more development effort or speed hit.
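
The artifact drops straight out of the standard reflection formula R = I - 2(N·I)N. A small C++ sketch (generic math, not id's shader) of both the flipped hemisphere and why clamping the normal doesn't help:

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // I is the normalized direction from the eye to the surface point, N the
    // per-pixel bump-mapped normal. Standard reflection: R = I - 2 (N . I) N.
    Vec3 reflect(Vec3 I, Vec3 N) { return sub(I, scale(N, 2.0f * dot(N, I))); }

    bool facesAwayFromViewer(Vec3 I, Vec3 N)
    {
        // With bump mapping some on-screen normals end up with dot(N, I) > 0,
        // i.e. facing away from the viewer, and the reflection flips into the
        // wrong hemisphere of the environment map (sky where ground belongs).
        // Clamping N to perpendicular is no fix: with dot(N, I) == 0, reflect()
        // returns I unchanged, so the surface samples the environment map
        // straight along the view direction - it "sees right through".
        return dot(N, I) > 0.0f;
    }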


Speaking of cube maps… The edge filtering on cube maps is showing up as an issue for some algorithms. The hardware basically picks a face, then treats it just like a 2D texture. This is fine in the middle of the texture, but at the edges (which are a larger and larger fraction as size decreases) the filter kernel just clamps instead of being able to sample the neighbors in an adjacent cube face. This is generally a non-issue for classic environment mapping, but when you start using cube map lookups with explicit LOD bias inputs (say, to simulate variable specular powers into an environment map) you can wind up with a surface covered with six constant color patches instead of the smoothly filtered coloration you want. The classic solution would be to implement border texels, but that is pretty nasty for the hardware and API, and would require either the application or the driver to actually copy the border texels from all the other faces. Last I heard, upcoming hardware was going to start actually fetching from the other side textures directly. A second-tier chip company claimed to do this correctly a while ago, but I never actually tested it.
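
For reference, a rough C++ model of classic (non-seamless) cube map addressing, which is where the clamping comes from (the per-face sign conventions are simplified relative to the real GL spec):

    #include <algorithm>
    #include <cmath>

    // Pick the face whose axis has the largest magnitude, project the direction
    // onto it, then clamp the resulting 2D coordinate. From that point on the
    // lookup is an ordinary 2D fetch, so a filter kernel near an edge repeats
    // this face's border texels instead of reaching into the adjacent face.
    void cubeMapFaceAndUV(float x, float y, float z,
                          int& face, float& u, float& v)
    {
        float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
        float sc, tc, ma;                 // per-face s, t and major axis
        if (ax >= ay && ax >= az) { face = (x > 0) ? 0 : 1; sc = z; tc = y; ma = ax; }
        else if (ay >= az)        { face = (y > 0) ? 2 : 3; sc = x; tc = z; ma = ay; }
        else                      { face = (z > 0) ? 4 : 5; sc = x; tc = y; ma = az; }
        // Remap from [-1,1] to [0,1] and clamp at the edge of the chosen face.
        u = std::clamp(sc / ma * 0.5f + 0.5f, 0.0f, 1.0f);
        v = std::clamp(tc / ma * 0.5f + 0.5f, 0.0f, 1.0f);
    }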
 
A little something to make StratoMaster feel better. :D

I can’t respond to most of the email I get, but I do read everything that doesn’t immediately scan as spam. Unfortunately, the probability of getting an answer from me doesn’t have a lot of correlation with the quality of the question, because what I am doing at the instant I read it is more dominant, and there is even a negative correlation for “deep” questions that I don’t want to make an off-the-cuff response to.
 
Re: John Carmack opens a blog and has random graphics thoughts

Vysez said:
Speaking of bump mapped environment sampling… I spent a little while tracking down a highlight that I thought was misplaced. In retrospect it is obvious, but I never considered the artifact before: With a bump mapped surface, some of the on-screen normals will actually be facing away from the viewer. This causes minor problems with lighting, but when you are making a reflection vector from it, the vector starts reflecting into the opposite hemisphere, resulting in some sky-looking pixels near bottom edges on the model. Clamping the surface normal to not face away isn’t a good solution, because you get areas that “see right through” to the environment map, because a reflection past a clamped perpendicular vector doesn’t change the viewing vector. I could probably ramp things based on the geometric normal somewhat, and possibly pre-calculate some data into the normal maps, but I decided it wasn’t a significant enough issue to be worth any more development effort or speed hit.
It seems to me that parallax mapping should automatically solve this problem in some limited fashion. It won't solve it completely, obviously, but I would think it'd help a bit.
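
For anyone who hasn't run into it, this is the kind of offset classic parallax (offset) mapping applies before the normal map fetch - a generic sketch, not tied to any particular engine, with made-up scale/bias constants:

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    // Shift the texture coordinate along the tangent-space view direction by an
    // amount proportional to the height sample, then fetch the normal map at the
    // offset coordinate. High areas slightly occlude low ones, which also biases
    // the fetched normals toward ones that face the viewer.
    Vec2 parallaxOffsetUV(Vec2 uv, Vec3 viewTS /* normalized, tangent space */,
                          float height /* height map sample, 0..1 */)
    {
        const float scale = 0.04f, bias = -0.02f;   // typical tweakable constants
        float h = height * scale + bias;
        return { uv.x + viewTS.x * h, uv.y + viewTS.y * h };
    }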
 
Pete said:
A little something to make StratoMaster feel better. :D

I can’t respond to most of the email I get, but I do read everything that doesn’t immediately scan as spam. Unfortunately, the probability of getting an answer from me doesn’t have a lot of correlation with the quality of the question, because what I am doing at the instant I read it is more dominant, and there is even a negative correlation for “deep” questions that I don’t want to make an off-the-cuff response to.
LOL, I was going to post:

If you really, really want to email me, add a “[JC]” in the subject header so the mail gets filtered to a mailbox that isn’t clogged with spam.

for him!
 
There are still bits of early Quake code in Half Life 2. . .

Heh. This would be a good example of why my first instinct is to :rolleyes: whenever anyone claims to be taking a "clean sheet" approach to any major software project.
 
geo said:
There are still bits of early Quake code in Half Life 2. . .

Heh. This would be a good example of why my first instinct is to :rolleyes: whenever anyone claims to be taking a "clean sheet" approach to any major software project.

If the project is written in an object-oriented language, there aren't many reasons why you would start from absolute scratch.
 
Possibly I shouldn't have had that last Sapphire & tonic. . .but while reading John's thoughts it occurred to me that the various governments and such that cry out that there aren't enough young people going into the "hard" sciences, like math, are missing a big bet by not doing ad campaigns featuring 3D graphics & game development as the centerpiece.
 
Last I heard, upcoming hardware was going to start actually fetching from the other side textures directly.

And this would be a reference to. . .?
 
Reverend said:
LOL, I was going to post :

If you really, really want to email me, add a “[JC]” in the subject header so the mail gets filtered to a mailbox that isn’t clogged with spam.

for him!
Hilarity! I left that out in deference to JC. I figured that if someone wanted to email him, they should at least go to the effort of reading his blog. I thought posting it here would just paint a bigger target on his inbox.
 
I'd ask him why Doom3 performance sucks so bad on ATI hardware, unlike almost every other software title out there. :devilish:
 
rwolf said:
I'd ask him why Doom3 performance sucks so bad on ATI hardware, unlike almost every other software title out there. :devilish:
We already know the answer.
Ati's OGL needs some work :?:
 
radeonic2 said:
rwolf said:
I'd ask him why Doom3 performance sucks so bad on ATI hardware, unlike almost every other software title out there. :devilish:
We already know the answer.
Ati's OGL needs some work :?:
And the Hyper-Z quirks, and this, and that and PLEASE let's not go there yet again, shall we? :(
 
anaqer said:
radeonic2 said:
rwolf said:
I'd ask him why Doom3 performance sucks so bad on ATI hardware, unlike almost every other software title out there. :devilish:
We already know the answer.
Ati's OGL needs some work :?:
And the Hyper-Z quirks, and this, and that and PLEASE let's not go there yet again, shall we? :(

I didn't hear of a quirk in H-Z. I thought it was just the way H-Z worked that caused it to be useless under most circumstances in D3.
 
anaqer said:
radeonic2 said:
rwolf said:
I'd ask him why Doom3 performance sucks so bad on ATI hardware, unlike almost every other software title out there. :devilish:
We already know the answer.
Ati's OGL needs some work :?:
And the Hyper-Z quirks, and this, and that and PLEASE let's not go there yet again, shall we? :(
Isn't that what 3dcenter said, and then ATI corrected them, saying Hyper-Z wasn't the issue?
It certainly seems like it is - performance is comparable at lower resolutions.
 
Well, I was referring to this, minus the update which I'd missed. All in all, Hyper-Z may or may not be the main culprit, but that wasn't really the point to begin with - I just wanted to suggest that, though we could (and in fact already had) come up with various explanations, we'd rather not start it all over again.
 