Compressed lightmap versus standard lightmap (IQ)

Would a compressed lightmap (such as in UT2003) yield as much visual IQ as a regular lightmap? Just wondering whether the extra compression would reduce IQ in exchange for the performance gain.
 
Lightmaps generally have very gradual (and predictable) color/intensity changes, which has allowed developers to store extremely low resolution textures for lightmaps (anywhere from 4× to 256× lower resolution than the color map placed on top).

Storing a slightly higher resolution lightmap with DXT compression should result in better image quality than what was available previously; however, the difference probably won't be very noticeable.
 
Just a thought: lightmaps are/were one of the occasions where textures could use higher-order resampling than plain bilinear. Mitchell's bicubic would be nice, for instance :eek:
 
no_way said:
Just a thought: lightmaps are/were one of the occasions where textures could use higher-order resampling than plain bilinear. Mitchell's bicubic would be nice, for instance :eek:

Wouldn't that just make it run really slow without much benefit in appearance? AFAIK bicubic is a lot slower than bilinear.
 
Chalnoth said:
It might reduce IQ if you are using a GeForce...unless you force DXT3 compression.

UT2003 uses DXT3 by default for lighting, which would have been a very odd choice if this problem with GeForce cards and DXT1 hadn't existed, since the alpha channel isn't used. In the editor, though, you can change to DXT1 or RGB8 if you want to.
 
Nagorak said:
Wouldn't that just make it run really slow without much benefit in appearance? AFAIK bicubic is a lot slower than bilinear.

Yes, bicubic takes 16 samples per pixel. That alone should make it obvious that it takes up a heck of a lot more processing power, let alone memory bandwidth.

Personally, I think other interim solutions should be tried first. For example, it should be relatively straightforward to do a simple quadratic interpolation in two directions with five samples (three samples in each direction, sharing the middle sample...).
 
While bicubic interpolation would be computationally expensive, there is no reason why it should take up that much more memory bandwidth than standard bilinear interpolation - texels would be reused from pixel to pixel the same way for both methods.

And I find it hard to see how you can do quadratic interpolation with 5 sample points without introducing color discontinuities along texel edges - which would make the method substantially worse than standard bilinear interpolation. You cannot just decouple the interpolation along each texture axis like that.

Full biquadratic interpolation takes at least 3x3 samples - first you do quadratic interpolation along the s texture axis for each of the 3 texel rows, taking 3 sample points each, then you do interpolation along the t texture axis, using the 3 points that resulted from the s axis interpolation. This method extends naturally to a 3x3x3-sample-point method for 3d textures etc.
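The separable s-then-t scheme just described can be sketched in Python (function names here are illustrative, not from any real API; the 1D kernel is quadratic Lagrange interpolation through three texels):

```python
# Separable biquadratic reconstruction: interpolate along the s axis for
# each of the 3 texel rows, then interpolate the 3 row results along t.

def quad1d(p0, p1, p2, t):
    """Quadratic (Lagrange) interpolation through samples at -1, 0, +1,
    evaluated at fractional offset t from the center sample."""
    return (p0 * t * (t - 1) / 2
            + p1 * (1 - t) * (1 + t)
            + p2 * t * (t + 1) / 2)

def biquadratic(texels, s, t):
    """texels is a 3x3 neighborhood centered on the nearest texel;
    (s, t) are fractional offsets from that texel's center."""
    rows = [quad1d(row[0], row[1], row[2], s) for row in texels]
    return quad1d(rows[0], rows[1], rows[2], t)
```

At (s, t) = (0, 0) this returns the center texel exactly, and adding one more pass of the same `quad1d` kernel gives the 3x3x3-sample method for 3D textures.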
 
arjan de lumens said:
While bicubic interpolation would be computationally expensive, there is no reason why it should take up that much more memory bandwidth than standard bilinear interpolation - texels would be reused from pixel to pixel the same way for both methods.

But the increased number of samples required per pixel would most certainly reduce cache hits.

And I find it hard to see how you can do quadratic interpolation with 5 sample points without introducing color discontinuities along texel edges - which would make the method substantially worse than standard bilinear interpolation. You cannot just decouple the interpolation along each texture axis like that.

Oh, yeah, that's right. If you look at sample coverage, the effective sample coverage of bilinear is 4, whereas the effective sample coverage of a 5-sample method like I mentioned would be just above one. I suppose you would need to use a square sampling pattern in order to increase sample coverage while at the same time making certain to not miss texels.

Full biquadratic interpolation takes at least 3x3 samples - first you do quadratic interpolation along the s texture axis for each of the 3 texel rows, taking 3 sample points each, then you do interpolation along the t texture axis, using the 3 points that resulted from the s axis interpolation. This method extends naturally to a 3x3x3-sample-point method for 3d textures etc.

Which, of course, would still be significantly less computationally intensive than bicubic, but also quite a bit more intensive than bilinear. Nevertheless, I do feel that biquadratic is probably the next step (unless there is another method that I'm not aware of that is capable of producing similar/better image quality with fewer calculations...which is far from out of the question, as I don't know much about filtering ops outside bilinear).
 
But the increased number of samples required per pixel would most certainly reduce cache hits.

Not really - coherency between pixels should increase in most cases. As long as the number of samples per pixel wouldn't cause thrashing, cache hits would probably increase.
 
Not really - coherency between pixels should increase in most cases. As long as the number of samples per pixel wouldn't cause thrashing, cache hits would probably increase.

Except at polygon edges.

Triangles are getting much smaller these days.
 
gking said:
Not really - coherency between pixels should increase in most cases. As long as the number of samples per pixel wouldn't cause thrashing, cache hits would probably increase.

Unless sample coverage per pixel was also increased.

That is, with more samples taken per pixel, it should be possible to increase the LOD more than with normal bilinear filtering.
 
Can someone explain the difference between

Bi-linear, Bi-cubic, Tri-linear and Tri-cubic (does it exist?) as it applies to a 3D game. I know you can do bi-linear/bi-cubic interpolation even on 2D video, so I assume that Tri-linear requires the 3rd dimension...

I've never been really clear on what the difference between bi-linear/cubic is...other than bi-cubic is better (for video processing, etc).
 
Dave wrote:
Except at polygon edges.

No, only at UV discontinuities. In strip or fan form (or even triangle lists where each pair of triangles shares an edge), texels will frequently be shared.

Chalnoth wrote:
Unless sample coverage per pixel was also increased.

That's irrelevant -- at the same LOD, provided that the number of samples doesn't cause cache thrashing, biquadratic/bicubic filtering would have a higher cache hit rate than bilinear.

Nagorak wrote:
so I assume that Tri-linear requires the 3rd dimension...

Sometimes (if you are using 3D textures). For 2D textures, trilinear refers to linearly interpolating between mipmap levels. Bilinear, on the other hand, just chooses the mipmap level closest to the desired LOD, and samples from it.

Linear/Quadratic/Cubic refer to the weightings of the various samples (fitting a line, quadratic, or cubic spline to the samples). Quadratic and Cubic filtering look better because they use more samples and because smooth curves are much less alarming to our visual systems than linear artifacts (which we detect quite easily).
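The mip-selection distinction above can be sketched as follows (hypothetical helper names; `sample_mip` stands in for a bilinear fetch from one mip level):

```python
import math

def nearest_mip_select(sample_mip, lod):
    """Without trilinear: pick the mip level nearest the desired LOD
    and sample only from it."""
    return sample_mip(round(lod))

def trilinear(sample_mip, lod):
    """Trilinear for 2D textures: linearly interpolate between the two
    mip levels that bracket the desired LOD."""
    lo = math.floor(lod)
    frac = lod - lo
    return (1 - frac) * sample_mip(lo) + frac * sample_mip(lo + 1)
```

Since `sample_mip` would itself do a bilinear fetch within the chosen level, trilinear costs two bilinear fetches plus one lerp per pixel.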
 
Well, the "bi" in both methods refers to the 2-dimensional nature of the filtering.

Trilinear, of course, linearly interpolates between different MIP maps to make for the third dimension.

Now, I believe that linear filtering assumes point sampling. That is, it attempts to calculate the value for the entire pixel by estimating a value for the center of that pixel.

I'm reasonably certain that bicubic filtering does the exact same thing, only instead of using a linear function, it uses a cubic function (Ax^3 + Bx^2 + Cx + D), and takes the value of that function at the center of the pixel. Now, a computer doesn't do all of this work explicitly, as the math simplifies to a much shorter form, something that I'm sure you can look up with a simple search, if you're interested.

The implication with these sampling methods is that they aren't particularly good at sampling whenever the textures are minified (more texture pixels visible in the region than screen pixels...), so MIP maps must be used in either case.

Where bicubic has a huge benefit over bilinear is whenever textures are magnified significantly. For example, have you ever seen a texture in a game that looked rather blocky? A really obvious example is the old Quake2. Remember how blocky the light from the explosions was? Bicubic would help to reduce that problem.

As a side note, the idea I was working from previously (particularly the statement about worse cache hits) was that biquadratic wouldn't be based upon taking the interpolation and sampling at a single point, but would instead be based upon an integral over the size of the pixel, which would not take significantly more processing power. I think the big problem would be finding the values to integrate over for 3D rendering (by contrast, this shouldn't be a problem for video processing, as you usually keep the aspect ratio when scaling, or at least have nothing more complex than rectangular pixels). I do wonder whether a similar method is used when filtering video down to fewer destination pixels than source pixels.
 
gking said:
That's irrelevant -- at the same LOD, provided that the number of samples doesn't cause cache thrashing, biquadratic/bicubic filtering would have a higher cache hit rate than bilinear.

Actually, I was operating under the assumption that the filtering method would use an integral form for deciding the final color. I suppose this is not the case, so that independent of the filtering method (bilinear, biquadratic, bicubic), you would try to keep the ratio of texels to pixels close to 1, meaning lots of cache hits.

Still, in the situation where the ratio of texels to pixels is close to one, biquadratic/bicubic certainly put more stress on the texture cache. That is, at any given time, more texels need to be used by the pixel pipelines, and thus more data must be in the cache for optimal usage.

Of course, if the hardware is good enough at predicting which texels will be needed, so that the ratio of the total texture memory displayed is very close to the texture memory bandwidth used, then there won't be an additional memory bandwidth hit from enabling the higher-degree sampling patterns.
 
Chalnoth:
What rules would you use to define "biquadratic filtering"?

Here I'm using the notation that texcoords land exactly on texels at integer values, and the fractional part is the coefficient for the interpolation. This isn't correct in OGL terms, but it makes the explanation easier, and the conversion to "real" texcoords is trivial.

Bilinear:
* The interpolating function is a bilinear function.
* It hits the texture values at the endpoints. (At integer values.)

Bicubic (proposal):
* The interpolating function is a bicubic function.
* It hits the texture values at the endpoints. (At integer values.) Notice that this is the endpoints of the range where the interpolating function is used, not the range of texels that affect the function.
* The derivatives of the interpolating function at the endpoints are equal to the derivative of a straight line between the neighbours of the endpoint.

Simpler explanation of bicubic:
Think in 1D.
The interpolating function between x and x+1 (x is an integer) hits the texture value at x and x+1.
The slope of the function at x is equal to the slope of a straight line from x-1 to x+1.
The slope of the function at x+1 is equal to the slope of a straight line from x to x+2.
So the texture values used for this segment are those for x-1, x, x+1 and x+2.
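The 1D rules above pin down a specific cubic (a Catmull-Rom-style Hermite segment); a sketch, with t as the fractional position between x and x+1:

```python
def cubic1d(pm1, p0, p1, p2, t):
    """Cubic segment between p0 (at x) and p1 (at x+1) that hits both
    endpoints, with endpoint slopes taken from the neighbouring texels:
    slope at x is (p1 - pm1)/2, slope at x+1 is (p2 - p0)/2."""
    m0 = (p1 - pm1) / 2   # slope of the chord from x-1 to x+1
    m1 = (p2 - p0) / 2    # slope of the chord from x to x+2
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p0   # cubic Hermite basis functions
            + (t3 - 2*t2 + t) * m0
            + (-2*t3 + 3*t2) * p1
            + (t3 - t2) * m1)
```

At t = 0 this returns p0 exactly and at t = 1 it returns p1, so adjacent segments join with matching values and slopes, with no discontinuities at texel boundaries.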

Now, what rules would you use for biquadratic?


What blockiness in Quake2 explosions could be helped by better filtering?
 
Basic said:
Chalnoth:
What rules would you use to define "biquadratic filtering"?

Here is how you could think of what the technique would attempt to accomplish (Looking at the 1D case for simplicity):

1. Find a quadratic curve that fits the three closest texture samples.
2. Evaluate that function at the sample point.

I'd have to do some actual math to figure out exactly how much processing it would take to calculate this, but I think you get the gist of the situation. Also note that three samples are required for quadratic interpolation (just as four are required for cubic interpolation).

For example, plain bilinear filtering does this exact thing, but the math just simplified to:

A * T1 + (1-A) * T2 (1D case, again, this time in the x-direction), where A is S2.x - SamplePosition.x (note that S2.x is the texel position in texture space).

But the essence of why it calculates this math is similar to the two steps I laid out above.
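The two steps described here can be sketched directly (hypothetical function name; the three samples are assumed to sit at x = 0, 1, 2):

```python
def quadratic_fit_eval(p0, p1, p2, x):
    """Step 1: fit a + b*x + c*x^2 through (0, p0), (1, p1), (2, p2).
    Step 2: evaluate the fitted curve at the sample position x."""
    a = p0
    c = (p2 - 2 * p1 + p0) / 2  # second difference / 2
    b = p1 - p0 - c
    return a + b * x + c * x * x
```

When the three samples already lie on a line, c is zero and this collapses to plain linear interpolation, which matches the point that bilinear is the same two-step idea with a lower-degree fit.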

What blockiness in Quake2 explosions could be helped by better filtering?

You know how the edges of the lightmaps are always really blocky in this situation? That's what I'm talking about specifically. That is, any dynamic light in the game displays what looks like a jagged circle. With better filtering, it would look more like a smooth circle.
 
So you've made your interpolating function that hits the texel values at x-1, x and x+1.

In what interval will you use this function?
 