Why 3Dc may not always be a better solution

Scali said:
You can of course use supersampling instead, but it's a lot slower.

Unfortunately, supersampling doesn't really fix the issue; it just pushes it a little further away, since the mipmap LOD that gets chosen is pushed out. Definitely not a true fix for the issue, but it does alleviate it somewhat, of course.
 
Cryect said:
Use a two-coordinate polar system and you shouldn't have the aliasing issues (at least not as badly as any system using Cartesian coordinates). The issue is that bilinear and trilinear filtering are not methods for interpolating between vectors. It's much better to use a polar coordinate system and interpolate theta, phi, and magnitude.

I don't think I agree here. The issue is that the lower miplevels store only one normal, which has to represent a larger area of the surface.
If you have a rough surface, you have high frequency signals in your normalmap. When generating lower miplevels from that, you get sampling problems, and just renormalizing the normals isn't the solution.
So when you start to get a large number of different normals that need to light one pixel, it should give a rather dull effect, because only a few of those normals will actually have a sharp highlight on them, and the average would not be that bright. But if you sample only one normal from a mipmap, and this normal happens to have a bright highlight, that's what the pixel will get too, and this is what gives the aliasing.
The local lighting system needs to be changed, because a single normal alone cannot represent the information required to solve the sampling problem.
I don't really understand how polar coordinates would solve the problem. But perhaps I'm overlooking something, so if someone can explain it, please do.
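
To make the point above concrete, here is a rough numeric sketch (my own illustration, not from the post): light a bundle of differing normals individually and compare that with lighting the single renormalized average normal, using a simple Blinn-Phong-style power term.

```python
import numpy as np

# Hypothetical setup: 256 normals scattered around 'up', one shared half-vector.
rng = np.random.default_rng(0)
normals = rng.normal([0.0, 0.0, 1.0], 0.3, size=(256, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

half_vec = np.array([0.0, 0.0, 1.0])
power = 64.0

# Correct(ish) answer: light every normal, then average the results.
per_normal = np.clip(normals @ half_vec, 0.0, 1.0) ** power
averaged_lighting = per_normal.mean()

# What a renormalized mip gives you: one average normal, lit once.
avg = normals.mean(axis=0)
avg /= np.linalg.norm(avg)
single_normal_lighting = np.clip(avg @ half_vec, 0.0, 1.0) ** power

print(f"average of lit normals:       {averaged_lighting:.3f}")
print(f"lighting of averaged normal:  {single_normal_lighting:.3f}")
# The second number is typically much larger: the mip-sampled normal is far
# too shiny, which shows up as the sparkling/aliasing described above.
```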
 
Scali said:
Infinisearch said:
In the same way that 3Dc seems to be more suited to normal maps than DXT5, is DXT compression or 3Dc suitable for height maps, gloss maps, or storing error terms in the first place?

DXT is pretty much based on the idea that colours don't change all that much in a 4x4 block, and they can be approximated by storing two RGB values and doing simple interpolation for all 16 pixels. This basically means that if you want to store multiple channels, they would have to be similar in some way, else the interpolation will be very inaccurate.
It also means that if values differ greatly within a 4x4 block, there will be considerable compression artifacts.

3Dc compresses the components separately, so you can easily use 2 completely different channels in the same texture.
The idea of the compression is similar to DXT, and it also uses 4x4 blocks.
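
For reference, here is a sketch of how one channel of a 3Dc block might decode, following the commonly documented ATI2/BC5-style layout (two 8-bit endpoints plus sixteen 3-bit indices per channel, with the second channel stored in an identical, independent block); the exact details of ATI's format may differ.

```python
import numpy as np

def decode_3dc_channel(end0: int, end1: int, indices: list) -> np.ndarray:
    """Decode one channel of a 4x4 block from two endpoints and 16 palette indices."""
    if end0 > end1:
        # 8-entry palette: the endpoints plus 6 evenly spaced interpolants.
        palette = [end0, end1] + [((7 - i) * end0 + i * end1) / 7.0 for i in range(1, 7)]
    else:
        # Alternative mode: 4 interpolants plus explicit 0 and 255 entries.
        palette = [end0, end1] + [((5 - i) * end0 + i * end1) / 5.0 for i in range(1, 5)] + [0, 255]
    return np.array([palette[i] for i in indices]).reshape(4, 4)

# Example: a block whose X-channel values ramp between 200 and 40.
x_block = decode_3dc_channel(200, 40, [1, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 0])
print(x_block)
```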

Thanks Scali, but I know how DXT works. What I was asking was whether the data itself, in the case of gloss maps, height maps, and error terms, would be well suited to any of the available compression methods. I have not worked with variable specular, parallax mapping, or using a 'scale factor'/error term on my normals, and I have no idea what common data for such maps looks like, so I can't evaluate the suitability of a compression scheme for these abstract classes of data sets. (I think that came out correctly.) So what I'm asking is: for those who have experience with these map types, how suitable were the current hardware-supported texture compression formats (in the most frequent case) for compressing them?


Scali said:
Can't you manually make mip-map levels?

Yes, this is possible.

Sorry if I was ambiguous; what I meant was, if one were to manually make the mipmap levels, would it be possible to do it in such a way that it wouldn't...
Chalnoth said:
may do screwy things with this MIP mapping technique anyway

Scali said:
And on a related note, is there any other possible way to get rid of the sparkly problem mentioned in the OP besides the method described in the NVIDIA paper?

You can of course use supersampling instead, but it's a lot slower.
And there are variations on NVIDIA's method...
NVIDIA's paper also has some references to earlier work on reducing normalmapping aliasing. You might want to check those out. None of them are particularly suited for realtime usage, though.

Is the problem directly related to the mipmapping of normal maps or lack thereof? Do you have a link to a picture of said problem? I'd like to see how it looks to get a better grasp of the problem.
 
Infinisearch said:
Sorry if I was ambiguous; what I meant was, if one were to manually make the mipmap levels, would it be possible to do it in such a way that it wouldn't...
Chalnoth said:
may do screwy things with this MIP mapping technique anyway

Since this method uses the length of the vector, which depends on all three components, it should be less sensitive to compression artifacts than the normalvector itself. So if you have compression artifacts, I think the direction of the normal is a bigger worry than the length of the normal.
If you're lucky, one may even compensate a bit for the other.

Is the problem directly related to the mipmapping of normal maps or lack thereof? Do you have a link to a picture of said problem? I'd like to see how it looks to get a better grasp of the problem.

There is a picture of the problem in the link to the NVIDIA paper in my opening post.
I believe my previous post already explains how mipmapping is a problem for normalvectors. If you think it's not clear, please ask a more specific question.
In short, not mipmapping at all is wrong, just like with regular textures. But regular mipmapping is still wrong in the case of lighting calculations. There's no such thing as an average normalvector on an undersampled surface. The surface is infinitely detailed, and spots where normalvectors give sharp highlights could be infinitely small.
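
As an illustration of the alternative being discussed, here is a sketch of the "unnormalized mipmap" idea as I understand it from this thread: box-filter the normals without renormalizing, so the shortened vector length records how much the normals inside the footprint disagreed.

```python
import numpy as np

def downsample_normals(mip: np.ndarray) -> np.ndarray:
    """mip: (H, W, 3) array of unit (or already shortened) normals.
    Average each 2x2 footprint WITHOUT renormalizing, so roughness shows up
    as a vector length below 1."""
    h, w, _ = mip.shape
    quads = mip.reshape(h // 2, 2, w // 2, 2, 3)
    return quads.mean(axis=(1, 3))

level0 = np.zeros((4, 4, 3))
level0[..., 2] = 1.0                          # flat area: all normals point up
level0[0, 0] = [0.9, 0.0, np.sqrt(1 - 0.81)]  # one bumpy texel
level1 = downsample_normals(level0)
print(np.linalg.norm(level1, axis=-1))        # < 1 where the surface was rough
```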
 
Scali, can you show us some examples of each of the compression types? Or more exactly, where 3Dc fails against the other methods you claim are better or the same. I'd be interested to see.
 
Scali, upon browsing the paper from the NVIDIA site I noticed that, when applied to mipmapping, the method seems to require an additional texture lookup into a precalculated cubemap. Am I misinterpreting what I'm reading? Also, it seems like it's a dependent texture read. Am I wrong? The free gloss map sounds nice, but a dependent texture read into a cubemap does make it look rather expensive.
 
Scali said:
As Chalnoth already more or less indicates, it's a matter of performance vs quality. DXT5 is the most compact way we know to store this information. Using no compression is the way that gives the best quality. 3Dc would fall somewhere in between. You may be able to get better quality than DXT5, but the cost is higher. Depending on the situation, you could even choose to go for no compression. If you are not bandwidth-limited, that may be a faster solution than sampling 2 3Dc maps.

I bolded some of your text; would you care to clarify it? Namely the last portion, which I forgot to bring up in my earlier post. Are you saying 3Dc falls in between DXT5 and an uncompressed state for normal maps with regard to size, quality, or both? The only paper I've found on 3Dc at the ATI website seems to be one of the less technical white papers... more of an advertisement for 3Dc.
 
Scali said:
Since this method uses the length of the vector, which depends on all three components, it should be less sensitive to compression artifacts than the normalvector itself. So if you have compression artifacts, I think the direction of the normal is a bigger worry than the length of the normal.
If you're lucky, one may even compensate a bit for the other.
What I was worrying about is that with these compression formats, you are going to be changing the length of the vector anyway, just like you do with bilinear and trilinear filtering. This will result in the normal vector length no longer being quite so sensitive to the local 4-texel differential in direction, but more to the 4x4 block differential in direction.
 
Infinisearch said:
Scali, upon browsing the paper from the NVIDIA site I noticed that, when applied to mipmapping, the method seems to require an additional texture lookup into a precalculated cubemap. Am I misinterpreting what I'm reading? Also, it seems like it's a dependent texture read. Am I wrong? The free gloss map sounds nice, but a dependent texture read into a cubemap does make it look rather expensive.

I think you misinterpreted it. First of all, NVIDIA assumes you are already using a dependent texture read, because you use a texture for the falloff function (which is the best way on most NVIDIA chips anyway). Secondly, it's not a cubemap, but a 2D texture instead of a 1D texture. You simply create a set of falloff functions, one for each vector length.
The free gloss map comes from the fact that the shininess is now a function of the length of the vector. So by modifying the length of the vectors, you can bake the glossmap into the method for free.
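
A sketch of what building such a 2D falloff texture could look like. The mapping from vector length to an effective specular exponent below is my own assumption (a Toksvig-style formula), not necessarily the one used in the NVIDIA paper.

```python
import numpy as np

SIZE = 256
base_power = 64.0

ndoth = np.linspace(0.0, 1.0, SIZE)            # u axis: N.H
length = np.linspace(1.0 / SIZE, 1.0, SIZE)    # v axis: |N| from the mip chain

# Assumed mapping: shorter normals imply more spread, hence a lower exponent.
sigma2 = (1.0 - length) / length
eff_power = base_power / (1.0 + base_power * sigma2)

# One falloff curve per vector length; this 2D table replaces the 1D pow() lookup.
falloff = ndoth[None, :] ** eff_power[:, None]
print(falloff.shape, falloff[-1, -1], falloff[0, SIZE // 2])
# The pixel shader would sample this with (N.H, |N|), so baking gloss into the
# vector lengths changes the row that gets used, giving the gloss map for free.
```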
 
Infinisearch said:
I bolded some of your text; would you care to clarify it? Namely the last portion, which I forgot to bring up in my earlier post. Are you saying 3Dc falls in between DXT5 and an uncompressed state for normal maps with regard to size, quality, or both? The only paper I've found on 3Dc at the ATI website seems to be one of the less technical white papers... more of an advertisement for 3Dc.

I meant that 3Dc will never be as compact as DXT for this technique, and the quality will never be as good as uncompressed normals for this technique (it's lossy compression).
And as I also mentioned, how well 3Dc performs depends on the situation. You can't make a direct comparison, because there are various ways to implement it with DXT or 3Dc, and each has different quality and size. The only thing you know for sure is that DXT allows for the fastest implementation (smallest texture and single lookup), and uncompressed maps give the best quality (no compression artifacts).
So whatever implementation you choose to make with 3Dc would fall somewhere in between, with regard to performance, size and quality.
 
Scali said:
and the quality will never be as good as uncompressed normals for this technique

Actually, not true in ALL cases. 3Dc can interpolate intermediate values at higher precision than you can get from an uncompressed 8BPP texture.
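
A small illustration of that point, assuming the usual 3Dc/BC5-style palette of two 8-bit endpoints plus six interpolants at sevenths:

```python
# The interpolated palette entries sit at sevenths between the two 8-bit
# endpoints, so a decoder with more than 8 bits of output precision can return
# values that an uncompressed 8-bit channel cannot represent exactly.
end0, end1 = 130, 120
palette = [end0, end1] + [((7 - i) * end0 + i * end1) / 7.0 for i in range(1, 7)]
print(palette)
# [130, 120, 128.57..., 127.14..., 125.71..., 124.28..., 122.85..., 121.42...]
```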
 
Scali said:
So whatever implementation you choose to make with 3Dc would fall somewhere in between, with regard to performance, size and quality.
Your rating is based on a specific set of input data, which doesn't change in (uncompressed) size. It's just compressed sometimes and sometimes not. And in this situation your rating might be correct.

But what if we change the rating scale?

What happens to IQ if we aim for a specific game FPS performance? In bandwidth-limited situations I'd guess that uncompressed textures will give the worst IQ, because the input data will have to be reduced in size to reach the target performance. With 3Dc and DXT we can use larger input data (thanks to compression) at the same performance. That should give us better IQ detail, but slight compression artifacts. The compression (at least 3Dc) should be quite effective, though, so I think the added detail should greatly win over the added compression artifacts. So what would the rating look like here? I'd guess 3Dc would win. What do you think?
 
I don't think the texture size would have that much to do with performance. It would only affect performance significantly in the region where the lower-quality normal map is magnified, but a higher-quality version would be minified.

By contrast, compressed textures will affect performance all over the image, since they require less data to be read per pixel rendered. But they will only affect performance if you are memory bandwidth limited, which probably won't be the case if you have long shaders.

So no, I don't think you can make a claim like that, madshi. It's just not that simple, and the performance characteristics of the different techniques will depend largely upon other factors (shader length, memory bandwidth/fillrate ratio of the hardware, etc.), such that one could probably produce a scenario where uncompressed would be faster than compression (though not by much...).
 
Chalnoth said:
I don't think the texture size would have that much to do with performance.
Texture size should make a significant performance difference as soon as the video card's RAM can't hold all the data anymore, right?
 
Colourless said:
Actually, not true in ALL cases. 3Dc can interpolate intermediate values at higher precision than you can get from an uncompressed 8BPP texture.

This assumes that 8 bits per channel is the only uncompressed texture format.
There's also 10-bit on certain hardware (ideal for normalmaps: still 32 bits per pixel, only the alpha is reduced, but if it's not used, it's pure gain), 16-bit, and of course float formats are no contest.
Besides, even 8 bit may look better if the interpolated values are wrong (compression artifacts).
This all depends on the type of texture.
 
madshi said:
Your rating is based on a specific set of input data, which doesn't change in (uncompressed) size. It's just compressed sometimes and sometimes not. And in this situation your rating might be correct.

But what if we change the rating scale?

What happens to IQ if we aim for a specific game FPS performance? In bandwidth-limited situations I'd guess that uncompressed textures will give the worst IQ, because the input data will have to be reduced in size to reach the target performance. With 3Dc and DXT we can use larger input data (thanks to compression) at the same performance. That should give us better IQ detail, but slight compression artifacts. The compression (at least 3Dc) should be quite effective, though, so I think the added detail should greatly win over the added compression artifacts. So what would the rating look like here? I'd guess 3Dc would win. What do you think?

I still think what I always think: the only given facts are that 3-channel DXT is the fastest possible way, and uncompressed textures give the highest quality.
All the other variations fall somewhere in between, and it's a combination of how well the source art can be compressed with various compression algorithms, how much memory and bandwidth you have available in the situation, and what balance you seek between quality, size and performance.
Which gives the conclusion that was already in the topic: 3Dc may not always be better.
 
Scali said:
This assumes that 8 bits per channel is the only uncompressed texture format.
There's also 10-bit on certain hardware, and of course float formats are no contest.
Besides, even 8 bit may look better if the interpolated values are wrong (compression artifacts).
This all depends on the type of texture.

Uh huh... so? Those textures are 4 or 8 times larger than the 3Dc texture for the same data. I guess it depends on what is meant by 'uncompressed'.
 
Scali said:
I still think what I always think: the only given facts are that 3-channel DXT is the fastest possible way, and uncompressed textures give the highest quality.
I think you've missed my point here. You only seem to compare:

(1) 1024x1024 uncompressed and
(2) 1024x1024 compressed

But what happens if you compare:

(1) 1024x1024 uncompressed or
(2) 2048x2048 compressed

Which has the higher IQ? I think (2) should be better (at least when using 3Dc), don't you agree?

Please note that I'm not claiming that 3Dc would *always* be better.
 
Colourless said:
Uh huh... so? Those textures are 4 or 8 times larger than the 3Dc texture for the same data. I guess it depends on what is meant by 'uncompressed'.

You claimed that 3Dc gave more than 8 bits per channel. I merely pointed out that there are various uncompressed formats that can do the same.
And now you want to shift to the difference in size?
 
madshi said:
I think you've missed my point here. You only seem to compare:

(1) 1024x1024 uncompressed and
(2) 1024x1024 compressed

But what happens if you compare:

(1) 1024x1024 uncompressed or
(2) 2048x2048 compressed

Which has the higher IQ? I think (2) should be better (at least when using 3Dc), don't you agree?

Please note that I'm not claiming that 3Dc would *always* be better.

I think you missed my point, which is 'it depends'.
Also, I don't see why only the resolution should be taken into account. If you want to apply the unnormalized mipmap method, then (2) will only be possible with DXT, since 3Dc can't do better than 2:1 compression in this case.
So I think in that case the question would be:

(1) 1024x1024 uncompressed
(2) 2048x1024 3Dc
(3) 2048x1024 DXT with a 2-component approach
(4) 2048x2048 DXT with a 3-component approach

And the answer would still be 'it depends'.
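
For what it's worth, a quick back-of-the-envelope size comparison of those four options, under my own assumptions (two 8-bit channels for the uncompressed map, 8 bytes per 4x4 DXT1 block, 16 bytes per 4x4 block for DXT5 and 3Dc):

```python
def texture_bytes(width: int, height: int, bits_per_texel: float) -> float:
    """Raw texture size from resolution and bits per texel."""
    return width * height * bits_per_texel / 8

options = {
    "1024x1024 uncompressed (2x8-bit)": texture_bytes(1024, 1024, 16),
    "2048x1024 3Dc":                    texture_bytes(2048, 1024, 8),
    "2048x1024 DXT5, 2-component":      texture_bytes(2048, 1024, 8),
    "2048x2048 DXT1, 3-component":      texture_bytes(2048, 2048, 4),
}
for name, size in options.items():
    print(f"{name}: {size / 2**20:.1f} MB")
# All four land at 2.0 MB, which is why the answer keeps coming back to
# "it depends" on the source art and on the quality each scheme preserves.
```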
 