Why 3Dc may not always be a better solution

Scali

Regarding the exclusion of 3Dc from 3DMark05, there has been some discussion about 3Dc being used in many future titles.
ATi boasts that 3Dc is faster and gives better quality than other solutions.
It seems that most people take ATi's word at face value... however, ATi is not telling the whole story.

3Dc is faster than non-compressed normalmaps, no question about it, since it reduces bandwidth requirements at the cost of some extra instructions.
However, DXT5-compression can also be used on normalmaps, which gives the same bandwidth reduction as 3Dc. Whether it is slower than 3Dc depends on the implementation. In Humus' 3Dc demo, he renormalizes the DXT5 samples, in which case 3Dc is at a slight advantage. If you choose not to renormalize, however, chances are that DXT5 is no slower than 3Dc, and may even be faster.
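
To make the trade-off concrete, here is a minimal CPU-side sketch of the two decode paths, assuming a DXT5 map that keeps all three components (and renormalizes after sampling) versus a two-component layout where the third component is reconstructed. The packing and names are illustrative, not taken from Humus' demo.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Path A: three stored components; fix up compression error by renormalizing.
Vec3 decodeRenormalize(float r, float g, float b)
{
    Vec3 n = { r * 2.0f - 1.0f, g * 2.0f - 1.0f, b * 2.0f - 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;   // the extra per-sample work
    return n;
}

// Path B: two stored components; derive z from the unit-length assumption.
// (This is what 3Dc requires, and what a two-channel DXT5 layout can also do.)
Vec3 decodeReconstructZ(float cx, float cy)
{
    Vec3 n = { cx * 2.0f - 1.0f, cy * 2.0f - 1.0f, 0.0f };
    n.z = std::sqrt(std::max(0.0f, 1.0f - n.x * n.x - n.y * n.y));
    return n;
}
```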

3Dc could be considered higher quality than non-compressed normalmaps if we compare at the same memory footprint. 3Dc can store larger normalmaps in the same amount of space, so more detail can be encoded. However, 3Dc introduces slight compression artifacts, so it is slightly lower quality than non-compressed normalmaps of the same resolution.
3Dc has fewer compression artifacts than DXT5, so in that sense it is higher quality.
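
For reference, a quick footprint comparison. It assumes an RGBA8 uncompressed map (4 bytes per texel) and 128 bits per 4x4 block for both DXT5 and 3Dc; treat the numbers as a back-of-the-envelope sketch.

```cpp
// Rough footprint comparison for a 1024x1024 normal map (mip chain ignored).
// 4:1 compression vs. RGBA8, which is why a map twice the resolution in each
// axis fits in roughly the same space as the uncompressed one.
#include <cstdio>

int main()
{
    const unsigned w = 1024, h = 1024;
    const unsigned uncompressed   = w * h * 4;                // RGBA8
    const unsigned blockCompressed = (w / 4) * (h / 4) * 16;  // DXT5 or 3Dc
    std::printf("RGBA8:      %u bytes\n", uncompressed);      // 4194304
    std::printf("DXT5 / 3Dc: %u bytes\n", blockCompressed);   // 1048576
    return 0;
}
```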

However, normalmaps have an entirely different problem, which is related to mipmapping. Most people are probably familiar with the 'sparkling' of specular highlights when rough surfaces are viewed from a distance. Multisampling AA doesn't help to solve this problem.
A possible solution is to unnormalize the normals in the mipmaps.
By varying the length of the normals, the intensity of the highlights can be controlled, and sparkling can be reduced.
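
Generating such mipmaps is straightforward: filter the normals and simply skip the renormalization step. A minimal sketch, with illustrative names and layout:

```cpp
// Build the next mip level of a normal map by averaging 2x2 normals and *not*
// renormalizing. Where the four normals disagree, the average comes out shorter
// than 1, and that shortened length is what the lighting code can use to tone
// down the highlight.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> downsampleNoRenorm(const std::vector<Vec3>& src,
                                     std::size_t w, std::size_t h)
{
    std::vector<Vec3> dst((w / 2) * (h / 2));
    for (std::size_t y = 0; y < h / 2; ++y)
        for (std::size_t x = 0; x < w / 2; ++x)
        {
            const Vec3& a = src[(2 * y)     * w + 2 * x];
            const Vec3& b = src[(2 * y)     * w + 2 * x + 1];
            const Vec3& c = src[(2 * y + 1) * w + 2 * x];
            const Vec3& d = src[(2 * y + 1) * w + 2 * x + 1];
            Vec3 m = { (a.x + b.x + c.x + d.x) * 0.25f,
                       (a.y + b.y + c.y + d.y) * 0.25f,
                       (a.z + b.z + c.z + d.z) * 0.25f };
            dst[y * (w / 2) + x] = m;   // length <= 1, deliberately kept as-is
        }
    return dst;
}
```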

Here is a paper by NVIDIA (yes, ironically, but it was the best I could find, and it doesn't seem to be biased in any way), which demonstrates a method of unnormalizing normals and using it to antialias the lighting: http://developer.nvidia.com/object/mipmapping_normal_maps.html
Carmack has mentioned in his keynote that his next engine will use a technique such as this one.

Now, here is the catch with 3Dc: the format only stores two components, since the third component of a unit-length vector can be derived from the first two with a simple formula. This means that 3Dc can only be used for normalized vectors, and an antialiasing method such as the one in the paper mentioned earlier cannot be implemented with 3Dc.
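
A short sketch of why the length information is unavoidably lost; the decode below is the standard two-component reconstruction, written out here purely for illustration:

```cpp
// Only x and y are stored, and z is rebuilt under the unit-length assumption,
// so whatever you decode is a unit vector. A deliberately shortened normal (as
// the mipmapping trick requires) has its length thrown away.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 decode3Dc(float x, float y)   // x, y in [-1, 1] after range expansion
{
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
    return { x, y, z };            // |result| == 1 by construction
}
```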

Since ATi fails to point this out, I thought I would.
 
Scali said:
By varying the length of the normals, the intensity of the highlights can be controlled, and sparkling can be reduced.
More specifically, the spread of the highlight; IIRC that paper uses the denormalisation as an estimation of directional variance -- the exponent is reduced based on this.
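
As a rough illustration of that idea, here is a Toksvig-style exponent adjustment driven by the length of the filtered normal. The exact formulation should be checked against the paper; treat this as a sketch of the approach, not a transcription of it.

```cpp
// The shorter the averaged (unnormalized) normal, the more the underlying
// normals disagree, so the specular exponent is reduced to widen and dim the
// highlight.
#include <cmath>

float adjustedSpecularPower(float nx, float ny, float nz, float power)
{
    float len = std::sqrt(nx * nx + ny * ny + nz * nz); // <= 1 after filtering
    float ft  = len / (len + power * (1.0f - len));     // "Toksvig factor"
    return ft * power;                                  // reduced exponent
}
```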

Scali said:
Since ATi fails to point this out, I thought I would.
Yes I came to the same conclusions a while ago. In other news, 3 > 2.
 
SteveHill said:
More specifically, the spread of the highlight; IIRC that paper uses the denormalisation as an estimation of directional variance -- the exponent is reduced based on this.

That is one possible implementation yes. There are others. I was not aiming at the implementation in the paper specifically, so I deliberately didn't get more specific.

Yes I came to the same conclusions a while ago. In other news, 3 > 2.

Good for you, you are not part of the target-audience of this thread.
And you can check the smart remarks at the door, thank you.
 
Scali said:
However, DXT5-compression can also be used on normalmaps, which gives the same bandwidth reduction as 3Dc.

For 512x512 and smaller, DXT5 produces rubbish for normalmaps. Take my word for it. It's less of a problem for 1024 and up, though.
 
Scali said:
That is one possible implementation yes. There are others. I was not aiming at the implementation in the paper specifically, so I deliberately didn't get more specific.
Right, my comment was for the benefit of the "target audience" who won't have read the aforementioned paper or related ones (and those that I've come across take a similar exponent adjustment approach).

It's not an issue of intensity per se either; you wouldn't get very good results just scaling (scali-ing?) the intensity by the length of the normal for instance.

Scali said:
Good for you, you are not part of the target-audience of this thread.
And you can check the smart remarks at the door, thank you.
You're no fun any more.
 
SteveHill said:
Right, my comment was for the benefit of the "target audience" who won't have read the aforementioned paper or related ones (and those that I've come across take a similar exponent adjustment approach).

Since the paper was linked directly, I don't think this comment really added anything.

It's not an issue of intensity per se either; you wouldn't get very good results just scaling (scali-ing?) the intensity by the length of the normal for instance.

The aliasing is an issue of the intensity of nearby pixels varying greatly (mostly because of the specular highlights, of course).
This variation is what you need to control in some way. That is what I meant to say.
If you're only doing diffuse lighting, I suppose the scaling of the intensity wouldn't even be such a bad solution. It would be free at any rate.
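
The "free" part does fall out of the math for diffuse: the Lambert term scales with the length of the normal, so a shortened, filtered normal darkens rough areas with no extra shader work. A minimal sketch:

```cpp
// With an unnormalized (filtered) normal N, the Lambert term N.L equals
// |N| * cos(theta), so rough areas simply come out darker at no extra cost --
// no renormalization, no extra instructions.
struct Vec3 { float x, y, z; };

float diffuse(const Vec3& n, const Vec3& l)   // l assumed unit length
{
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return d > 0.0f ? d : 0.0f;               // implicitly scaled by |n|
}
```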
 
Scali said:
Good for you, you are not part of the target-audience of this thread.
And you can check the smart remarks at the door, thank you.

You have no right to tell anyone here what they can or cannot do.
 
Scali said:
Since the paper was linked directly, I don't think this comment really added anything.
I thought it was a reasonable summary of the paper's main insight, for the more leisurely reader.

Scali said:
If you're only doing diffuse lighting, I suppose the scaling of the intensity wouldn't even be such a bad solution. It would be free at any rate.
Aliasing is far less of a problem in that case.
 
SteveHill said:
I thought it was a reasonable summary of the paper's main insight, for the more leisurely reader.

A summary (with pretty pictures) could also be found on the page I linked to. In other news, 3 > 2.

Aliasing is far less of a problem in that case.

Indeed, but the point of having unnormalized normals with diffuse lighting is to retain the roughness of the surface in some form; in this case, as darker shades.
 
Scali said:
Now, here is the catch with 3Dc: the format only stores two components, since the third component of a unit-length vector can be derived from the first two with a simple formula. This means that 3Dc can only be used for normalized vectors, and an antialiasing method such as the one in the paper mentioned earlier cannot be implemented with 3Dc.
Good for you! Just one problem: What's the definition of normal map? A normal vector is a vector with a particular direction and a length of 1. Obviously, if you want non-unit length vectors then you don't want a normal map. I believe that ATI has only recommended 3Dc for normal maps.

Good work pointing out the incredibly obvious.

-FUDie

P.S. I've taken the liberty of pointing out that you've changed the topic from normal maps to something completely different.
 
Of course 3Dc isn't always the best solution. But it is often enough to be well worth the extra silicon, which shouldn't be much if you already support DXT5. 3Dc isn't only for normal maps either; you can use it for any two-channel data.

Regarding the normal map mipmapping paper, I've made a number of serious attempts at it, but found that the improvement wasn't enough to be worth releasing a demo on. Essentially, if there was aliasing without the technique, there was aliasing with it too, just slightly toned down. Not really worth the pretty large slowdown compared to regular specular computation.
 
Scali said:
Since ATi fails to point this out, I thought I would.
If you actually look at ATI's developer resources you would notice that in our original paper on potential methods of normal map compression we do talk about making use of this denormalisation to avoid aliasing, so saying that we fail to point it out is a bit disingenuous.

http://www.ati.com/developer/NormalMapCompression.pdf

Whether denormalising your normal maps is the best solution to the problem of normal aliasing is another story.

Anyway, since you apparently couldn't be bothered to do your research and instead chose to believe that we were just ignoring this aspect of normal compression I thought I'd point it out.
 
FUDie said:
Good for you! Just one problem: What's the definition of normal map? A normal vector is a vector with a particular direction and a length of 1. Obviously, if you want non-unit length vectors then you don't want a normal map. I believe that ATI has only recommended 3Dc for normal maps.

From http://mathworld.wolfram.com/NormalVector.html:

The normal vector, often simply called the "normal," to a surface is a vector perpendicular to it. Often, the normal unit vector is desired, which is sometimes known as the "unit normal." The terms "normal vector" and "normalized vector" should not be confused, especially since unit norm vectors might be called "normalized normal vectors" without redundancy.

Is that the best you can do?
 
Scali said:
A summary (with pretty pictures) could also be found on the page I linked to. In other news, 3 > 2.
I'm flattered by your imitation. Had you discovered the art of succinctness earlier, we might have been spared several paragraphs.
 
andypski said:
If you actually look at ATI's developer resources you would notice that in our original paper on potential methods of normal map compression we do talk about making use of this denormalisation to avoid aliasing, so saying that we fail to point it out is a bit disingenuous.

http://www.ati.com/developer/NormalMapCompression.pdf

This paper appears to predate 3Dc, and as such does not cover 3Dc or its uses at all.

I am however concerned about statements such as this, from http://www.hexus.net/content/reviews/review.php?dXJsX3Jldmlld19JRD04NzgmdXJsX3BhZ2U9MTA=:

We're disappointed to see that Futuremark didn't choose to add support for 3Dc texture compression which would have demonstrated an even greater advantage for ATI hardware and technology.

This statement clearly does not tell the whole truth about 3Dc. And given the reactions on forums such as these, it seems that many people take ATi's word for 3Dc being the best normalmapping solution and aren't aware of the possible problems with the method. I am merely pointing out these problems for those who are not aware of them, since it doesn't seem like ATi is actively doing so.

Anyway, since you apparently couldn't be bothered to do your research and instead chose to believe that we were just ignoring this aspect of normal compression I thought I'd point it out.

This was uncalled for. I can understand some of the people here just venting... but if you want to be taken seriously, I suggest you stop making such rude remarks. It's not my fault that your PR department is not informing the public properly.
 
Scali said:
FUDie said:
Good for you! Just one problem: What's the definition of normal map? A normal vector is a vector with a particular direction and a length of 1. Obviously, if you want non-unit length vectors then you don't want a normal map. I believe that ATI has only recommended 3Dc for normal maps.

From http://mathworld.wolfram.com/NormalVector.html:

The normal vector, often simply called the "normal," to a surface is a vector perpendicular to it. Often, the normal unit vector is desired, which is sometimes known as the "unit normal." The terms "normal vector" and "normalized vector" should not be confused, especially since unit norm vectors might be called "normalized normal vectors" without redundancy.
Is that the best you can do?
Thanks, now you've shown that your own post is contradictory. You claim that ATI hasn't pointed out that 3Dc only works on "normalized normal maps", but then you say that "NVIDIA has shown a way to work on denormalized normal maps". You're the one putting words into ATI's mouth. Obviously if the third component is derived from the other two, you're going to get a unit normal since you assume the length is 1. Anyone working with such a format would realize that unit normals are the only outcome.

It's clear you have an agenda to attack the usefulness of 3Dc.

-FUDie

P.S. To make it clearer: throughout your whole post you mention "normalmaps" (sic), then when it comes to NVIDIA, you mention "denormalized normalmaps". Seems clear to me that when you refer to ATI you are talking about "normalized normal maps", since you make a point of referring to "denormalized normal maps" later. You're making up an argument just to find fault where there is none.
 
Scali said:
I am however concerned about statements such as this, from http://www.hexus.net/content/reviews/review.php?dXJsX3Jldmlld19JRD04NzgmdXJsX3BhZ2U9MTA=:
We're disappointed to see that Futuremark didn't choose to add support for 3Dc texture compression which would have demonstrated an even greater advantage for ATI hardware and technology.
This statement clearly does not tell the whole truth about 3Dc. And given the reactions on forums such as these, it seems that many people take ATi's word for 3Dc being the best normalmapping solution and aren't aware of the possible problems with the method. I am merely pointing out these problems for those who are not aware of them, since it doesn't seem like ATi is actively doing so.
Since you seem to have overlooked it, I'll point it out to you. One drawback of DXT5 is that it can require a swizzle. If your hardware doesn't support arbitrary swizzles, this can hurt performance. 3Dc doesn't suffer from this drawback. Your claims against ATI are wrong. You are wrong. QED.
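
For illustration, a sketch of the decode difference being referred to, assuming the common packing that stores the normal's x in alpha and y in green for DXT5; the packing and names here are assumptions for the example, not a statement about any particular hardware:

```cpp
// The DXT5 path needs the fetched components rearranged (a "swizzle") before z
// can be reconstructed, while a two-channel 3Dc fetch already delivers x and y
// in the first two channels. Without free arbitrary swizzles, that
// rearrangement costs an extra instruction.
struct Texel { float r, g, b, a; };
struct Vec2  { float x, y; };

Vec2 fromDXT5(const Texel& t) { return { t.a, t.g }; }  // swizzle: x comes from alpha
Vec2 from3Dc (const Texel& t) { return { t.r, t.g }; }  // straight through
```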

-FUDie
 
FUDie said:
Thanks, now you've shown that your own post is contradictory.

Oh really, how is that?

Obviously if the third component is derived from the other two, you're going to get a unit normal since you assume the length is 1. Anyone working with such a format would realize that unit normals are the only outcome.

Yes, but I am more concerned about the ramifications of using unit normals. After discussing this with DaveBaumann, it became clear that he had no idea about it. And from the reactions to 3Dc of many other people on this forum, it appeared that he wasn't alone.
So I decided to point it out.

It's clear you have an agenda to attack the usefulness of 3Dc.

I think one can only attack the usefulness of 3Dc if one modifies the hardware or drivers supporting it. I am merely pointing out that there are certain methods of normalmapping (which are implemented in current and future engines) that aren't compatible with 3Dc, so ATi promoting 3Dc for everything is a tad premature. It may make developers think twice about using 3Dc, but is that my fault? If there are better solutions than 3Dc in certain situations, why wouldn't people use them?
 
Scali said:
This paper appears to predate 3Dc, and as such does not cover 3Dc or its uses at all.
The paper covers differences between two-component normal representations and three-component ones. ATI2N is clearly a two-component normal representation - just because it's a compressed two-component representation doesn't make its characteristics magically different in some way from those of other two-component formats discussed in the paper. It just makes it much more efficient.

So I don't see that your statement that the paper predates 3Dc matters one iota - all that shows is that we were actively providing help and research for developers on this subject before 3Dc became public.

This statement clearly does not tell the whole truth about 3Dc. And given the reactions on forums such as these, it seems that many people take ATi's word for 3Dc being the best normalmapping solution and aren't aware of the possible problems with the method. I am merely pointing out these problems for those who are not aware of them, since it doesn't seem like ATi is actively doing so.

We can't be much more active than talking to developers and publishing white papers that help people tackle normal map compression, as well as openly discussing potential pitfalls of different normal map representations.

You can ask the developers themselves if they find our research and technologies useful - you will note that we provided assistance to ID in tackling normal compression for Doom3, for example.
 