Why 3Dc may not always be a better solution

One thing Scali hasn't shown in the past is a bias against ATI and for NV; I think whoever said he is biased towards himself is much more accurate.
 
DaveBaumann said:
I had already told you - to which you had no objection at the time - that at the time I was only considering the use of compressed normal mapping for character detail.

You literally said that 3Dc was never lower quality than DXT5 - and that was in RESPONSE to this particular case where DXT5 can give higher quality.
 
Because from the compression artifacts point of view it shouldn't be, as it was designed to get over the limitations of the block-based compression scheme of S3TC. As has been pointed out, there are evidently other things that can be done to circumvent the other aliasing caused by its two-component nature, something that you yourself agreed with, even suggesting that 3Dc may be an even better option in the future.
 
DaveBaumann said:
Because from the compression artifacts point of view it shouldn't be, as it was designed to get over the limitations of the block-based compression scheme of S3TC.

Yes, but I was obviously not talking about the compression artifacts. Either you didn't understand this, or your faith in what ATi told you was strong enough that you didn't think about it for yourself.

As has been pointed out, there are evidently other things that can be done to circumvent the other aliasing caused by its two-component nature, something that you yourself agreed with, even suggesting that 3Dc may be an even better option in the future.

Yes, but that still means that 3Dc may not always be faster or look better in current applications, because as has been said, with 3Dc some things would require more work.
So that doesn't change anything about the validity of ATi's statement.
Since you didn't respond to any of the comments I had about Richard Huddy's statement, it seems that you agree.
 
Shock horror, there are some confusing terms in real-time graphics!!!

To be pedantic, the nVIDIA example is NOT a normal map. A normal by definition always has normalised length. A normal is defined on the unit sphere.

Just so we are clear, a true normal map can always be defined using 2 components (in polar coordinates with r=1), or in 2 components + 1 bit (in cartesian coordinates). If you can assume your normals are always in a single hemisphere (i.e. 99.99% of tangent space representations) then true normal maps can be expressed perfectly in 2 components.
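A minimal sketch of that hemisphere case (my own illustration, not from the thread; it assumes tangent space, so z >= 0):

Code:
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Rebuild a unit-length tangent-space normal from the two stored
   components, assuming the normal lies in the z >= 0 hemisphere. */
vec3 decode_normal_xy(float x, float y)
{
    vec3 n;
    n.x = x;
    n.y = y;
    n.z = sqrtf(fmaxf(0.0f, 1.0f - x * x - y * y));
    return n;
}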

If however you allow general vectors (unnormalised 'normals'), then you need at least 3 components (in either polar or cartesian coordinates).

nVIDIA's 'trick' is to encode an error term into the overspecification provided by a 3-component normal. ATI's 'trick' was to reduce the coverage to the minimum required (in fact 1 bit too low for general normal maps). PowerVR's trick was correct ;-) (DC used polar coordinates).

Also 3 components are enough to store a normal and a displacement, as long as you are not using the nVIDIA trick.

You can say both vendors looked at the problem in different ways. NVIDIA decided to see if they could use the wasted data in a normal map in a good way. ATI decided to reduce the size.
 
Yes, but that still means that 3Dc may not always be faster or look better in current applications, because as has been said, with 3Dc some things would require more work.

It doesn't negate the statement since it depends on the conditions it's used in - it can always look at least as good or better, because it will remove the fundamental issue of block-based compression and there are other ways to circumvent the aliasing.

Scali said:
Since you didn't respond to any of the comments I had about Richard Huddy's statement, it seems that you agree.

Do not assume anything; you've already done far too much of that.
 
DaveBaumann said:
It doesn't negate the statement since it depends on the conditions it's used in - it can always look at least as good or better, because it will remove the fundamental issue of block-based compression and there are other ways to circumvent the aliasing.

I will try to explain it once more: in some (all?) cases, the blocky compression artifacts are less noticeable than the aliasing caused by normalized mipmaps. So if the most noticeable aliasing is not reduced with method A, but is reduced with method B, I don't see how method A can possibly look better than method B.
As for other ways to circumvent the aliasing... Since they require extra work, I don't see how they could improve performance.
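To illustrate the point with some made-up numbers (a sketch, not taken from any actual normal map): averaging two opposing bumpy-surface normals in a mip level shortens the vector, which naturally damps the lighting; re-deriving z from the two stored components renormalizes it and throws that damping away.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Two neighbouring normals on a bumpy surface, tilted opposite ways. */
    float ax = 0.6f, ay = 0.0f, az = 0.8f;
    float bx = -0.6f, by = 0.0f, bz = 0.8f;

    /* Mip-level average, component-wise, as the texture filter would do it. */
    float mx = 0.5f * (ax + bx);                      /* 0.0 */
    float my = 0.5f * (ay + by);                      /* 0.0 */
    float mz = 0.5f * (az + bz);                      /* 0.8 */
    printf("averaged length = %.2f\n",
           sqrtf(mx * mx + my * my + mz * mz));       /* 0.80: shortened, specular is damped */

    /* A two-component format keeps only mx, my and re-derives z: */
    float rz = sqrtf(fmaxf(0.0f, 1.0f - mx * mx - my * my));
    printf("reconstructed length = %.2f\n",
           sqrtf(mx * mx + my * my + rz * rz));       /* 1.00: unit length again, damping lost */
    return 0;
}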
 
Scali said:
DeanoC said:
To be pedantic, the nVIDIA example is NOT a normal map. A normal by definition always has normalised length. A normal is defined on the unit sphere.

Wrong, see http://mathworld.wolfram.com/NormalVector.html as pointed out before.
So the rest is as much grasping-at-straws as FUDie tried earlier.

And mathworld is often wrong! Read its article on vectors and the glib way it claims they are rank 1 tensors. That's only partially true; vectors are rank 1 tensors in the same way as sqrt(-1) = 1 (which hopefully most people know isn't true).

Now in the normal case, I could be wrong and they could be right, but I know of many definitions that use the unit sphere to define it.

To help my case I'll pick a random website
http://csep1.phy.ornl.gov/pt/node5.html

Notice N_L is defined as being divided by the length of the vector (hence on the unit sphere).

Certainly most use in 3D graphics has been normals being defined on the unit sphere. OpenGL and Direct3D both use this definition.
 
Scali said:
As for other ways to circumvent the aliasing... Since they require extra work, I don't see how they could improve performance.

But it can be used to improve the quality, since we won't necessarily have aliasing and the compression artifacts will also be further reduced. I didn't say that it would always improve quality and improve performance at the same time; in fact, the discussion was not about improving performance, since DXT5 was already in place and we know that 3Dc requires an extra instruction anyway.
 
DeanoC said:
And mathworld is often wrong!

Not in this particular case though, pick any random math book and you'll find the same definition.

Certainly most use in 3D graphics has been normals being defined on the unit sphere. OpenGL and Direct3D both use this definition.

I disagree. OpenGL and Direct3D allow you to use normal vectors of any size. Whether they have any meaning depends on the situation...

D3DRS_NORMALIZENORMALS
TRUE to enable automatic normalization of vertex normals, or FALSE to disable it. The default value is FALSE. Enabling this feature causes the system to normalize the vertex normals for vertices after transforming them to camera space, which can be computationally time-consuming.

GL_NORMALIZE If enabled, normal vectors specified with glNormal are scaled to unit length after transformation. See glNormal.
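Both of those are opt-in states; a minimal sketch of how they would be enabled (assuming a current GL context and a valid IDirect3DDevice9* called device):

Code:
#include <windows.h>
#include <GL/gl.h>
#include <d3d9.h>

void enable_normal_rescaling(IDirect3DDevice9 *device)
{
    /* OpenGL: renormalize vertex normals after transformation. */
    glEnable(GL_NORMALIZE);

    /* Direct3D 9 equivalent (off by default, as quoted above). */
    IDirect3DDevice9_SetRenderState(device, D3DRS_NORMALIZENORMALS, TRUE);
}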
 
DaveBaumann said:
But it can be used to improve the quality, since we won't necessarily have aliasing and the compression artifacts will also be further reduced.

Do you think there's aliasing in... say... 3DMark05?
 
Scali said:
DeanoC said:
And mathworld is often wrong!
I disagree. OpenGL and Direct3D allow you to use normal vectors of any size. Whether they have any meaning depends on the situation...

I'll concede on this one; after checking the OpenGL specification, you're right.

The particular bit that convinced me is:
Note that if the normals sent to GL were unit length and the model-view matrix
uniformly scales space, then rescale makes the transformed normals unit length.
It clearly says IF the normals were unit length, which implies they could be un-normalised.

Looks like the definition I was taught (from my uni math course) was wrong.

Learn something new everyday...
 
I may regret this post... but Scali tempted me enough with BS that I'll take the plunge.

3Dc fundamentally has nothing to do with normalized or unnormalized normal maps... it's just a cute 2-component data format. It just happens that 2-component normalized normal map representations can take advantage of this for quality and perf gains.

The rest of the discussion about artifacts of mipmapping and de-normalization has nothing to do with 3Dc. Let's cut the BS here... they are just the natural result of one 3D graphics hack being built on top of other hacks, and more hacks are being proposed on top of these hacks to reduce the artifacts of those hacks. Note that in the case of Doom 3, Carmack chose to take the hack-hit/tradeoff on close-up objects vs. aliasing on far planes.

Regarding potential perf gains in 3DMark05: note that to see perf gains you may not need all normal maps compressed. Just compressing the key ones that are used most, or that strategically hit your texture bandwidth, can give a substantial perf boost - say, for example, the normal map used on the water.

And Scali, please stop mixing up marketing and technology... we are in the technology forum here. Good marketing has always been about amplifying the truth; developers know what 3Dc is about and when they can get good leverage from it.
 
croc_mak said:
Regarding potential perf gains in 3DMark05: note that to see perf gains you may not need all normal maps compressed. Just compressing the key ones that are used most, or that strategically hit your texture bandwidth, can give a substantial perf boost - say, for example, the normal map used on the water.

Is the water aliased?

developers know what 3Dc is about and when they can get good leverage from it

DaveBaumann and Reverend are not developers.
 
Would using two 3Dc textures (one containing the x and y components of the normal map and the other containing the z and the specular value -- or some other useful data) be worthwhile? It would result in the denormalized vectors required by the technique described in the paper while retaining the high precision that 3Dc offers. The two big issues, of course, are the creation of the texture pair and the possibility that the two textures may be bigger than a single 32-bit RGBA texture (I don't know how well 3Dc compresses).
 
Ostsol said:
Would using two 3Dc textures (one containing the x and y components of the normal map and the other containing the z and the specular value -- or some other useful data) be worthwhile? It would result in the denormalized vectors required by the technique described in the paper while retaining the high precision that 3Dc offers. The two big issues, of course, are the creation of the texture pair and the possibility that the two textures may be bigger than a single 32-bit RGBA texture (I don't know how well 3Dc compresses).

See here for an explanation: http://www.ati.com/products/radeonx800/3DcWhitePaper.pdf
You can get 4:1 compression on 32 bit textures, like DXT compression.
So while 3Dc would still be 2:1 compared to uncompressed textures, it would not be able to win over DXT compression when there are more textures required.
Effectively 3Dc is just a generic 2-channel compression format with a 2:1 ratio, since only two of the channels are actually stored. The 4:1 figure compared to 32-bit RGBA textures is a bit misleading, since half of the information is discarded rather than compressed. In the case of normalized normal maps, half of this information can be discarded, since only 3 of the 4 channels are used (assuming the 4th channel doesn't store additional info like an occlusion term or such), and the 3rd component can be derived mathematically from the first two.
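Putting rough numbers on that, per 4x4 texel block (the block sizes come from the DXT and 3Dc format definitions; the arithmetic is just mine):

Code:
#include <stdio.h>

int main(void)
{
    const int texels = 4 * 4;          /* texels per compression block */

    int rgba8 = texels * 4;            /* 64 bytes: uncompressed 32-bit RGBA        */
    int rg8   = texels * 2;            /* 32 bytes: uncompressed two 8-bit channels */
    int dxt5  = 16;                    /* 16 bytes per 4x4 DXT5 block               */
    int tdc   = 16;                    /* 16 bytes: two 8-byte single-channel 3Dc blocks */

    printf("3Dc  vs RGBA8: %d:1 (but two channels were thrown away)\n", rgba8 / tdc);
    printf("3Dc  vs RG8:   %d:1 (the 'honest' two-channel ratio)\n",    rg8 / tdc);
    printf("DXT5 vs RGBA8: %d:1 (keeps all four channels)\n",           rgba8 / dxt5);
    return 0;
}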
 
Ostsol said:
Would using two 3Dc textures (one containing the x and y components of the normal map and the other containing the z and the specular value -- or some other useful data) be worthwhile? It would result in the denormalized vectors required by the technique described in the paper while retaining the high precision that 3Dc offers. The two big issues, of course, are the creation of the texture pair and the possibility that the two textures may be bigger than a single 32-bit RGBA texture (I don't know how well 3Dc compresses).

Yes it would; a 4-component texture (stacked out of two 3Dc textures) would be enough to store a displacement (for parallax mapping), an error term for nVIDIA normal fudging, and two channels for a tangent-space normal (in fact it would also do object space with a bit of packing). And it would still be 50% smaller than a 32-bit RGBA texture.

Also texture stacking (when we can use it) will make this very easy and not waste an extra texture sampler.
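A rough sketch of how the four values might be unpacked per texel from the two stacked 2-channel textures (the channel assignment is my own guess at the layout described above):

Code:
typedef struct { float r, g; } texel2;   /* one filtered sample from a 3Dc texture */

typedef struct {
    float nx, ny;          /* tangent-space normal, x and y components    */
    float displacement;    /* height value for parallax mapping           */
    float error_term;      /* length/error term for nVIDIA-style fudging  */
} surface_sample;

/* Combine samples from the two stacked 3Dc textures into the four values. */
surface_sample unpack(texel2 normal_xy, texel2 disp_err)
{
    surface_sample s;
    s.nx = normal_xy.r * 2.0f - 1.0f;    /* expand [0,1] to [-1,1] */
    s.ny = normal_xy.g * 2.0f - 1.0f;
    s.displacement = disp_err.r;
    s.error_term   = disp_err.g;
    return s;
}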
 
There's a difference between normal vector and unit normal. A normal vector just means that N dot V = 0. Normalized Vector might be the proper term, but I think this is 3D graphics nomenclature. When I was working on my B.S. in Math, I don't recall normal vector implying unit length, although it was the case, and frequently desirable.
 
Scali said:
developers know what 3Dc is about and when they can get good leverage from it

DaveBaumann and Reverend are not developers.

WTF?! Where did that come from?! I personally haven't said much about 3Dc (personally, I think it is---... wait, I don't want to get into it when Scali is around!). If you want to bash Dave, go right ahead !! 8) :LOL: :LOL:
 