DaveBaumann said:I had already told you - to which you had no objection at the time - that I was only considering the use of compressed normal mapping for character detail at the time.
DaveBaumann said:Because from a compression-artifacts point of view it shouldn't be, as it was designed to overcome the limitations of the block-based compression scheme of S3TC.
As has been pointed out, there are evidently other things that can be done to circumvent the aliasing caused by its two-component nature, something that you yourself agreed with and that even suggested 3Dc may be an even better option in the future.
DeanoC said:To be pedantic, the nVIDIA example is NOT a normal map. A normal by definition always has unit length. A normal is defined on the unit sphere.
Yes, but that still means that 3Dc may not always be faster or look better in current applications, because as has been said, with 3Dc some things would require more work.
Scali said:Since you didn't respond to any of the comments I had about Richard Huddy's statement, it seems that you agree.
DaveBaumann said:It doesn't negate the statement, since it depends on the conditions it's used in - it can always look at least as good or better because it removes the fundamental issue of block-based compression, and there are other ways to circumvent the aliasing.
Scali said:DeanoC said:To be pedantic, the nVIDIA example is NOT a normal map. A normal by definition always has unit length. A normal is defined on the unit sphere.
Wrong, see http://mathworld.wolfram.com/NormalVector.html as pointed out before.
So the rest is as much grasping-at-straws as FUDie tried earlier.
Scali said:As for other ways to circumvent the aliasing... Since they require extra work, I don't see how they could improve performance.
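The "extra work" under discussion comes from 3Dc's two-component nature: only x and y are stored, so the shader must reconstruct z on every fetch. A minimal sketch of that reconstruction (plain Python standing in for shader code; the clamp is an assumption to guard against compression error):

```python
import math

def reconstruct_z(x, y):
    """Reconstruct the z component of a unit normal from its x and y.

    Two-component formats like 3Dc store only x and y; z must be
    recomputed per texel, which is the extra per-fetch work discussed
    above. Components are assumed to already be in the [-1, 1] range.
    """
    # Clamp so compression error pushing x^2 + y^2 above 1 can't
    # produce a negative operand to sqrt.
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# Example: a unit normal tilted in x only.
z = reconstruct_z(0.6, 0.0)
print(round(z, 3))  # 0.8, since 0.6^2 + 0.8^2 = 1
```

This also shows why two-component storage forces normals onto the unit sphere: z is derived, not stored, so no denormalized length survives compression.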
DeanoC said:And mathworld is often wrong!
Certainly most usage in 3D graphics has normals defined on the unit sphere. OpenGL and Direct3D both use this definition.
D3DRS_NORMALIZENORMALS
TRUE to enable automatic normalization of vertex normals, or FALSE to disable it. The default value is FALSE. Enabling this feature causes the system to normalize the vertex normals for vertices after transforming them to camera space, which can be computationally time-consuming.
GL_NORMALIZE If enabled, normal vectors specified with glNormal are scaled to unit length after transformation. See glNormal.
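Numerically, both flags quoted above do the same thing: rescale each transformed normal to unit length. A minimal sketch of that operation (plain Python, not the driver's actual code path):

```python
import math

def normalize(v):
    """Scale a vector to unit length - the operation GL_NORMALIZE and
    D3DRS_NORMALIZENORMALS apply to vertex normals after transformation."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# A normal that a uniform scale in the model-view matrix has left at length 2.
n = (0.0, 0.0, 2.0)
print(normalize(n))  # (0.0, 0.0, 1.0)
```

The per-vertex square root and divide are why the D3D documentation warns the feature "can be computationally time-consuming".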
DaveBaumann said:But, it can be used to improve the quality since we won't necessarily have aliasing and the compression artifacts will also be further reduced.
Scali said:I disagree. OpenGL and Direct3D allow you to use normal vectors of any length. Whether they have any meaning depends on the situation...DeanoC said:And mathworld is often wrong!
It clearly says IF the normals were unit length, which implies they could be un-normalised.
Note that if the normals sent to GL were unit length and the model-view matrix uniformly scales space, then rescale makes the transformed normals unit length.
croc_mak said:Regarding potential perf gains in 3DMark05 - note that to see perf gains you may not need all normal maps compressed... just the key ones that are used most, or that strategically hit your texture bandwidth, can give a substantial perf boost - say, for example, the normal map used on water.
Developers know what 3Dc is about and where they get good leverage from it.
Ostsol said:Would using two 3Dc textures (one containing the x and y components of the normal-map and the other containing the z and the specular value -- or some other useful data) be worthwhile? It would result in the denormalized vectors required by the technique described in the paper while retaining the high precision that 3Dc offers. The two big issues, of course, are the creation of the texture pair and the possibility that the two textures may be bigger than a single 32 bit RGBA texture (I don't know how well 3Dc compresses).
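Ostsol's scheme can be sketched as follows. This is a toy illustration of the packing idea only, with made-up names and dict-based "textures" standing in for real 3Dc surfaces; because z is stored explicitly rather than derived, the reassembled vector need not have unit length, which is exactly the denormalization the paper's technique wants:

```python
def sample_pair(tex_xy, tex_zs, u, v):
    """Fetch both two-channel textures at the same texel and reassemble
    the (possibly denormalized) normal plus the specular value packed
    into the second texture's other channel."""
    x, y = tex_xy[(u, v)]
    z, specular = tex_zs[(u, v)]
    return (x, y, z), specular

# Toy "textures": dicts keyed by texel coordinate.
tex_xy = {(0, 0): (0.6, 0.0)}
tex_zs = {(0, 0): (0.79, 0.5)}  # z stored explicitly; length need not be 1

normal, spec = sample_pair(tex_xy, tex_zs, 0, 0)
print(normal, spec)  # (0.6, 0.0, 0.79) 0.5
```

The cost Ostsol notes is visible here: two fetches per pixel and two textures to author, against one RGBA texture for the uncompressed path.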
Scali said:Developers know what 3Dc is about and where they get good leverage from it.
DaveBaumann and Reverend are not developers.