Why 3Dc may not always be a better solution

So this whole post boils down to one reason why you feel 3Dc shouldn't have been used?

It seems to me that on ATI hardware that supports 3Dc you can get better quality with better performance (no swizzle) by using it on most textures.

To me this is the same as all forms of DXTC. So because there is no magical DXTC format that works for every type of texture, we shouldn't use it in any of the 3DMark tests?

Your jump in logic doesn't make much sense to me.

If 3Dc can be used on a majority of normal maps and increases performance, then why wasn't it used?

There are pictures of DST showing reduced image quality in 3DMark05, yet that was put in. Which means, as far as I know, there is no magical solution to all problems and we need more than one form of compression or shadow tech. But that doesn't mean the ones we have currently should be excluded.

I don't really see this thread as anything but a bash on 3Dc because it's not magic.
 
Actually, I think Scali brings up a very valid point. Going into the future, we're going to want to do things like parallax mapping and antialiasing of bump mapping. It doesn't look like 3Dc is amenable to either algorithm (parallax mapping requires a heightmap, which can be most easily stored in the alpha/w component of the normal map, and 3Dc, apparently, doesn't work well with non-normalized normal maps).
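
(To spell out the non-normalized point: 3Dc stores only the x and y components of the normal, so the shader has to rebuild z on the assumption that the normal has unit length. A rough sketch of that reconstruction, written in the same ARB-asm style as Doom 3's interaction.vfp, with the register names just for illustration:)

Code:
# rebuild z from a two-component normal, assuming |N| = 1:
#   z = sqrt(1 - x*x - y*y)
MAD	N.z, -N.x, N.x, 1;
MAD	N.z, -N.y, N.y, N.z;
RSQ	N.z, N.z;	# 1 / sqrt(1 - x*x - y*y)
RCP	N.z, N.z;	# invert again to get z itself

Once the normals are allowed to denormalize, that assumption no longer holds and z can't be recovered from x and y alone.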

From that viewpoint, 3Dc would be useful in the short term or for low-quality rendering. But in the long term or with higher quality rendering, you're going to want to use more normal map data per pixel to get a better image. This lends itself to a much more significant quality disparity between 3Dc and uncompressed than "standard" normal mapping would imply.
 
Well, I'm sure you could use the gradient instructions to detect when it is necessary to "blur" the normal map, but then you'd basically be doing the same thing as doing MIP mapping with the normal map, so why not just do that?

But I do have to mention that I don't currently understand exactly why you can't use two-component normal vectors (obtaining z by inference) for this algorithm, so I'm looking through the white paper.

Edit:
Ah, I see. This MIP mapping technique approximates the average of the specular lighting result as a function of the averaged normal you get from ordinary texture filtering. As it turns out, this approximation does not renormalize the filtered normal vector, and thus would return invalid results if one assumed a normalized normal vector (yeah, that sounds really strange...).
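
A quick bit of arithmetic (mine, not from the paper) shows why the length of the unrenormalized filtered normal is useful: averaging unit normals always gives something shorter than unit length, and the shortfall measures how much they disagree.

Code:
# two unit normals separated by an angle theta average to a vector of length cos(theta/2):
#   theta =  0 degrees -> length 1.000  (flat area: keep the sharp highlight)
#   theta = 60 degrees -> length 0.866  (somewhat bumpy: broaden the highlight a little)
#   theta = 90 degrees -> length 0.707  (very bumpy: broaden it a lot)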
 
martrox said:
There is no reasoning with Scali..... he has a specific anti ATI/pro nVidia agenda. It's a total waste of bandwidth to even respond to him. If you disagree with him he gets abusive. Think in terms of Radar1200 here. This isn't DC or even Chalnoth. :rolleyes:
I'm not seeing that. He's very pro himself, and anti everybody else though. Overly self-confident, coupled with a tolerance level equalling zero... He makes some valid points, it's just that he offends everybody while making them...

You really should lighten up Scali. :D
 
Scali - Leave the attitude outside of the forum. Consider that a warning.

Everyone else, I would caution you that if you correspond privately with Scali he will freely (mis)interpret your replies (or not) as he pleases and use that openly.
 
Scali said:
Here is a paper by NVIDIA (yes, ironically, but it was the best I could find, and it doesn't seem to be biased in any way), which demonstrates a method of unnormalizing normals and using it to antialias the lighting: http://developer.nvidia.com/object/mipmapping_normal_maps.html
Carmack has mentioned in his keynote that his next engine will use a technique such as this one.
You have this backwards. John Carmack used a method similar to this for Doom3, but the keynote implies he will NOT be using it for future engines as he has found renormalisation of normal maps to be a quality gain.

John Carmack said:
Another thing that turned out to be a really cheap and effective quality improvement is doing renormalization of the normal maps before it does all the lighting calculations.
 
Doom 3 itself does basic normal mapping with no attempt at MIP mapping such as in that nVidia paper. So I think you'd have to post a whole lot more of the context of that quote to convince me, Dio.
 
I haven't read the nvidia paper, so I don't know if the Doom3 method matches that.

On one thing I am quite certain: for Doom3, John rejected techniques that required renormalisation in the shader because he preferred to allow his normals to denormalise in the mip maps. Please do not doubt me on this.

http://www.antoniocheca.com/blog/archives/2004/08/john_carmack_ke_1.html#more
You will find I have not quoted him out of context, although it is possible that the transcript is bad or that I have misunderstood his intention. I should also have quoted the end of the same paragraph though:

John Carmack said:
That exacerbates the aliasing problem with the in-surface specular highlights and such, but it makes a lot of surfaces look a whole lot better, where you can go up to surfaces that may have been, when you walk up to them, they would have been just more blurry smears right now, and with renormalization you can actually see a little one unit wide normal map divot becomes a nice corner rounded indentation in the surface.
 
DemoCoder said:
I resent the fact that you guys don't think I'm personally abusive enough! :devilish:

Welll.....you are only abusive to me....... and for that you get a gold star! ;)
 
<sigh> This is just Scali trying to continue with his 3DMark thread. He's trying to convince us that 3DC has very little value, so it validates his standpoint that 3DMark05 uses DSM but excludes 3DC.

Once again, he completely misses the point. This isn't about DSM being more "valuable" than 3DC, and thus deserving of its use in 3DMark05. It's about the fact that both 3DC and DSM are linked to only one IHV, not part of the DX9 spec, and being used in future games - yet one is included in 3DMark05 and one is not.

Scali is just trying to continue his crusade to prove that 3DMark05 including DSM was a valid choice, and that excluding 3DC was also valid.
 
Dio said:
I haven't read the nvidia paper, so I don't know if the Doom3 method matches that.

On one thing I am quite certain: for Doom3, John rejected techniques that required renormalisation in the shader because he preferred to allow his normals to denormalise in the mip maps. Please do not doubt me on this.

http://www.antoniocheca.com/blog/archives/2004/08/john_carmack_ke_1.html#more
You will find I have not quoted him out of context, although it is possible that the transcript is bad or that I have misunderstood his intention. I should also have quoted the end of the same paragraph though:

John Carmack said:
That exacerbates the aliasing problem with the in-surface specular highlights and such, but it makes a lot of surfaces look a whole lot better, where you can go up to surfaces that may have been, when you walk up to them, they would have been just more blurry smears right now, and with renormalization you can actually see a little one unit wide normal map divot becomes a nice corner rounded indentation in the surface.
Right, so, from what I can tell this is basically describing the replacement of this code:
DP3 specular, specular, localNormal

With this:
NRM localNormal, localNormal
DP3 specular, specular, localNormal

Doom 3 does the first. The nVidia paper goes a step beyond this: it uses the length of the interpolated normal vector to determine how smudged out the reflection should be. The idea is that if there is not much variance, the vectors that enter the average are all about the same, so the filtered normal has a length of about one. If the vectors are significantly different, the filtered normal comes out much shorter, which indicates that "smudging" should be done.
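
If I'm reading the paper right (so take the exact form with a grain of salt), the adjustment works out to something like this, where s is the specular exponent, len is the length of the filtered, unnormalized normal, and dot is the usual specular dot product against that normal:

Code:
# f    = len / (len + s*(1 - len))       close to 1 on flat areas, small on bumpy ones
# spec = (1 + f*s) / (1 + s) * (dot / len)^(f*s)

So on a perfectly flat area (len = 1, f = 1) it reduces to the ordinary power function, and on a noisy area the effective exponent drops and the highlight spreads out.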
 
Bouncing Zabaglione Bros. said:
Scali is just trying to continue his crusade to prove that 3DMark05 including DSM was a valid choice, and that excluding 3DC was also valid.
Well, if 3DMark05 uses the Mipmapping algorithm described in the nVidia whitepaper, then it can't use 3Dc. Similarly, if the program uses parallax mapping then it would be inefficient to use 3Dc. Not that I'm defending Scali being an ass, I'm just suggesting that there may have been a good reason.

On a side note, though, I think that once again, 3DMark05 is another useless benchmark from Futuremark. In particular, the massive vertex shader load is utterly unrealistic. Vertex shader loads in near future games will be low compared to this benchmark for the simple reason that games have to actually run on low-end hardware. I would tend to expect that the first batch of "DX9 required" games will limit themselves to the vertex performance available in the lower range of DX9 cards.

One may make the argument that 3DMark05 is looking further out, but by that time we should have the next iteration of DX, and thus more efficient ways of doing many of the things in that benchmark. For example, the firefly forest demo apparently used a shadow map rendered to a cube map. This rendering will apparently become much more efficient with the next iteration of DX (i.e. it should be much less vertex-limited).
 
Alrighty, just for kicks I went ahead and attempted to implement the technique described in that nVidia paper. Since I didn't add the new texture for the lookup (that could probably be done easily enough by replacing the essentially 1-D specular falloff lookup table with the 2-D one described in the paper, but I didn't bother), I implemented the math directly in the shader. It came to a hell of a lot of instructions, so I wouldn't recommend using this path for anything other than amusement (though it didn't actually seem to drop performance by all that much on my 6800...maybe it made good use of co-issue...).

Now, I think this is a valid implementation of the technique, but I'd welcome any proofreading if anybody's up to it. Note that it suffers from the same "white pixels" problem that Humus' tweak did at first, because it uses the same basic power function.

Here's what I did: I replaced this code in the interaction.vfp file:
Code:
# perform the specular bump mapping
DP3	specular, specular, localNormal;

# perform a dependent table read for the specular falloff
TEX	R1, specular, texture[6], 2D;

# modulate by the constant specular factor
MUL	R1, R1, program.env[1];

...with this code:
Code:
#n=length of localNormal
DP3	n.x, localNormal, localNormal;
RSQ	n.x, n.x;
RCP	n.x, n.x;

#Choose specular factor to be 10 (temps.y = s)
#f = n / (n + s*(1 - n)), with temps.x = 1
SUB	T1.x, temps.x, n.x;
MAD	T1.x, temps.y, T1.x, n.x;
RCP	T1.x, T1.x;
MUL	f.x, n.x, T1.x;

#normalize the dot product by the normal's length, raise it to the
#adjusted exponent f*s, then rescale by (1 + f*s) / (1 + s), with temps.z = 1 + s = 11
RCP	R1.x, n.x;
DP3	T1.x, localNormal, specular;
MUL	R1.x, T1.x, R1.x;
MUL	T1.x, f.x, temps.y;
POW	R1.x, R1.x, T1.x;
ADD	T1.x, 1, T1.x;
RCP	T2.x, temps.z;
MUL	T1.x, T1.x, T2.x;
MUL	R1, T1.xxxx, R1.xxxx;

And, additionally, I had to modify the variable definitions: I added n, f, T1, and T2 to the TEMP line, and a new parameter (the components are 1, the specular exponent s = 10, 1 + s = 11, and an unused 0):
PARAM temps = { 1, 10, 11, 0};

With limited testing, I found that I got similar output with either code, but in the short time I played I wasn't able to find a good place for testing the difference, i.e. I didn't find a place that shows significant aliasing in Doom 3's original code, so I'm open to any suggestions on a good place to compare this with the default.
 
Humus' trick lacked a _sat on the dp3, I suppose yours is the same.
(Doom3's own power table replaces the sat in the original shader iirc).
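
If I remember the ARB syntax right, that would just mean saturating the dot product before it goes into the power function, along these lines:

Code:
DP3_SAT	T1.x, localNormal, specular;	# clamp to [0, 1] so POW never sees a negative base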
 
Hrm, maybe that would help. I'll try it later (in Linux now....want to get some actual work done!).
 
Richard Huddy has mailed me about this discussion in relation to his comments at Hexus about 3DMark05:

Richard Huddy said:
I didn’t claim that ALL normal maps should ALWAYS be represented with 3Dc. I said we were disappointed that 3Dc wasn’t used.

It does seem likely that at least some normal maps would have been well represented with 3Dc in a vanilla kind of way.

It happens that I suspect that 3Dc could have been used throughout 3DMark05, but the fact that there was no technical dialog between FM and ATI made it a discussion that never happened.

I think they could have used it, simply because the distance attenuation of normals is something that’s easy to include in a pixel shader. If that’s all they needed, then I believe that it would have been easy to include it. If they had other technical reasons for not wanting to use it I was never given the chance to argue the case.

Because I’m travelling at present it’s not practical for me to get directly engaged in this dispute until Monday at the earliest.
 
Richard Huddy said:
I didn’t claim that ALL normal maps should ALWAYS be represented with 3Dc. I said we were disappointed that 3Dc wasn’t used.

Funny how people like DaveBaumann interpreted it that way then. That's my point. They fail to tell the whole story. The way the statement was presented was far bolder than it should have been.

Richard Huddy said:
It does seem likely that at least some normal maps would have been well represented with 3Dc in a vanilla kind of way.

Always the same assumption that 'some' normal maps can use 3Dc... But jumping from there to the conclusion that it "would have demonstrated an even greater advantage for ATI hardware and technology" is a bit rich.

Richard Huddy said:
I think they could have used it, simply because the distance attenuation of normals is something that’s easy to include in a pixel shader. If that’s all they needed, then I believe that it would have been easy to include it. If they had other technical reasons for not wanting to use it I was never given the chance to argue the case.

So basically he says ATi is making unqualified statements in public.
He *thinks* that it could have been used, and *believes* that it would have been easy to include, but the public statement did not show ANY of this doubt. It just proclaimed 3Dc giving "an even greater advantage" as the absolute truth.
 
Scali said:
Funny how people like DaveBaumann interpreted it that way then.

I had already told you - and you had no objection at the time - that I was only considering the use of compressed normal mapping for character detail.
 