Why 3Dc may not always be a better solution

FUDie said:
Since you seem to overlook it, I'll point it out to you. One drawback of DXT5 is that it can require a swizzle. If your hardware doesn't support arbitrary swizzles, this can hurt performance. 3Dc doesn't suffer from this drawback. Your claims against ATI are wrong. You are wrong. QED.

-FUDie

Nice logic there. Apparently that normal-vector thing *was* the best you could do ;)
 
Scali said:
This was uncalled for. I can understand some of the people here just venting... but if you want to be taken seriously, I suggest you stop making such rude remarks. It's not my fault that your PR department is not informing the public properly.
Rather less uncalled for than your unsubstantiated, and in fact refuted comment about our 'failure to point things out', I think.
 
FUDie, only if you don't have arbitrary swizzle, don't use the Z coordinate, or don't pre-swizzle your source data. I think Scali is simply pointing out that 3Dc has both pros and *cons*, not all pro and no negative aspects, as seems to be assumed in these forums. In other words, 3Dc gains you some things, but then limits some of the tricks you can perform in other ways.

3Dc maps could probably be antialiased in the pixel shader, but then again, that would probably be expensive compared to an extra swizzle.
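Since much of the thread hinges on it, here is what the reconstruction step looks like; a minimal sketch in plain C standing in for the equivalent pixel-shader math (the function name is illustrative, not from any shipping code). Both 3Dc and the two-channel DXT5 trick store only X and Y and derive Z:

[code]
#include <math.h>

/* x, y: the two stored components, already remapped to [-1, 1] */
void reconstruct_normal(float x, float y, float n[3])
{
    n[0] = x;
    n[1] = y;
    /* Z is derived on the assumption that the normal has unit length,
       so the reconstructed vector is always a unit normal -- which is
       why the length-based tricks discussed later in the thread cannot
       be fed from such a map. */
    float zsq = 1.0f - x * x - y * y;
    n[2] = sqrtf(zsq > 0.0f ? zsq : 0.0f);
}
[/code]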
 
andypski said:
Rather less uncalled for than your unsubstantiated, and in fact refuted comment about our 'failure to point things out', I think.

Apparently I hit a sore spot here. I stand by my words that the quote on hexus.net does not point out the disadvantages of 3Dc in any way.
You are trying to pull it out of context in a rather nasty way. I suggest you stop doing that, or else I'll trade in my Radeon for a GeForce :)
 
DemoCoder said:
3Dc maps could probably be antialiased in the pixel shader, but then again, that would probably be expensive compared to an extra swizzle.
While we're on the topic of expense, the implementation described in NVidia's paper requires a dependent texture read, which can be expensive. The whole point of Humus' shader edit in Doom 3 was to avoid the existing dependent texture read, because it is apparently quite costly on ATI hardware. Of course, that's just ATI hardware. NVidia seems to be much faster in this area.
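As a rough illustration of the trade-off being discussed -- a sketch of the general idea only, not Doom 3's actual shader -- replacing a dependent lookup-table read with straight arithmetic amounts to something like this in C terms:

[code]
#include <math.h>

/* Before: the specular term is fetched from a lookup texture indexed
   by a computed value -- a dependent texture read in the shader.
   After (the Humus-style edit): compute it directly with math. */
float specular_math(float n_dot_h, float exponent)
{
    return powf(fmaxf(n_dot_h, 0.0f), exponent);
}
[/code]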
 
DemoCoder said:
FUDie, only if you don't have arbitrary swizzle, don't use the Z coordinate, or don't pre-swizzle your source data. I think Scali is simply pointing out that 3Dc has both pros and *cons*, not all pro and no negative aspects, as seems to be assumed in these forums. In other words, 3Dc gains you some things, but then limits some of the tricks you can perform in other ways.

This is correct; I tried to name both the pros and the cons of 3Dc in various situations. If you read the topic, you'll see it doesn't say 3Dc is useless or anything, just that it may not always be what you want to use.
Humus is also right in pointing out that 3Dc is technically just a 2-channel compression format, and can be used for things other than storing normal vectors. That is a pro that I forgot to mention.
If anyone thinks that 3Dc has more cons than pros, then I suggest they complain to ATi, not me.
 
Ostsol said:
While we're on the topic of expense, the implementation described in NVidia's paper requires a dependent texture read, which can be expensive. The whole point of Humus' shader edit in Doom 3 was to avoid the existing dependent texture read, because it is apparently quite costly on ATI hardware. Of course, that's just ATI hardware. NVidia seems to be much faster in this area.

In all fairness, the NVIDIA paper assumes you were already doing the dependent read in the first place.
Additionally, I think the main performance gain was when anisotropic filtering was enabled, since the ATi hardware apparently took many samples from the LUT. When anisotropic filtering was disabled, the gain was marginal, so I don't think dependent reads on ATi hardware are that much more costly than on NVIDIA hardware.
We can only guess why NVIDIA hardware did not take a similar performance hit when anisotropic filtering was enabled...
 
Scali said:
FUDie said:
Thanks, now you've shown that your own post is contradictory.
Oh really, how is that?
From an edit to my earlier post:
P.S. To make it more clear. Throughout your whole post you mention "normalmaps" (sic), then when it comes to NVIDIA, you mention "denormalized normalmaps". Seems clear to me that when you refer to ATI you are talking about "normalized normal maps", since you make the point of referring to "denormalized normal maps" later. You're making up an argument just to find fault where there is none.
Obviously if the third component is derived from the other two, you're going to get a unit normal since you assume the length is 1. Anyone working with such a format would realize that unit normals are the only outcome.
Yes, but I am more concerned about the ramifications of using unit normals. After discussing with DaveBaumann, it became clear that he had no idea about this. And from the reactions on 3Dc of many other people on this forum, it appeared that he wasn't alone.
So I decided to point it out.
It's clear you have an agenda to attack the usefulness of 3Dc.
I think one can only attack the usefulness of 3Dc if one modifies the hardware or drivers supporting it. I am merely pointing out that there are certain methods of normalmapping (which are implemented in current and future engines) that aren't compatible with 3Dc, so ATi promoting 3Dc for everything is a tad premature. It may make developers think twice about using 3Dc, but is that my fault? If there are better solutions than 3Dc in certain situations, why wouldn't people use them?
Did ATI say that 3Dc was the be-all-end-all? I don't think so. Of course they'd like to see it being used where possible, but since 3DMark05 doesn't use it at all, it's easy to see why they would be disappointed. Have you checked to see whether 3DMark05 uses (normalized) normal maps?

-FUDie
 
FUDie said:
From an edit to my earlier post:
P.S. To make it more clear. Throughout your whole post you mention "normalmaps" (sic), then when it comes to NVIDIA, you mention "denormalized normalmaps". Seems clear to me that when you refer to ATI you are talking about "normalized normal maps", since you make the point of referring to "denormalized normal maps" later. You're making up an argument just to find fault where there is none.

Actually, I speak of normalmaps in general, and then I take one specific example which cannot be performed with 3Dc (but can still be performed on ATi hardware), and explain the significance of this particular example.
This is not about ATi vs NVIDIA at all, since the method described in the paper from NVIDIA can be implemented on both. If anything, NVIDIA is at a disadvantage, since they don't offer 3Dc at this point, and therefore cannot benefit from the mentioned pros of 3Dc.

Did ATI say that 3Dc was the be-all-end-all? I don't think so. Of course they'd like to see it being used where possible, but since 3DMark05 doesn't use it at all, it's easy to see why they would be disappointed.

Let's just say that ATi didn't go out of their way to explain to people like DaveBaumann that 3Dc may not be the be-all-end-all, even though these people believed this (which may not have been because ATi told them so, but then again, I never claimed they did).

Have you checked to see whether 3DMark05 uses (normalized) normal maps?

andypski would know that better than I do, I think.
 
Scali said:
FUDie said:
From an edit to my earlier post:
P.S. To make it more clear. Throughout your whole post you mention "normalmaps" (sic), then when it comes to NVIDIA, you mention "denormalized normalmaps". Seems clear to me that when you refer to ATI you are talking about "normalized normal maps", since you make the point of referring to "denormalized normal maps" later. You're making up an argument just to find fault where there is none.
Actually, I speak of normalmaps in general, and then I take one specific example which cannot be performed with 3Dc (but can still be performed on ATi hardware), and explain the significance of this particular example.
This is not about ATi vs NVIDIA at all, since the method described in the paper from NVIDIA can be implemented on both. If anything, NVIDIA is at a disadvantage, since they don't offer 3Dc at this point, and therefore cannot benefit from the mentioned pros of 3Dc.
Except that NVIDIA can still use the swizzled DXT trick: put high-resolution data into the alpha component and the second component into green. This gives pretty good results, and the conversion from 3Dc to DXT5 is trivial in this case.
Did ATI say that 3Dc was the be-all-end-all? I don't think so. Of course they'd like to see it being used where possible, but since 3DMark05 doesn't use it at all, it's easy to see why they would be disappointed.
Let's just say that ATi didn't go out of their way to explain to people like DaveBaumann that 3Dc may not be the be-all-end-all, even though these people believed this (which may not have been because ATi told them so, but then again, I never claimed they did).
So can we expect a similar post about Ultrashadow? Has NVIDIA gone and pointed out that it's not useful in all situations?
Have you checked to see whether 3DMark05 uses (normalized) normal maps?
andypski would know that better than I do, I think.
Good job passing the buck. The correct answer would be "I don't know" instead of putting the onus onto andypski.

-FUDie
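To make the swizzled-DXT5 trick concrete, here is a minimal packing sketch in plain C (the struct and function names are illustrative): X goes into alpha, which DXT5 compresses as a separate high-quality channel, and Y into green, the widest of the color channels.

[code]
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } RGBA8;

/* nx, ny in [-1, 1]; returns a texel ready for DXT5 compression */
RGBA8 pack_dxt5nm(float nx, float ny)
{
    RGBA8 t = { 0, 0, 0, 0 };
    t.a = (uint8_t)((nx * 0.5f + 0.5f) * 255.0f + 0.5f); /* X -> alpha */
    t.g = (uint8_t)((ny * 0.5f + 0.5f) * 255.0f + 0.5f); /* Y -> green */
    /* R and B go unused; the shader reads the .ag pair (hence the
       swizzle) and reconstructs Z as in the earlier sketch. */
    return t;
}
[/code]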
 
FUDie said:
Except that NVIDIA can still use the swizzled DXT trick: put high-resolution data into the alpha component and the second component into green. This gives pretty good results, and the conversion from 3Dc to DXT5 is trivial in this case.

I fail to see the significance of this. We are not talking about how to convert 3Dc to DXT5, we are talking about converting from DXT5 to 3Dc, and how it can't always be done.
If you're going to use DXT5 anyway, it doesn't matter if you use ATi, NVIDIA or whatever other hardware with DXT5 support.
Heck, the same goes for 3Dc, except that ATi happens to be the only one offering 3Dc at this time. The issue was never about ATi or NVIDIA or any other IHV, but strictly about 3Dc itself.

So can we expect a similar post about Ultrashadow? Has NVIDIA gone and pointed out that it's not useful in all situations?

I don't know of any shadowing algorithm that's currently being used on 3D hardware that would not work at least as well with UltraShadow as without. So I don't see any analogy. If you do, feel free to open a thread about it.

Good job passing the buck. The correct answer would be "I don't know" instead of putting the onus onto andypski.

No, my answer was indeed the correct one.
 
Humus said:
Regarding the normal map mipmapping paper, I've made a number of serious attempts at it, but found that the improvement wasn't enough to be worth releasing a demo on. Essentially, if there was aliasing without the technique, there was aliasing with it too, just slightly toned down. Not really worth the pretty large slowdown compared to regular specular computation.
I didn't like that paper very much myself, nor was I particularly impressed with the results or the cost.

I think one of the better solutions to this problem is PTM (polynomial texture mapping). There, mipmapping is actually mathematically correct, unless you're saturating a lot, which will be the case for sharp highlights. Still, you can make whatever quadratic function you want. The computations are much easier - 2 texld and 3 math ops should suffice if you're clever. Plus you get self-shadowing bumps and interreflection for free.
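For reference, the per-pixel evaluation PTM performs is the standard biquadratic model from Malzbender et al.; a minimal C sketch, assuming the six coefficients a0..a5 have already been fetched (e.g. from two texture reads, as suggested above) and that (lu, lv) is the light direction projected into the local tangent plane:

[code]
/* L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5 */
float ptm_eval(const float a[6], float lu, float lv)
{
    return a[0] * lu * lu
         + a[1] * lv * lv
         + a[2] * lu * lv
         + a[3] * lu
         + a[4] * lv
         + a[5];
}
[/code]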
 
Scali said:
FUDie said:
Except that NVIDIA can still use the swizzled DXT trick: put high-resolution data into the alpha component and the second component into green. This gives pretty good results, and the conversion from 3Dc to DXT5 is trivial in this case.
I fail to see the significance of this. We are not talking about how to convert 3Dc to DXT5, we are talking about converting from DXT5 to 3Dc, and how it can't always be done.
I am talking about using DXT5 for two components and generating the third component, just like 3Dc.
If you're going to use DXT5 anyway, it doesn't matter if you use ATi, NVIDIA or whatever other hardware with DXT5 support.
Heck, the same goes for 3Dc, except that ATi happens to be the only one offering 3Dc at this time. The issue was never about ATi or NVIDIA or any other IHV, but strictly about 3Dc itself.
Except that 3Dc can be better than DXT5 because it can offer better performance (no swizzle) and better quality (8 bits for each component).
So can we expect a similar post about Ultrashadow? Has NVIDIA gone and pointed out that it's not useful in all situations?
I don't know of any shadowing algorithm that's currently being used on 3D hardware that would not work at least as well with UltraShadow as without. So I don't see any analogy. If you do, feel free to open a thread about it.
So if it doesn't help at all, that's ok with you? Heck you can always turn it off, right? How can 3Dc be any worse if you don't even use it?
Good job passing the buck. The correct answer would be "I don't know" instead of putting the onus onto andypski.
No, my answer was indeed the correct one.
No, it's not. There's no reason to bring a third party into this. As I said, that's passing the buck.

-FUDie
 
FUDie said:
I am talking about using DXT5 for two components and generating the third component, just like 3Dc.

Yes, and how is that relevant?

Except that 3Dc can be better than DXT5 because it can offer better performance (no swizzle) and better quality (8 bits for each component).

I mentioned that in my opening post.

So if it doesn't help at all, that's ok with you? Heck you can always turn it off, right? How can 3Dc be any worse if you don't even use it?

I think you missed the point completely. By the looks of it you haven't even read the opening post.

No, it's not. There's no reason to bring a third party into this. As I said, that's passing the buck.

There's a reason alright.
 
There is no reasoning with Scali... he has a specific anti-ATI/pro-NVIDIA agenda. It's a total waste of bandwidth to even respond to him. If you disagree with him, he gets abusive. Think in terms of Radar1200 here. This isn't DC or even Chalnoth. :rolleyes:
 
Sorry, but no one is worse than Chalnoth, with the exception of nv40 from the nvnews forum. Then you have the other side of the fence, with people like jvd and hamixda from Rage3D.
 
I just meant the gross level of extreme fanboyism displayed by said party. Chalnoth is at the top of the mountain.
 
Here's my thoughts on the mud-slinging. Forgive me if I am wrong on any point and feel free to correct me.

The problem is that a lot of people are talking about the wrong type of aliasing for this thread (or are just plain trolls). The type that Scali brought up has nothing to do with the artifacts caused by compression.

The technique described in the paper utilizes the denormalized vectors caused by interpolating values in a 3-component normal map. Since the third component in 3Dc textures is derived under the assumption that the other two components make up part of a normalized vector, the technique described in the paper Scali posted cannot be used. That is his first point.
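Assuming the paper in question is NVIDIA's "Mipmapping Normal Maps" (Toksvig), which matches the description above, the core idea in C terms is roughly the following sketch: the length of a mipmapped, unnormalized normal shrinks where the averaged normals disagree, and that length is used to tone down the specular exponent.

[code]
#include <math.h>

float antialiased_specular_power(const float n[3], float s)
{
    float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    float ft  = len / (len + s * (1.0f - len)); /* Toksvig factor */
    /* A 3Dc-reconstructed normal always has len == 1, making this a
       no-op -- which is exactly why the technique and 3Dc don't mix. */
    return ft * s; /* effective specular exponent */
}
[/code]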

The part about using DXT5 comes from the fact that it certainly can be used for compressing normalmaps. The result is some precision-related aliasing, which can potentially be outweighed by the reduction in the aliasing described in the paper. That's his other point.

The purpose of this thread is to discuss the validity of those two points, and perhaps to discuss possible alternative techniques for reducing aliasing. Whether Scali's motivation for posting this is based on some form of bias is a matter for another thread or -- better yet -- private messages.
 