3Dc

No, I'm saying a developer can support both. DXT5 can be used for normal maps and provides a good IQ increase over low-res normal maps or hi-res maps compressed with DXT1. ATI released a paper on the technique last year. Using the DXT5 trick would in fact benefit the greatest number of people.
 
I see what DemoCoder is saying, and it makes sense to me. Correct me if I'm wrong, but he's saying use the compression technique that gives the best compression for the type of texture you are trying to compress (e.g. 3Dc for normal maps) and then fall back to the next-best standardised compression technique (e.g. DXT5) if that's not available. As long as it's not a pain for devs, this seems the best solution for the majority of the gaming public.
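To illustrate the fallback idea in D3D9 terms, here's a rough sketch (my own illustration, assuming 3Dc is exposed through the FOURCC 'ATI2'; the helper name is hypothetical):

#include <d3d9.h>

// 3Dc is assumed here to be exposed as the FOURCC 'ATI2'.
const D3DFORMAT D3DFMT_ATI2 = (D3DFORMAT)MAKEFOURCC('A', 'T', 'I', '2');

// Pick the best normal-map format the device supports:
// 3Dc first, then the DXT5 alpha trick, then plain DXT1.
D3DFORMAT PickNormalMapFormat(IDirect3D9* d3d, D3DFORMAT adapterFmt)
{
    if (SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                         adapterFmt, 0, D3DRTYPE_TEXTURE,
                                         D3DFMT_ATI2)))
        return D3DFMT_ATI2;
    if (SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                         adapterFmt, 0, D3DRTYPE_TEXTURE,
                                         D3DFMT_DXT5)))
        return D3DFMT_DXT5;
    return D3DFMT_DXT1;
}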
 
Because DXT1 for normal maps sucks. The whole point of using DXT5 is that one of the channels (alpha) stores its two endpoint values at 8-bit precision, with 3-bit lookup indexes yielding 8 different component values. The other component (stored in a color channel) only gets 2-bit indexes, yielding 4 different component values. 3Dc is just a generalization of the DXT5 alpha-channel method: each 64-bit block is like the alpha channel of DXT5. They may have extra HW to do renormalization on the fly, or to compute the third component like NVIDIA's 8_8 texture format.
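To make the block layout concrete, here's a rough C-style sketch of decoding one DXT5 alpha block (my own illustration, not code from ATI's paper); 3Dc essentially uses two of these 64-bit blocks, one per component:

#include <cstdint>

// Decode one DXT5 alpha block: two 8-bit endpoints plus 3-bit indices
// give each texel one of 8 values. The DXT1/DXT5 color channels only
// get 2-bit indices into 4 values, which is why alpha is so much better.
void DecodeAlphaBlock(const uint8_t block[8], uint8_t out[16])
{
    const int a0 = block[0], a1 = block[1];
    int palette[8];
    palette[0] = a0;
    palette[1] = a1;
    if (a0 > a1) {                       // 6 interpolated values
        for (int i = 1; i < 7; ++i)
            palette[i + 1] = ((7 - i) * a0 + i * a1) / 7;
    } else {                             // 4 interpolated values + 0 and 255
        for (int i = 1; i < 5; ++i)
            palette[i + 1] = ((5 - i) * a0 + i * a1) / 5;
        palette[6] = 0;
        palette[7] = 255;
    }
    // 16 texels x 3-bit indices packed into the remaining 48 bits.
    uint64_t bits = 0;
    for (int i = 7; i >= 2; --i)
        bits = (bits << 8) | block[i];
    for (int i = 0; i < 16; ++i)
        out[i] = (uint8_t)palette[(bits >> (3 * i)) & 0x7];
}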

ATI's normal map paper said:
5.1 DXT1 without renormalisation
- Highly visible noise is introduced into the lighting equation.
- Visible directional lighting errors are introduced.
- Visible lighting magnitude changes on specular highlights.
- Specular highlights on some surfaces are lost almost completely.
- Significant blocking in some areas.

5.2 DXT5 without renormalisation
- Noise is reduced, but still very significant.
- Directional lighting errors are somewhat corrected.
- Lighting magnitude changes are reduced.
- Specular highlights are still frequently broken.
- Blocking still a problem, but usually on specular regions only.

5.3 DXT1 with renormalisation
- Noise is greatly improved.
- Lighting directional errors are still visible.
- Lighting magnitudes are correct, but with significant detail loss.
- Specular highlights are the correct brightness.
- Specular blocking errors are massively reduced; magnitude of highlights is now roughly correct.

5.4 DXT5 with renormalisation
- Very slight noise remains, but almost invisible.
- All other areas look essentially identical to the original.
- Specular blocking has almost completely disappeared.
- Specular highlights are the correct brightness.
 
DemoCoder said:
Because DXT1 for normal maps sucks.

Thank you. In other words, you trade off "installed base" for "quality" as you move from 3Dc to DXT5 to DXT1, etc.

Is it better to push for the best-quality solution (which would come at relatively little cost in additional hardware support), or push for lower quality and a wider user base?

It's a trade-off. Devs can do what they want, but I prefer that ATI evangelize 3Dc, and hope it gains enough support such that other IHVs adopt it.
 
Joe DeFuria said:
DemoCoder said:
Because DXT1 for normal maps sucks.

Thank you. In other words, you trade off "installed base" for "quality" as you move from 3Dc to DXT5 to DXT1, etc.

I believe his main point was how much better is 3Dc than DXT5? As he said, "Enough to justify not supporting DXT5 *at all*?" I think it's similar to the FP24 vs. FP32 and PS2.0 vs. PS3.0 scenarios - how big is the step in improvement? Why use PS3.0 when PS2.0 is 'almost' as good? The same goes for FP24 vs. FP32. I guess it'll all depend on how much value you place on the improvement itself.
 
Joe DeFuria said:
DemoCoder said:
Because DXT1 for normal maps sucks.

Thank you. In other words, you trade off "installed base" for "quality" as you move from 3Dc to DXT5 to DXT1, etc.

What are you talking about? The installed base of DXT5 capable cards is equal to the installed base of DXT1 capable cards. Why would you choose 3Dc OR DXT1 when DXT5 is available on the same base?

Moreover, Joe, do you have any IQ comparisons of 3Dc vs. DXT5 normal maps to base a judgement on? Again, ATI's own paper claims it looks very close to the original uncompressed version.

Is it better to push for the best-quality solution (which would come at relatively little cost in additional hardware support), or push for lower quality and a wider user base?

False dichotomy. Every owner of an R2x0, NV3x, R3x0, or even NV2x can enjoy close-to-3Dc quality. In fact, it can be done easily on any PS1.4 + DXTC capable card. Maybe we should drop FP24 support, eh? Hey, the vast majority of users have PS1.1, and now we have a high-quality FP32 option, right?

It's a trade-off. Devs can do what they want, but I prefer that ATI evangelize 3Dc, and hope it gains enough support such that other IHVs adopt it.

How about supporting something that works on all cards today, requires an equivalent amount of developer code in their pixel shaders, and delivers much better quality for the vast majority of users?

And how about supporting something which is more transparent for the future? 3Dc requires 2 extra pixel shader instructions to work. Why not support an extended 3Dc mode where the decompression is done automatically by the HW and doesn't require additional shader support to work?

Right now, the developer labor to support 3Dc is equal to the developer labor to support DXT5, so IMHO, they can and should support both.
 
I guess 3Dc is best used with high-precision, two-component texture formats for normal maps, like D3DFMT_G16R16.
 
16-bit values get converted to 8 bits eventually. Just like with S3TC, if you start with 8:8:8 you don't get back 8:8:8; the basis for interpolation is 5:6:5. True, you can linearly interpolate at 8-bit resolution and get back some low-order bits, and the original values can aid in picking a better interpolation curve during compression. But this also applies to DXT5.
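For what it's worth, here's a small sketch (my own illustration) of the 5:6:5 endpoint expansion and the 8-bit interpolation that recovers some low-order bits:

#include <cstdint>

// DXT1 color endpoints are stored as 5:6:5; they are expanded to 8 bits
// by replicating the high bits into the low bits before interpolation.
uint8_t Expand5(uint8_t v) { return (uint8_t)((v << 3) | (v >> 2)); }
uint8_t Expand6(uint8_t v) { return (uint8_t)((v << 2) | (v >> 4)); }

// The 2/3-1/3 palette blend happens at 8-bit precision, so interpolated
// texels can land on values the 5:6:5 endpoints themselves cannot represent.
uint8_t Blend(uint8_t e0, uint8_t e1)
{
    return (uint8_t)((2 * e0 + e1) / 3);
}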
 
DemoCoder said:
Right now, the developer labor to support 3Dc is equal to the developer labor to support DXT5, so IMHO, they can and should support both.

It takes the same labor and cost to support both as it does one?

And all these other cards that you mentioned... are they capable of using the normal maps (compressed or not) at any meaningful performance?
 
It takes 1-2 pixel shader instructions to do, and some HW may not even need that. If you're storing tangent-space normal maps, you need to generate the Z coordinate: z = sqrt(1 - x*x - y*y). This is dp2 + rsq, which comes out to one clock on some HW due to the ability to co-issue the rsq.

The only difference between the 3Dc version and the DXT5 version is a swizzle, I think.
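In C-style terms, the per-pixel math looks something like this (my own sketch; I'm assuming the DXT5 trick stores X in alpha and Y in green per ATI's paper, and that 3Dc returns its two components in red and green, as the later BC5 format does):

#include <algorithm>
#include <cmath>

struct Texel { float r, g, b, a; };
struct Vec3  { float x, y, z; };

// Rebuild a unit normal from a two-component texture fetch.
// The swizzle is the only difference between the two formats.
Vec3 ReconstructNormal(const Texel& t, bool is3Dc)
{
    float x = (is3Dc ? t.r : t.a) * 2.0f - 1.0f;  // unpack [0,1] -> [-1,1]
    float y = t.g * 2.0f - 1.0f;
    // z = sqrt(1 - x^2 - y^2): the dp2 + rsq pair in a pixel shader.
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
    return Vec3{ x, y, z };
}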

At the very least, it will run well on NV3x, R3x0, and R2x0, with a performance hit similar to the X800's.

You just seem to be arguing for support of something which, yes, like FP32 vs FP24, may have some quality differences, while leaving everyone else out in the cold, given that it is not much extra work to support both formats.
 
I don't advocate that anyone provide special support for both FP24 and FP32.

Just support "full precision." If in supporting full precision, FP24 based cards take a quality hit vs. FP32...so be it.
 
But you're advocating support of something that a) isn't a standard yet, and b) comes in lieu of another solution that provides nearly the same effect and works with the existing standard.

What about PS3.0 effects? You know many things that can be coded for PS3.0 can also be done for PS2.0, perhaps more expensively, or slightly lower quality. Do you advocate developers not provide fallbacks for 2.0 and go straight to adding effects which only 3.0 cards can support, even if 2.0 cards *COULD* support them?

I'm thinking you got into this discussion without reading ATI's original normal map compression paper, and somehow confused DXT5 with some grossly inferior method, instead of seeing 3Dc as a logical minor tweak of a nice DXT5-based solution that ATI came up with.
 
>>"I'm thinking you got into this discussion without reading ATI's original Normal Map compression paper, and somehow confused DXT5 with some grossly inferior method, instead of 3Dc being a logical minor tweak of a nice solution that ATI came up with using DXT5."<<

I should hope developers target PS2.0 and 3.0, as well as 1.4 and 1.1. Think about it: we've had shader HW on the market for several years, dating back to the GF3. It's taking a damn long time for the market as a whole (top to bottom) to adopt shader HW, and I'd suspect DX8 is just now becoming the LCD for AIBs (of course, correct me if I'm wrong; the GF4MX could still be a dominant force). Even DX8 lets developers do so much more than pre-DX8 stuff, never mind DX9 and things like FP shaders. I could say that I've got a 9800XT and therefore I want developers to target DX9 (SM2.0) because my card will be utilised to the max, but that isn't reality. ATI and NVIDIA could unveil top-to-bottom market lines of SM3.0 cards and it'd still take quite a long time for SM3.0 to become the LCD.

On the topic of 3Dc, I hope it picks up support. If developers target it as well as DXT5 in the meantime, 3Dc could pick up enough IHV support to become the LCD for normal map compression. Until that time, it'd be nuts to use exclusively a feature that just one IHV supports.
 
What about PS3.0 effects? You know many things that can be coded for PS3.0 can also be done for PS2.0, perhaps more expensively, or slightly lower quality.

Yes, and I know other things can be done on 3.0 that could work out to be slower than the equivalent in 2.0 as well.

Do you advocate developers not provide fallbacks for 2.0 and go straight to adding effects which only 3.0 cards can support, even if 2.0 cards *COULD* support them?

No, I advocate whatever makes sense. I certainly don't think that if developers support 3Dc, it makes "automatic sense" to also try and support other, inferior compression schemes. It'll be a case-by-case situation.

I'm thinking you got into this discussion without reading ATI's original normal map compression paper...

And I'm thinking you got into this discussion to try and pooh-pooh another ATI-only feature. Let's see... so far... Temporal FSAA... check... 3Dc... check... 6X MSAA... check...

Have I left any out? ;)
 
joe emo said:
I should hope developers target PS2.0 and 3.0, as well as 1.4 and 1.1. Think about it: we've had shader HW on the market for several years, dating back to the GF3. It's taking a damn long time for the market as a whole (top to bottom) to adopt shader HW, and I'd suspect DX8 is just now becoming the LCD for AIBs (of course, correct me if I'm wrong; the GF4MX could still be a dominant force).

The #1 supplier of graphics (Intel) is still not shipping DX9 parts, so there will be a lower target for quite some time yet.
 
jvd said:
I'm pretty sure it's DX specific, but remember we are most likely a year or more off from DX10/Next.

ATI probably wanted to get it into hardware, but since it was finished in between DX versions, they figured that if they get it out there and build an installed base of cards running it, it will stand a better chance of being put in.

NVIDIA has many OpenGL extensions and caps, but unlike ATI they charge other companies to use them.

First, many people blasted NV for supporting stuff that wasn't in the spec. I don't care either way, though.

Second, NVIDIA doesn't charge now; that is past history.

Third, isn't 3Dc minimally different from compression methods already in DX9?
 
dksuiko said:
I believe his main point was how much better is 3Dc than DXT5?

Isn't the answer to that kinda implicit in that no-one is using DXT5 for normal mapping?
 
Joe DeFuria said:
Yes, and I know other things can be done on 3.0 that could work out to be slower than the equivalent in 2.0 as well.

Why would they? PS2.0 is a subset of PS3.0. People will use PS3.0 features where they count, and not on some silly red herring like using dynamic branching everywhere a conditional or predicate would suffice.

No, I advocate whatever makes sense. I certainly don't think that if developers support 3Dc, it makes "automatic sense" to also try and support other, inferior compression schemes.

First, there is no proof that it is inferior in a way that even matters in the majority of cases. Second, both methods require 2 pixel shader instructions that differ only by source modifiers. Third, even ATI claims the quality loss is minuscule.

And I'm thinking you got into this discussion to try and pooh-pooh another ATI-only feature. Let's see... so far... Temporal FSAA... check... 3Dc... check... 6X MSAA... check...

I'm thinking you're trying to deflect the argument because you have no technical base to stand on. (BTW, Temporal FSAA is not "ATI only" either; it can be done on any card.)

Fact is, I'm calling for developers to support both methods. You're the only one who's "pooh-pooh"ing anything, Pooh-Boy.
 
DaveBaumann said:
dksuiko said:
I believe his main point was how much better is 3Dc than DXT5?

Isn't the answer to that kinda implicit in that no-one is using DXT5 for normal mapping?

That's like saying "isn't the answer implicit? No one supported offset/parallax/virtual displacement mapping, or polybump, in the past."

No one had thought of using DXT5 for normal map compression by hacking a component into the alpha channel, Dave. These techniques require developer evangelization. ATI just published its paper on the technique last October. Like all shader tricks, it will filter down over time.

Why don't you perform a comparison on B3D between DXT5-compressed normal maps (with the alpha-channel trick) and 3Dc-compressed normal maps of equivalent resolution to settle the matter, as B3D has done in the past for other compression methods (VQ vs. S3TC vs. FXT1)?

Fact is, both techniques require special developer support (e.g. pixel shader instructions added to all shaders), so neither 3Dc nor DXT5 is transparent in the way that texture compression was in the past. Developers will need to be educated on the special conditions needed to support the formats.
 