Pre-order X800 Pro - "NDA" - 8 extreme / 12 normal PS pipes

Dammit, why do I always find myself caught in semantic arguments I don't even care about... (maybe I shouldn't post at 6 AM unless I've slept sometime in the previous few days)

991060 said:
I don't see why they're unrelated.

You want fp normal map compression, right? OK, since it's compression, how do we save space/bandwidth? We use fewer bits. That's exactly how ERGB works: convert fp16 to int8 and you're done, with a 2:1 ratio.

o_0

Few things.

A) Who ever said in that last post that I was converting between 16-bit floats and 16-bit ints?

B) What does any of this have to do with the 3Dc format?

You said that 3Dc compression could only be applied to normals that were already in integer formats, I said this was a rather arbitrary limitation for ATI to impose (if they are at all) since a normal in either floating-point or integer format could be trivially converted to the other without any user input at all, and it's all going to end up being decompressed to a float anyway. They're all just normals after all.

Even if the compressor only worked with integer formats and the driver didn't do automatic conversions, the application supporting 3Dc will just do the conversion itself. Point being: how does any of this in any way affect how useful 3Dc is, or say anything at all about 3Dc, period?

All that matters is that:
sizeof(3DcCompress(normal)) < sizeof(normal)
and that 3DcUncompress(3DcCompress(normal)) ~= normal

Whether or not you have to convert 'normal' into the internal format the compressor expects before sending it to the driver is something I'd care about while writing support for the format, but is completely irrelevant in a discussion about the quality of said format.
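For concreteness, 3Dc is generally described as two independent DXT5-alpha-style channels (the X and Y of the normal, with Z rebuilt in the shader). A toy single-channel block codec in that spirit might look like the sketch below; the palette construction is a simplification for illustration, not ATI's actual bitstream:

```python
def compress_block(values):
    """Compress 16 int8 values (a 4x4 tile of one normal channel)
    into two endpoints plus 16 three-bit palette indices, i.e. 8
    bytes instead of 16. DXT5-alpha-style sketch, not ATI's encoder."""
    lo, hi = min(values), max(values)
    if lo == hi:
        hi = min(255, lo + 1)  # avoid a degenerate palette
    # 8-entry palette interpolated between the two endpoints
    palette = [lo + (hi - lo) * i // 7 for i in range(8)]
    indices = [min(range(8), key=lambda k: abs(palette[k] - v))
               for v in values]
    return lo, hi, indices  # 8 + 8 + 16*3 = 64 bits per channel

def decompress_block(block):
    lo, hi, indices = block
    palette = [lo + (hi - lo) * i // 7 for i in range(8)]
    return [palette[k] for k in indices]
```

Both properties above hold here: the compressed block is half the raw size, and the round-trip error is bounded by the palette step within the block.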
 
Ilfirin said:
You said that 3Dc compression could only be applied to normals that were already in integer formats, I said this was a rather arbitrary limitation for ATI to impose (if they are at all) since a normal in either floating-point or integer format could be trivially converted to the other without any user input at all, and it's all going to end up being decompressed to a float anyway. They're all just normals after all.
True, but if I already know the limitation is there, why would I spend time on the unsupported format, which requires automatic conversion?
Ilfirin said:
Even if the compressor only worked with integer formats and the driver didn't do automatic conversions, the application supporting 3Dc will just do the conversion itself..... point being - How does any of this in any way affect how useful 3Dc is or say anything at all about 3Dc period?
Do we let the art people create fp textures and DXTC them after automatic conversion?
Ilfirin said:
All that matters is that:
sizeof(3DcCompress(normal)) < sizeof(normal)
and that 3DcUncompress(3DcCompress(normal)) ~= normal
True.
Ilfirin said:
Whether or not you have to convert 'normal' into the internal format the compressor expects before sending it to the driver is something I'd care about while writing support for the format, but is completely irrelevant in a discussion about the quality of said format.
According to what I know, the 3Dc compressor can only accept input in int8 format; if you can convert fp data into that format without major quality loss, you win. This is how the routine works:
(data in whatever format) -> (data in int8) -> compressor -> GPU -> decompressor -> (data in fp format)
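The two endpoint conversions in that pipeline are trivial; a minimal sketch (the helper names here are made up):

```python
def float_to_int8(n):
    """Map a normal component from [-1, 1] to an unsigned 8-bit value."""
    return max(0, min(255, round((n * 0.5 + 0.5) * 255)))

def int8_to_float(v):
    """Map an unsigned 8-bit value back to [-1, 1]."""
    return v / 255.0 * 2.0 - 1.0

# Round-tripping loses at most about half a quantization step (~1/255),
# which is the "major quality loss" question above.
x = 0.70710678  # e.g. the X component of a 45-degree normal
q = float_to_int8(x)
assert abs(int8_to_float(q) - x) < 1.0 / 255.0
```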
 
Ilfirin said:
991060 said:
According to what I know, the 3Dc compressor can only accept input in int8 format

:oops: EEK! Really?

Can't even use it with R16G16?

I'm afraid that's the case, so I said it's quite limited. ;)

If you're interested in more detail, pm me your email.
 
DemoCoder said:
I don't like the idea of a compression scheme just slipping into the standard unilaterally. That happened with DXTC when it was chosen over VQ methods. The 3D industry needs a JPEG/MPEG-like expert group to define the next best compression formats.

Edit: Link for those interested in the Java3D normal compression technique

First published in Deering, Michael. "Geometry Compression." Computer Graphics Proceedings, Annual Conference Series, 1995, ACM SIGGRAPH, pp 13-19.

There's a really good reason that S3TC was chosen over VQ. VQ stinks when it comes to a hardware implementation. There are a bunch of known problems: the look-up table has to be bounded if it's not going to suck, and thus the compression ratio and/or quality will be bounded too. Also, you can't use the same look-up table for the mipmaps. Well, you can, but the colour space will not be correct.
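To make the bounded-LUT tradeoff concrete, here is a toy vector-quantization encoder: a small k-means codebook (the look-up table) plus per-texel indices. Names and structure are illustrative only, nothing like a real hardware encoder:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vs):
    return tuple(sum(v[i] for v in vs) / len(vs) for i in range(len(vs[0])))

def build_codebook(vectors, size, rounds=10):
    """Naive k-means codebook. The codebook is the bounded LUT:
    quality is capped by `size`, which is exactly the compromise
    discussed above."""
    random.seed(0)                      # deterministic toy initialization
    book = random.sample(vectors, size)
    for _ in range(rounds):
        buckets = [[] for _ in book]
        for v in vectors:
            buckets[min(range(size), key=lambda k: dist2(v, book[k]))].append(v)
        # move each codeword to the centroid of its bucket
        book = [mean(b) if b else book[k] for k, b in enumerate(buckets)]
    return book

def vq_encode(vectors, book):
    # each texel vector is replaced by a small codebook index
    return [min(range(len(book)), key=lambda k: dist2(v, book[k]))
            for v in vectors]
```

It also shows the mipmap problem: the indices are only meaningful against the codebook they were trained on, so a codebook fitted to the base level won't match the colour distribution of the smaller levels.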

FXTC was not adopted because the image quality was below that of S3TC.
 
I think Simon might disagree with you, in that the problems are resolvable and still yield a better result than DXTC. Anyway, we need compression formats for HDR data and generalized FP16 formats.
 
DemoCoder said:
Anyway, we need compression formats for HDR data and generalized FP16 formats.
Multiresolution representations! But there is still a lot of research to do in this field...
 
DemoCoder said:
I don't like the idea of a compression scheme just slipping into the standard unilaterally.

3Dc may or may not be the best solution possible at this time, but I don't see how it could be fairly called "unilateral". Assuming the standard we are talking about here is DX --which is the unstated assumption I *think* we are all using-- then unless it was developed by MS it can't be unilateral. One would like (and possibly I'm living in a fantasy world here) to think that MS would have solicited comment from others as well, and that no one offered a better idea that they were willing to have part of DX (i.e. non-proprietary) at this time.
 
Maybe unilateral is the wrong choice of words; perhaps "open" is better. I'd like to see a process take place over the long term similar to *PEG. I posted a message 2+ years ago on B3D decrying the lack of progress, and in 2+ years there's been little progress. I don't think it should be left up to the IHVs alone. They need academia and professional organizations involved.

3Dc looks like a fairly straightforward tweak of DXT5, so it will probably be adopted due to how easy it is to implement in the existing pipeline.

I've just been hoping for more that's all. It looks like a good stopgap method, but the industry seriously needs to get together and do something forward looking, especially with HDR formats on the rise.
 
DemoCoder said:
I think Simon might disagree with you, in that the problems are resolvable and still yield a better result than DXTC. Anyway, we need compression formats for HDR data and generalized FP16 formats.

I didn't say that VQ couldn't have better quality, but I am saying that in a practical application it's hard to achieve while maintaining performance. There was a paper on this a while back that reached a similar conclusion (can't remember what it was called). PVR has limited the size of the LUT, which is the compromise you have to make to prevent performance from going through the floor:

http://www.boob.co.uk/docs/powervr2dc_features_wince.pdf

I'm sure Simon and Co. have done an exemplary job in optimizing the encoder, but there are limits (a 2K LUT in this case).

However, I agree that float texture compression is needed. You might be able to use DXTC to get around that: for example, with a common exponent per block, you may be able to encode R and the exponent in one channel, and G and B as the mantissas. On second thought, you would probably want to encode the sign and exponent in the green channel, since you have one more bit there. You should be able to do this in a shader today...
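The per-block shared exponent idea can be sketched independently of DXTC: pick the largest exponent in the block and store 8-bit mantissas against it. An RGBE-flavoured sketch of that idea (function names are made up, this is not a shipping format):

```python
import math

def encode_block(values):
    """Share one exponent across a block of non-negative HDR values
    and store 8-bit mantissas against it."""
    if not any(v > 0.0 for v in values):
        return 0, [0] * len(values)
    # largest binary exponent in the block
    e = max(math.frexp(v)[1] for v in values if v > 0.0)
    scale = math.ldexp(1.0, e)          # 2**e; mantissas fall in [0, 1)
    return e, [min(255, round(v / scale * 255)) for v in values]

def decode_block(e, mantissas):
    scale = math.ldexp(1.0, e)
    return [m / 255.0 * scale for m in mantissas]
```

The cost is visible immediately: values far below the block maximum keep very few significant bits, which is the usual shared-exponent artifact.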
 
CET.

And before you ask, it's Central European Time, which is the time zone for most European countries and is GMT+1 or +2 depending on winter/summer time.
 