Are Multi TMUs per pipe really outdated?

BTW I remember reading a paper that was discussing how a normal represented as 3 floats had enough accuracy to hit a sub-centimetre sized target on Mars from the Earth!

Not sure if that is precise enough for the next Derek Smart game....
 
Joe DeFuria said:
BTW I remember reading a paper that was discussing how a normal represented as 3 floats had enough accuracy to hit a sub-centimetre sized target on Mars from the Earth!

Not sure if that is precise enough for the next Derek Smart game....

It would also be waaayy too imprecise to point a vector to his Ph.D.
According to the latest information we've got, the last proof of its existence is around Uranus, and it got reduced to a 0.01 cm size by blood-eating aliens.


Uttar
 
Simon F said:
Assuming my "back of the envelope" calculations are correct, then if those 65k are spread 'evenly' on a sphere then there is angular difference of <1 degree between the points.
The last time I tried to work that out, I overflowed the envelope.
 
Dio said:
Simon F said:
Assuming my "back of the envelope" calculations are correct, then if those 65k are spread 'evenly' on a sphere then there is angular difference of <1 degree between the points.
The last time I tried to work that out, I overflowed the envelope.

I've got approx. 0.8 degree.

Without overflow. :)
 
Hyp-X said:
Dio said:
Simon F said:
Assuming my "back of the envelope" calculations are correct, then if those 65k are spread 'evenly' on a sphere then there is angular difference of <1 degree between the points.
The last time I tried to work that out, I overflowed the envelope.

I've got approx. 0.8 degree.

Without overflow. :)
Bingo. However, I think we'd have to assume that any packing would not spread the points perfectly evenly so I rounded up a bit.
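Hyp-X's ~0.8 degree figure can be reproduced with a quick solid-angle estimate (my own back-of-the-envelope code, not from the thread): divide the sphere's 4*pi steradians evenly among 65,536 points and take the square root of each share as the angular spacing.

```python
import math

# 65536 unit vectors spread evenly over a sphere: each point "owns"
# an equal share of the sphere's 4*pi steradians of solid angle.
points = 1 << 16
solid_angle_per_point = 4.0 * math.pi / points   # steradians

# Treat each share as a small, roughly square patch on the sphere;
# its side length (in radians) approximates the angular spacing
# between neighbouring points.
spacing_rad = math.sqrt(solid_angle_per_point)
spacing_deg = math.degrees(spacing_rad)

print(f"{spacing_deg:.2f} degrees")   # ~0.79 degrees
```

Rounding up for imperfect packings gives Simon F's "<1 degree".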
 
Dio said:
There seems to be somewhat of a misconception here, namely that 32-bit texture performance is all that matters.

As a well-known texture compression advocate, I personally think that compressed texture performance is what really matters. There's no reason that current applications can't compress at least 50% of their texture sets, and in the vast majority of cases 80%+.

But I'm not sure compression will have any effect on texture filtering performance. It is my understanding that textures are always decompressed before filtering.
 
Simon F said:
BTW I remember reading a paper that was discussing how a normal represented as 3 floats had enough accuracy to hit a sub-centimetre sized target on Mars from the Earth!

If we're talking about a normalized vector of 32bit floats, then that's true*.

(*)If you move Mars to < 700km away from Earth. (All approximations to your benefit.)
 
Simon F said:
I must admit I didn't specify how you should pack them! I was just thinking that 16bits gives 65k possible vectors. Assuming my "back of the envelope" calculations are correct, then if those 65k are spread 'evenly' on a sphere then there is angular difference of <1 degree between the points.

Only thing is, I don't believe the GeForce FX supports 16-bit ints, and I imagine 16-bit floats would be more problematic for such an application.

Anyway, there's an nVidia white paper that states that 16-bit floats won't be good enough for some situations:
http://www.nvidia.com/docs/lo/2310/SUPP/TB-00625-001_v03_Precision_112502.pdf
 
Chalnoth said:
But I'm not sure compression will have any effect on texture filtering performance. It is my understanding that textures are always decompressed before filtering.
You mean decompressed before loading into the texture cache? It doesn't necessarily have to be that way.
 
OpenGL guy said:
Chalnoth said:
But I'm not sure compression will have any effect on texture filtering performance. It is my understanding that textures are always decompressed before filtering.
You mean decompressed before loading into the texture cache? It doesn't necessarily have to be that way.
Agreed. Since OGLguy has just posted this, I suspect NVidia may be the only vendor who is decompressing into the texture cache.

I suppose the advantage of that is it probably decreases the amount of decompression logic; however, it's hard to see how much this saves given the simplicity of DXTC decompression. The disadvantage is that it decreases the effectiveness of that cache (i.e. storing compressed data in the cache effectively makes the cache N times larger).

I suppose you could do some of the texture filtering before decompression if you were really keen.
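To illustrate how simple DXTC decompression really is, here is a minimal sketch of decoding a single DXT1 color block (opaque RGB path only; the helper names are my own, and real hardware implementations will differ):

```python
import struct

def rgb565_to_rgb888(c):
    # Expand a packed 5:6:5 color to 8-bit-per-channel RGB.
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    return ((r * 255) // 31, (g * 255) // 63, (b * 255) // 31)

def decode_dxt1_block(block):
    """Decode one 8-byte DXT1 block into a 4x4 grid of RGB tuples."""
    c0_raw, c1_raw, indices = struct.unpack("<HHI", block)
    c0, c1 = rgb565_to_rgb888(c0_raw), rgb565_to_rgb888(c1_raw)
    if c0_raw > c1_raw:
        # Four-color mode: two interpolated colors at 1/3 and 2/3.
        palette = [c0, c1,
                   tuple((2 * a + b) // 3 for a, b in zip(c0, c1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))]
    else:
        # Three-color mode: midpoint, with index 3 reserved
        # (decoded here as black, ignoring the transparency bit).
        palette = [c0, c1,
                   tuple((a + b) // 2 for a, b in zip(c0, c1)),
                   (0, 0, 0)]
    # Each texel gets a 2-bit palette index, row-major from the LSB.
    return [[palette[(indices >> (2 * (4 * y + x))) & 0x3] for x in range(4)]
            for y in range(4)]

# Example: endpoints red and blue, all indices 0 -> a solid red block.
block = struct.pack("<HHI", 0xF800, 0x001F, 0)
print(decode_dxt1_block(block)[0][0])   # (255, 0, 0)
```

The whole decode is a couple of integer lerps and a table lookup per texel, which is why the area savings from wider decode logic would be modest.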
 
Simon F said:
I suppose you could do some of the texture filtering before decompression if you were really keen.

I've also toyed with that idea. But wouldn't it only be useful in some very special cases?
 
Basic said:
Simon F said:
BTW I remember reading a paper that was discussing how a normal represented as 3 floats had enough accuracy to hit a sub-centimetre sized target on Mars from the Earth!

If we're talking about a normalized vector of 32bit floats, then that's true*.

(*)If you move Mars to < 700km away from Earth. (All approximations to your benefit.)

How did you come up with that figure? Bear with me as I haven't had enough caffeine yet this morning, but I would have thought that if the 3 floats were normalised, that would be equivalent to having 2 independent values between -1 and 1 for each 1/6th of a sphere. These, I think, can be encoded with ~62 bits. I would think that'd give an angle ~ 1 part in 2^31.

Chalnoth said:
Simon F said:
I must admit I didn't specify how you should pack them! I was just thinking that 16bits gives 65k possible vectors. Assuming my "back of the envelope" calculations are correct, then if those 65k are spread 'evenly' on a sphere then there is angular difference of <1 degree between the points.

Only thing is, I don't believe the GeForce FX supports 16-bit ints, and I imagine 16-bit floats would be more problematic for such an application.
As I said, I didn't say how it had to be packed. Dreamcast, for example, had a 16bit normal format.
 
The concise version:

Cartesian coordinates.
Just to make it easier, I think of a normal in the XY plane.
Normal = (1, 1, 0)/sqrt(2)
Both components would have the same exponent (mantissa in the range [0.5, 1.0)).
The significand gives 24 bits of precision (1 implicit + 23 stored).
Worst-case rounding error is half a step: 0.5*2^-24 ~= 3e-8
The part of this error that is orthogonal to the normal is 3e-8/sqrt(2) ~= 2.1e-8
If this error represents 1 cm, then the whole vector represents
1/2.1e-8 cm = 4.7e7 cm = 470 km

I made more approximations last time, but I'm still being nice in these calcs.

So Mars would need to be moved to <500km away from Earth.
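Basic's arithmetic can be checked with numpy's ulp-spacing helper; a quick sketch, assuming IEEE-754 single precision and the same worst-case half-ulp rounding error:

```python
import math
import numpy as np

# Worst-case rounding error when storing 1/sqrt(2) as a 32-bit float
# is half the spacing between adjacent representable values (half an ulp).
component = np.float32(1.0 / math.sqrt(2.0))
half_ulp = np.spacing(component) / 2.0      # ~3e-8

# Only the part of the error orthogonal to the (1,1,0)/sqrt(2) normal
# changes its direction.
angle_error = half_ulp / math.sqrt(2.0)     # radians, ~2.1e-8

# Distance at which that angular error sweeps out 1 cm.
distance_km = (1.0 / angle_error) / 100.0 / 1000.0
print(f"{distance_km:.0f} km")              # ~474 km
```

That lands right at the "<500 km" figure.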
 
Fair enough.

Hmm. I wonder how Deering calculated his figures in his "Geometry Compression" paper.
 
Simon F said:
I wonder how Deering calculated his figures in his "Geometry Compression" paper.
Answering my own musings here: I went and looked at the paper and Deering was simply saying that if you used 96 bits you'd get that level of accuracy.
 