Pre-order X800 Pro - ?NDA? - 8 extreme / 12 normal PS pipes

Maybe the 12 pipelines mentioned refer to the number of ROPs, and 8 refers to the number of pixel shading units.
 
As I alluded to in the other thread, the two numbers just don't seem to go together.

A 6 GP/s fillrate indicates 12 pipes at 500MHz. I'm not even an ASIC newbie, I'm an ASIC preemie, but I'd find it hard to believe ATi can make R420 16 pipes--8 uber, 8 regular--and have it degrade so gracefully into a fully functional XT, a Pro with only four regular pipes broken, and an SE with what I'd imagine would be at least 4/4 (if not 8/0).

There, only about four days before this post is proven hilariously wrong. ;)
 
a complete shot in the dark.

First, I see a 48:1 compression ratio for the Z-buffer. What is the current compression ratio for the R300?
On the pipelines: I don't know much about 3D architecture, and what I'm guessing at is probably impossible, but here it is. This comes from the GameCube. If I remember right, the people who designed the GameCube did the R300, so I would venture that they'd have a hand in the R420 as well. On the GameCube there is a block of TUs, or the TEV. My stab in the dark is that the 8 extreme pipelines are a block of 8 mini-ALUs. It was probably a typo, though.
 
"3Dc™ is slated to become the industry standard . . ." That sounds awfully. . .official.
 
geo said:
"3Dc™ is slated to become the industry standard . . ." That sounds awfully. . .official.
Well, they are in Xbox 2....

Re: Flow control. Does that encompass static and dynamic branches? Can ATi claim flow control with static branches? (I'm guessing no, as it wasn't one of the R3x0's claims to fame, but I'd like to be sure.)
 
Pete said:
geo said:
"3Dc™ is slated to become the industry standard . . ." That sounds awfully. . .official.
Well, they are in Xbox 2....

There was stuff in XBox1 that didn't make it into the standard. And let's not forget FXT1.

Although normal map compression is a long way in coming (does 3Dc work with FP-format normal maps, BTW?), I'd like to see something like Sun's Java3D normal compression, which can pack a 96-bit IEEE FP32 normal into 17 bits.

I don't like the idea of a compression scheme just slipping into the standard unilaterally. That happened with DXTC when it was chosen over VQ methods. The 3D industry needs a JPEG/MPEG-like expert group to define the next best compression formats.

Edit: Link for those interested in the Java3D normal compression technique

First published in Deering, Michael. "Geometry Compression." Computer Graphics Proceedings, Annual Conference Series, 1995, ACM SIGGRAPH, pp. 13-19.
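
For the curious, the idea can be sketched in a few lines. This is a simplified illustration in the spirit of Deering's octant-folding scheme, not the exact bit layout from the paper: 3 sign bits select the octant, and the two spherical angles within the octant each get 7 bits, for 17 bits total.

```python
import math

BITS = 7
SCALE = (1 << BITS) - 1  # 127 quantization steps per angle

def compress_normal(nx, ny, nz):
    """Fold a unit normal into the positive octant (3 sign bits), then
    quantize the two spherical angles to 7 bits each: 17 bits total."""
    sx, sy, sz = int(nx < 0), int(ny < 0), int(nz < 0)
    ax, ay, az = abs(nx), abs(ny), abs(nz)
    theta = math.atan2(ay, ax)        # azimuth within octant, [0, pi/2]
    phi = math.acos(min(1.0, az))     # inclination, [0, pi/2]
    qt = round(theta / (math.pi / 2) * SCALE)
    qp = round(phi / (math.pi / 2) * SCALE)
    return (sx << 16) | (sy << 15) | (sz << 14) | (qt << 7) | qp

def decompress_normal(code):
    """Rebuild an (approximately) unit normal from the 17-bit code."""
    sx, sy, sz = (code >> 16) & 1, (code >> 15) & 1, (code >> 14) & 1
    theta = ((code >> 7) & SCALE) / SCALE * (math.pi / 2)
    phi = (code & SCALE) / SCALE * (math.pi / 2)
    ax = math.sin(phi) * math.cos(theta)
    ay = math.sin(phi) * math.sin(theta)
    az = math.cos(phi)
    return (-ax if sx else ax, -ay if sy else ay, -az if sz else az)
```

The round trip keeps the reconstructed vector within a fraction of a degree of the original, which is plenty for lighting.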
 
DemoCoder said:
Pete said:
geo said:
"3Dc™ is slated to become the industry standard . . ." That sounds awfully. . .official.
Well, they are in Xbox 2....

There was stuff in XBox1 that didn't make it into the standard. And let's not forget FXT1.

Although normal map compression is a long way in coming (does 3Dc work with FP-format normal maps, BTW?), I'd like to see something like Sun's Java3D normal compression, which can pack a 96-bit IEEE FP32 normal into 17 bits.

I don't like the idea of a compression scheme just slipping into the standard unilaterally. That happened with DXTC when it was chosen over VQ methods. The 3D industry needs a JPEG/MPEG-like expert group to define the next best compression formats.

Edit: Link for those interested in the Java3D normal compression technique

First published in Deering, Michael. "Geometry Compression." Computer Graphics Proceedings, Annual Conference Series, 1995, ACM SIGGRAPH, pp. 13-19.

That Java3D normal compression seems very interesting. Thanx DC
 
I don't see what else it could mean.

And I can't think of any good reason why they would impose some arbitrary limitation to only integer or only floating point normal maps... everything has to decompress to floats in the pixel shader anyway, and any floating point normal map can be transparently converted to an integer format upon compression (or vice versa) if that's what the format wants.
 
I think they mentioned the two-component thing to make the point that 3Dc can be used to compress any data, not just normal maps. And the decompression is done in dedicated hardware, not in the pixel shader (otherwise any PS2.0 hardware could support 3Dc, not just R420). The compression/decompression algorithm itself is too simple to handle fp data. I admit it's possible to compress fp data into an integer data format (nVIDIA had a format called ERGB to do that), but it requires additional information such as the base value to be stored, and 3Dc just doesn't have these features. For now, it's an integer-only thing.
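
For reference, here's roughly what decoding one 3Dc channel block looks like. This assumes the per-channel layout matches the BC4/ATI2 format that 3Dc was later standardized as: per 4x4 block and per channel, two 8-bit endpoints followed by sixteen 3-bit palette indices, 8 bytes in all (2:1 versus raw 8-bit data).

```python
def decode_3dc_channel_block(block8):
    """Decode one 8-byte, 4x4 single-channel block (assumed BC4/ATI2
    layout): endpoints r0, r1, then sixteen 3-bit indices into an
    interpolated palette."""
    r0, r1 = block8[0], block8[1]
    if r0 > r1:
        # 8-entry palette: endpoints plus 6 interpolated values
        palette = [r0, r1] + [((6 - i) * r0 + (i + 1) * r1) // 7 for i in range(6)]
    else:
        # 6-entry interpolated palette plus explicit 0 and 255
        palette = [r0, r1] + [((4 - i) * r0 + (i + 1) * r1) // 5 for i in range(4)] + [0, 255]
    bits = int.from_bytes(block8[2:8], "little")  # 16 x 3-bit indices
    return [palette[(bits >> (3 * i)) & 0x7] for i in range(16)]
```

Note how the decode is pure fixed-function table lookup and integer interpolation, which is why it can live in dedicated texture hardware and why there's no obvious place to put an fp base/scale.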
 
991060 said:
I think they mentioned the two-component thing to make the point that 3Dc can be used to compress any data, not just normal maps. And the decompression is done in dedicated hardware, not in the pixel shader (otherwise any PS2.0 hardware could support 3Dc, not just R420). The compression/decompression algorithm itself is too simple to handle fp data. I admit it's possible to compress fp data into an integer data format (nVIDIA had a format called ERGB to do that), but it requires additional information such as the base value to be stored, and 3Dc just doesn't have these features. For now, it's an integer-only thing.

None of that made any sense.

I never said decompression was done by the pixel shader, only that the result of decompression had to be available in the pixel shader and, hence, end up in a floating point register (in floating point format).

Conversion between integer and floating point normal map formats is just a matter of rescaling by 2^n-1 and converting (i.e. intNormX = (ushort)(65535.f*(floatNormalX+1.f)*.5f); for float to 16-bit unsigned integer normal map conversions). Even if the compressor could only deal with integer formats, I see no reason why the driver couldn't transparently convert anything the developer fed it to the appropriate internal format on the fly (assuming that what the developer fed the driver was actually a normal).
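
That conversion, sketched in Python for clarity (using round() rather than the truncating cast in the C expression above, which halves the worst-case error):

```python
def float_to_u16(x):
    """Map a normal component in [-1, 1] to unsigned 16-bit."""
    return int(round((x + 1.0) * 0.5 * 65535.0))

def u16_to_float(v):
    """Map unsigned 16-bit back to [-1, 1]."""
    return v / 65535.0 * 2.0 - 1.0
```

The round trip loses at most half a quantization step, about 1.5e-5 per component at 16 bits.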

Also, their stating that it only works on 2-channel data formats implies that they're generating the 3rd component through sqrtf(1.f - x^2 - y^2), and hence the compression wouldn't make sense for anything other than normalized 3D vectors - not too many uses for that other than normals (or tangent spaces in general)... unless they are leaving the 3rd-component extraction to the user (programmer, whatever).
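
The reconstruction in question is one line; the clamp below is just a guard against quantization error pushing x^2 + y^2 slightly past 1:

```python
import math

def reconstruct_z(x, y):
    """Recover the third component of a unit normal from the two stored
    ones: z = sqrt(1 - x^2 - y^2), clamped so quantization noise can't
    produce a negative radicand."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))
```

This only gives the positive root, which is fine for tangent-space normal maps where z always points out of the surface.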

This is all speaking totally hypothetically of course and assumes they aren't just leaving everything dealing with compression to the user (in which case saying it only works for some integer or floating point format is completely meaningless since you're going to be doing all conversions yourself and, hence, every conceivable source format you want to support is supported).
 
You're confused, Ilfirin. The data format in the pixel shader has nothing to do with how the original data is stored; you can send data to the GPU in whatever format and it'll all end up in fp format. This is nothing new.

Since it's all about compression, what you want is to consume less space and bandwidth. Converting fp16 to int16 saves nothing in this regard, although it's very easy to do, as you mentioned. You can take a look at how ERGB's encoding/decoding works.
 
I'm not the one confused. You're just throwing words in my mouth, then throwing a lot of other words that are completely unrelated (such as ERGB).
 
I dont see why they're unrelated.

You want fp normal map compression, right? OK, since it's compression, how do we save space/bandwidth? We use fewer bits. That's exactly how ERGB works: convert fp16 to int8 and you're done, with a 2:1 ratio.
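
The general shape of that kind of scheme (this is an illustrative range-based quantizer, not ERGB's actual encoding): store a per-block base and scale as the extra side information mentioned earlier, then quantize each value to 8 bits.

```python
def quantize_channel(values):
    """Quantize float data to uint8 codes plus a per-block base (lo)
    and scale. Roughly 2:1 vs fp16 storage, ignoring the small header."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    codes = [round((v - lo) / scale) for v in values]
    return lo, scale, codes

def dequantize_channel(lo, scale, codes):
    """Reverse the mapping: value = base + code * scale."""
    return [lo + c * scale for c in codes]
```

The base/scale pair is exactly the kind of extra per-block state a plain integer format like 3Dc has no room for.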

This is not about lossless conversion; we're talking about compression here.

Moreover, ATi needs to think about how many developers out there are using fp normal maps. If there's a big market, making an fp 3Dc is OK. Otherwise I think they'll just stick with the integer format.
 