2 New Hi-Rez Ruby pictures

kemosabe said:
FP32

And it looks like the Inq 0.11u story was BS, again. :rolleyes:

Edit: Is that ETA May 7 or July 5?

There is no way they could have held that secret till now. It looked like BS when I read it this morning. Further, I think that the low-k is a great marketing item for ATI as well since Jen thinks it is so "dangerous".
 
This thread is going to have been the last thing I read before going to bed. Mr Reynolds and Mr Wanderer, if either of you feature in my dreams then I will not be happy. :p

Having said that, I'll actually be disappointed if you DON'T turn up in my dreams tonight :devilish:.
 
Pete said:
I spoke too soon. Still not as nice as benches, though. ;) And a bit puzzling, what with the 12 and 8 "extreme" pipes. Whadda they got, pipes within pipes now?

:oops:
 
PenguinJim said:
This thread is going to have been the last thing I read before going to bed. Mr Reynolds and Mr Wanderer, if either of you feature in my dreams then I will not be happy. :p

Having said that, I'll actually be disappointed if you DON'T turn up in my dreams tonight :devilish:.
I don't think I'll be sleeping at all tonight now actually... :?
 
digitalwanderer said:
PenguinJim said:
This thread is going to have been the last thing I read before going to bed. Mr Reynolds and Mr Wanderer, if either of you feature in my dreams then I will not be happy. :p

Having said that, I'll actually be disappointed if you DON'T turn up in my dreams tonight :devilish:.
I don't think I'll be sleeping at all tonight now actually... :?

Would you have slept better if ATI had taken my suggestion for a "Ruben" instead? ;)

Not that Ruby's not nice, but just one demonstration of the back-hair shader would shut any Nalu fan up (and most probably anybody else as they recover from shock).
 
Psikotiko said:
and encoding and decoding of many video standards, including MPEG1/2/4, Real Media, DivX and WMV9
Like the NV40........Nice 8)
Not quite. It states that it uses the shaders for that encoding, so it probably won't accelerate the encoding quite as much as nVidia's solution.
 
The most expensive part of video encoding is motion estimation, which is most likely precisely the part which is not efficiently HW accelerated by the R420.
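To see why motion estimation dominates encoding cost, here's an illustrative pure-Python sketch (my own, not from any real encoder or from R420/NV40): exhaustive block matching scans a search window for the candidate with the minimum sum of absolute differences (SAD).

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def best_motion_vector(ref, cur, bx, by, size=4, search=2):
    """Exhaustively search a (2*search+1)^2 window in the reference frame
    for the block best matching the current frame's block at (bx, by)."""
    cur_block = [row[bx:bx + size] for row in cur[by:by + size]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + size > len(ref[0]) or y + size > len(ref):
                continue  # candidate block would fall outside the frame
            cand = [row[x:x + size] for row in ref[y:y + size]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

Even this tiny example does on the order of search² · size² work per block; real encoders search far larger windows over every macroblock of every frame, which is why dedicated hardware (or very clever shader use) matters here.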
 
DemoCoder said:
The most expensive part of video encoding is motion estimation, which is most likely precisely the part which is not efficiently HW accelerated by the R420.


so how does that r420 you have perform? :rolleyes:
 
I don't have one, I am speculating ("most likely"). I expect that if motion estimation is accelerated at all, it will come in the AIW version and be supported via a separate chip.
 
I think it's best to just say "we don't really know anything for sure." I'd say there are more rumors out about the R420 than any GPU in history; honestly, the R420 seems to become a new chip every day with each new round of rumors.
 
Interesting.......pretty much what I expected except for 2 things - FP32 and the price.....$499..... FOR THE PRO! That sucks!

DC, why do you always look at ATI's specs with a raised eyebrow, and get down on your hands and knees and pray to nVidia's? :rolleyes: ;)
 
martrox said:
DC, why do you always look at ATI's specs with a raised eyebrow, and get down on your hands and knees and pray to nVidia's? :rolleyes: ;)

Same reason why all the ATI lovers grab and hold on for dear life to any new rumor that increases R420 specs. If we compare the first R420 rumors to the latest ones we'd be at R600 by now :D
 
I don't see anything there pointing at FP32 processing.
But one interesting thing is that they mention 32-bit-per-pixel FP. I'd like to know some more about that; that's an interesting external format.


About floating point normals:
I don't see any reason why it should be supported (except that the final result of course is FP, since all registers are FP). If you want to compress a normal, the first thing you remove is the length, since they usually are normalized. And the use for an exponent is gone.
 
Basic said:
About floating point normals:
I don't see any reason why it should be supported (except that the final result of course is FP, since all registers are FP). If you want to compress a normal, the first thing you remove is the length, since they usually are normalized. And the use for an exponent is gone.


The minimum precision needed to avoid a visual loss of quality is about log_2(100,000) ≈ 17 bits. This is calculated by distributing normals on the unit sphere and noting that an angular distance between normals of 0.01 radians is not distinguishable, which yields about 100,000 normals. Thus, the perfect "lossless" normal compression would start with FP32, chuck the exponent via normalization, and quantize to 17 bits via a lookup table.
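That 17-bit figure is easy to check: spacing normals about 0.01 radians apart tiles the unit sphere's 4π steradians into roughly 0.01 × 0.01 patches (a back-of-the-envelope count that ignores packing efficiency):

```python
import math

spacing = 0.01                       # rad: just-distinguishable angular step
count = 4 * math.pi / spacing ** 2   # ~125,000 distinguishable normals
bits = math.ceil(math.log2(count))   # 17 bits suffice to index them all
```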

If you use an 8-bit format, you're both storing more bits than are needed and using a compression technique that is probably more lossy and prone to artifacts (if it is based on DXTC-style block compression). It depends on whether you're willing to accept potential artifacts (depending on the source) for an additional 2:1 compression factor vs. peace of mind. You could switch to tangent space (two 8-bit components). The D3DFMT_CxV8U8 format also works (but is "talked down" by ATI in their paper on compression).
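The two-component tangent-space idea can be sketched concretely (my own illustrative Python, not ATI's code): store only x and y at 8 bits each and reconstruct z on decode, which works because a tangent-space normal points out of the surface, so z ≥ 0.

```python
import math

def encode_tangent_u8(nx, ny):
    """Quantize x and y of a tangent-space unit normal to one byte each.
    z is simply dropped, since it is assumed non-negative."""
    to_u8 = lambda v: round((v * 0.5 + 0.5) * 255)  # map [-1, 1] -> [0, 255]
    return to_u8(nx), to_u8(ny)

def decode_tangent_u8(bx, by):
    """Dequantize x and y, then reconstruct z from the unit-length constraint."""
    nx = bx / 255 * 2 - 1
    ny = by / 255 * 2 - 1
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return nx, ny, nz
```

The round trip loses well under a degree for typical normals, for 16 bits per normal with no block-compression artifacts.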

But for two-component formats, you are using 16 bits uncompressed per normal; and if you're going to use 16 bits, you can almost achieve perfection without worrying about high-frequency changes per block. ATI's technique is a modest improvement on DXTn for normal compression and cheap to implement, but I just fear that a) polybump-style games may have high-frequency normal maps in them unless artists are careful, and b) at this point in time, we need IHVs to put a lot of effort into a next-gen format (à la Talisman TREC), something that can handle HDR imagery, FP lookup tables, etc.
Compression-wise, I feel like we're plodding along like we did going from DX7 -> PS1.1 -> 1.3 -> 1.4. We need a "big jump" to something like PS2.0 for compression.
 
gsgrunt said:
What does this mean?

128-bit, 64-bit & 32-bit per pixel floating point color formats

External formats. They can render to them, and use them as textures, and most likely they're all converted to/from an internal 4xFP24 when loading/storing. Just like with R3x0. (Or do you suggest that R420 will introduce FP16 calculations for ATi? ;))


DemoCoder:
I haven't seen the exact standard, but I assume that the 2x16 bits that are used for two base colours are instead used for two normals. I think the most reasonable coding would be a few bits to select a facet on an approximation of the unit sphere, and the rest of the bits for 2D coordinates on that facet. And then normalize the end result.
(If the unit sphere is approximated by a unit octahedron, you'd get a very simple decompression.)
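That octahedron idea can be sketched concretely. This is my own illustrative Python of what later became known as octahedral normal encoding, which matches the description above; no hardware of the time is claimed to do this. Project the normal onto the unit octahedron by dividing by its L1 norm, fold the lower hemisphere over the upper one, and store the resulting 2D coordinates; decoding unfolds and renormalizes.

```python
import math

def oct_encode(n):
    """Map a unit normal to 2D coordinates on the unfolded unit octahedron."""
    x, y, z = n
    s = abs(x) + abs(y) + abs(z)         # project onto the octahedron surface
    u, v = x / s, y / s
    if z < 0:                            # fold the lower hemisphere over
        u, v = ((1 - abs(v)) * (1.0 if u >= 0 else -1.0),
                (1 - abs(u)) * (1.0 if v >= 0 else -1.0))
    return u, v                          # each in [-1, 1]; quantize as desired

def oct_decode(u, v):
    """Invert oct_encode, then renormalize to recover the unit normal."""
    z = 1 - abs(u) - abs(v)
    if z < 0:                            # unfold the lower hemisphere
        u, v = ((1 - abs(v)) * (1.0 if u >= 0 else -1.0),
                (1 - abs(u)) * (1.0 if v >= 0 else -1.0))
    length = math.sqrt(u * u + v * v + z * z)
    return u / length, v / length, z / length
```

Before quantization the round trip is exact, and decompression is just a few absolute values, a fold, and a normalize, which is the "very simple decompression" the octahedron buys you.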

<...looking at Java3D normal compression>
And it seems like that's a similar approach to represent the normals. But the mini-palettes aren't there.

<...looking at ATI paper on DXTC for normals>
Oh, it seems like they've gone for tangent-space normals only. That way you could almost squeeze in the quality of Java3D in 16 bits.


You don't have to start with FP32; FX16 is more than enough. But I agree that it can be good to start with a higher quality format than the final one. What I meant in the last post was that you don't need any floating point in the compressed normal format.
 
Heathen said:
Looks like GZeasy are claiming they've got a copy of the Ruby Demo.

Can anyone confirm?
No, but I can invalidate it. GZeasy just has the two screenshots/wallpapers of Ruby that KainFrost posted up at Rage3D yesterday, and W2S made a mistake in the Chinese translation.

The demo is NOT at GZeasy, it has not been leaked.......yet. ;)
 