Industrial Light and Magic support GFFX features!

cellarboy

Newcomer
From www.openexr.net, ILM has released a new HDR file format compatible with GeForce FX features:

OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications.

OpenEXR has already been used by ILM on 4 major motion pictures -- Harry Potter and the Sorcerer's Stone, Men in Black II, Gangs of New York, and Signs -- and is also being used on several other movies currently in production.

OpenEXR's features include:

Higher dynamic range and color precision than existing 8- and 10-bit image file formats.

Support for 16-bit floating-point pixels. The pixel format, called "half," is compatible with the half datatype in NVidia's Cg graphics language and is supported natively on their new GeForce FX and Quadro FX 3D graphics solutions.

Multiple lossless image compression algorithms. Some of the included codecs can achieve 2:1 lossless compression ratios on images with film grain.

Extensibility. New compression codecs and image types can easily be added by extending the C++ classes included in the OpenEXR software distribution. New image attributes (strings, vectors, integers, etc.) can be added to OpenEXR image headers without affecting backward compatibility with existing OpenEXR applications.
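For the curious, that extensibility shows up right in the C++ API: pixels go in as 16-bit "half" values and a custom attribute is simply inserted into the header. Here's a minimal sketch using the classes from the OpenEXR distribution (the attribute name and string are made up purely for illustration):

Code:
#include <ImfRgbaFile.h>
#include <ImfStringAttribute.h>

// Write an RGBA image whose channels are 16-bit "half" floats and attach a
// custom string attribute to the header. Readers that don't know about the
// attribute simply ignore it, which is how backward compatibility is kept.
void writeExr(const char *fileName, const Imf::Rgba *pixels,
              int width, int height)
{
    Imf::Header header(width, height);
    header.insert("renderNotes",                        // made-up attribute name
                  Imf::StringAttribute("test frame"));  // made-up value

    Imf::RgbaOutputFile file(fileName, header, Imf::WRITE_RGBA);
    file.setFrameBuffer(pixels, 1, width);  // x-stride 1 pixel, y-stride 'width' pixels
    file.writePixels(height);
}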

Hmm, looks to me like ILM will be packing Quadro FXs into their Linux boxes quite soon!
 
When the specs for the CineFX architecture were released, wasn't there a big discussion about its uses for offline rendering and previewing models? I seem to remember somebody pointing out that it would be a wise move for nVidia to get a foothold in that market, as it was worth ~$3 billion a year :?:

I think nVidia are doing some real fancy footwork behind the scenes here as they prepare for the FX's launch. I get a feeling they want all their bases covered.
 
NVidia also acquired ExLuna, the producer of Entropy and BMRT, two of the industry's better RenderMan renderers (sadly, taken off the market by a dastardly Pixar lawsuit), so you can imagine that NVidia has ExLuna's engineers working full-time on GFFX-assisted offline rendering.
 
It's also interesting to note that they feel 16 bit is enough and 32 bit is kinda overkill:

Conversely, 32-bit floating-point TIFF is often overkill for visual effects work. 32-bit FP TIFF provides more than sufficient precision and dynamic range for VFX images, but it comes at the cost of storage, both on disk and in memory.

So, will it now come down to a 16-bit performance vs. 24-bit quality flamewar between NV30 and R300? ;)

BTW: What is the 'preferred' FPU bit precision in DX9?
 
It's also interesting to note that they feel 16 bit is enough and 32 bit is kinda overkill:

That was actually pointed out in NVIDIA's own marketing documents - read the precision one. The FP16 format that the GFFX uses is the same as the format used by the likes of Pixar. I'm not sure why everyone suddenly adopted >64-bit formats, really.

DX9 minimum req in the PS is 96-bits AFAIK.

ATI also have their hand in on the 'Cinematic Rendering' side. They don't go into many details, which is a shame because it would be nice to hear what both companies are actually doing here.

http://www.beyond3d.com/articles/9500/index.php?p=2#cine
 
I scanned some medium-format color negatives over the weekend on a high-end negative scanner at 4800 DPI. 240 MB using 8 bits per component, almost 500 MB using 16 bits per component. 35mm slide film scanned out at 80 MB at 8-bit, 160 MB at 16-bit.

At 24 fps, 60 seconds would eat up over 200 GB of storage with no compression. At 32-bit, it would eat up 400 GB. That's a lot of storage! A whole full-length movie would chew through 24 terabytes at this resolution. But if the fidelity improvement is marginal, then you're spending a huge amount of cash for a marginal improvement that could probably be better served by buying more computation.
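As a quick sanity check (assuming those figures come from the 35mm 16-bit scan size of roughly 160 MB per frame, which is my reading of the numbers above, and a ~110-minute feature):

Code:
#include <cstdio>

int main()
{
    // Back-of-the-envelope storage estimate for uncompressed scanned film.
    const double mbPerFrame = 160.0;        // ~160 MB per 16-bit 35mm frame (from the post)
    const double frames     = 24.0 * 60.0;  // 24 fps for 60 seconds

    double gbPerMinute = mbPerFrame * frames / 1024.0;   // ~225 GB per minute
    double tbPerMovie  = gbPerMinute * 110.0 / 1024.0;   // ~24 TB for a 110-minute feature

    std::printf("~%.0f GB per minute, ~%.0f TB per feature\n",
                gbPerMinute, tbPerMovie);
    return 0;
}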


For "real time" or near real time preview, 16-bit is nice for the speed. It's also "nice" to have the option to go to 32-bit if you absolutely need it, but I bet in the majority of cases, 16-bit will be used.
 
That must be a much higher resolution than most people use. Each frame in LotR only takes 12 MB.
 
I believe 16-bit precision would not be fair competition between the FX and the 9700, given that the FX executes two half-floats in the time it takes to calculate one full float. Since it also supports full floats natively within the pipeline, at one floating-point op per cycle, it is pretty safe to say it will outperform the 9700 in the realm of 16-bit precision.
 
Luminescent said:
I believe 16-bit precision would not be fair competition between the FX and the 9700, given that the FX executes two half-floats in the time it takes to calculate one full float.

Why is that unfair competition? Both companies have made a design decision - as they do with a lot of other things and specs. You lose precision with 16-bit compared to 24-bit, but gain speed.
 
Yeah, it's over 8000x8000. LoTR is presumably only 35mm. These negatives are medium format, 6x6 and above. If you think that's big, try to imagine what a 16-bit, 5000 DPI large-format negative would eat up. :)

Sounds like the LoTR images are 2048x2048 res (2048x2048 at 8 bits per RGB channel is about 12 MB), or they are being compressed.
 
I'll point out that we don't yet know how fast the GeForce FX's 64-bit shaders are relative to the R300's shaders. They're going to be faster just from the clock rate difference, but we don't know how they'll compete clock for clock.

Does it really make a difference that a file format supports a graphics card's pixel format directly? Isn't converting pixel formats easy enough?
 
It is not unfair in that sense, Lestoffer, but why intentionally compare performance on 16-bit code and rant (in reference to reviewers) about the results when it is fairly obvious who would win out (if real-world performance matches the specification)? There is no mist to cloud the outcome, which is why the comparison would not be so interesting. A better indicator of performance would come from pitting both processors against code running at their respective full precisions, to determine which has the more robust shader implementation. We know the NV30 has more advanced control over packing and data formats. Who has more shading power clock for clock is the cloudier question. It seems the NV30 is similar to the R300 when pixel shading clock for clock (one floating-point color op, one address op, and one texture interpolation per clock), but only time will tell.
 
antlers4 said:
Does it really make a difference that a file format supports a graphics card's pixel format directly? Isn't converting pixel formats easy enough?

Agreed. It's nice that they've made this format public, and maybe it'll become a de facto standard for 16-bit floating-point images, but who gives a flying frog with regard to graphics hardware. It's a file format!
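For what it's worth, converting is about as trivial as a per-pixel cast; a minimal sketch using the "half" type that ships with the OpenEXR distribution (the helper name is mine):

Code:
#include <half.h>    // 16-bit "half" float type from the OpenEXR distribution
#include <vector>
#include <cstddef>

// Convert 32-bit floats to half precision, e.g. before writing an OpenEXR
// file or uploading an FP16 texture. Each value is rounded to the nearest
// representable half; anything beyond ~65504 overflows to infinity.
std::vector<half> toHalf(const std::vector<float> &src)
{
    std::vector<half> dst;
    dst.reserve(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
        dst.push_back(half(src[i]));
    return dst;
}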
 
Luminescent said:
It is not unfair in that sense, Lestoffer, but why intentionally compare performance on 16-bit code and rant (in reference to reviewers) about the results when it is fairly obvious who would win out (if real-world performance matches the specification)?

It makes sense to me if it turns out that the vast majority of pixel shaders need no greater than 16-bit precision for optimal visual quality, and even more sense if 16-bit floats end up being used significantly more often in games than 32-bit floats.

Another thing of concern is precision. Is 24-bit really enough? I've said in the past that I thought it was, and I still think it probably is for almost anything, but it is a valid question regardless. It may, for example, be inadequate for calculating z values in the pixel shaders.

As a side note, the R300 will also be quite a bit faster with 16-bit floats due to memory bandwidth constraints (though its speedup over 32-bit won't be nearly as large as the NV30's).
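To put rough numbers on the precision question, here's a small sketch comparing the three formats by mantissa width; the FP16 layout (s10e5) is documented in Cg/OpenEXR, while the s16e7 layout for ATI's FP24 is my assumption:

Code:
#include <cstdio>
#include <cmath>

int main()
{
    // Fraction (mantissa) bits, not counting the implicit leading 1:
    // FP16 "half" = 10, FP24 (assumed s16e7) = 16, IEEE FP32 = 23.
    const char *names[]    = { "FP16", "FP24", "FP32" };
    const int   mantissa[] = { 10, 16, 23 };

    for (int i = 0; i < 3; ++i)
    {
        double relError    = std::ldexp(1.0, -(mantissa[i] + 1)); // worst-case rounding error near 1.0
        double maxExactInt = std::ldexp(1.0, mantissa[i] + 1);    // all integers up to this are exact
        std::printf("%s: relative error ~%.1e, integers exact up to %.0f\n",
                    names[i], relError, maxExactInt);
    }
    return 0;
}

Roughly speaking, FP16 can't resolve anywhere near the 2^24 steps of a 24-bit Z-buffer, and even FP24 only has that kind of resolution close to zero where the exponent helps, which is why z in the pixel shader is the case to worry about.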
 
What if the GeForce FX could, for example, create a texture (or any other) surface with this format? It would be nice to have 2:1 lossless compression on 16-bit float surfaces (so you don't need more memory than you do now with 32-bit RGBA surfaces)... ;)
 
MDolenc said:
What if the GeForce FX could, for example, create a texture (or any other) surface with this format? It would be nice to have 2:1 lossless compression on 16-bit float surfaces (so you don't need more memory than you do now with 32-bit RGBA surfaces)... ;)

Basically, what you're saying is that the GFFX Color Compression algorithm would be applied to textures too? So that FP16 textures (which would be 64 bpp) would only take 32 bpp.
It would be nice indeed. But would the GFFX be able to do it 2:1 lossless systematically?
Also, that would assume the GFFX Color Compression system is able to do more than simply find 100% identical colors (and it probably is, but there's no concrete proof of that).

Finally, the biggest drawback would be transistor count. Currently, the GFFX is able to compress/decompress the framebuffer in real time, but that takes a fair number of transistors (that's why the NV34 is rumored to no longer have Color Compression).
And if you apply it to more than just the framebuffer, it would take even more transistors.
So that's the type of thing that would be nice to see on an NV35.


Uttar
 
Nothing in cellarboy's quote implies that the 2:1 lossless compression has anything to do with GeFX color compression.
 
The compression has nothing to do with the GeForce FX. The proposed file format is like QuickTime in that multiple different codecs can be used for compressing the actual pixel data down to a more reasonable size for storage.
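In the OpenEXR C++ API the codec is simply a per-file header setting, decoded in software when the file is read back; a rough sketch (the helper function is mine):

Code:
#include <ImfHeader.h>
#include <ImfCompression.h>

// Choose the codec per file: PIZ is the wavelet codec ILM quotes for ~2:1
// lossless on grainy plates, ZIP is zlib-based. The graphics card never
// sees any of this; the file is decompressed in software when read.
Imf::Header makeHeader(int width, int height, bool grainy)
{
    Imf::Header header(width, height);
    header.compression() = grainy ? Imf::PIZ_COMPRESSION
                                  : Imf::ZIP_COMPRESSION;
    return header;
}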
 
Luminescent said:
I believe 16-bit precision would not be fair competition between the FX and the 9700, given that the FX executes two half-floats in the time it takes to calculate one full float. Since it also supports full floats natively within the pipeline, at one floating-point op per cycle, it is pretty safe to say it will outperform the 9700 in the realm of 16-bit precision.
So when/where was it stated that the GeForce FX could do 1 32-bit FLOP per cycle and 2 16-bit FLOPs per cycle?
 