Just to clarify a few things:
RenderMan renderers perform shading computations at 32-bit floating-point precision per channel (3 channels for color and 3 for opacity, 192 bits total). The final image is usually saved as a 64-bit TIFF.
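To make the precision gap concrete, here is a minimal sketch of that final step: shading values computed as 32-bit floats get clamped and quantized down to 16 bits per channel for a 64-bit RGBA TIFF. The function name and the 16-bits-per-channel assumption are mine for illustration, not taken from any particular renderer.

```python
import numpy as np

def quantize_to_16bit(channel_f32):
    """Clamp a float32 shading channel to [0, 1] and quantize to uint16,
    as when writing a 64-bit (16 bits x RGBA) TIFF."""
    clamped = np.clip(channel_f32, 0.0, 1.0)
    return np.round(clamped * 65535.0).astype(np.uint16)

# A tiny float32 "render" of one color channel; note the out-of-range
# value 2.0 gets clamped, losing the extra dynamic range the 32-bit
# shading pipeline carried internally.
shaded = np.array([[0.0, 0.5], [1.0, 2.0]], dtype=np.float32)
print(quantize_to_16bit(shaded))
```

The point of the sketch: everything above 1.0 is discarded at output time, which is one reason the heavy lifting is done at full float precision before the file is ever written.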
Certainly OpenGL 2/DX9-class hardware can accelerate some portion of the shading, but rendering production-quality frames at near-real-time rates is still only a dream.
Actually, I can’t see offline and real-time rendering ever fully converging.
Sure, they will use the same hardware. But since offline rendering doesn’t care much about render time, the big animation houses will keep their huge renderfarms and spend the extra power of new cards on previously impractical algorithms, like true area lights and global illumination. So offline rendering will always have the quality edge.