3D CGI rendering vs. consumer gaming 3D

Freelance

Newcomer
Hi there guys.

I'm starting to ramp up my research for a magazine article looking at the world (hardware and software) of CGI rendering, everything from the humble hobbyist Quadro to the monstrous rigs used for production quality film CGI etc. There will also be a bit of a step-by-step look at how some of the cinematic intros for popular PC games get created.

What I'm particularly interested in is the architectural and/or driver differences between a 'gaming' GPU and a 'workstation' GPU. What makes the Quadro or FireGL so much better for rendering stills or frames of 3D than your average gamer's 3D card? Side-by-side, what are the unique properties of these cards aimed so squarely at different markets?

Also - if anyone can recommend some good reading material on any of the above subjects, either on this site, elsewhere on the net, or in hardcopy, I'm interested.

The more technical and in-depth the information the better. I'm looking to source architectural diagrams, hardware specs, the works.

I know from browsing Beyond3d in the past that the wealth of knowledge here is fairly rich, so go nuts guys and feel free to be verbose if you can help out :)
 
Offline rendering usually isn't accelerated by GPUs. There are exceptions (Gelato can use a GPU, for example), but usually that is for more of a preview quality level than the final product.
 

Tim, thanks for your response. So are you saying that for CGI rendering, cinematics etc - the real power lies in the CPU, whether a single dual-core solution, or some sort of rendering 'farm' linked by infiniband etc?

Can you expand a bit more if you have the time? (either here or by email?)
 
Games need lots of performance. Quality is not so important.

Workstations need lots of geometry performance (displaying lots of lines and wireframe triangles). Texturing/advanced shaders are not so important. Stability is important. This is what the professional cards offer.

CGI needs quality, quality and more quality. Performance is not so important. This is why there is no real need for dedicated hardware. General-purpose CPUs are more flexible than dedicated hardware. Performance can be increased through parallelism.
 

Ok - fair points, so can anyone explain 'how' the workstation cards offer this 'stability' over a gaming GPU? I mean, what are the architectural differences if any? Is a larger frame buffer a factor? What actually makes a Quadro a Quadro? or a FireGL a FireGL? What's at the core of it being a separate and wholly more expensive offering? :) And - if these cards aren't used for the 'big' CGI type stuff, then what's their role?

Thanks so far for the info guys - interesting stuff, and the link provided to the Lucasfilm article was tops. Cheers.
 
They usually have higher AA levels (32x at 1920x1080 and similar), drivers that are optimised for triangle-heavy scenes with simple shading, loads and loads of memory for textures (you can easily load up 8K textures with those, plus 3D texture handling is better). Some things are usually running at higher accuracy (to avoid pixel holes etc.).

Most importantly, the drivers have been certified by the DCC vendors, so the artist can be sure that there is no corruption going on, etc.

The workstation cards are used by the artists for basically everything until the final scene description gets exported to the renderer - i.e. creating the models, rigging, animation, setting up lighting, staging, etc. - everything except the rendering itself.
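To give a flavour of what "exporting the scene description to the renderer" means in practice, here is a minimal Python sketch that writes a tiny RenderMan-style RIB file. The write_rib() helper, the file names and the scene contents are all made up for illustration; a real exporter in a DCC package is far more elaborate.

```python
# Minimal sketch of the kind of scene description a DCC app might export for
# the offline renderer. It mimics a tiny RenderMan RIB file; the values and
# the write_rib() helper are illustrative only, not a real exporter.

def write_rib(path, frame=1):
    lines = [
        'Display "frame%04d.tif" "file" "rgba"' % frame,
        'Format 1920 1080 1',
        'Projection "perspective" "fov" [45]',
        'Translate 0 0 5',           # camera transform
        'WorldBegin',
        '  LightSource "pointlight" 1 "from" [0 2 2] "intensity" [8]',
        '  AttributeBegin',
        '    Surface "plastic"',     # shader assignment
        '    Sphere 1 -1 1 360',     # a unit sphere as stand-in geometry
        '  AttributeEnd',
        'WorldEnd',
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_rib("frame0001.rib")
```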
 
Also, the professional cards are usually the same hardware as the consumer cards, the difference is only in drivers. For example, the usual GeForce renders a line as a triangle with two vertices being the same, while a Quadro renders a real line.
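As a rough illustration of the line-versus-degenerate-triangle point above, here is a small Python sketch. The function names and the dictionary representation are hypothetical; it only shows the difference in the primitives that end up being submitted.

```python
# Hypothetical sketch of the idea above: a "line" can either be submitted as a
# real line primitive, or emulated as a degenerate triangle whose last two
# vertices coincide. Names and structures are illustrative only.

def line_as_line(p0, p1):
    # What the workstation driver effectively submits: a true line primitive.
    return {"primitive": "LINES", "vertices": [p0, p1]}

def line_as_degenerate_triangle(p0, p1):
    # What a consumer driver may fall back to: a triangle with zero area
    # because two of its vertices are identical.
    return {"primitive": "TRIANGLES", "vertices": [p0, p1, p1]}

a, b = (0.0, 0.0, 0.0), (1.0, 1.0, 0.0)
print(line_as_line(a, b))
print(line_as_degenerate_triangle(a, b))
```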
 
Ok - fair points, so can anyone explain 'how' the workstation cards offer this 'stability' over a gaming GPU? I mean, what are the architectural differences if any? Is a larger frame buffer a factor? What actually makes a Quadro a Quadro? or a FireGL a FireGL? What's at the core of it being a separate and wholly more expensive offering? :) And - if these cards aren't used for the 'big' CGI type stuff, then what's their role?

The workstation cards are quality tested by AMD/Nvidia themselves instead of by the independent board vendors (eVGA etc.). They usually run at slightly lower clock speeds to ensure stability. Frame buffers are larger to accommodate bigger textures and/or vertex buffers.

In addition, the workstation cards have hardware accelerated anti-aliased line drawing, which is very important for the architectural/mechanical CAD crowd. Better clipping support. Better multiple display support. Ultra high resolutions for video walls. Genlocking, which can sync your video card signal to an external source: very important for broadcast graphics generation. Stereo view support. Nvidia now has their rack mounted units, which is important for large deployments.

Obviously, it's all the same chip but those special hardware features are disabled (by fuses) for the consumer model.

And, as others have mentioned, much better driver support for the top OpenGL programs (AutoCAD, ProEngineer, CATIA, Maya, etc.).

Have a look here and here.
 
The actual GPU is the same for both consumer and professional boards. The rest of the board (e.g. the amount of RAM) and the driver-level optimizations used are very different, but the actual graphics processing unit is, for all intents and purposes, the same.
 
Tim, thanks for your response. So are you saying that for CGI rendering, cinematics etc - the real power lies in the CPU, whether a single dual-core solution, or some sort of rendering 'farm' linked by infiniband etc?

Can you expand a bit more if you have the time? (either here or by email?)
While it is currently the case that CPUs are used for rendering, GPU manufacturers would like to see GPUs used in the future. Gelato is a step in that direction.

Some things are usually running at higher accuracy (to avoid pixel holes etc.).
Interestingly, 3Dlabs had higher precision (36-bit) in the vertex units than in the pixel units (32-bit). I never saw any comparisons to see whether the extra precision made a difference.
 
So - if I'm understanding the general response thus far, a grunty network of CPU rendering nodes is what gets the final production quality stuff out the door, while the Quadros are used for preview quality rendering that you want done fast - and that it's board and driver differences which allow the Quadro to work the magic it works?
 
Well, to be honest, Quadros are used more for CAD than for previewing CGI.

Specs for a Silicon Graphics computer (it makes me cry):
[attached image: SGI workstation spec sheet]


PS: GeForce vs. Quadro technical brief:
http://www.nvidia.com/object/quadro_geforce.html
 
There is also a huge difference in the rendering techniques used. Offline rendering is most often done using micropolygon rendering (REYES - RenderMan), and for some parts perhaps even raytracing is used (but not as much as you might think).
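For readers unfamiliar with REYES, the following Python sketch caricatures its bound/split/dice loop: patches are bounded in screen space, split while they are too big, then diced into roughly pixel-sized micropolygons that would be shaded and sampled. The Patch class and all the thresholds are simplified stand-ins, not how PRMan actually works.

```python
# Very rough, runnable sketch of the REYES bound/split/dice loop mentioned
# above. Real micropolygon renderers are vastly more involved.

class Patch:
    """A screen-space axis-aligned rectangle standing in for a surface patch."""
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1

    def size(self):
        return self.x1 - self.x0, self.y1 - self.y0

    def split(self):
        # Cut the patch in half along its longer axis.
        w, h = self.size()
        if w >= h:
            mid = (self.x0 + self.x1) / 2
            return [Patch(self.x0, self.y0, mid, self.y1),
                    Patch(mid, self.y0, self.x1, self.y1)]
        mid = (self.y0 + self.y1) / 2
        return [Patch(self.x0, self.y0, self.x1, mid),
                Patch(self.x0, mid, self.x1, self.y1)]

def reyes(primitives, max_patch_px=16, shading_rate=1.0):
    micropolygons = 0
    queue = list(primitives)
    while queue:
        patch = queue.pop()
        w, h = patch.size()                 # "bound" in screen space
        if w > max_patch_px or h > max_patch_px:
            queue.extend(patch.split())     # split: still too large
        else:
            # dice: a grid of roughly pixel-sized micropolygons, which would
            # then be shaded and sampled (omitted here, just counted)
            micropolygons += max(1, int(w / shading_rate) * int(h / shading_rate))
    return micropolygons

print(reyes([Patch(0, 0, 200, 100)]), "micropolygons")
```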
 


So if I was to try 'watering down' the process to the most generic development pipeline possible, would I be looking at something (loosely) like:

Preview mode (done with a Quadro setup or similar mid-range machine(s), setting up rigging, lighting etc.) >> Final render (production/film quality, done on SGI or similar high-end rendering nodes)
 
So - if I'm understanding the general response thus far, a grunty network of CPU rendering nodes is what gets the final production quality stuff out the door, while the Quadros are used for preview quality rendering that you want done fast - and that it's board and driver differences which allow the Quadro to work the magic it works?

No, we only use the workstations' GPUs for viewing and editing 3D models, scenes and animations. Fast animation playback is particularly important, though it also depends on the CPU to calculate the rigs used in skeletal animation (various constraints, IK limbs, expressions and other controls). The GPU is usually not involved in any rendering that goes into the actual product, although you can sometimes use hardware-rendered particles.
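As a small illustration of the CPU-side rig evaluation mentioned above, here is an analytic two-bone IK solve (the kind of thing an arm or leg control drives) using the law of cosines. It is a planar toy example under assumed bone lengths, not the API of any particular animation package.

```python
# Minimal sketch of one piece of CPU-side rig evaluation: a planar two-bone
# IK solve via the law of cosines. Purely illustrative; real rigs layer many
# constraints, expressions and controls on top of this kind of math.

import math

def two_bone_ik(upper_len, lower_len, target_dist):
    """Return (shoulder_angle, elbow_bend) in radians for a planar 2-bone chain."""
    # Clamp so the target stays reachable (avoids math domain errors).
    d = max(1e-6, min(target_dist, upper_len + lower_len - 1e-6))
    # Law of cosines gives the interior angle between the two bones...
    cos_interior = (upper_len**2 + lower_len**2 - d**2) / (2 * upper_len * lower_len)
    # ...and the elbow bend is measured from the fully extended pose.
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_interior)))
    # How far the shoulder rotates off the straight line to the target.
    cos_shoulder = (upper_len**2 + d**2 - lower_len**2) / (2 * upper_len * d)
    shoulder = math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow_bend

s, e = two_bone_ik(1.0, 1.0, 1.5)
print("shoulder %.2f rad, elbow bend %.2f rad" % (s, e))
```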

It's also worth noting that through the years most 3D applications have been tuned to run better on the GeForce/Radeon series of cards. Back in the '90s they relied on OpenGL extensions that were not supported on these GPUs, and thus some features were very, very slow, usually the ones involving overlays.
 
There is also a huge difference in the rendering techniques used. Offline rendering is most often done using micropolygon rendering (REYES - RenderMan), and for some parts perhaps even raytracing is used (but not as much as you might think).

Well, the only REYES renderer in mass production is PRMan, and although it is usually part of every feature film VFX pipeline, Mental Ray is also being used more and more, and that's not a REYES renderer.

Also, raytracing is used fairly often nowadays, although still carefully. Even Pixar had to fully integrate it into PRMan for Cars, and added Global Illumination techniques for Ratatouille.
Transformers is another good example of raytraced reflections and refractions on the robots - though it also took the most rendering time per frame at ILM.
 
So if I was to try 'watering down' the process to the most generic development pipeline possible, would I be looking at something (loosely) like:

Preview mode (done with a Quadro setup or similar mid-range machine(s), setting up rigging, lighting etc.) >> Final render (production/film quality, done on SGI or similar high-end rendering nodes)

It's not really like that. Think about the workstations as windows through which we can look into the 3D scene and interact with it. Hardware rendering, in realtime, is usually not good enough even for previewing anything but animation.

Also, I'll quote myself from another forum...

Original post by ravyne2001
As other's have said, even dedicated hardware fails to reach the point that truly fools the human eye. Only high-end, off-line rendering even approaches that level of quality. To give you an idea, Peter Jackson's WETA effects studio has a computer cluster with ~4500 CPUs and many terabytes of RAM last I heard, and yet it still can take several minutes to render even a single frame of film.

This is a common misconception about offline rendering. We don't usually distribute a single frame across multiple render nodes, for various reasons; we use many computers to render many frames in parallel instead. A simple one-minute sequence is 1440 frames long, so if you can throw ~700 CPUs at it then it'll take about two frames' render times to get the entire sequence.
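The arithmetic in that paragraph can be sketched in a few lines of Python; the 700-node figure and the 45 minutes per frame below are simply the kinds of numbers discussed in this thread, not data from any particular studio.

```python
# Back-of-the-envelope sketch of the frame-level parallelism described above:
# each render node takes whole frames, so wall-clock time for a sequence is
# roughly ceil(frames / nodes) * minutes_per_frame. Nothing here is a real
# farm scheduler; all numbers are illustrative.

import math

def sequence_wallclock(seconds_of_film, fps, nodes, minutes_per_frame):
    frames = seconds_of_film * fps
    waves = math.ceil(frames / nodes)      # how many "waves" of frames
    return frames, waves, waves * minutes_per_frame

frames, waves, minutes = sequence_wallclock(60, 24, 700, 45)
print("%d frames, %d waves, ~%d minutes wall-clock" % (frames, waves, minutes))
# -> 1440 frames in 3 waves with 700 nodes; with ~720 nodes it would be
#    exactly the "two frames' render times" mentioned in the post.
```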

Another reason to have a large renderfarm is to be able to render every scene many, many times during production. CGI is a very iterative process: you want to put together the shot as early as possible to see what you need to work on, where to increase detail, where to polish animation, textures etc. A typical shot can be completely re-rendered 50-100 times, with full quality settings.

The average rendering time for a frame, by the way, is usually around 30-60 minutes on a single CPU, and has remained so for more than a decade. We simply increase the detail and quality as CPUs get faster. This way a movie VFX studio can work on a few dozen shots at the same time and have them all render a new iteration during the night. Then in the morning the producers, department leads and the director can review yesterday's progress.
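Extending the same back-of-the-envelope reasoning, the sketch below estimates how many shots can get a fresh full-quality iteration overnight. The farm size, render window and shot length are assumptions chosen for illustration only.

```python
# Rough sketch of the overnight-iteration arithmetic above: given a farm size,
# an average per-frame render time and an overnight window, how many shots can
# get a new full-quality iteration by morning? All inputs are assumptions.

def shots_per_night(cpus, minutes_per_frame, overnight_hours, frames_per_shot):
    frame_slots = cpus * (overnight_hours * 60) // minutes_per_frame
    return frame_slots // frames_per_shot

# e.g. 700 CPUs, 45 min/frame, a 12-hour overnight window, 8-second shots at 24 fps
print(shots_per_night(700, 45, 12, 8 * 24), "shots re-rendered overnight")
# -> roughly the "few dozen shots" iteration pace described in the post
```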
 
Laa-Yosh, you seem to have your finger on the pulse of the CGI/3D game.

Any chance I could bug you further about some of this stuff? You seem to have a handle on the type of information I'm after.

Do you have a contact email or something I can reach you on?
 