The move towards CGI. What constitutes 'the look' and how close are we getting?

Shifty Geezer

After two rather ugly threads on this topic, it's time to try afresh. The phrase 'like CGI' gets used a fair amount when talking about some games. It's an ill-defined phrase that seems to cause plenty of controversy when people use it to express their perception of a game, and it'd help conversation if we can come to a 'definition', at least by comparison if not by a technical definition.

Starting with a definition of 'CGI' as a non-realtime computer-generated animation for the purpose of passive viewing, characterised by an 'unlimited' rendering budget* allowing 'perfect' lighting, geometry, animation, etc., the approach of this thread is to chart the progress of game visuals towards 'CGI' by giving examples and personal interpretations of what it is about a game's visuals that makes them appear like CGI to those who see them that way. The format is to post some material from your game and discuss what you think makes it stand apart from looking like a computer game. Obvious aspects to consider are:
Lighting, shading, geometry detail, IQ, optical camera simulation (motion blur and DOF through to post effects), stylised rendering, animation, camera angles.

It'd also be valuable to include near-misses and games where the 'CGI-like' moniker doesn't work due to a clearly definable quality, such as jaggies/shimmer.

The game doesn't need to look like CGI 100% of the time. There might only be some moments. It'd be good to consider what sets those moments apart when they happen.

* CGI of course comes with a budget, and there are many different quality levels, from student work up to Weta et al. For the purposes of games, I think it's enough to set the bar as it not being obvious to you whether you're looking at realtime graphics or not.
 
I was thinking a lot about it. Certainly IQ, asset detail, materials and physics play their role. But apart from the fact that offline rendering doesn't have the limitations of real-time hardware, there is another major advantage that offline rendering has:

The fact that artists have complete control of the scenes in CGI movies. They can set the camera angles, the position of the models, the lighting and camera-mimicking effects (such as DOF) manually, where the image composition can make the best impression on the viewer. In other words, they can manually direct every aspect of every scene to maximize the visual impact with the available resources. Techniques of cinematography can be incorporated.

Unlike movies, games are unpredictable. Hence the scenes will often not be at the optimal settings to highlight the visuals at their best. Camera placement and other effects should not interfere with the gameplay, so again the developers need to be careful.

Games that manage to mimic a CGI-ish impression, I have noticed, are those that maintain a very consistent flow (as if there are no performance hiccups) and whose developers manage to achieve scene set-ups where all elements consistently maintain a high-quality composition. The best examples also manage to incorporate gameplay in such a way that elements of cinematography can be mimicked without breaking the fun factor. If these are achieved satisfactorily, other elements such as context-sensitive behavior of characters and unpredictable/realistic physics further add to the scenes' believability and surprise. The less predictable the various visual elements are, and the less they interfere with each other, the less the graphics feel like they belong to a game.
 
DriveClub

[screenshots]

[gameplay video]

[CGI animation for comparison]

DriveClub nails that 'nearly photoreal' look extremely well. The lighting, shaders, and IQ produce something very comparable to the output of some offline renderers. The weather effects are of course its standout feature, looking damned convincing. Any shortcuts (screen-space reflections) aren't obvious in general viewing.

It's not as photorealistic in the dry though, suggesting the wet-shaders and overcast lighting model are important.

[screenshot]
I reckon the clean IQ is very important here. Shimmers and jaggies would easily reveal this as a compromised realtime production rather than an offline render given time to produce clean results.

Note: I'm only going from web content I've seen. I have played the game very little.
 
I think games and graphics industry people should really study this based on how the average (or core) gamer views these things. I believe the original Watch_Dogs teaser is one great example. The Witcher 3 is another. I believe The Order: 1886 debate (the full game this time) will rage on for years too. Let's say an open-world game has a simulated 24-hour day and various lighting and weather patterns. You can have the same people say that time x with weather y looks CGI, while the same scene looks like a last-gen low-budget game (yes, we have to remember to use hyperbole) at different time/weather settings. While I've read various 3D "title here" magazines since the late 90's, I don't remember many articles that really dive into how to fake something that people find pleasing, rather than brute-forcing or using the latest techniques and renderers to get whatever the state-of-the-art result is at the time.

We remember, for better or worse, Toy Story (I never liked it much as I like nice textures and it had crappy ones), the liquid metal from Terminator 2, the Jurassic Park dinos, Davy Jones' face, the subtle CGI from Fight Club, the way over-the-top and now outdated Matrix fight sequences, etc. etc. If there's a Watch_Dogs 4, will we still talk about the original trailer and how the game still doesn't look as good?

For some games, lock the time, lock the weather (yeah, it's probably going to be rainy, moody noir with light mist at ground level, about 10pm and neon lights shining in the distance) and see how consumers react. Then release a "weather patch", like DriveClub did, and see if the tone changes, or whether everyone accepts the changes because the base game looked so good. DriveClub looks good, but it looks great when it rains :p
 
I find the games that can hide their flaws best are the most likely to have that CGI look. Flaws including aliasing, shimmering, low-poly edges, low-res textures, blobby hair, waxy skin, overuse of normal maps and over-bright lighting are the most typical culprits in breaking the immersion. Obviously on consoles you have limited processing power, so you use post-processing effects to hide those flaws while at the same time looking cinematic. The reason why DriveClub looks so good in heavy precipitation is that the onslaught of rain drops, distortions, motion blur, fog and a muted color scheme constantly pulls your attention away from whatever other flaws might be present. Same with The Order: a slew of post-processing on top of already high-quality lighting, textures, models, cloth physics and realistic shaders blends in seamlessly and hides whatever sore spots there are. This is more effective than having a couple more fancy techniques such as tessellation, HBAO or über light shafts.
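To make the "post-processing to hide flaws" point a bit more concrete, here's a toy sketch - not from any actual engine, just an illustration with made-up constants - of the kind of stack being described: depth fog, a cheap blur and a muted colour grade layered over a rendered frame, in Python/NumPy:

```python
import numpy as np

def muted_wet_weather_post(frame, depth, fog_density=1.2, desaturation=0.35):
    """Toy post-process chain: depth fog + slight blur + muted colour grade.

    frame : float32 array (H, W, 3), linear RGB in [0, 1]
    depth : float32 array (H, W), normalised scene depth in [0, 1]
    """
    # 1. Depth-based fog: blend towards a grey fog colour with distance.
    fog_colour = np.array([0.55, 0.57, 0.60], dtype=np.float32)
    fog_amount = 1.0 - np.exp(-fog_density * depth)                  # (H, W)
    out = frame * (1.0 - fog_amount[..., None]) + fog_colour * fog_amount[..., None]

    # 2. Cheap 3x3 box blur to soften edges (stands in for rain/motion blur).
    blurred = np.zeros_like(out)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += np.roll(np.roll(out, dy, axis=0), dx, axis=1)
    blurred /= 9.0
    out = 0.7 * out + 0.3 * blurred

    # 3. Muted colour grade: pull everything towards its luminance.
    luma = (out * np.array([0.2126, 0.7152, 0.0722])).sum(axis=-1, keepdims=True)
    out = out * (1.0 - desaturation) + luma * desaturation
    return np.clip(out, 0.0, 1.0)

# Random data standing in for a rendered frame and its depth buffer.
frame = np.random.rand(720, 1280, 3).astype(np.float32)
depth = np.random.rand(720, 1280).astype(np.float32)
graded = muted_wet_weather_post(frame, depth)
```

Each pass on its own is cheap, but together they soften exactly the high-frequency detail (jaggies, shimmer, texture seams) where real-time rendering tends to give itself away.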
 
The environments in Alien Isolation have a kinda CGI look to them.

[screenshot]

The characters and animation are pretty awful, but I could just get lost in the scenery. Smoke coming out of pipes and the lighting were the most atmospheric aspects of the game for me.
 
I recently finished Alien Isolation, and I agree. Tremendous visuals, with the sound and lighting. I really like those scenes where dark corridors suddenly light up with an array of crackling neon lights, with smoke flowing from ceiling to floor.

Shame that shader aliasing ruined so many of the scenes. This game deserves to have Sony 1st party AA magic tech.
 
Well, with a bit more resolution and less compression, they definitely would look like real photos, to me - I know this is not a limitation of the game itself but mostly of the capture and hosting online. But it's always easier to make a car look photorealistic than pretty much anything organic.
 
I think "CGI look" for me means that "real-time" considerations have been sacrificed and a large computation budget has been spent on achieving the "best possible within $$$" desired look.

I find it hard to read more into it than that. And as the power available and the algorithms used improve, the "CGI look" continues to be a moving goal post.

The gap between "real time" and CGI, and especially "what we think of as CGI", seems to be shrinking though, as we move further along the diminishing returns curve.
 
The fact that artists have complete control of the scenes in CGI movies. They can set the camera angles, the position of the models, the lighting and camera-mimicking effects (such as DOF) manually, where the image composition can make the best impression on the viewer. In other words, they can manually direct every aspect of every scene to maximize the visual impact with the available resources. Techniques of cinematography can be incorporated.

This is, I think, as close to the key differences as you guys have got so far :) but there's more.

The main difference to understand here is that in CGI, the approach is not about assets and environments and such, it's all about the final shot. We think about the results as a series of imagery and we focus our efforts on getting the individual results as good as possible.

Sure, it has to start with the assets - we build characters, sets, we animate and so on. But the "budget", whether you think about it as dollars, man-days, or computing power, is mostly spent on a limited set of self-contained cases, be it a hundred shots for a short or thousands of shots for a movie.

We put together the shot as soon as we can, we throw in the assets, the lights, the basic animation, and we start to view the results in the mornings. The VFX or CG supervisors analyze the results, point out the weaknesses and the room for improvement, and then the artists are given tasks that would lead to improvements on those elements. The key here is that the majority of the work is focused on getting the most out of just that shot (although if other shots can benefit, it's always a good thing), and the preferred way to implement the changes is always the most efficient one. In other words, we cheat as much as we can - if it can just be painted, we're not gonna bother with modeling or lighting; if the texture doesn't work in another shot, we don't care; if the environment isn't technically consistent with the next shot, it doesn't matter.

Another good example, in my specific field of work, is character deformations. You've obviously heard a lot about muscle sims and such, but the reality is that no studio would ever aim for a solution that works all the time. Even in a movie project with hundreds of shots, it's still better to just get a more or less working rig, then bake out the deformations to a point cache (every vertex position is saved for each frame) and let your artists refine the results with manual sculpting, making it work just for the camera. Tweak the silhouettes of the shoulders, add some muscle flexing, and so on - it's just easier to directly put the vertices where you want them, instead of running endless passes of expensive sims or tweaking the rig's general behavior.
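For anyone curious what "bake out the deformations to a point cache" amounts to in practice, here's a rough Python sketch. It's not any studio's actual tooling - the names, shapes and falloff are invented - but it shows the idea: the rig is evaluated once into raw per-frame vertex positions, and the artist's per-shot fix is just an offset layered on top for a few frames.

```python
import numpy as np

def bake_point_cache(evaluate_rig, num_frames, num_vertices):
    """Evaluate the rig once and store raw vertex positions per frame.

    evaluate_rig(frame) -> (num_vertices, 3) array of positions.
    """
    cache = np.zeros((num_frames, num_vertices, 3), dtype=np.float32)
    for frame in range(num_frames):
        cache[frame] = evaluate_rig(frame)
    return cache

def apply_corrective(cache, frame, vertex_ids, offsets, falloff_frames=3):
    """Per-shot corrective sculpt: artist offsets, faded in/out so the fix doesn't pop."""
    for f in range(max(0, frame - falloff_frames),
                   min(len(cache), frame + falloff_frames + 1)):
        weight = 1.0 - abs(f - frame) / (falloff_frames + 1)
        cache[f, vertex_ids] += offsets * weight
    return cache

# Dummy "rig": a blob of vertices that just translates over time.
rest = np.random.randn(5000, 3).astype(np.float32)
rig = lambda frame: rest + np.array([frame * 0.1, 0.0, 0.0], dtype=np.float32)

cache = bake_point_cache(rig, num_frames=48, num_vertices=5000)

# Artist fix: push 200 "shoulder" vertices outward around frame 30, for this shot only.
shoulder_ids = np.arange(200)
offsets = np.tile([0.0, 0.02, 0.0], (200, 1)).astype(np.float32)
cache = apply_corrective(cache, frame=30, vertex_ids=shoulder_ids, offsets=offsets)
```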

What this gives us is an efficiency that no interactive project can ever hope to achieve. A game asset has to be reworked again and again until it works in every pose, from every angle, in every lighting condition, and if you have to spread the available resources across all of that, you'll inevitably have to stop well before you've exhausted the possibilities; you'll have to compromise.

So basically, CGI has both a greater budget for computing resources, and an ability to focus human and computer resources much better.
 
There's one claim here I'm trying to grasp, and if anyone can help, that would be much appreciated...

Brad Wardell and the rest of the Oxide team working on the "Nitrous Engine" are constantly selling their engine as a true CGI game engine, claiming it pretty much achieves RenderMan-like graphics in realtime...

"The Nitrous engine is basically a real-time version of renderman (i.e. the CGI you see in movies) so the way light and materials work is radically different"

I know what RenderMan is; the question really is, what about the RenderMan pipeline makes it something these guys and their next-gen engine want to achieve? What's so different about its pipeline compared to what current AAA engines and their pipelines do in their games?
 
I know what RenderMan is; the question really is, what about the RenderMan pipeline makes it something these guys and their next-gen engine want to achieve? What's so different about its pipeline compared to what current AAA engines and their pipelines do in their games?

They use conservative rasterization and shading decoupled from rasterization, which gives great motion blur and depth of field effects. But in another tweet he speaks about 8xMSAA. They can't use stochastic supersampling (too costly), and they don't have RenderMan's other advantage, complex geometry.
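To unpack "shading decoupled from rasterization" a bit: I don't know how Nitrous actually implements it, but the general idea (object- or texture-space shading) is to shade a fixed budget of samples per object into a cache and have rasterization merely look the cache up, so motion blur and DOF taps reuse shading instead of re-evaluating it. A very stripped-down Python sketch of that idea, with made-up names and numbers:

```python
import numpy as np

class ShadeCache:
    """Object-space shading cache: shade once per texel, rasterize many times."""
    def __init__(self, texels_per_object, shade_fn):
        self.shade_fn = shade_fn            # uv grid -> RGB, the expensive part
        self.resolution = texels_per_object
        self.cache = None

    def shade(self, time):
        # Shading rate is fixed by the cache resolution, not by how many
        # visibility samples (MSAA, motion-blur taps...) will read it.
        uv = np.stack(np.meshgrid(np.linspace(0, 1, self.resolution),
                                  np.linspace(0, 1, self.resolution)), axis=-1)
        self.cache = self.shade_fn(uv, time)

    def lookup(self, uv):
        # Rasterizer / blur taps just index the cache (nearest-texel here).
        ij = np.clip((uv * (self.resolution - 1)).astype(int), 0, self.resolution - 1)
        return self.cache[ij[..., 1], ij[..., 0]]

def fake_material(uv, time):
    # Dummy "material" standing in for expensive light/BRDF evaluation.
    return np.stack([np.sin(uv[..., 0] * 20 + time),
                     np.cos(uv[..., 1] * 20),
                     uv[..., 0] * uv[..., 1]], axis=-1) * 0.5 + 0.5

cache = ShadeCache(texels_per_object=256, shade_fn=fake_material)
cache.shade(time=0.0)

# Many motion-blur visibility samples reuse the same shaded values:
blur_taps = np.random.rand(16, 1000, 2)     # 16 taps x 1000 pixels x uv
colours = cache.lookup(blur_taps)           # no re-shading per tap
```

That's also why MB and DOF come almost for free in such a scheme: adding visibility samples doesn't add shading cost.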
 
Thing is, Renderman used to be like that about 10-20 years ago, but in recent times Pixar had no choice but to re-architect the whole thing in order to catch up with all the younger path-tracing focused renderers - as these took away a lot of their previous market share because of their ease of use. I can see however how such rasterization-focused features can be an advantage in games today, but to be honest that space shooter video just wasn't that convincing. Maybe they need a better looking project to really sell the tech?...
 
Maybe they need a better looking project to really sell the tech?...

Star Swarm was always just a proof-of-concept demo...

The real game built for Nitrous will be shown off at GDC


http://www.oxidegames.com/2015/01/21/prepping-gdc/

The team is hard at work on a series of new technology to show off at GDC as well as a big new game we’ve been working on for the past 2 years.

Microsoft and AMD are scheduled to demonstrate our tech at their booths.

I have discussed online how important DirectX 12 and Mantle are going to be. But talk is cheap. Being able to demonstrate a new game that can display thousands of light sources simultaneously (as opposed to say, 8 like you currently get on your console). Or light sources that can illuminate particle effects. Or having thousands of individual moving objects on screen simultaneously (as opposed to a dozen).
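The post doesn't say how Nitrous gets to thousands of lights, so purely for context: a common real-time route to that is tiled/clustered light culling, where the screen is cut into tiles, each tile keeps a short list of the lights that can actually reach it, and shading only loops over that list. A toy Python sketch of the culling step (names and numbers invented, not Oxide's approach):

```python
import numpy as np

def build_tile_light_lists(light_pos, light_radius, screen_w, screen_h,
                           tile_size=32, project=lambda p: p[:, :2]):
    """For each screen tile, list the lights whose bounding circle overlaps it.

    light_pos    : (N, 3) positions, reduced to pixel coords by 'project'
                   (a stand-in for a real camera projection)
    light_radius : (N,) screen-space influence radii in pixels
    Returns {(tile_x, tile_y): [light indices]}.
    """
    tiles_x = (screen_w + tile_size - 1) // tile_size
    tiles_y = (screen_h + tile_size - 1) // tile_size
    screen_xy = project(light_pos)                          # (N, 2)

    tile_lists = {(tx, ty): [] for tx in range(tiles_x) for ty in range(tiles_y)}
    for i, ((x, y), r) in enumerate(zip(screen_xy, light_radius)):
        # Range of tiles touched by this light's bounding circle.
        tx0 = max(int((x - r) // tile_size), 0)
        tx1 = min(int((x + r) // tile_size), tiles_x - 1)
        ty0 = max(int((y - r) // tile_size), 0)
        ty1 = min(int((y + r) // tile_size), tiles_y - 1)
        for tx in range(tx0, tx1 + 1):
            for ty in range(ty0, ty1 + 1):
                tile_lists[(tx, ty)].append(i)
    return tile_lists

# 5000 lights on a 1080p screen: each tile ends up shading only a handful.
lights = np.random.rand(5000, 3) * [1920, 1080, 100]
radii = np.random.uniform(5, 60, size=5000)
tile_lists = build_tile_light_lists(lights, radii, 1920, 1080)
avg = sum(len(v) for v in tile_lists.values()) / len(tile_lists)
print(f"average lights shaded per tile: {avg:.1f} of 5000")
```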
 
Well, I can't wait to see it. But still I believe the Renderman parallels are a bit over the top - MB and DOF are almost certainly not 3D calculated but probably just a 2D effect, and the scene complexity is not likely to be even close to the first Toy Story...
 
A lot of pressure. IIRC Oxide was the company that helped AMD start Mantle. I could be wrong though. I hope their game impresses.
 
To be honest, I think this whole discussion would be more accurate if instead of CGI we said un-gamey, because I think that's more what it all is. When the PS360 gen started, full of (horribly approximated) post-processing solutions and normal-mapped models with dynamic shadows, per-pixel lighting and shaders everywhere, it had the same "looks CGI-like" effect that games like Ryse, The Order and ACU have today. Of course, those looked like the cheaper CGI of the 90's, when both the tech and the artists' know-how were much less developed. Hell, even the PS2 had a bit of that effect: coming from the terribly low-poly world of the N64, or the messy pixel-fest of the PS1, the high-res PS2 games, with geometry that actually resembled what it was supposed to represent and looked roundish (on a 480p display), seemed sooo cloooose to Toy Story level to my naive eyes back then...
We all know true CGI productions are much more technically sophisticated than those titles, just like we knew PS2 games didn't really look like Toy Story, but seeing a real-time game pull off all the effects they do now, at this quality level, is something we are not accustomed to just yet, though it will indeed become the norm over the years. This whole "looks like CGI" talk will probably be dead by next year, and we'll have raised our expectations of real-time graphics accordingly, just to go back to using similar hyperbole once the next technological leap is made.
 
Silicon is starting to see diminishing returns from moving to smaller transistor nodes for CPUs. The diminishment is likely less severe for GPUs, though still undoubtedly present. No longer are there the 70% jumps like from the GeForce 7 to the 8 series. Hardware-wise, until perhaps the distant future (e.g. 30+ years), with graphene or silicene processors or quantum computing coming to consumer devices, we won't be able to render some of the scenes in 2003's Animatrix short Final Flight of the Osiris in real time.

You are talking about rendering, in 1/30th of a second, a frame that took a 2003 render farm hours to produce. We probably don't even have the equivalent TFLOPS performance of the render farm used for that Animatrix CGI in a liquid-cooled 780 Ti paired with the new octo-core Intel.
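Just to put rough numbers on that gap - the actual per-frame render times for the Osiris short aren't public, so these figures are assumed purely to show the scale:

```python
# Back-of-the-envelope gap between offline and real-time budgets.
# ASSUMED: ~4 hours per frame on the 2003 farm; illustrative only,
# not actual production figures for that short.
offline_seconds_per_frame = 4 * 60 * 60      # 14,400 s
realtime_budget_seconds   = 1.0 / 30.0       # ~0.033 s

gap = offline_seconds_per_frame / realtime_budget_seconds
print(f"~{gap:,.0f}x more wall-clock time per frame")   # ~432,000x

# Even granting a generous ~100x single-machine speedup from 2003-era
# hardware to a modern GPU, you'd still be thousands of times short
# of real time on one box.
print(f"still ~{gap / 100:,.0f}x short")
```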
 