Visual differences between realtime and cutscene visuals

DieH@rd

More and more games [especially from big studios] are switching to a setup in which cinematics are rendered in real time inside the engine, enabling devs to switch instantly from gameplay to cinematics without any harsh transitions.

However, while the geometry detail and textures can remain the same between these two modes, many games naturally push much more detailed visuals in cinematic sections, when the game engine is not tasked with calculating enemy AI, procedural animations, physics, etc.

Here we can discuss what changes happen between these two rendering modes. Here are a few examples:

The Order: 1886 cinematics:

[screenshots]

The Order: 1886 gameplay:

[screenshots]

Uncharted 4 cinematics:

[screenshots]

Uncharted 4 gameplay:

[screenshots]
 
Subsurface scattering (SSS) in Uncharted 4 / The Lost Legacy is not good in some lighting conditions.

A good video made by a member of this forum with better lighting conditions:

[video]

 
There wasn't really any different "rendering mode" for cinematics in The Order, at least at the engine level. Our engine didn't even know if it was currently running a cinematic or gameplay.

The main difference is that during a cinematic you know exactly where the camera and characters will be, and you can design your lighting in such a way that you maximize the benefit of your lights while staying within your performance constraints. In fact we had a whole scripting system for what we called "shot lighting", where we would set up lights for each shot that would change on each camera cut.

The end result is a lot like what you have on movie sets: there's a lot of "fake" lights that make the characters look good and convey artistic intent, and the whole thing would look ridiculous if the camera were to be in a different spot. Here's a quick image that I found on Google that shows you what I'm talking about:

[photo of a film-set lighting setup: one window, plus several off-camera lights and diffusers]


Notice how the window is the one "real" light source (which in reality is probably being lit by an artificial light source outside the window), while there are multiple artificial light sources and diffusers that would be off-camera during the shot. The extra light sources will make the people look better, and help avoid harsh shadows on their faces and bodies.

If you watch this video from SIGGRAPH 2015 you can see our lead lighting artist edit the shot lighting on a scene, and at one point he moves the camera around so that you can see how the lighting is really only set up for the original camera and character position. He also edits some of the shot lighting on the fly, which shows you how they were able to quickly tweak lighting until it looked really good for a particular scene. This presentation also talks about shot lighting a bit towards the end.
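
To make the per-cut idea concrete, here's a minimal sketch of what a shot-lighting setup could look like in code. This is purely illustrative C++ — the names and structure are invented for this post, not our actual scripting system. Each shot in a cinematic owns its own rig of "fake" lights, and the whole rig is swapped out on every camera cut:

```cpp
// Hypothetical shot-lighting data model (illustrative only).
#include <cstdio>
#include <string>
#include <vector>

struct Light {
    std::string name;            // e.g. "key", "fill", "rim"
    float x, y, z;               // position on the "set"
    float intensity;
};

struct Shot {
    int startFrame;              // frame of the camera cut that begins this shot
    std::vector<Light> rig;      // lights that exist only for this shot
};

// Returns the light rig for whichever shot contains `frame`.
// Shots are assumed non-empty and sorted by startFrame.
const std::vector<Light>& RigForFrame(const std::vector<Shot>& shots, int frame) {
    const Shot* current = &shots.front();
    for (const Shot& s : shots) {
        if (s.startFrame <= frame) current = &s;
        else break;
    }
    return current->rig;
}

int main() {
    std::vector<Shot> cinematic = {
        {0,   {{"key", 1.0f, 2.0f, 0.0f, 5.0f}, {"fill", -1.0f, 1.0f, 0.0f, 1.5f}}},
        {120, {{"key", 0.0f, 2.0f, 1.0f, 4.0f}, {"rim", 0.0f, 1.0f, -2.0f, 2.0f}}},
    };
    for (int frame : {0, 60, 120, 180}) {
        const auto& rig = RigForFrame(cinematic, frame);
        printf("frame %3d: %zu lights, first = %s\n",
               frame, rig.size(), rig.front().name.c_str());
    }
    return 0;
}
```

The point is that none of these lights exist outside their shot, which is exactly why the setup falls apart the moment the camera goes somewhere the lighter didn't plan for.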

In a gameplay sequence you obviously can't set up perfect lighting everywhere, since the player is free to move the character and camera around in ways that you can't predict. So you have to rely more on your general environment lighting, which in our case was composed of dynamic light sources as well as lighting from pre-computed probes. By nature the probe lighting is less precise than light sources with detailed occlusion from a shadow map, and so you can end up with lighting that's missing the subtle shadowing that you would expect to see in real life (or that you can get from high-quality offline rendering). Or in some cases the lighting might just end up a bit too bright, or a bit too dark, because you can't hand-tweak every possible scenario to avoid these situations (which you *can* do in cinematics).
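
For the probe side, a toy example may help. Assume, purely for illustration (real engines store spherical-harmonic or ambient-cube probes rather than a single RGB value, and the grid here is invented), that gameplay ambient lighting is a trilinear blend of baked irradiance samples on a coarse grid; any shadowing detail finer than the probe spacing simply cannot be represented:

```cpp
// Toy irradiance-probe grid (illustrative only).
#include <cstdio>

struct RGB { float r, g, b; };

static RGB Lerp(const RGB& a, const RGB& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

struct ProbeGrid {
    static constexpr int N = 4;    // 4x4x4 demo grid
    float spacing = 2.0f;          // metres between probes: the resolution limit
    RGB probes[N][N][N];

    // Trilinearly blend the 8 probes surrounding a world position.
    RGB Sample(float x, float y, float z) const {
        auto toGrid = [&](float v) {
            float g = v / spacing;
            if (g < 0.0f) g = 0.0f;
            if (g > N - 1.001f) g = N - 1.001f;   // clamp inside the grid
            return g;
        };
        float gx = toGrid(x), gy = toGrid(y), gz = toGrid(z);
        int x0 = (int)gx, y0 = (int)gy, z0 = (int)gz;
        float tx = gx - x0, ty = gy - y0, tz = gz - z0;
        RGB c00 = Lerp(probes[x0][y0][z0],     probes[x0+1][y0][z0],     tx);
        RGB c10 = Lerp(probes[x0][y0+1][z0],   probes[x0+1][y0+1][z0],   tx);
        RGB c01 = Lerp(probes[x0][y0][z0+1],   probes[x0+1][y0][z0+1],   tx);
        RGB c11 = Lerp(probes[x0][y0+1][z0+1], probes[x0+1][y0+1][z0+1], tx);
        return Lerp(Lerp(c00, c10, ty), Lerp(c01, c11, ty), tz);
    }
};

int main() {
    ProbeGrid grid;
    // Fill the demo grid with a gradient so the interpolation is visible.
    for (int x = 0; x < ProbeGrid::N; ++x)
        for (int y = 0; y < ProbeGrid::N; ++y)
            for (int z = 0; z < ProbeGrid::N; ++z)
                grid.probes[x][y][z] = { x / 3.0f, y / 3.0f, z / 3.0f };
    RGB c = grid.Sample(3.0f, 1.0f, 5.0f);
    printf("irradiance at (3, 1, 5): %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
```

No matter how cleverly you sample, a character standing between two probes several metres apart just gets a blend of those two samples, which is why you lose the subtle contact shadowing you'd get from a shadow-mapped light.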
 
The Order still has some of the best visuals I've ever seen on PS4. Extremely impressive.

I'd love to be (or have been!) a fly on the wall in discussions between Sony and developers on what they want to have in PS5 hardware.
 
I noticed this especially often in RPGs. Random NPCs are not lit well when they are in front of the player during gameplay, but as soon as a dialogue is started there is suddenly a new light source.

Therefore, the quality of cutscenes should never be compared to that of gameplay. And in gameplay one can always find badly lit faces.
 
An ideal rendering solution will still look realistic even when badly lit. Indeed, I'd go as far to say the best renderer is best measured by its worst scenes. Anyone can look beautiful after loads of make-up and lighting and post-touchups. A truly beautiful person looks good climbing out of bed after a late night. A truly good graphical engine will produce good-looking (authentic) people in all lighting conditions.
 
Then it would be called the Karamazov Engine.
 
So if I understand everything correctly, in cutscenes you don't really load higher-quality shaders, a more accurate lighting model, or higher-res shadows; instead you just get more light sources that are hand-tweaked and stage-rigged on top of hand-tweaked animations to get a better-looking image, right? But if that's the case, then how come some cutscenes have lower fps than their gameplay counterparts despite the former not computing any AI, physics, etc.? Does the cutscene utilize any extra power that's available at all, or does it still compute all those in the background?
 
Typically AI, physics, and other gameplay elements are run strictly on the CPU, not the GPU. So not having to do those things during a cutscene can free up CPU time, but generally won't give you more GPU time to push higher graphical quality. In some cases having more CPU time can let you push more draw calls and thus achieve higher visual quality, but in a cutscene you're almost always going to be limited by the GPU, not the CPU.

As for cutscenes having a lower FPS, it really depends on the game and what it's doing. I don't want to comment on other developers' games since every game is different, but for The Order one aspect that I frequently dealt with was the fact that our characters used the most expensive shaders and materials in the game. The characters were basically a combination of specialized cloth, hair, and skin shaders, all of which were generally more expensive than the standard shader model we would use for things like metal and concrete.

The thing about these kinds of shaders is that their overall cost is tied to the number of pixels that use them. This means that when your cutscenes have a close-up on a character's head, you may end up spending more GPU time than usual shading the scene, since now you have over 1 million skin pixels to shade instead of the few thousand that you would have during gameplay (where the camera is further away from the player).

Our game used the same character models for cutscenes and gameplay, however they had many LOD tiers that activated based on distance to the camera. So during cutscenes the characters would almost always be at their highest LOD tier, which meant that they were drawn with their highest possible polygon count. This could get particularly expensive for the hair, which was all transparent and thus suffered from overdraw. This made the Lycans extra painful, since they were obviously covered in lots of hair polygons. In the end we managed to optimize things enough that we kept it in framerate, but it was tough! :smile:
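
As a rough illustration of that LOD-tier selection (the tier counts, distances, and triangle budgets below are made up for this example, not The Order's actual data), picking a tier by camera distance can be as simple as:

```cpp
// Hypothetical distance-based LOD tier selection (illustrative only).
#include <cstdio>
#include <vector>

struct LodTier {
    float maxDistance;    // use this tier while the camera is closer than this
    int triangleCount;    // rough cost proxy for the tier
};

// Tiers are sorted nearest-first; the last tier catches everything beyond it.
int SelectLod(const std::vector<LodTier>& tiers, float cameraDistance) {
    for (size_t i = 0; i + 1 < tiers.size(); ++i)
        if (cameraDistance < tiers[i].maxDistance)
            return (int)i;
    return (int)tiers.size() - 1;
}

int main() {
    // Made-up character LODs: cutscene close-ups land on tier 0.
    std::vector<LodTier> character = {
        {3.0f,  120000},   // close-up: full detail, expensive shaders fill the screen
        {10.0f,  40000},
        {30.0f,  12000},
        {1e9f,    3000},   // typical far gameplay distance
    };
    for (float d : {1.0f, 8.0f, 25.0f, 100.0f}) {
        int lod = SelectLod(character, d);
        printf("distance %6.1f m -> LOD %d (%d tris)\n",
               d, lod, character[lod].triangleCount);
    }
    return 0;
}
```

A cutscene close-up pins the character at tier 0, so you pay both the full polygon count and the per-pixel cost of the expensive skin and hair shaders across far more of the screen.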
 
Thanks for the detailed reply, that demystifies a lot of things regarding the whole cutscene vs. gameplay quality question :). I love smart LOD transitions that prioritize the close-ups and blend everything so seamlessly. The game still looks insanely good, and man, I can only imagine the kind of things you guys would do for a Pro patch :mrgreen:
 