I could imagine that a company like Laa-Yosh's could start working on real-time and in-game cinematics with this kind of power? It's quite the difference ... !
It looks great, don't get me wrong, but the stuff we do just can't work with this engine.
Our scenes are still more detailed in terms of geometry, texture sizes, and number of assets. May I point out the hundreds of characters in the Watch Dogs city shots, the individually modeled bricks in the walls, or the pieces of the ships in AC4, and so on?
We can use a divide-and-conquer approach here: we don't have to render everything at once, because we can composite layers of stuff together in 2D.
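As a rough illustration of that layered workflow, here's a minimal sketch of combining separately rendered layers with the standard Porter-Duff "over" operator. The layer names and pixel values are hypothetical, and a real pipeline would of course do this per pixel across full AOV/render passes, not on single tuples:

```python
# Minimal sketch of the divide-and-conquer idea: render layers separately,
# then combine them in 2D with the Porter-Duff "over" operator.
# The layer values below are hypothetical single-pixel illustrations.

def over(fg, bg):
    """Composite a premultiplied-alpha foreground (r, g, b, a)
    onto a background (r, g, b, a) using the 'over' operator."""
    fr, fg_, fb, fa = fg
    br, bg_, bb, ba = bg
    inv = 1.0 - fa  # how much of the background shows through
    return (fr + br * inv,
            fg_ + bg_ * inv,
            fb + bb * inv,
            fa + ba * inv)

# Hypothetical per-pixel layer values (premultiplied RGBA), back to front:
background = (0.2, 0.3, 0.5, 1.0)   # sky
midground  = (0.1, 0.2, 0.1, 0.5)   # semi-transparent smoke
foreground = (0.4, 0.1, 0.1, 0.8)   # character

# Composite back to front; each layer only had to be rendered on its own.
pixel = over(foreground, over(midground, background))
```

The point is that each layer can be rendered with its own settings and render times, and only the cheap 2D combine has to happen at the end.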
So most of our assets are built to very different specs; we can actually model a lot of detail that a game would still replace with normal maps.
The lighting and shadows are all raytraced using the same methods; there are no separate elements like pre-calculated GI and baked shadows mixed with realtime lights and shadow maps, or anything like that. This gives a huge quality jump that's very hard to match, and you can still see the difference.
Also, we don't need to compromise on image quality: we can calculate a lot of samples for each pixel, unlike games, which have very little supersampling AA, if any at all.
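The many-samples-per-pixel point can be sketched in a few lines. This is a toy stand-in, not any renderer's actual code: `shade()` is a hypothetical scene (a hard diagonal edge), and the pixel value is just the average of many jittered sub-pixel samples:

```python
# Toy sketch of supersampling AA: average many jittered samples per pixel.
# shade() is a hypothetical stand-in for a renderer's per-sample shading.
import random

def shade(x, y):
    """Hypothetical scene: a hard diagonal edge, white above, black below."""
    return 1.0 if y > x else 0.0

def render_pixel(px, py, samples):
    """Average `samples` jittered sub-pixel evaluations of shade()."""
    total = 0.0
    for _ in range(samples):
        total += shade(px + random.random(), py + random.random())
    return total / samples

random.seed(0)
# A pixel sitting right on the edge resolves to a smooth grey with many
# samples, instead of the hard 0-or-1 value a single sample would give.
value = render_pixel(0, 0, samples=256)
```

With one sample per pixel the edge aliases into stair-steps; with hundreds of samples (which offline rendering can afford) it converges to the correct fractional coverage.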
We can also do offline processing for physics simulations like cloth, hair, and any kind of FX. We're not limited by how much processing can fit into 1/30th of a second, and we can take shortcuts because things only need to work for the camera.
So, again, this is a very good-looking demo; the tech is impressive and the results are of very high quality. It will also make us work even harder to differentiate our work from in-game graphics and keep a competitive edge.
But the engine still takes a lot of shortcuts, uses approximations or replaces systems with 'fake' solutions, and compromises image quality so that it can run in real time. Some of these trade-offs impose limits on asset production workflows and standards too. It still cannot be used to create the trailers that we're asked to produce, at the complexity and quality levels expected from us (and Blur and Axis and the others).