Fafalada said:
That's kinda beside the point - even you guys still use precomputation on occasion when it saves a buttload of rendering time.
You are right, but our case is quite different because it's not interactive. We can precalculate ambient/reflection occlusion, SSS, bent normals etc.*, even 'bake' all the deformations and cloth animation, and then re-render lighting, shading and texturing changes very quickly - but only as long as the camera and the animation do not change. A game doesn't have this advantage, except perhaps in cinematics.
We also have the advantage of almost unlimited storage space, because we can store everything on disk, and only a single frame's data has to fit into memory at any given time.
(*: we usually don't store the radiance data itself, but rather render out all the components into separate image sequences that can be combined and further manipulated in compositing; most apps are now capable of adding image-based lighting as well, using an environment map and a 'surface normals' pass)
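To illustrate that footnote, here's a minimal sketch of adding diffuse image-based lighting in compositing from a normals pass and a lat-long environment map. It uses numpy with synthetic stand-ins for the rendered passes; the lat-long direction convention and the pre-blurred-map shortcut are my own assumptions for the sketch, not how any particular package does it:

```python
import numpy as np

def sample_latlong(env, normals):
    """Sample a lat-long (equirectangular) environment map along unit normals.

    env:     (H, W, 3) float image in lat-long layout
    normals: (..., 3) unit world-space normals
    """
    h, w, _ = env.shape
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    u = (np.arctan2(x, -z) / (2 * np.pi) + 0.5) * (w - 1)   # azimuth -> column
    v = (np.arccos(np.clip(y, -1, 1)) / np.pi) * (h - 1)    # polar   -> row
    return env[v.astype(int), u.astype(int)]

# Synthetic stand-ins for the rendered passes (normally loaded from
# per-frame image sequences on disk):
H, W = 256, 256
normals = np.zeros((H, W, 3)); normals[..., 1] = 1.0   # all normals point 'up'
albedo  = np.full((H, W, 3), 0.8)
env     = np.random.rand(64, 128, 3)                   # stand-in for a blurred env map

# A heavily pre-blurred environment map approximates the diffuse (cosine)
# convolution, so one lookup along the normal gives an IBL term per pixel.
ibl   = sample_latlong(env, normals)
relit = albedo * ibl                                   # composite: multiply passes
```

Because the normals and albedo passes are stored per frame, this relighting step can be repeated with a different environment map without re-rendering anything.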
Are we gonna argue that using PRT to supplement the ambient term would somehow make a fully dynamic lighting model not completely realtime, but that before, when you used a constant color as ambient, it was?
No, I fully agree with you here...
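The reason diffuse PRT is 'just as realtime' as a constant ambient color is that the runtime cost collapses to one small dot product per vertex. A minimal sketch, with placeholder data and illustrative array sizes:

```python
import numpy as np

# Diffuse PRT in a nutshell: the expensive visibility/transfer integral is
# precomputed per vertex as a vector of SH coefficients; at runtime the
# 'ambient' term is just a dot product with the lighting's SH vector.

N_VERTS, N_SH = 10_000, 9        # 3 SH bands (9 coefficients) is typical

# Offline: transfer coefficients baked per vertex (random placeholders here).
transfer = np.random.rand(N_VERTS, N_SH)

# Runtime: project the (possibly dynamic) environment lighting into SH once
# per frame, then shade every vertex with one dot product each.
light_sh = np.random.rand(N_SH)
ambient = transfer @ light_sh     # (N_VERTS,) per-vertex ambient term
```

The per-frame cost is a single matrix-vector product, which is the point being agreed with above.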
Anyway, as for actual practical usage - without knowing the dataset sizes used for the demo, it's difficult to call it one way or the other; we can only speculate.
If the data can be compressed reasonably well (researchers who know more about this than I do seem to believe it can), it isn't out of the question for it to be used, at least for cutscenes and the like.
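For context, the compression schemes in the literature typically project the per-vertex transfer vectors onto a small shared basis (clustered PCA in the published PRT work). Here is a minimal, non-clustered PCA sketch with placeholder data:

```python
import numpy as np

# Approximate the (N_VERTS x N_SH) transfer matrix with a low-rank basis.
# Plain PCA here; the published schemes cluster vertices first, but the
# storage-saving idea is the same.

def pca_compress(transfer, k):
    mean = transfer.mean(axis=0)
    u, s, vt = np.linalg.svd(transfer - mean, full_matrices=False)
    basis = vt[:k]                          # k principal directions, shared
    weights = (transfer - mean) @ basis.T   # k weights per vertex
    return mean, basis, weights

def pca_decompress(mean, basis, weights):
    return mean + weights @ basis

transfer = np.random.rand(10_000, 25)       # 5 SH bands, placeholder data
mean, basis, weights = pca_compress(transfer, k=8)
approx = pca_decompress(mean, basis, weights)

# Storage drops from 25 floats per vertex to 8 weights per vertex plus a
# small shared basis; real transfer data is far more coherent than random
# noise, so the reconstruction error is much lower in practice.
print(np.abs(approx - transfer).max())
```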
I think it's reasonable to assume that a fully animated character in an interactive environment would require an amount of data that is far from practical. Even if you only stored progressive changes in PRT, SSS or similar data, the rising expectations for character animation mean so many keyframes that even a single character becomes too big to fit in memory. This is why I've pointed out that the SSS for the Molina head is probably cached out - it certainly looks amazing as a tech demo, but I don't think it's even remotely practical for an interactive application. Any serious deformation would break the SSS effect, and preparing for each possible combination of facial expressions (e.g. angry smile, happy smile, sad smile...) would increase the dataset drastically.
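To put rough numbers on 'far from practical', a back-of-envelope estimate - every figure below is purely illustrative, not taken from the demo:

```python
# Estimate for caching per-frame transfer/SSS data for one animated character.

verts      = 20_000          # character mesh
coeffs     = 25              # 5 SH bands per vertex
bytes_each = 4               # float32
frames     = 30 * 60         # one minute of animation at 30 fps

per_frame = verts * coeffs * bytes_each   # ~2 MB per frame
total     = per_frame * frames            # ~3.6 GB per minute of animation

print(f"{per_frame / 2**20:.1f} MiB per frame, "
      f"{total / 2**30:.1f} GiB per minute of animation")
```

Even with aggressive compression and delta storage, numbers in that range are well past the memory budget of a console, which is the core of the objection above.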
And if there were a reasonably good-looking alternative to the time-consuming raytraced rendering of each frame, the VFX industry would probably jump on it as soon as possible.
I think that in this generation, skin shaders that look sufficiently different from metal and plastic would already be a great advance; and physically 'correct' SSS will be great on PS4.
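For what it's worth, one period-typical cheap trick for making skin read differently from plastic is diffuse wrap lighting - a minimal sketch of the idea, not necessarily what any particular title or the demo uses:

```python
import numpy as np

def skin_diffuse(n_dot_l, wrap=0.5):
    """Wrap lighting: let light 'wrap' past the terminator, softening the
    falloff the way scattering does, instead of clamping at N.L = 0."""
    return np.clip((n_dot_l + wrap) / (1.0 + wrap), 0.0, 1.0)

def plastic_diffuse(n_dot_l):
    return np.clip(n_dot_l, 0.0, 1.0)   # standard Lambert clamp

# Compare the falloff around the terminator:
for ndl in (-0.3, 0.0, 0.5):
    print(ndl, plastic_diffuse(ndl), skin_diffuse(ndl))
```

It's nowhere near physically correct SSS, but it's essentially free, which is why tricks like it were attractive before the hardware could afford the real thing.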