The DX11 tech is in the UDK, not the demonstration.
Ok, so no preview samples? It sounded too good to be true, to be honest ;(
Looks like a presentation about the tech demo:
http://www.nvidia.com/content/PDF/GDC2011/Epic.pdf

On slide 19, it says the screen-space subsurface scattering takes depth and normal input into account. So why doesn't it work with ears? And why was I led to believe for the past few years that the engine should somehow be capable of rendering this?

It would also need information about the back faces of the ears.
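The back-face point is the crux: a screen-space SSS pass only sees the front-most depth and normal per pixel, so it can spread light across a surface but has no idea how thick the geometry is, which is what transmission through an ear needs. A minimal sketch of the idea (my own illustrative NumPy version of a depth-aware blur, not Epic's shader; the function name, blur width, and falloff constant are made up):

```python
import numpy as np

def sss_blur_row(irradiance, depth, taps=7, depth_falloff=500.0):
    """Depth-aware 1D Gaussian blur, the core trick of screen-space SSS.

    irradiance, depth: 1D arrays representing one row of the screen.
    Each tap's Gaussian weight is attenuated by the depth difference
    to the center pixel, so light does not bleed across silhouettes.
    Note what is *missing*: there is no back-face depth here, hence no
    thickness estimate, hence no light transmitted through thin
    geometry like ears -- exactly the limitation discussed above.
    """
    n = len(irradiance)
    out = np.empty(n)
    offsets = np.arange(-(taps // 2), taps // 2 + 1)
    gauss = np.exp(-0.5 * (offsets / (taps / 4.0)) ** 2)
    for i in range(n):
        acc, wsum = 0.0, 0.0
        for o, g in zip(offsets, gauss):
            j = min(max(i + o, 0), n - 1)  # clamp to row edges
            # kill the tap if it lies at a very different depth
            w = g * np.exp(-depth_falloff * abs(depth[j] - depth[i]))
            acc += w * irradiance[j]
            wsum += w
        out[i] = acc / wsum
    return out
```

Run it on a row containing two surfaces at different depths and the blur smooths within each surface but pulls almost nothing across the depth edge; to get translucent ears you would additionally need a thickness term (e.g. from a back-face depth render), which this pass cannot derive from its inputs.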
They make so much fuss about it being real-time and then don't release a real-time demo executable. Sigh. I've seen videos before. There is nothing impressive about a video. I can make a video. My mother can make a video. Come on, just "leak" it if you're afraid of having to support it.