LAIR Thread - * Rules: post #469

Status
Not open for further replies.
To me, downscaled is *essentially* the same model if you're using the high-detail model as the source target. You're downscaling an existing high-detail model; you are not building a low-detail model from the ground up.

The end result is the same, no matter what you want to call it.

Where did you get 15 million from? :LOL:
...
ILM modelers/animators use models with millions of polys for their CGI VFX all the time.

Not exactly true.
Digitized/scanned data is easily that dense; XYZRGB can capture at sub-millimeter scale, so you can get 20-40 million polygons in such a mesh. But it is unusable in that form: you really can't UV map it, skin it, or do anything like that.

The preferred workflow in movie VFX is to build a lower-resolution model that's manageable; it can range from a thousand polygons to 100K or whatever the production requires. Then software tools are used to compare the low-res model against the high-res model and store the differences in a displacement texture map (or rather, a bunch of maps for such a very detailed model). At rendering time, the low-res model is subdivided into smaller polygons and the resulting millions of vertices are then displaced using the texture. All skinning, dynamics and other tasks are performed on the low-res model, before subdivision.
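The bake-and-displace round trip described above can be sketched with a toy 1D "surface" (Python; the curves and sample counts are made up for illustration, this is not any studio's actual pipeline):

```python
import numpy as np

# Toy 1D stand-ins for meshes: height along a strip of surface.
x = np.linspace(0.0, 1.0, 1025)                  # "render-time" subdivided sample positions
hi = 0.25 * x + 0.05 * np.sin(40 * np.pi * x)    # high-res surface with fine wrinkle detail

# Low-res model: only 9 control points, so the wrinkles are lost.
x_lo = np.linspace(0.0, 1.0, 9)
lo = 0.25 * x_lo

# Bake: store the difference between the surfaces as a "displacement map".
disp_map = hi - np.interp(x, x_lo, lo)

# Render time: subdivide the low-res model, then displace it with the map.
subdivided = np.interp(x, x_lo, lo)
displaced = subdivided + disp_map

assert np.allclose(displaced, hi)   # the fine detail is recovered on the dense mesh
```

Skinning or deforming the 9 control points stays cheap; the expensive dense geometry only exists after subdivision, exactly as the post describes.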

For games, you can do the same but create a normal map instead of a displacement map. This introduces significant differences, like lack of silhouette detail (the edges will still be low-poly), and the shading will be a lot rougher and generally lower quality, especially in close-ups.
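The silhouette limitation is easy to see in a toy Lambert-shading sketch (Python, hypothetical vectors): the baked normal changes the shading response, but the vertex positions are untouched, so the outline of the mesh stays low-poly:

```python
import numpy as np

def lambert(normal, light_dir):
    """Diffuse intensity for a unit surface normal and unit light direction."""
    return max(0.0, float(np.dot(normal, light_dir)))

light = np.array([0.0, 0.0, 1.0])                    # light straight down the view axis

flat_normal = np.array([0.0, 0.0, 1.0])              # the low-poly geometric normal
baked_normal = np.array([0.5, 0.0, np.sqrt(0.75)])   # perturbed normal fetched from the map

# Shading responds to the baked detail...
print(lambert(flat_normal, light))     # 1.0
print(lambert(baked_normal, light))    # ~0.866

# ...but only the lighting changed; the geometry (and thus the silhouette)
# is still the flat low-poly surface.
```

With a displacement map the vertices themselves move, so the silhouette gains detail too; that's the gap between the two approaches.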

As far as I know, Factor 5 has some dragon maquettes from Phil Tippett (ex-ILM, creator of Draco in Dragonheart) which they are using for the normal maps. Most of their models are created entirely in the computer, though.
 
I'd suggest you guys stop bickering about the meaning of "target render" and all the variant definitions (btw, Capeta, target renders have been stated as using in-game assets, not source models). The thread is about Lair. I smell a lock.
 
"Target render" is a phoney-baloney term dreamt up by gaming press editors and company PR agents in order to bolster interest in a product when nothing substantial can yet be shown.

In reality, there is no such thing as a "target render." Those who claim their projects to be "targeting" what you see are saying crap to keep you interested by remaining vague in what aspect(s) of the video they're actually "targeting" (e.g. animation quality, gameplay flow, physics, destructibles, atmosphere, etc.).

I dunno, I think there are some valid target renders, for example Heavenly Sword from E3 2005; the realtime frames were stitched together to show what they hoped would be the final product 2 years down the road.

But Nesh is right on this, if it was not stated explicitly as a target, then it's nothing but a CG movie, which almost every game under the sun has used at one time or another, and should not at all be considered a target.
 
But Nesh is right on this, if it was not stated explicitly as a target, then it's nothing but a CG movie, which almost every game under the sun has used at one time or another, and should not at all be considered a target.

I don't think it has to be explicitly stated as a target render to be a target render. If it's being used with the same goal, as a target, then it's still a target render. Both the KZ2 CGI and the Motorstorm CGI were target renders, so I don't see why the LAIR CGI cannot be considered a target render.
 
Well, you may have seen such videos, but I don't see it even to the slightest in that particular screenshot. I am aware, though, of some prior Lair videos that have not been all realtime.


I found the video!! ShootMyMonkey I made a gif of the 3-4 seconds of the video that showed the fake (but good-looking) SSS.

http://i7.tinypic.com/2it61cj.gif


So how do you think they did it? Is it that TCA that _phil_ talked about? Laa-Yosh you still don't think this form of faked SSS looks good? Curious.
 
I found the video!! ShootMyMonkey I made a gif of the 3-4 seconds of the video that showed the fake (but good-looking) SSS.

http://i7.tinypic.com/2it61cj.gif

Interesting. I don't know how to judge if it is good or not, but it looks convincing enough. At least it puts to rest the notion that it's just a lightly coloured texture :D
 
So how do you think they did it? Is it that TCA that _phil_ talked about? Laa-Yosh you still don't think this form of faked SSS looks good? Curious.
Laa-Yosh was saying roughly the same thing I was -- the screensnap that pakotlar posted didn't look like it was doing any fake SSS, or if it was, it definitely didn't look too good. Additionally, the GIF you posted is small, artifacty, and moves fast enough that I can't really say for sure whether it was from a clip of CG or realtime.

That said, if it is realtime, here's the most straightforward guess I can surmise from those few seconds --
float4 transCol = tex2D(transTex, Input.UV);          // translucency colour, mask in alpha
float strength = saturate(dot(LightVec, -ViewVec));   // peaks when looking toward the light
float bright = 0.5 * (dot(-LightVec, normal) + 1);    // wrapped diffuse for the unlit side
float3 transOut = (transCol.rgb * bright * strength * strength) * transCol.a;
...
Output.Color = (other color components...) + float4(transOut, 0);

I especially get that looking at the right wing more so than the left, but I admit to being unsure which term is biased and which is squared. I don't think they're both squared as the highlight might come off a little too narrow.
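To sanity-check that guess, here's the same math ported to Python with made-up vectors (both LightVec and ViewVec are assumed to point away from the surface, toward the light and camera respectively; the wing colour is invented). It shows the term glowing when the wing is backlit and vanishing when the light is on the camera's side:

```python
import numpy as np

def translucency(light_vec, view_vec, normal, trans_rgb, trans_alpha):
    """Python port of the guessed shader term: a back-lighting glow,
    boosted when the viewer is looking roughly toward the light source."""
    strength = np.clip(np.dot(light_vec, -view_vec), 0.0, 1.0)
    bright = 0.5 * (np.dot(-light_vec, normal) + 1.0)   # wrapped diffuse, lit from behind
    return trans_rgb * bright * strength * strength * trans_alpha

wing_rgb = np.array([0.9, 0.4, 0.3])   # warm membrane colour from the texture
normal = np.array([0.0, 0.0, 1.0])     # surface faces the camera

# Sun directly behind the wing, camera facing into the sun: full glow.
backlit = translucency(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                       normal, wing_rgb, 1.0)           # -> wing_rgb

# Sun behind the camera: the squared strength term kills the effect.
frontlit = translucency(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]),
                        normal, wing_rgb, 1.0)          # -> zeros
```

The squared strength is what narrows the glow toward the light direction, which matches the "highlight might come off a little too narrow" concern above.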

I dunno, I think there are some valid target renders, for example Heavenly Sword form E3 2005, the realtime frames were stitched together to show what they hoped would be the final product 2 years down the road.
I don't recall those being referred to as "targets" nor does anybody really use the term "target render" for anything that uses realtime frame captures -- it's basically become the universal phrase that is used to refer to any offline pre-rendered FMV for a game. If you wanted to call fixed time-step frame dumps "targets", then every game trailer you've ever seen in your life is a target render.
 
TCA was a joke ;) . Because I think it's just a simple alpha map (transparency) with post-processed bloom in HDR space...
 
I don't think it has to be explicitly stated as a target render to be a target render. If it's being used with the same goal, as a target, then it's still a target render. Both the KZ2 CGI and the Motorstorm CGI were target renders, so I don't see why the LAIR CGI cannot be considered a target render.

Look, man, a target render is a target render because the developer or publisher says it's what the final game is supposed to look like. The methods of its creation are basically immaterial to the classification. No one is saying the KZ2 or Motorstorm trailers aren't target renders, because we had Sony saying, "look what the final game will look like when it's done!" I don't think there were any similar claims from the developers or publisher about the prerendered Lair footage being discussed, unless you can point them out to me. Lots of times cut scenes and cinematics are released to promote games before and after their release, but it would be silly to call them all "target renders".
 

I like this shot a lot. Artistically it is pleasing to me. Nice :D


Neat little video, hopefully the finished game will look something like this, especially with a ton of dragons on screen. Having a ton of dragons on screen shouldn't be a problem if Kameo is any indication (great shot here too).
 
I found the video!! ShootMyMonkey I made a gif of the 3-4 seconds of the video that showed the fake (but good-looking) SSS.

Mckmas, the very nature of subsurface scattering requires the light to bounce around inside the material. A leather wing is just not thick enough for that so it's not SSS, okay? It's some simple transparency/translucency hack and that's all.
 
Mckmas, the very nature of subsurface scattering requires the light to bounce around inside the material. A leather wing is just not thick enough for that so it's not SSS, okay? It's some simple transparency/translucency hack and that's all.


I know it's not real SSS. I was asking you what the fake method was. So thanks for telling me. I like the hack. It looks close enough for me. :smile:
 
I know it's not real SSS. I was asking you what the fake method was.
You're not quite right in your thinking. Laa-Yosh wasn't describing a hack to simulate real SSS. He was describing a technique for that look, which doesn't involve the principles of SSS at all. Subsurface scattering is relevant in rendering translucent volumes. Light enters and bounces around inside the volume, with some bouncing back out the illuminated side, and some passing out the back where the volume is thin. SSS is impossible in realtime but you can use some hacks to simulate it, as you know.

Where a volume is negligible, such as paper or a thin bat's wing, you don't need to calculate the internal illumination because there pretty much is none. The light enters one side and a portion comes out the other. You can thus just use basic translucency illumination without any SSS-type hacks that model volume interactions. To get a realistic look, you'd use a blurry texture so the translucent illumination has soft detail hiding bumps etc. That's it.
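A minimal sketch of that thin-membrane translucency in Python (the colours and vectors are hypothetical, just to show the shape of the technique): ordinary diffuse on the lit side plus a transmitted term on the far side, with no volume tracked at all:

```python
import numpy as np

def thin_translucency(normal, light_dir, albedo, trans_color):
    """Two-sided lighting for a negligible-volume membrane:
    ordinary diffuse on the lit side, plus light leaking straight
    through and emerging on the far side. No scattering volume is
    simulated; trans_color would come from a blurry texture."""
    front = max(0.0, float(np.dot(normal, light_dir)))    # lit side
    back = max(0.0, float(np.dot(-normal, light_dir)))    # light passing through
    return albedo * front + trans_color * back

albedo = np.array([0.3, 0.25, 0.2])    # surface colour of the membrane
trans = np.array([0.8, 0.35, 0.25])    # soft, reddish transmitted colour

n = np.array([0.0, 0.0, 1.0])
lit_from_front = thin_translucency(n, np.array([0.0, 0.0, 1.0]), albedo, trans)
lit_from_behind = thin_translucency(n, np.array([0.0, 0.0, -1.0]), albedo, trans)
```

Only one extra dot product and one texture fetch over plain diffuse, which is the whole point: the look without any of the SSS machinery.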

It's wrong to ascribe the term SSS to the process, as that implies complexity that just isn't present. It's very simple stuff. The only reason you don't see it more often is that there aren't many situations with thin translucent objects. Perhaps if there were more dragon and bat games, you would ;)
 
What confuses me is why dragons' wings are considered to be thin enough to allow so much light through, almost solely on the basis of bats having similar wings (but of course being much smaller, and so the wings are far thinner). Wouldn't these dragons' wings have to be pretty thick, especially if they are expected to withstand the sheer quantity of arrows being fired at them and combat with other dragons?
 
I'm not sure I agree with you guys; you don't need a thick surface to have subsurface scattering effects.
I'm sure my little finger is way thinner than any dragon skin out there (wanna test? :) ), though it can generate nice SSS effects... ;)
Dunno how they implemented that effect on their dragons, but one could do it via SSS, no doubt about that.
 
I'm sure my little finger is way thinner than any dragon skin out there.
Did you get an anvil dropped on your hand some time ago? :D

The nature of the wing is membranous. Even if the minimum thickness is a cm, relative to the scale of the rest of the dragon it's membranous, and the results of SSS would be comparable to those of a bat's wing: negligible volume. It'd be daft to spend considerable resources on SSS solutions when the same effect can be attained with minimal effort via simple translucency, the same as jungle foliage. If they are using an SSS hack, it's not obvious in the final rendering, so one questions why they'd choose that route!
 
Did you get an anvil dropped on your hand some time ago? :D
LOL! :)
Anyway... have you ever tried to get your fingers very close to a bright object? ;)

The nature of the wing is membranous. Even if the minimum thickness is a cm, relative to the scale of the rest of the dragon it's membranous, and the results of SSS would be comparable to those of a bat's wing: negligible volume.
It does not matter; what matters is what happens to the light that penetrates into it, and it seems to me that a membranous object is a perfect candidate for SSS. As I already wrote, I'm not saying that they used SSS or that they should have used it; I'm questioning the notion that such a wing wouldn't be affected by the same class of phenomena that cause SSS.
 