Best way to get the most Realistic Surfaces?

You guys are typical engineers, getting carried away in the wrong direction with this BRDF stuff again :))

Let's assume that we could easily measure the BRDF of anything - would this really be a solution? In-game lighting conditions will probably have nothing to do with the real world, so the BRDF would give totally different results than expected. And there would be no way for the artists to easily adjust it for these different conditions... not to mention the art direction and conceptual design of the production. The only way to adjust it would be to change the real-world source of the BRDF, which is awkward.

I repeat it: artists (especially guys less technical than me, as I'm mostly doing modeling and rigging) need solid and user friendly tools, fast visual feedback, and not some technological wonder that they can't understand or tweak. See the material editor in UE3 - it seems to be based on the 3D apps for offline rendering, and this is the way to go IMHO. Build some Lego bricks and give them a way to combine them, and let their imagination run wild, instead of trying to measure BRDFs and write complicated code for it.

And by the way, the Matrix sequels used measured BRDFs and all kinds of digitizing for the digital doubles - an all-technical approach. As far as I know, most people thought that the CG characters looked and behaved totally unrealistically; they practically fell out of the screen.
Whereas the other CG work - the Zion battle scene, the CG characters in Final Flight of the Osiris, or Gollum in LOTR - was generally accepted by the audience, and all of it was created with "traditional", artist-driven tools and workflows.

I have great respect for technology, for the engineers that create it, and there's a tremendous need for them in the industry. But if we want visuals in the end, then they have to accept that the artists should have the final word on the visuals, and thus the tools should not try to take this role away from them.
 
model

Howdy...
The picture I posted is not photogrammetry, stereo pairing, fringe projection, or BRDF. We don't use any 4D textures. It's a CPV (colour per vertex) model, full point cloud, which translates into about 10 million polys. The detail you see is complete geometry: 100 µm scan data. For each square millimetre we sample 100 points from the surface of the object. From this data you can extract extremely high-res normal, displacement, and bump maps. We don't use any pictures to resolve the detail; with the scanner we can measure fingerprint depth. It's a perfect point-to-object registration for the colour as well.

Generally this high-res data is what everyone wants (at least everyone we've worked with). Most clients have their own way of surfacing it and extracting what they need using Paraform or CySlice. I just wanted to clarify that - I thought from reading your posts that you weren't aware of what I was showing. I'm not sure if I answered everything?

To Laa-Yosh - we did scan Shelob as well as the Mumaks. Some of the faces from the Matrix films were also created with our laser. I wish I could help you out with the free scan, but with that digital camera and Photoshop... why use a scanner? lol, just kidding. We could do a test scan of a piece of the object... but scanning an entire object at high res is a bit more work than just sticking it in front of the laser.
 
Re: model

teeroy said:
To Laa-Yosh - we did scan Shelob as well as the Mumaks. Some of the faces from the Matrix films were also created with our laser. I wish I could help you out with the free scan, but with that digital camera and Photoshop... why use a scanner? lol, just kidding. We could do a test scan of a piece of the object... but scanning an entire object at high res is a bit more work than just sticking it in front of the laser.

Er, I think I wasn't clear enough... I was trying to say that games tend to have stuff in them that does not exist in the real world, like alien creatures or futuristic hardware, and thus one cannot just go to your studio and scan them to get models and textures. So if I were able to get such an alien or a huge battleship to you, I'd consider this a deed great enough to earn me a free scan :)
Now, most movie effects houses can overcome this problem, because they have to build clay maquettes to approve creature designs - so they do have something to scan. But game studios are on far more limited budgets and don't have the appropriate talent (sculptors, workshops) anyway.
 
Re: model

Laa-Yosh said:
teeroy said:
To Laa Yosh - we did scan Shelob as well as the Mumaks. Some of the faces from the Matrix films were also created with our laser. I wish I could help you out with the free scan, but with that digital camera and photoshop...why use a scanner? lol just kidding. We could do a test scan of a piece of the object....but scanning an entire object at high res is a bit more work than just sticking it in front of the laser.

Er, I think I wasn't clear enough... I was trying to say that games tend to have stuff in them that does not exist in the real world, like alien creatures or futuristic hardware, and thus one cannot just go to your studio and scan them to get models and textures. So if I were able to get such an alien or a huge battleship to you, I'd consider this a deed great enough to earn me a free scan :)
Now, most movie effects houses can overcome this problem, because they have to build clay maquettes to approve creature designs - so they do have something to scan. But game studios are on far more limited budgets and don't have the appropriate talent (sculptors, workshops) anyway.
No, but alien worlds and aliens are made up of the same visual source material as the face in the pic he provided. Ships are made out of something that looks like metal, or some kind of organic plastic, etc. Aliens are made out of some type of skin, whether it resembles human, alligator, etc.

You can use his technique to create a large library of very detailed source textures, maps, etc., which can then be applied to your alien world.
 
I think it is an interesting case in point anyway: the best-looking surface has the most geometry. Plain and simple. That's like bump mapping and normal mapping in reverse: they add pseudo-geometry, or at least part of the geometry data that has been removed.

With lots of geometry, most problems with procedural textures disappear as well, as we always have geometry boundaries to hint at which effect is required to skin and illuminate that particular surface.
 
DiGuru said:
I think it is an interesting case in point anyway: the best-looking surface has the most geometry. Plain and simple. That's like bump mapping and normal mapping in reverse: they add pseudo-geometry, or at least part of the geometry data that has been removed.

With lots of geometry, most problems with procedural textures disappear as well, as we always have geometry boundaries to hint at which effect is required to skin and illuminate that particular surface.
There is much more to making something look good than just geometry. Yes, lots of geometry will make something look better, but it's not the only important aspect. You also need to get the light interaction function right. This isn't terribly hard to do for things like metal, plastic, glass, etc. It's very challenging to do for human skin, both because the problem is inherently hard and because we, as humans, are exceedingly sensitive to tiny discrepancies.
 
Laa-Yosh said:
Let's assume that we could easily measure the BRDF of anything - would this really be a solution? In-game lighting conditions will probably have nothing to do with the real world, so the BRDF would give totally different results than expected.
Ideally the light interaction function would be calculated using different positions and colors of light sources, and thus could imitate any single light in the game. Multiple lights would just be a linear superposition, so the only remaining problem is lights that get very, very close to the object, but that's a rare occurrence in any game, so it's not a big deal.
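That linearity can be sketched in a few lines. This is a hypothetical Python example (all names are mine, not from any engine): shading a point under several lights is exactly the sum of shading it under each light separately.

```python
# Hypothetical sketch of light superposition: because the rendering
# equation is linear in the incoming light, per-light responses
# can simply be summed. Names and conventions are illustrative.

def lambert(normal, light_dir, light_color, albedo):
    """Diffuse response of a surface point to a single light."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return [albedo[i] * light_color[i] * n_dot_l for i in range(3)]

def shade(normal, lights, albedo):
    """Total shading is just the sum over the individual lights."""
    total = [0.0, 0.0, 0.0]
    for light_dir, light_color in lights:
        contrib = lambert(normal, light_dir, light_color, albedo)
        total = [t + c for t, c in zip(total, contrib)]
    return total
```

Because the sum distributes, a response measured or precomputed per light can be reused for any combination of lights in a scene.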

The main problem with attempting to do it completely through measurements like this is that you're always going to be making an approximation somewhere. You can't measure things down to individual photons and atoms. The question is whether or not you can get the errors in those approximations down far enough that they shouldn't be visible.

And by the way, the Matrix sequels used measured BRDFs and all kinds of digitizing for the digital doubles - an all-technical approach. As far as I know, most people thought that the CG characters looked and behaved totally unrealistically; they practically fell out of the screen.
Whereas the other CG work - the Zion battle scene, the CG characters in Final Flight of the Osiris, or Gollum in LOTR - was generally accepted by the audience, and all of it was created with "traditional", artist-driven tools and workflows.
The problem was much more that the Matrix sequels were attempting to model people. That's much, much harder to do because we're more sensitive to tiny discrepancies. And, by the way, I disagree. I felt the only obviously CG character I saw in the films was the pilot of one of those mech thingies during the Zion battle scene.
 
Chalnoth said:
There is much more to making something look good than just geometry. Yes, lots of geometry will make something look better, but it's not the only important aspect. You also need to get the light interaction function right. This isn't terribly hard to do for things like metal, plastic, glass, etc. It's very challenging to do for human skin, both because the problem is inherently hard and because we, as humans, are exceedingly sensitive to tiny discrepancies.

Yes, but isn't it that even the light interaction is mostly dependent on correct geometry? Like the sea monster in 3DMark05: it's just grey, but the huge amount of geometry makes it shine.
 
DiGuru said:
Yes, but isn't it that even the light interaction is mostly dependent on correct geometry? Like the sea monster in 3DMark05: it's just grey, but the huge amount of geometry makes it shine.
Isn't that what normal-mapping is for? Vertex counts are only really critical when lighting is based off of per-vertex data, but when the data is per-pixel, vertex counts contribute mostly to the quality of an object's silhouette.
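The point about per-pixel data can be illustrated with a tiny sketch (hypothetical Python; in a real engine this lives in a pixel shader): the diffuse term uses the normal fetched from the normal map rather than the interpolated vertex normal, so the shading detail is independent of the vertex count.

```python
# Illustrative sketch of per-pixel lighting: the mesh can stay coarse
# while a normal map carries the high-frequency shading detail.
# All names here are mine, not from any particular API.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def shade_pixel(map_normal, light_dir):
    """Lambert diffuse using the normal sampled from a normal map,
    not the per-vertex normal - so detail survives low poly counts."""
    n = normalize(map_normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))
```

With this scheme, extra vertices only improve the silhouette; the lighting response comes entirely from the per-pixel normal data.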
 
Ostsol said:
DiGuru said:
Yes, but isn't it that even the light interaction is mostly dependent on correct geometry? Like the sea monster in 3DMark05: it's just grey, but the huge amount of geometry makes it shine.
Isn't that what normal-mapping is for? Vertex counts are only really critical when lighting is based off of per-vertex data, but when the data is per-pixel, vertex counts contribute mostly to the quality of an object's silhouette.

Yes, bump mapping, normal mapping, and displacement mapping (when the hardware supports it) are all there to bring that geometry back, or to fake it.

Current hardware can't handle all that geometry (even if you could upload it all to the GPU), but future hardware could if it were to handle curved polygons, and microtriangles for fragments.
 
DiGuru said:
Yes, but isn't it that even the light interaction is mostly dependent on correct geometry? Like the sea monster in 3DMark05: it's just grey, but the huge amount of geometry makes it shine.
No, not at all. Skin is perhaps the best example of this. No matter how much geometry you give it, if you don't calculate the light interaction properly, it won't look close to real. This is because skin is translucent, and unless you have a decent way of approximating that translucency, it'll look like plastic. Take the picture that Teeroy posted, for example. The first shot is pure geometry. But it's the second shot that looks good, and that's because a good light interaction function was applied. With a simple texture, you could not make the image on the left look nearly as good as the one on the right (at least not in a game, where the light will come from different directions).

Additionally, a metal with the exact same texture as wood won't look anything like wood. And I'm not even talking about the color of the respective surfaces, but rather the reflectivity.

Now, I'm not saying that a large amount of geometry is a bad thing. High geometry is definitely required for something to look very good, but so is a good light interaction function. Here's an example of things that can go into a light interaction function:

1. Reflection response: in general, the reflectivity of a surface depends upon the angle of the light hitting that surface. For smooth surfaces, this is known as the Fresnel effect, and can, in principle, be computed exactly. For irregular surfaces it's obviously going to get more complex (pretty much only very shiny objects, such as glass, metals, or a waxed car are well-described by the Fresnel effect).

2. Dullness of the surface: most surfaces don't reflect much at all. The lighting is more diffuse, though not perfectly so. The variable specular exponent in Blinn lighting approximates this effect by simply spreading out the reflected light.

3. Translucency: a lot of living tissue actually lets light pass through it in reasonable amounts. Shine a flashlight through your fingers as an example. On a graphics card, one may choose to model this with a volumetric fog technique, but making the intensity dropoff both color-dependent and based on an exponential function (or an approximation thereof).

4. Subsurface scattering: another issue with translucent objects is that a good amount of the light we see from them is actually bounced not off of atoms on the surface, but off of atoms below the surface. This is particularly important to model in materials that are not translucent enough to transmit much light through the entire object, such as skin.

...and lastly, you can only model so much geometry. Resolution places limits upon how much detail you can resolve simply by increasing the amount of geometry. Once you get to the point of having pixel-sized polygons (or, in the case of FSAA, sample-sized polygons), you can't model details any smaller, and will need to resort to other techniques to bring out the appropriate aspects of the surface you are modelling.

An example of a situation where this would be a limit is brushed metal. If you imagine a metallic object that has been brushed with some sort of very dense wire brush, the brushing will change the light interaction significantly, but it won't change the object's modeled geometry, for the simple reason that you couldn't realistically model the microscopic deviations from flatness.
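For what it's worth, items 1-3 above all have cheap, standard approximations. Here's a hedged Python sketch (Schlick's approximation to the Fresnel term, the Blinn specular exponent, and a Beer-Lambert-style falloff for translucency); the exact formulas any given engine uses will differ:

```python
# Sketches of three common light-interaction terms. These are
# textbook approximations, not any specific engine's shading code.
import math

def fresnel_schlick(cos_theta, f0):
    """Item 1: Schlick's approximation to Fresnel reflectance,
    rising from the base reflectance f0 toward 1.0 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def blinn_specular(n_dot_h, exponent):
    """Item 2: the specular exponent spreads the highlight; a low
    exponent gives a dull surface, a high one a shiny surface."""
    return max(0.0, n_dot_h) ** exponent

def transmittance(thickness, absorption):
    """Item 3: per-channel exponential intensity dropoff through a
    translucent material (Beer-Lambert style), one crude way to
    approximate the color-dependent falloff described above."""
    return [math.exp(-a * thickness) for a in absorption]
```

Note how Schlick's term goes to 1.0 at grazing incidence regardless of f0, which is why even dull materials look mirror-like when viewed edge-on.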
 
I just wanna say wow to the model you posted, teeroy. It confirms the saying that you know you have a great model when it looks the bomb without color textures. It's one of the most realistic models I've seen; the humanity just emanates from the render.

Dunno if you have posted it in the CGtalk thread, but you definitely should.
 
Once I played Unreal 1 with a modified OpenGL engine and S3TC pre-compressed textures from UT CD2, using 16xAF. Walls with bricks looked like they do in reality, but when you inspected them very closely they were actually totally flat, without any bumpmap effect. So the first thing needed to achieve good surfaces is texture resolution. If it's interpolated just a bit, it loses its natural look. Nothing is interpolated in real life (for our eyes :p ).
The second thing is the shading itself - I mean those very soft shadows that add depth to the scene. The next thing is how the light interacts with the object: how it passes through materials, how it reflects (including reflection resolution), and reflections in materials that do not reflect 100%. It's usually very hard to produce a low-reflection material; at least I haven't seen any.
The only engine that has looked very near reality was Unreal Engine 3.0.
 
The problem for greater realism isn't just so much shaders per se, but global illumination. A shader can only deal with the light inputs it gets, and if those are poorly modeled, the result will still be craptastic.
 
RejZoR said:
Once I played Unreal 1 with a modified OpenGL engine and S3TC pre-compressed textures from UT CD2, using 16xAF. Walls with bricks looked like they do in reality, but when you inspected them very closely they were actually totally flat, without any bumpmap effect. So the first thing needed to achieve good surfaces is texture resolution.
This is true. Very high texture resolution is important. But often there are performance constraints on the texture resolution you can realistically expect, when you factor in that for better lighting equations, you really need more textures per surface.

If it's interpolated just a bit, it loses its natural look. Nothing is interpolated in real life (for our eyes :p ).
Well, even though the essence of what you said is correct, I feel I have to issue a statement on semantics here...

All textures in-game are interpolated.

We do interpolate things in our eyes (well, our brains do the interpolation, obviously...), otherwise we'd notice the blind spot in our eyes.
 
DemoCoder said:
The problem for greater realism isn't just so much shaders per se, but global illumination. A shader can only deal with the light inputs it gets, and if those are poorly modeled, the result will still be craptastic.
Yes, eventually we'll have to model realistic shadows. One has to wonder just how challenging this is going to be. I do wonder if Monte Carlo global illumination will ever be used in realtime? From what I've used MCMC algorithms for in scientific apps, well, let's just say that "far from realtime" would be an understatement.
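To make the cost concrete, here's a minimal Monte Carlo sketch in Python (a toy, not tied to any renderer): estimating the hemispherical cosine integral, whose exact value is pi. The statistical error shrinks only as 1/sqrt(N), which is the root of why realtime use looks so distant.

```python
# Toy Monte Carlo estimate of the integral of cos(theta) over the
# upper hemisphere (exact value: pi), via uniform hemisphere sampling
# with pdf = 1 / (2*pi). Purely illustrative of the convergence cost.
import math
import random

def sample_hemisphere(rng):
    """Uniform random direction on the upper hemisphere (z >= 0)."""
    z = rng.random()                      # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def irradiance_estimate(n_samples, seed=0):
    """Average cos(theta) over random directions, divided by the pdf."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        d = sample_hemisphere(rng)
        total += d[2]                     # cos(theta) for normal = +z
    return total * 2.0 * math.pi / n_samples
```

Even this single, trivial integral needs thousands of samples for a stable answer; a full global illumination solve repeats something like it per pixel and per bounce, every frame.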
 