Best way to get the most Realistic Surfaces?

With the huge push towards shaders, I am wondering:

What are the best techniques to get the most realistic-looking materials?

Examples

Wood that looks like Wood

Concrete that looks like Concrete

Rusted pipes that look like rusted pipes

Mud that looks like Mud

So far all I have seen shaders used for is to make things look all shiny and sparkly, even when the natural object would never be shiny and sparkly. The exception is some nice water effects with shaders. It seems to me that ultra-high-resolution textures are still the basis for making the most realistic-looking world objects, perhaps combined with bump mapping, normal maps or some type of shader-based effect.

-Can someone point to or give an example where the most realistic world objects can be made without the use of textures at all?

-Can someone give me an example where shaders are used to actually enhance the NATURAL appearance of a world object other than water?

{given that I am correct, which I may not be}
-Can someone give me a reason why texture processing is taking a back seat to shader processing in GPUs, despite the fact that textures are still the most important basis for realistic-looking objects?
 
As soon as artists are given more complete control over shader generation without the help of a programmer (ie: either programming the shader themselves or -- more preferably -- by using a tool to generate shaders), visual quality should vastly improve.
 
Most things look the way they do because of how they reflect light, which is why a BRDF lighting model would really help out.

As long as we continue to use generic Phong lighting, expect most materials to look like plastic. Blinn shading will help some things look more natural, but of course it won't do everything.
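
To make the plastic point concrete: the only difference between the two models is the specular term. A toy Python sketch (vector math written out by hand; the shininess exponent is arbitrary):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_spec(l, v, n, shininess):
    # Phong: reflect L about N, compare the reflection with the view vector.
    r = normalize(tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l)))
    return max(dot(r, v), 0.0) ** shininess

def blinn_spec(l, v, n, shininess):
    # Blinn: use the half-vector between light and view directions instead.
    h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))
    return max(dot(n, h), 0.0) ** shininess
```

At grazing view angles the Blinn highlight stays wider and elongates, which tends to read as less "plastic" on rough materials; Phong's highlight falls off much faster there.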
 
Don't know if this is what you're looking for, but I've always liked this guy's surface/materials work.

http://neilblevins.com/cg_education/satin/satin.htm

http://neilblevins.com/cg_education/glass_chrome_pottery/glass_chrome_pottery.htm

http://neilblevins.com/cg_education/complex_mats_falloff/complex_mats_falloff.htm

The last one does use a color (texture) map, but as you can see it's only ONE of MANY important components.

http://www.cgtalk.com/showthread.php?t=145071

(cool thread on skin shading)

A color map will only go so far. If a 3D (non-realtime) artist only had access to straight-up color maps, with no shader controls (even simple stuff like specularity) and no other maps like bump or specular maps, I'm sure he'd be one pissed-off individual.

It's very possible to make convincing surfaces without a texture, but you can't do the same with a texture and no shaders.

If you look at really good CG files, you'll find that the texture file is generally very basic (with the exception of naturally textured materials like trees). The real work is in the shader, the lighting and the geometry.

The problem with shaders in real-time situations is the gee-whiz factor: if we can make something shiny, let's make it REALLY shiny! Once these features are standard and more artists become familiar with them, common sense will prevail once more.

cheers!
steve
 
chumps said:
http://www.cgtalk.com/showthread.php?t=145071

(cool thread on skin shading)
Definitely cool! Some of the results look so good, they're creepy -- especially the ones with no eyes. ;)
 
Interesting reading how artists look at doing lighting for skin.

Someone really needs to define a more general lighting equation that can be modulated easily, with parameters that make sense to artists.
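
For illustration only (all parameter names are made up), the kind of thing I mean: one equation where the artist only touches knobs like gloss and highlight strength, and the mapping to the underlying math stays hidden:

```python
def shade(base_color, gloss, spec_strength, n_dot_l, n_dot_h):
    # Artist-facing knobs (hypothetical): gloss in 0..1 becomes a Blinn
    # exponent, spec_strength scales the highlight brightness.
    diffuse = max(n_dot_l, 0.0)
    exponent = 2.0 + gloss * 254.0      # gloss 0 = broad, gloss 1 = tight
    specular = spec_strength * max(n_dot_h, 0.0) ** exponent
    return tuple(c * diffuse + specular for c in base_color)
```

A slider UI over parameters like these would at least beat editing raw exponents, though it's still the same old Lambert + Blinn underneath.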

Edit: Also, I like that satin and glass (the chrome and pottery didn't do much for me, nor did that last one).

Also I have to say I like the look of the velvet texture at the link below created with a BRDF generated from microgeometry.

http://graphics.stanford.edu/~smr/cs348c/surveypaper.html
 
Hellbinder said:
-Can someone point to or give an example where the most realistic world objects can be made without the use of textures at all?
Well, you can always use procedural textures (which, obviously, aren't really textures at all), but these only work on some specific surfaces (marble, wood, typically just surfaces that have a blend of chaotic and patterned behavior).
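
For example, the classic "rings plus noise" trick for wood; a minimal Python sketch (the noise function and all constants are made up, real implementations use proper Perlin noise):

```python
import math, random

def value_noise_1d(x, seed=0):
    # Cheap deterministic 1-D value noise: hash lattice points, then
    # smoothstep-interpolate between the two neighbouring values.
    def lattice(i):
        random.seed(i * 1013 + seed)
        return random.random()
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)             # smoothstep
    return lattice(i) * (1 - t) + lattice(i + 1) * t

def wood(x, y, rings=8.0, grain=0.3):
    # Concentric rings perturbed by noise: the classic procedural-wood trick.
    r = math.sqrt(x * x + y * y)
    r += grain * value_noise_1d(r * 4.0)
    return (math.sin(r * rings * math.pi) + 1.0) / 2.0   # ring intensity 0..1
```

In a real shader this would typically be evaluated in 3D object space, so the rings cut through the geometry like actual grain.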

-Can someone give me an example where Shaders are used to actually enhance the NATURAL apperance of a World object other than water?
BRDF's. nVidia's got tons of demos on various types of surfaces you may want to have in-game, besides just shiny stuff. ATI probably has a number of 'em too.

The main problem that we're facing right now is that developers have only just gotten a hold of shader technology, and they're going a little overboard on some of the aspects. Within 1-2 years we should see a few games that honestly try to model different surfaces properly. There's already been tons of research on how various types of surfaces actually interact with light, and even some techniques to figure out how surfaces interact with light by taking pictures of objects, so games just need to tap into this research and apply it to make their art look better.

-Can someone Give me a reason why Texture processing is taking a back seat to shader processing in GPU's Dispite the fact that Textures are still the most important basis for realistic looking objects?
Back seat? Not really. In the NV4x and R3xx the load balancing is roughly 1:1 in texture reads to math ops (typically this is more of a worst-case scenario for either architecture, though). The only problem is that this is only true when simple bilinear filtering is enabled. Texturing performance will obviously drop significantly with trilinear filtering and anisotropic filtering enabled.
 
Rough materials like wood, concrete, stone etc. need very good textures for color and bump channels; and maybe a more advanced shading model than Lambert.

Metals can either use a complex shading model, like Ward's anisotropic or Cook-Torrance; or simply a good set of color, bump and specular maps with a fine-tuned specular highlight (size, intensity); or replace the specular with (environment) reflection maps. It somewhat depends on what kind of metal you want - rusted, polished, and so on. It's good to pay attention to the Fresnel effect - surfaces facing towards the viewer tend to be less reflective/shiny than surfaces seen at a grazing angle.
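
On the Fresnel point, Schlick's approximation is the usual cheap version; a small sketch (the f0 value for a water-like dielectric is from memory, roughly 0.02):

```python
def schlick_fresnel(cos_theta, f0):
    # Schlick's approximation: reflectance rises from f0 (viewed head-on)
    # toward 1.0 as the view grazes the surface (cos_theta -> 0).
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

So a water-like surface viewed head-on reflects only a couple of percent, but near-grazing it approaches a mirror; metals start from a much higher f0, which is part of why they read so differently.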

Water and glass are somewhat similar to metals, with the added complexity of refraction. Textures are the key here, and with water, the dynamic part which is usually handled with (vertex) shaders.

Organic stuff is going to be the hardest to get right. Apart from very complex surface patterns that require good textures (again), it also needs special translucency effects, because most of the living tissues are tiny cells with liquid in them.
Skin usually needs all the features of specularity above the translucency/SSS shading.
Fur and hair need geometry to look right - a MASSIVE amount of geometry. Think tens of thousands of little tubes, and that with realistic motion. Hair is one of the heavily researched areas in offline rendering at the moment; see the Pixar guys freaking out over the women in The Incredibles ;).

Fire, smoke, dust, and other atmospherics usually only need more complexity and trickery. You can find pretty things in some of today's games already; cool dust in some racing games, nice clouds in FS and so on. Movie effects usually rely on textured particles just as games do (like the Balrog in LOTR); however in some cases, true volumetric shading is required, which might not be easy to replicate in realtime apps.

Most of the time, it's about the artist. A simple Blinn shading model and a few extra layers of textures can get you very, very far.
Also note that reflective surfaces are heavily influenced by their environments. For example, a car looks very different at night with many spotlights; on a clear summer day; or in a photo-shoot environment with lots of white walls surrounding it.


So to sum it up...
We've already seen solutions for most of the above; we only have to wait until hardware gets fast enough to allow longer shaders for each pixel so that more features can be implemented in each material. Some other fields need heavy research to develop realtime solutions; and some problems may need many years and a lot faster hardware to overcome.
 
Oh, and before I forget - lighting is just as important as the shaders. Two spotlights are usually not enough to bring out surface qualities... But adding lots of light sources will probably not be a viable solution for games, because 1. they require a lot more performance, and 2. they can require a lot of artist time to place and fine-tune for an interactive environment.

Realtime global illumination with whatever solution (Monte Carlo or photon mapping or whatever) might be good, but it has the disadvantage of a lack of artistic control. I have yet to see proof that it can be done fast enough, either - all those cycles are probably better spent on something else.

Half-Life 2 has an interesting trick which also sounds like the very clever texture-based lighting in MotoGP. The general concept is basically using a few hand-placed key lights, and generating some kind of static lighting data from the (already built) environment for secondary (fill) lighting. Combine it with precalculated ambient occlusion, and we have fake radiosity :) I personally think that this is a better direction than radiosity, because it's fast and easy and 'art directable' - but we'll see how it works out in practice. And Doom 3 has proven that a fully dynamic lighting model is cool, but rarely needed...
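
The combination could be sketched like this (a completely hypothetical structure, just to show how the terms stack):

```python
def fake_radiosity(albedo, key_terms, baked_fill, ao):
    # A few dynamic key lights (their N.L terms) plus a precalculated
    # static fill term, with baked ambient occlusion darkening the fill.
    direct = sum(max(t, 0.0) for t in key_terms)
    return tuple(c * (direct + baked_fill * ao) for c in albedo)
```

The appeal is that the fill and AO are just texture fetches at runtime, while the artist keeps direct control over the handful of key lights.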
 
Chalnoth said:
Well, you can always use procedural textures (which, obviously, aren't really textures at all), but these only work on some specific surfaces (marble, wood, typically just surfaces that have a blend of chaotic and patterned behavior).
Actually, the vast majority of surfaces at their basic detail level, especially non-artificial, can be described as such.
 
no_way said:
Actually, the vast majority of surfaces at their basic detail level, especially non-artificial, can be described as such.
Except textures are often used to encode much more complex information than just how one type of surface looks. That is to say, imagine a texture of a soldier. His uniform may contain a bandolier, some rank insignia, metal buttons, etc. That is, it can have much more than just cloth. And you can model all of this with just textures. Or, you could equivalently model the different materials on the soldier with geometry and appropriate shaders. I think that for some time to come, it'll make more sense to just use textures (note: it's also definitely possible to use one or more textures that give material properties, such that even if modeled with nothing but textures, that metal button will still look metallic, and cloth still look like cloth).
 
Once again, artistic control is important with textures, and procedurals aren't really good at this. Then there are many larger details that aren't chaotic, like color changes of the skin on the face, or the varying effects of wear and tear, that would require a large texture map to mask out parts of the object from the procedural layers - but if you already pay the memory for a bitmap, there's no reason to spend time on procedural approaches.
Also, most of the non-artificial surfaces (rocks, trees, skin etc.) have many different "patterns" and would require many layers of procedurals that'd soon make them overly complex and slow to calculate.
All in all, procedurals aren't going to solve texturing problems, although they are a good tool. But it's better to use them to generate layers for a bitmap texture that you'll finish in Photoshop :)
 
Chalnoth said:
(note: it's also definitely possible to use one or more textures that give material properties, such that even if modeled with nothing but textures, that metal button will still look metallic, and cloth still look like cloth).
Well, for realism, that would be the goal. IMO the standard for level of detail in current games has gone up enough so that metallic buttons _should_ actually look like metallic buttons, not just like differently colored patches on uniform.

Once again, artistic control is important with textures, and procedurals aren't really good at this
Depends on the implementation and content editing tools. A good engine should be capable of seamlessly compositing procedurally generated content and artist-made.

Then there are many larger details that aren't chaotic, like color changes of the skin on the face, or the varying effects of wear and tear
My approach would still be to encode the "material tweaks", such as varying skin tone and wear of artificial materials, into the mesh.

Also, most of the non-artificial surfaces (rocks, trees, skin etc.) have many different "patterns" and would require many layers of procedurals that'd soon make them overly complex and slow to calculate.
There are always tricks like caching, lookups and LOD to partly overcome speed problems. But true, to get good procedural materials you need quite a few functions hooked up. Even the simplest realistic-looking DarkTree materials usually take at least three or four blocks, each of which can already be quite complex mathematically.

All in all, procedurals aren't going to solve texturing problems
IMO, to realize their benefits, procedurals should always be looked at as complete 3D materials. As simple textures, they really don't have much use.
 
no_way said:
Depends on the implementation and content editing tools. A good engine should be capable of seamlessly compositing procedurally generated content and artist-made.
Sure, but the problem is that the only really compelling reason to go for procedural content is performance. And today, memory bandwidth demands aren't close to high enough to offset the additional processing power that procedural textures demand.

Then there are many larger details that aren't chaotic, like color changes of the skin on the face, or the varying effects of wear and tear
My approach would still be to encode the "material tweaks", such as varying skin tone and wear of artificial materials, into the mesh.
The problem with this approach is that you'll need to have multiple materials in the shader, and branch between them. This would be prohibitive performance-wise if you have more than about two materials, at least for current hardware.
 
no_way said:
Once again, artistic control is important with textures, and procedurals aren't really good at this
Depends on the implementation and content editing tools. A good engine should be capable of seamlessly compositing procedurally generated content and artist-made.

The overall problem is that adjusting numbers or sliders to change an image is totally nonintuitive. An artist prefers to go in with a brush and paint it as he sees fit.
I have to add that such things happen in movie effects all the time. Many many things are still "painted in", sometimes with animated tools tracking certain spots on the image, sometimes with frame-by-frame manual correction; things like shadows, highlights, color changes that could be changed by adjusting parameters, lighting, textures or whatever - but most artists prefer a hands-on approach.
So, this approach isn't easy to fit into procedural textures, although there is some research on the subject.
 
3d scanning

Not sure if you've tried 3D scanning. I work for a high-res 3D scanning company, XYZ RGB, and we extract all our maps (normals, bump, colour etc.) from the scan data, including skin, wood, brick etc. I can say that it's the best data I've ever seen (trying to be as unbiased as possible). A lot of gaming companies/VFX companies are using our scanner for that very reason.

I apologize if this seems like a shameless plug, but the topic caught my eye.
take a look at this.
adrian.jpg
 
I've heard good things about XYZRGB... you guys did Shelob's scans for ROTK, right? It certainly is a viable method for some things, however...
1. lots of stuff can be done with a good digital camera and some Photoshop, which altogether is cheaper
2. if I can bring you a Covenant Elite warrior and a terran battleship (both from Halo), will you scan it for free? ;)))
 
Re: 3d scanning

teeroy said:
Not sure if you've tried 3D scanning. I work for a high-res 3D scanning company, XYZ RGB, and we extract all our maps (normals, bump, colour etc.) from the scan data, including skin, wood, brick etc. I can say that it's the best data I've ever seen (trying to be as unbiased as possible). A lot of gaming companies/VFX companies are using our scanner for that very reason.
Right, but the problem is that to do this well, I believe, you would need to take multiple pictures of an object with the light source at different places. This would mean that for every point on your object, you would have a 2D texture representing lighting interaction, or a 4D texture for the entire object. This sort of map will be essentially perfect for any number of light sources, as long as they are each sufficiently far away from the object.

Typically when you want to reduce the amount of data required, you use some amount of the physics of the interaction of light with the given surface. I doubt you could do 3D scanning in general and get less than a 4D texture if you don't do this.
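
The storage numbers back this up; a quick back-of-the-envelope calculation (resolutions picked arbitrarily):

```python
def light_field_bytes(texels, light_dirs_u, light_dirs_v, bytes_per_sample=3):
    # One RGB sample per (texel, incoming light direction): a 2D table of
    # light directions for every texel, i.e. a 4D texture overall.
    return texels * light_dirs_u * light_dirs_v * bytes_per_sample

# A modest 256x256 patch sampled over a 32x32 grid of light directions:
size = light_field_bytes(256 * 256, 32, 32)   # 201,326,592 bytes, ~192 MiB
```

And that is before even considering the outgoing direction, which would square the direction-sampling cost again.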

Thus, for games, I claim it would be more efficient to have some preset materials, and allow artists to modify parameters. Now, as Laa-Yosh pointed out, artists may not wish to do this. So, perhaps artists would rather have a system where, in the simplest case, the artist only has to edit the base texture (the normal map would be generated from geometry), tweaking it until it looks right with the chosen shader applied.

In a slightly more complex case, the artist would create an additional "mask" texture that would set different "shaders" to run on different portions of the texture (these different shaders may in fact be the same shader with different parameters: this would be up to the developer tools to decide).
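
A sketch of that case, with the mask just indexing a table of parameter sets for one shared shader (all names and values are invented):

```python
def blinn_highlight(n_dot_h, gloss, spec):
    # Shared shader: gloss in 0..1 maps to a Blinn exponent.
    exponent = 2.0 + gloss * 254.0
    return spec * max(n_dot_h, 0.0) ** exponent

# Hypothetical per-region parameter sets the mask texture selects between.
MATERIALS = {
    0: {"gloss": 0.1, "spec": 0.05},   # cloth: broad, dim highlight
    1: {"gloss": 0.9, "spec": 0.80},   # metal button: tight, bright one
}

def shade_texel(mask_value, n_dot_h):
    # One shader, different parameters per mask region.
    m = MATERIALS[mask_value]
    return blinn_highlight(n_dot_h, m["gloss"], m["spec"])
```

On hardware without cheap branching, a similar effect can be had by storing the parameters themselves in textures and letting filtering blend them at region borders.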
 
Hehe, well, ideally they can take those 2D textures for light representation (in other words, a BRDF) and determine a subset that can be interpolated between for the entire object. An example being that most of the skin will be the same, except for places like ears where the subsurface scattering plays a larger part (isn't it nice that subsurface scattering can be encoded into a BRDF?).

So teeroy can you extract a BRDF easily for each texel element? If you can that would be extremely nice.
 
Cryect said:
Hehe, well, ideally they can take those 2D textures for light representation (in other words, a BRDF) and determine a subset that can be interpolated between for the entire object. An example being that most of the skin will be the same, except for places like ears where the subsurface scattering plays a larger part (isn't it nice that subsurface scattering can be encoded into a BRDF?).
That is true, I suppose. Many BRDF's could be highly compressible. It's more complex objects, particularly translucent ones, that won't be terribly compressible. That is to say, skin may look smooth, but the subsurface scattering and translucency of the skin will make the function change significantly from place to place on the body.

Oh, and I guess I do have to say that I forgot one effect of the lighting interaction: the outgoing distribution of light. I suppose if you want to be completely general in the lighting interaction, you really should use a 4D texture for each point on the surface (two dimensions store the incoming light rays, the other two store the distribution of outgoing light rays). So it's really not a simple thing in general.
 