Could PCI-Express enable "animated lightmaps"?

No, it's about suspension of disbelief. In any game, you have to simply accept some aspects that are obviously unrealistic, and if the game is consistent elsewhere, it becomes easier to retain that suspension of disbelief.

Inconsistencies tend to disengage this suspension of disbelief, preventing the player from properly experiencing the game.
 
Chalnoth said:
No, it's about suspension of disbelief. In any game, you have to simply accept some aspects that are obviously unrealistic, and if the game is consistent elsewhere, it becomes easier to retain that suspension of disbelief.

Inconsistencies tend to disengage this suspension of disbelief, preventing the player from properly experiencing the game.

This is still a subjective matter. First of all, it depends on the game itself, of course; some games have very obvious unrealistic parts, while in others they are very subtle. I suppose bump mapping is a very subtle form of unrealistic rendering: the silhouettes are wrong, but I don't think anyone was ever bothered enough by it to disable bump mapping, since the alternative is far less realistic, although it would be consistent.

Secondly, I think it depends on the person playing the game whether the inconsistencies actually hamper his or her experience of the game or not.
 
Daliden said:
However, on a slightly different note, I'm a bit ambivalent about "unified" solutions. I mean, what's wrong with pre-computing as much as possible? After all, the levels must be carefully designed (for playability purposes) in the first place -- why not "fake" as much as you can? It might mean more work for modders, but it would almost certainly mean an end result that is both prettier and faster than anything with a unified solution.

I agree with that too. Unified lighting isn't so much a goal in itself, as I see it. The day computers are fast enough for it, it may be an option for the lazy, sort of, just as most of us don't optimize code we don't need to. But we're not quite there yet, nor will we be anytime soon. I don't see lightmaps going away, for instance, not in the near future anyway. Static geometry will certainly remain, as will static lights, and therefore lightmaps will survive for years to come. There's no good reason to use, say, shadow mapping instead of lightmapping where lightmapping works just as well, or probably even better.
 
I'm pretty sure the general idea of a 3D game world is to reflect reality, in terms of the physics of both lighting and mechanics, as accurately as possible. Therefore, unifying as much as possible in a 3D engine should be a paramount priority. It's tempting to sacrifice unification in favour of new styles of rendering that give us a brilliant result (such as the facial animation and level of detail in Half-Life 2, which sacrifices a unified lighting model to a sometimes embarrassing extent), but that can be passed off as perhaps naive ambition.
However, I hope developers do not converge on this unification issue any time soon, or we'll have overlapping technology development, with every engine looking more or less the same (since they'll all be going for the same level of unification possible in each hardware generation). As naive as it can sometimes seem, it's good that we have several different directions occurring simultaneously with current engines, so that (ironically enough) engine technology develops uniformly. Perhaps one day we may unify on both this issue and our rendering direction, which will no doubt be photorealism and, essentially, the end of engine development.
 
Something that I haven't seen mentioned yet is that Doom3 actually unifies the 'wrong' thing, namely the surface shading.
If you look at what they are doing in Hollywood with offline rendering, you will see thousands of shaders being used, basically one shader for each type of surface. This makes sense, because every surface responds to light differently, and trying to unify that would mean you end up with a huge ubershader with tons of parameters.
Doom3 just gets away with it because everything is metal and the shading is still simple (no reflections or refractions or anisotropic light, or anything). But this is not the direction we should be heading in for more realistic lighting, at least not in the near term, because an ubershader at a decent level of realism is not going to fit in hardware anytime soon.

We have already seen that UE3.0 is not taking this route, though; they use different shaders to achieve different surfaces. Half-Life 2 is doing pretty much the same (and has nice things like water, fire and refractions), but on a smaller scale.
 
Something that I haven't seen mentioned yet is that Doom3 actually unifies the 'wrong' thing, namely the surface shading.
Except that most surfaces could be modelled effectively just by editing a few parameters in the same shader. Unless you want something completely different (e.g. reflection, refraction, distortion, etc.), one shader with adjustable parameters and/or different textures describing the material should be enough, most of the time.

So, for performance reasons (it's easier for the game developer and/or IHVs to optimize for fewer shaders), it makes sense to try to use one shader as much as possible. The only reason you wouldn't is if significant amounts of math could be skipped by customizing each shader on a per-surface basis (then again, branching could be used to take care of that problem, so even this may not be an issue).
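To make this concrete, here is a minimal C++ sketch (hypothetical names and weights, not any engine's actual code) of a single parameterized surface shader in which branching skips the math for terms whose parameters are zero:

Code:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One "ubershader": per-surface behaviour comes from parameters alone.
struct MaterialParams {
    float diffuse;    // weight of the Lambert term (0 = skip it)
    float specular;   // weight of the specular term (0 = skip it)
    float shininess;  // specular exponent
};

static float shade(const MaterialParams& m, Vec3 n, Vec3 l, Vec3 h)
{
    float c = 0.0f;
    if (m.diffuse > 0.0f)    // branch skips unused math entirely
        c += m.diffuse * std::fmax(dot(n, l), 0.0f);
    if (m.specular > 0.0f)
        c += m.specular * std::pow(std::fmax(dot(n, h), 0.0f), m.shininess);
    return c;
}

int main()
{
    MaterialParams matte  = { 1.0f, 0.0f,  1.0f };  // no specular work done
    MaterialParams glossy = { 0.8f, 0.6f, 32.0f };
    Vec3 n = {0, 1, 0}, l = {0, 1, 0}, h = {0, 1, 0};
    std::printf("matte: %f, glossy: %f\n",
                shade(matte, n, l, h), shade(glossy, n, l, h));
    return 0;
}

The same idea maps onto shader hardware either as dynamic branching or as compile-time specialization, which comes up again further down the thread.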

Granted, Doom 3's unified surface shading (which, by the way, isn't completely unified: there is a material system that allows arbitrary shaders to be applied to any surface) is simplistic, but it really doesn't need to be.
 
Chalnoth said:
Unless you want something completely different (e.g. reflection, refraction, distortion, etc.), one shader with adjustable parameters and/or different textures describing the material should be enough, most of the time.

But that is the entire point. Look around you and see that most surfaces reflect in one way or another; then there are things like subsurface scattering, anisotropic light reflections, etc.
So if the future is more realism, we will see more shaders. (As I already said, you could create parameters for everything in an ubershader, but that is not a realistic goal anytime soon, given hardware limits. You also can't, or don't want to, use textures/parameters for everything, because procedural routines may look better or be faster.)

In fact, if you only want to use one shader on everything, you might as well give up on programmable hardware and optimize that one shader in the pipeline again.
 
Scali said:
But that is the entire point. Look around you and see that most surfaces reflect in one way or another; then there are things like subsurface scattering, anisotropic light reflections, etc.
Most surfaces in my room aren't reflecting much. Most could easily be approximated with decent specular lighting. Even the CD cases in front of me could be done well with simple specular (since most reflections are so smudged out or faint, specular lighting works quite well, unless you're talking about a waxed car, clean glass, water, or something similar).

And, of course, subsurface scattering and anisotropic light reflections would require different shaders (since not many surfaces have much subsurface scattering to speak of, nor do many have much in the way of anisotropic reflections), or at least, different pieces of shaders. Much of the shader could well be the same.

But this is little more than a "mix and match" style of shader. You don't need to develop entirely new shaders for most objects, and can simply link pre-programmed pieces together. I don't see this as being very different than the idea of a unified shader (since the only difference is in implementation, and thus performance characteristics, not in art development). I believe this is, in fact, what UE3 does.
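As an illustration of that mix-and-match idea, here is a small C++ sketch that assembles one shader from pre-programmed pieces at load time; the snippet strings and buildShader() are hypothetical, not UE3's actual system:

Code:
#include <string>
#include <vector>
#include <iostream>

// A library of pre-written pieces, each adding one lighting feature.
const std::string kDiffusePiece  = "color += diffuseTerm(n, l);\n";
const std::string kSpecularPiece = "color += specularTerm(n, h, gloss);\n";
const std::string kAnisoPiece    = "color += anisoTerm(n, t, l, v);\n";

// Link the selected pieces into one program; the result would then be
// handed to the shader compiler at load time.
static std::string buildShader(const std::vector<std::string>& pieces)
{
    std::string src = "vec3 color = vec3(0.0);\n";
    for (const std::string& p : pieces)
        src += p;
    src += "return color;\n";
    return src;
}

int main()
{
    // Brushed metal picks diffuse + anisotropic pieces; most materials
    // would pick diffuse + specular only.
    std::cout << buildShader({kDiffusePiece, kAnisoPiece});
    return 0;
}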

As for procedural effects, what did you have in mind?
 
Chalnoth said:
Most surfaces in my room aren't reflecting much. Most could easily be approximated with decent specular lighting.

I don't think "let's approximate everything in a simple way" is the way of the future. We approximate those things with specular now, but I don't expect us to still be doing this 10 years from now. I am trying to look forward, not back.

But this is little more than a "mix and match" style of shader. You don't need to develop entirely new shaders for most objects, and can simply link pre-programmed pieces together. I don't see this as being very different than the idea of a unified shader (since the only difference is in implementation, and thus performance characteristics, not in art development). I believe this is, in fact, what UE3 does.

It depends on what you call a shader. In light of the underlying hardware, a shader is still a single independent program. So whether you link them together and compile at runtime or not, you are using multiple shaders to enhance the diversity of materials.
Of course these shaders can be partly recycled.
But Doom3 went the other way: restrict the artists to pretty much one material everywhere. That was nice on early shader hardware, but I don't see it as the solution for the not-so-distant future. It seems that most engines, among them UE3.0, 3DMark05, Half-Life 2 and the Artificial Reality engine, are going for the path of using many shaders to create a large diversity of materials.

As for procedural effects, what did you have in mind?

Depends on how powerful the hardware is. Currently we can already do procedural wood and marble textures, and you'd be surprised how useful noise or trig functions can be in all kinds of surface (or volume) emulations (fire, water, smoke, clouds, you name it). As hardware gets more powerful, we can implement more sophisticated functions. Ideally we can model the entire world around us procedurally.
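As a toy example of the procedural marble mentioned above, here is a self-contained C++ sketch: a sine stripe pattern whose phase is perturbed by a simple value-noise function (the hash is a common toy construction, not production code):

Code:
#include <cmath>
#include <cstdio>

static float hashNoise(int x, int y)          // pseudo-random in [0,1)
{
    unsigned n = (unsigned)(x + y * 57);
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return (n & 0x7fffffffu) / 2147483648.0f;
}

static float valueNoise(float x, float y)     // bilinear lattice noise
{
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float fx = x - xi, fy = y - yi;
    float a = hashNoise(xi, yi),     b = hashNoise(xi + 1, yi);
    float c = hashNoise(xi, yi + 1), d = hashNoise(xi + 1, yi + 1);
    float top = a + fx * (b - a), bot = c + fx * (d - c);
    return top + fy * (bot - top);
}

// Classic marble: a sine stripe pattern, phase-distorted by noise.
static float marble(float x, float y)
{
    float turbulence = valueNoise(x * 4.0f, y * 4.0f);
    return 0.5f + 0.5f * std::sin(x * 10.0f + turbulence * 6.0f);
}

int main()
{
    for (int y = 0; y < 4; ++y) {             // print a few sample shades
        for (int x = 0; x < 4; ++x)
            std::printf("%.2f ", marble(x * 0.25f, y * 0.25f));
        std::printf("\n");
    }
    return 0;
}

The intensity would drive a color ramp or perturb a normal; the point is that a few arithmetic operations replace a stored texture entirely.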

The work that Henrik Wann Jensen did on subsurface scattering with his photon mapper is interesting as well; he actually modeled the material procedurally (one function for each material, which would translate to one shader per material), so there were all kinds of microscopic bumps and things in which the light would reflect and illuminate the surface. That's the sort of thing we may be doing on programmable hardware in the future.
 
Scali said:
Chalnoth said:
Most surfaces in my room aren't reflecting much. Most could easily be approximated with decent specular lighting.
I don't think "let's approximate everything in a simple way" is the way of the future. We approximate those things with specular now, but I don't expect us to still be doing this 10 years from now. I am trying to look forward, not back.
But to do this well would require a move away from rasterization. I don't think it's useful to talk about this sort of thing in the context of game engines being produced today.

But Doom3 went the other way: restrict the artists to pretty much one material everywhere. That was nice on early shader hardware, but I don't see it as the solution for the not-so-distant future. It seems that most engines, among them UE3.0, 3DMark05, Half-Life 2 and the Artificial Reality engine, are going for the path of using many shaders to create a large diversity of materials.
Doom 3 has a materials system that is capable of almost completely general shader effects (multipass effects or effects requiring rendertargets other than the framebuffer are probably not possible without modifying the engine).

The restriction to one type of material for most surfaces in the game was based upon a desire to support older hardware, and it was deemed impractical to do many variations.

But now that we have good programmable hardware, it is possible to quite accurately model the majority of surfaces we encounter in our day-to-day lives with just one customizable shader. Then you'd have tack-on effects for other surfaces, such as very shiny surfaces or translucent ones. I think that this is the direction that, for example, UE3 is headed in. Most surfaces use the same basic type of shader (depending on the lighting system, the shader may have to be recompiled frequently), with modular shader pieces that can be "plugged in" to model some of the more expensive or rare effects.

Procedural in general, though, just isn't realistically going to happen. It takes far too much time to make it work for general surfaces, both in development and in computing time. No, we're moving towards one shader for everything, with configurable parameters, not away. As you recently stated, of course, every object has some degree of subsurface scattering, translucency, and reflection. So one could just model all of these for every object with the same basic shader in some future engine (not anytime soon, though).
 
Chalnoth said:
But to do this well would require a move away from rasterization. I don't think it's useful to talk about this sort of thing in the context of game engines being produced today.

We can start by using environment maps, either statically or dynamically (depending on the situation and the amount of reflection on the surface). That can already increase realism considerably and get rid of that plastic look.

Doom 3 has a materials system that is capable of almost completely general shader effects (multipass effects or effects requiring rendertargets other than the framebuffer are probably not possible without modifying the engine).

Obviously, that is trivial in any half-decent engine.

The restriction to one type of material for most surfaces in the game was based upon a desire to support older hardware, and it was deemed impractical to do many variations.

Yes, although contemporary engines seem to make other decisions there. Doom3 is rather unique in this case.

Procedural in general, though, just isn't realistically going to happen. It takes far too much time to make it work for general surfaces, both in development and in computing time.

In many cases, arithmetic is already faster than texture lookups. This will only become more common as computing power increases. Memory simply doesn't scale that quickly, so textures will become relatively slower in the future.

No, we're moving towards one shader for everything, with configurable parameters, not away. As you recently stated, of course, every object has some degree of subsurface scattering, translucency, and reflection. So one could just model all of these for every object with the same basic shader in some future engine (not anytime soon, though).

Since even Renderman isn't using that approach, I find it highly unlikely that game engines will. For now, we will just optimize the shaders to include the most visually striking features of a material, and remove all insignificant processing.
 
Scali said:
We can start by using environment maps, either statically or dynamically (depending on the situation and the amount of reflection on the surface). That can already increase realism considerably and get rid of that plastic look.
Environment maps have been around for a long time. There's a reason they're not widely used. They cost too much to generate dynamically much of the time (even one environment map can more than double the processing power requirements for the scene, since in general you'd want to render a cube map), and they don't often look all that great when created statically (if you can see the reflection of the entire room in a vase, but not your own, it will look kind of hokey).

In many cases, arithmetic is already faster than texture lookups. This will only become more common as computing power increases. Memory simply doesn't scale that quickly, so textures will become relatively slower in the future.
Except that it can't be used for everything. There are only a few specific materials that it works well for (wood, marble, etc.).

Since even Renderman isn't using that approach, I find it highly unlikely that game engines will. For now, we will just optimize the shaders to include the most visually striking features of a material, and remove all insignificant processing.
There's a huge difference. Renderman is used for animations that will be done once. A scene is created, tweaked until it looks good enough, then sent to a final render. Games have to work in a variety of scenarios. The same techniques just won't always work when comparing the two media.
 
Chalnoth said:
Environment maps have been around for a long time. There's a reason they're not widely used. They cost too much to generate dynamically much of the time (even one environment map can more than double the processing power requirements for the scene, since in general you'd want to render a cube map), and they don't often look all that great when created statically (if you can see the reflection of the entire room in a vase, but not your own, it will look kind of hokey).

We said exactly the same thing about shadows and dynamic per-pixel lighting, etc. They'd been around, but weren't feasible for realtime use. Well, now they are. And envmaps will also become feasible; in some cases they already are. Take Need For Speed: Underground, for example.
Anyway, you can get away with static envmaps on surfaces with low reflectivity. You'd vaguely see something reflecting, but you can't really tell whether anything is missing or not entirely correct.
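A minimal C++ sketch of why this works (lookupEnvMap() is a placeholder for a real cube-map fetch, not a real API): at low reflectivity the environment contribution is blended in so faintly that errors in a static map are hard to spot:

Code:
#include <cstdio>

struct Color { float r, g, b; };

static Color lookupEnvMap(float rx, float ry, float rz)
{
    // Placeholder: a real engine would sample a pre-rendered cube map
    // in the reflection direction (rx, ry, rz).
    (void)rx; (void)ry; (void)rz;
    return { 0.4f, 0.5f, 0.7f };
}

static Color shade(Color base, Color env, float reflectivity)
{
    // Linear blend: at reflectivity ~0.1 the env map only tints the
    // surface, so a static map "vaguely reflecting something" suffices.
    return { base.r + (env.r - base.r) * reflectivity,
             base.g + (env.g - base.g) * reflectivity,
             base.b + (env.b - base.b) * reflectivity };
}

int main()
{
    Color paint = { 0.6f, 0.1f, 0.1f };
    Color env = lookupEnvMap(0.0f, 1.0f, 0.0f);
    Color out = shade(paint, env, 0.1f);     // faint static reflection
    std::printf("%.3f %.3f %.3f\n", out.r, out.g, out.b);
    return 0;
}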

Except that it can't be used for everything. There are only a few specific materials that it works well for (wood, marble, etc.).

You are thinking in the wrong direction. You don't have to model entire materials; you can model certain properties of materials procedurally, perhaps properties that we are not modeling at all today, such as subsurface scattering.

There's a huge difference. Renderman is used for animations that will be done once. A scene is created, tweaked until it looks good enough, then sent to a final render. Games have to work in a variety of scenarios. The same techniques just won't always work when comparing the two media.

I don't see how surface shaders fall into the category of techniques that won't work in game scenarios.
 
Is a single surface shader really the best approach for games? I can see how it would be an *elegant* solution, certainly. But do all surfaces have all the same properties, only in different quantities? If, for example, the shader is rendering material X, it has to check all the different parameters, even if their value is 0; matte surfaces don't have a specular component, that kind of thing. And what about translucent surfaces?

Anyway, individual shaders for individual surfaces should be the faster and more flexible solution -- at least if switching between shaders does not produce too much overhead.

Another thing to consider is modifying the engine. If THE surface shader only supports certain parameters, that's what you have to work with. What about a modder who wants to add a nifty new kind of surface? Well, basically that modder is outta luck without hacking the engine a lot more than necessary.
 
Anyway, individual shaders for individual surfaces should be the faster and more flexible solution -- at least if switching between shaders does not produce too much overhead.
I'm approaching this more from the aspect of content creation. It makes much more sense to me not to have the artist do any shader programming, but rather to give them a simple set of parameters that they can tweak to get the look they're going for. The engine may cut out shader code for pieces that are set to zero, for optimization purposes.
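A minimal sketch of that optimization, with hypothetical define names rather than any particular engine's scheme: the engine inspects the artist's parameters and emits compile-time defines, so zeroed features never reach the compiled shader:

Code:
#include <string>
#include <iostream>

struct MaterialParams {
    float specular;   // artist-tweaked; 0 means "matte"
    float emissive;   // 0 means "does not glow"
};

static std::string makeDefines(const MaterialParams& m)
{
    std::string defs;
    if (m.specular > 0.0f) defs += "#define USE_SPECULAR\n";
    if (m.emissive > 0.0f) defs += "#define USE_EMISSIVE\n";
    return defs;      // prepended to the shader source before compiling
}

int main()
{
    MaterialParams matte = { 0.0f, 0.0f };
    MaterialParams lamp  = { 0.2f, 1.0f };
    std::cout << "matte shader gets:\n" << makeDefines(matte)
              << "lamp shader gets:\n"  << makeDefines(lamp);
    return 0;
}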
 
Chalnoth said:
I'm approaching this more from the aspect of content creation. It makes much more sense to me not to have the artist do any shader programming, but rather to give them a simple set of parameters that they can tweak to get the look they're going for. The engine may cut out shader code for pieces that are set to zero, for optimization purposes.

I think the most realistic way is to provide the artists with a library of shaders, and perhaps to write or modify shaders on request when the artists find something lacking (or even to provide tools that let the artists modify shaders themselves; I've seen a few of those).
Obviously most shaders have parameters that can be tweaked, through various textures or constants. That does not mean you can realistically have one shader that can be tweaked to do everything.

In Doom3 the library was pretty much one shader large, and you can see the effect: the artists tried hard to tweak that shader to do everything, but the humans especially are rather bad. You can't really tell the difference between a zombie and a living person, and things like teeth fail completely.
Even with just 2 or 3 shaders, it could have made quite a difference.
 
But you definitely would not want a library of full shaders. You'd want a modular library: a set of pre-programmed shader pieces.

Keep in mind what I'm trying to contrast this with. Think of water rendering in games, for instance: since the early days it's been treated uniquely. Early on, water was one of the only alpha-blended surfaces in games. Later, it became the only reflective surface. Today we still have techniques for handling water that require an entirely different rendering algorithm (not just a different shader) from any other type of surface.

I claim that in the long term, we'll have the performance and techniques for robust enough reflection that you could render the reflection from the surface of water in the exact same way you'd render the reflection from a polished car. This may require ray tracing or some similar technique, I don't know. But I definitely think it will happen.

Similarly, it may become useful for artists in many situations to use textures to determine surface parameters. An example may be a car which is rusting. It may still have shiny paint, but rust right next to it (I'm looking at a car out my window that looks just this way....). You'd want to be able to render both materials with the same shader, using a texture that contains the material parameters, for a smooth transition between the two.
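A tiny C++ sketch of that idea, with made-up parameter names: a mask value (standing in for a texel fetched from the material texture) blends between a 'paint' and a 'rust' parameter set, so a single shader covers the whole transition:

Code:
#include <cstdio>

struct SurfaceParams {
    float glossiness;
    float specular;
};

static SurfaceParams lerpParams(SurfaceParams a, SurfaceParams b, float t)
{
    return { a.glossiness + (b.glossiness - a.glossiness) * t,
             a.specular   + (b.specular   - a.specular)   * t };
}

int main()
{
    SurfaceParams paint = { 0.9f, 0.8f };   // shiny car paint
    SurfaceParams rust  = { 0.1f, 0.05f };  // dull, rough rust
    // "mask" stands in for a texel from the material texture:
    // 0 = pure paint, 1 = pure rust, in-between = smooth transition.
    for (float mask = 0.0f; mask <= 1.0f; mask += 0.25f) {
        SurfaceParams p = lerpParams(paint, rust, mask);
        std::printf("mask %.2f -> gloss %.2f spec %.2f\n",
                    mask, p.glossiness, p.specular);
    }
    return 0;
}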

So I definitely think that unified (parameter-based) shading is the future of game development. It makes content development easier and allows for much smoother transitions between materials. It's just a matter of time.
 
Keep in mind what I'm trying to contrast this with. Think of water rendering in games, for instance: since the early days it's been treated uniquely. Early on, water was one of the only alpha-blended surfaces in games. Later, it became the only reflective surface. Today we still have techniques for handling water that require an entirely different rendering algorithm (not just a different shader) from any other type of surface.

I claim that in the long term, we'll have the performance and techniques for robust enough reflection that you could render the reflection from the surface of water in the exact same way you'd render the reflection from a polished car. This may require ray tracing or some similar technique, I don't know. But I definitely think it will happen.

Yes, but it will be VERY long term, I suppose. As I said before, even in Hollywood they're not using this approach, and in their case performance is only a secondary interest. The primary interest is quality, and if a unified shading system gave more quality, because it's easier for the artists or some such, they would be using it. But I suppose it is not feasible yet, even with offline renderers.
Another possible reason could be that such a shader would be so complicated for artists to fine-tune, with all its parameters, that they'd have trouble getting decent results... Think of it as the Linux effect :) By that I mean: Linux is highly configurable, but unless you are a computer specialist, you're not going to get far with it. Remember that artists are not scientists, physicists or programmers.
 
Well, with a system with many configurable parameters, you'd want a robust set of examples that shows (visually) what the various parameters do. For example, you'd have examples for your basic waxed car, stone, skin, painted plaster, plastic, and other relatively common materials, with notes on how to modify them for things like dirt, dust, rust, etc.

And, of course, the best way to express the variables would be to have them in terms of, say, a generalized BRDF. Since you can look up BRDFs for many surfaces, that should make it relatively easy for developers to research the look they want if the surface they're going for isn't among the examples shipping with the engine.
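For instance, the parameters might be expressed through a simple isotropic Blinn-Phong lobe standing in for the generalized BRDF; a C++ sketch with illustrative presets (the values are made up, not measured BRDF data):

Code:
#include <cmath>
#include <cstdio>

struct Brdf {
    const char* name;
    float kd;         // diffuse albedo weight
    float ks;         // specular weight
    float exponent;   // lobe width (higher = tighter highlight)
};

static float eval(const Brdf& b, float nDotL, float nDotH)
{
    float d = b.kd * std::fmax(nDotL, 0.0f);
    float s = b.ks * std::pow(std::fmax(nDotH, 0.0f), b.exponent);
    return d + s;
}

int main()
{
    const Brdf presets[] = {
        { "painted plaster", 0.9f, 0.05f,   4.0f },
        { "waxed car",       0.5f, 0.5f,  128.0f },
        { "skin (rough)",    0.8f, 0.15f,   8.0f },
    };
    for (const Brdf& b : presets)
        std::printf("%-16s -> %.3f\n", b.name, eval(b, 0.7f, 0.95f));
    return 0;
}

An artist would start from the nearest preset and tweak, while a developer could compare the numbers against published BRDF measurements for the material in question.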
 