So umm, what the heck is a "shader"?

Rangers

I always get the impression that shaders are an icing-on-the-cake type of thing: that they do small effects, little flashy touches, side effects like the heat haze coming off a gun à la Gears of War. Nothing major ever seems to be attributed to them, only minor things, so why are they pretty much the centerpiece of GPUs today?

Once I heard shaders referred to as a way to "fake" textures without actually using textures. This made sense to me, because that would be something big and important enough to justify building GPUs around them.

And another thing: I have heard shaders described as simply a way to color pixels. This makes me wonder, because pixels are so small that you could only ever discern one or two colors in one, so it just doesn't make sense that you would spend massive amounts of power on coloring a pixel.
 
The shortest way of saying it is that they're (usually small) programs that perform calculations on individual elements and produce some information related to each element. An "element" can be anything from a vertex to a pixel to a single polygon. The reason we need lots of power is simply that these elements can be really small and there can be a lot of them.

In the context of GPUs, it's a little more accurate to refer to them as vertex and fragment programs (as opposed to vertex and pixel shaders), which is the OpenGL terminology, because the term "shader" can have some alternate meanings in the offline rendering world. The Unreal Engine, too, used the word "shader" in much that same sense in the era prior to programmable hardware.

I always get the impression that shaders are an icing-on-the-cake type of thing: that they do small effects, little flashy touches, side effects like the heat haze coming off a gun à la Gears of War. Nothing major ever seems to be attributed to them, only minor things, so why are they pretty much the centerpiece of GPUs today?
Umm... actually, everything you see on the screen is run through a handful of shaders -- each and every pixel of every frame, including many you can't see. All illumination, all transformation, all shadowing, all post-processing, etc. What you mentioned is just an example of a post-processing effect, and hardly the most outstanding one at that. Technically speaking, this was always true; it's just that the "shaders" used to be predetermined, and we as programmers could only do things like adjust their parameters. Nowadays we have the ability to program them explicitly to our liking (though this programmability isn't without limitations), and we pretty much never do without them whenever rendering matters. With each new generation of hardware, the flexibility increases, as does the power.

And the example you bring up is the sort of thing that comes up in discussion a little more often because so many other things are very much the status quo. For instance, you have normal mapping, which is a feature that rests on per-pixel illumination, for which pixel shading is an absolute requirement... but it's one of those things that is pretty much expected of every game that will ever come out, so nobody's going to say that somebody has just revolutionized normal mapping.
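(To make that concrete: the core of a normal-mapping pixel shader is only a few lines. This is just a sketch; the sampler and variable names are placeholders, and it assumes the light direction has already been transformed into tangent space.)

Code:
// Unpack the per-pixel normal from the normal map (stored in 0..1, remapped to -1..1).
float3 normal = tex2D(normalMap, uv).xyz * 2.0 - 1.0;
// Standard diffuse term: how directly does this pixel face the light?
float diffuse = saturate(dot(normalize(normal), normalize(lightDirTangent)));
return tex2D(decalMap, uv) * diffuse;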

Once I heard shaders referred to as a way to "fake" textures without actually using textures. This made sense to me, because that would be something big and important enough to justify building GPUs around them.
While that's theoretically possible, no GPU is powerful enough to make it really worthwhile across a wide variety of cases. They're not that flexible, either, so again, there's not much you can do procedurally that would hold a candle to an artist's skill. Pretty much every realtime shader you find in practical use on a GPU uses texture images (when they're relevant -- obviously, there are cases where textures don't matter) to acquire at least some of its information.

In the offline rendering world, though, you'll see a lot more procedural texturing, because offline renderers have the flexibility to use more complex and better-looking models to simulate certain things.
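(As a toy example of what "procedural" means here: this pixel shader produces a checkerboard from nothing but the texture coordinate, no image involved. The names are placeholders.)

Code:
// Split UV space into an 8x8 grid and alternate black/white cells.
float2 cell = floor(uv * 8.0);
float checker = fmod(cell.x + cell.y, 2.0);   // 0 or 1 in alternating cells
return float4(checker, checker, checker, 1.0);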

And another thing: I have heard shaders described as simply a way to color pixels. This makes me wonder, because pixels are so small that you could only ever discern one or two colors in one, so it just doesn't make sense that you would spend massive amounts of power on coloring a pixel.
That's exactly WHY you want power to color pixels. Because they're small, yes, you can only output one color per pixel (not that the data location for a "color" actually needs to hold a color)... but of course, you can gather information of all sorts about a pixel, and use that to drive how you color it. And the point of coloring something that small is that it's the smallest thing you can color (sort of), so you're able to perform operations that modify color at the finest possible level of detail. The massive power is there not so you can compute the color of *a* pixel, but of ALL the pixels on the screen. And the more you want to put into that computation, the more power you need, because there are millions of pixels to color several dozen times a second.
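To put rough numbers on it: even at a modest 1280x1024 and 60 frames per second, that's 1280 × 1024 × 60 ≈ 79 million pixel-shader executions every second, before you count overdraw or offscreen passes.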
 
In a slightly different direction, I'll just build on what ShootMyMonkey was saying.


In the old days there were no shaders, but we still had textures. Back then, the hardware could render a surface with a texture on it; each vertex stored a coordinate used to look up the texture. If we wanted to do something fancy with the texture, we could enable (say) sphere mapping, or we could modify a matrix that affected texture lookups. This is how you built an effect. That's why there used to be loads of swirling, scaling and scrolling 'effect' textures; Quake3 used them everywhere.

We also started combining multiple textures. Light maps are a good example: you have the surface texture (the decal), and then you have a lightmap. Black in the lightmap = black onscreen; white in the lightmap = bright onscreen. This is called modulation, which is basically multiplication.
You would set texture unit 1 to your decal, texture unit 2 to your lightmap, and then set a texture unit blend mode to modulation.

Nowadays we have shaders, which we write ourselves. All that legacy fixed-function hardware is gone.


So, this is how the previous situations change:


If we want to look up a texture for the 'flat and bright' look, we can simply do this:

(*SIMPLIFIED*)

vertex shader:
Code:
output.textureCoordinate = vertex_input.textureCoordinate;

pixel shader:
Code:
return tex2D(texUnit0, textureCoordinate.xy);


That will run very fast.
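(For the curious, a slightly fuller but still minimal version of that pair, with the boilerplate included, might look like the following. The structure and variable names are illustrative, in rough HLSL.)

Code:
float4x4 worldViewProjection;   // set from the game code
sampler2D texUnit0;             // the decal texture

struct VS_INPUT  { float4 position : POSITION; float2 uv : TEXCOORD0; };
struct VS_OUTPUT { float4 position : POSITION; float2 uv : TEXCOORD0; };

VS_OUTPUT VertexMain(VS_INPUT input)
{
    VS_OUTPUT output;
    output.position = mul(input.position, worldViewProjection); // project to the screen
    output.uv = input.uv;           // pass the texture coordinate straight through
    return output;
}

float4 PixelMain(float2 uv : TEXCOORD0) : COLOR
{
    return tex2D(texUnit0, uv);     // just look up the texture: flat and bright
}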
But... what if we want to do a fancy scaling texture like Quake3 did?

Just modify the vertex shader:

Code:
output.textureCoordinate = vertex_input.textureCoordinate * scale;
(where scale is controlled in the game code)
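And a Quake3-style scrolling or swirling texture is just more of the same. For instance (time, scrollDirection and the rest would be fed in from the game code; the names are illustrative):

Code:
// Scrolling: slide the texture coordinate along over time.
output.textureCoordinate = vertex_input.textureCoordinate + time * scrollDirection;

// Or swirling: rotate the coordinate around the texture centre over time.
float s = sin(time);
float c = cos(time);
float2 centered = vertex_input.textureCoordinate - 0.5;
output.textureCoordinate = float2(c * centered.x - s * centered.y,
                                  s * centered.x + c * centered.y) + 0.5;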


How about lightmapping?

pixel shader:
Code:
return tex2D(texUnit0, textureCoordinate.xy) * tex2D(texUnit1, lightMapTextureCoordinate.xy);

So at each pixel, texture 1 is loaded (decal), then multiplied by texture 2 (the light map). It's just code.
We aren't turning on special features anymore.

In this case, the possibilities of what can be done with the hardware increase massively.

Think of any game that does character animation. In the past, you had to either use the CPU to animate the character, or use *dedicated* hardware functions to do it. Now, you can simply write some code in a shader that does it. That code will be moderately complex, but still much, much faster than doing it on the CPU.
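(As a sketch of what that shader code looks like: the heart of GPU skinning is a weighted blend of bone transforms per vertex. Everything here is illustrative, assuming up to four bone influences per vertex.)

Code:
float4x4 boneMatrices[64];   // the skeleton's bone transforms, uploaded each frame

// Blend the vertex position between the bones that influence it.
float4 skinned = 0;
for (int i = 0; i < 4; i++)
    skinned += mul(input.position, boneMatrices[input.boneIndices[i]]) * input.boneWeights[i];
output.position = mul(skinned, worldViewProjection);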


I'll use water as another example. For really good-looking water, you want two special textures that you render at runtime: a reflection texture and a refraction texture. Basically, textures of what is above and below the water.

Now, in the past, those textures had to be mapped onto the water geometry. This was hard, because you had to construct the previously mentioned matrix that affects texture lookups in such a way that the texture projected onto the geometry lined up with the screen. So a vertex at 0.5,0.5 on screen gets a texture coordinate of 0.5,0.5.
Now, you can do that easily in a vertex shader, as you already have all that data... textureCoordinate = position;
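(Spelled out slightly more, assuming the usual conventions:)

Code:
// Project the vertex as normal...
output.position = mul(input.position, worldViewProjection);
// ...and pass the same projected position along to the pixel shader, where,
// after the perspective divide, -1..1 clip space maps to 0..1 texture space:
//     float2 uv = screenPos.xy / screenPos.w * 0.5 + 0.5;   // plus a Y flip on some APIs
output.screenPos = output.position;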

Furthermore, the reflection looks dull if it is static; it needs ripples, etc. In the past, faking this was crazy hard, unless you had dedicated hardware features like EMBM (environment-mapped bump mapping) or whatnot.
With a shader it is just maths, so when you look up your texture, just offset the lookup with a 'ripple value', e.g. tex2D(texUnit0, textureCoordinate.xy + ripple.xy);
How do you generate that ripple value? That's entirely up to you. Use another texture, use sin/cos, use vertex values, whatever. It's just maths now.
Want to make the reflection stronger at grazing angles? More reflection in the distance? Bigger ripples in deeper water? Maths!
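Put together, the heart of a water pixel shader might read something like this (very much a sketch; every sampler and variable name here is a placeholder):

Code:
// Fetch a ripple offset from a scrolling ripple texture.
float2 ripple = (tex2D(rippleMap, uv + time * 0.05).xy - 0.5) * rippleStrength;

// Project into screen space (see above) and perturb the lookups with the ripple.
float2 screenUV = screenPos.xy / screenPos.w * 0.5 + 0.5;
float4 reflection = tex2D(reflectionMap, screenUV + ripple);
float4 refraction = tex2D(refractionMap, screenUV - ripple);

// Crude Fresnel-style term: stronger reflection at grazing angles.
float fresnel = pow(1.0 - saturate(dot(normalize(viewDirection), waterNormal)), 3.0);
return lerp(refraction, reflection, fresnel);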

Shaders get very, very complex when crafting the effects seen in games. If you can think of something (in mathematical terms), then you can do it with shaders, since they are just maths.
Or even something very simple: say you wanted something to be twice as bright when the mouse hovered over it... Well, just use a different shader that multiplies the result:

Code:
return tex2D(texUnit0, textureCoordinate.xy) * 2.0;

Even such a simple example would actually be really hard without shaders.

So basically shaders give you more control over how the scene looks and is constructed.
 
I'd like to highlight "it's just maths now", as it is the most important point in all the posts I've read discussing shaders/programs for the video card.

People want to write effects that do anything their imagination allows, but they cannot, because they lack the ability and knowledge to transform their imaginative concepts into a mathematical model.

If you plan on doing this sort of stuff, you must be very good with mathematics.

On a similar point, the shader commands in the OpenGL API allow you to very easily put together a bunch of strings and use the result as source code. This means that at a higher level (in your program) you can create data structures that store the mathematics and commands, and combine them to form a shader.

What this means is that you can create very complex effects out of a variety of basic ones.

It's just maths now.
 
Wow, thanks a lot for the lengthy and simple-to-understand explanations, guys. I've also been trying to figure out what the hell a shader is, and most of the articles out there make my head spin. You guys gave detailed explanations that are relatively easy to follow.
 