Question on shaders?

sancheuz

Newcomer
Let me ask you geniuses a couple of questions. I'm a rookie, 17 years of age; I know a lot about computer hardware, etc., but not programming. I've been hearing a lot about shaders. What are shaders? Also, can you guys tell me how the 3D design and animation business is going? Is it a good career for me to go into?
 
Simple answer:
A shader is a small program that runs on the graphics card, either once for each vertex (vertex shader) or once for each pixel (or, more precisely, fragment) (pixel shader). The output of the vertex shader is the input to the pixel shader.
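
To make that data flow concrete, here's a rough CPU-side sketch in plain C++. All the names are made up for illustration; real shaders are written in a shading language and run on the GPU, not as C++ functions:

```cpp
// Minimal sketch of the vertex-shader -> pixel-shader data flow.
#include <cstdio>

struct Vec3 { float x, y, z; };

// What the vertex shader receives per vertex.
struct VertexInput  { Vec3 position; Vec3 color; };
// What the vertex shader outputs -- and what the pixel shader receives
// (after the hardware interpolates it across the triangle).
struct Interpolants { Vec3 screenPos; Vec3 color; };

// "Vertex shader": runs once per vertex.
Interpolants vertexShader(const VertexInput& in) {
    Interpolants out;
    // A real shader would multiply by a 4x4 transform matrix here;
    // we just pass the position through to keep the sketch short.
    out.screenPos = in.position;
    out.color     = in.color;
    return out;
}

// "Pixel shader": runs once per covered pixel, on interpolated values.
Vec3 pixelShader(const Interpolants& in) {
    return in.color;  // trivially output the interpolated vertex color
}

int main() {
    VertexInput v = { {0.5f, 0.5f, 0.0f}, {1.0f, 0.0f, 0.0f} };
    Vec3 c = pixelShader(vertexShader(v));
    std::printf("pixel color: %.1f %.1f %.1f\n", c.x, c.y, c.z);
}
```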
 
A shader can also be run in software -- for example, Pixar's RenderMan uses software shaders. Software shaders are much slower than hardware shaders.

This is a 3D hardware/game programming/game playing site, so I don't think we're the best place to ask about 3D design and animation.

Currently the best place to ask a question like that is probably:

http://www.cgtalk.com/

good luck!
 
Can a pixel shader program run on a general purpose vector processing unit or does it require the dedicated hardware on a graphics card?

Where does rasterization come into it? Is it an integral part of the pixel shader operation, or merely a step that happens after the pixel shader has done its work? When does the texturing occur, or is that effectively where the pixel shader is at work? This is where I get confused: where all these different steps come together.
 
Any shader program could theoretically run on any general-purpose processor. Vector-based processors would do better than standard scalar CPUs, and GPUs are faster still, thanks to their dedicated hardware.

Currently, you can run pixel shaders/fragment programs through software emulation in certain scenarios. For example, with an nVidia graphics card and the 40.41 drivers, you can run NV30 shaders through software emulation. This emulation is far too slow and poor-looking to use realistically in any situation other than designing for DX9 hardware before owning it.

Vertex shaders/vertex programs have always been able to run in software through Direct3D/OpenGL. This emulation is fairly fast these days, and using it in games is not out of the question.
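
As a rough picture of what software vertex processing amounts to, here's a sketch of the core per-vertex operation (a 4x4 matrix transform) done on the CPU. It also hints at why vector hardware helps: each output component is a chain of multiply-adds that can run in parallel. Plain illustrative C++, not actual Direct3D/OpenGL code:

```cpp
// The driver runs the per-vertex math on the CPU instead of the GPU.
#include <cstdio>

struct Vec4 { float x, y, z, w; };

// Multiply a vertex by a 4x4 matrix (row-major), as fixed-function
// transform or a simple vertex shader would.
Vec4 transform(const float m[16], const Vec4& v) {
    return {
        m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
        m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
        m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
        m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w
    };
}

int main() {
    // Identity matrix: the vertex comes out unchanged.
    float identity[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    Vec4 v = transform(identity, {1.0f, 2.0f, 3.0f, 1.0f});
    std::printf("%.1f %.1f %.1f %.1f\n", v.x, v.y, v.z, v.w);
}
```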
 
Sounds good. Now, where does the output of the pixel shader go? Does it then go to a rasterizer? Can a pixel shader be described as a programmable, procedural sort of texturing unit, or are pixel shading and texturing two entirely different things?
 
Well, with the NV30 software emulation, it goes wherever the software wants it to go, which usually means the screen.

For high-end rendering, the result of rendering is usually saved to the hard drive for later processing (such as, for example, placing into a video stream for a movie).
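
For instance, "saving the result to the hard drive" can be as simple as dumping the framebuffer to an image file. A toy sketch (PPM is just my choice here because it needs no libraries; offline renderers use richer formats):

```cpp
// Dump a small framebuffer to a PPM image file on disk.
#include <cstdio>

int main() {
    const int W = 64, H = 64;
    std::FILE* f = std::fopen("frame.ppm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P6\n%d %d\n255\n", W, H);   // PPM header
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            // A simple gradient standing in for real shader output.
            unsigned char rgb[3] = { (unsigned char)(x * 4),
                                     (unsigned char)(y * 4), 128 };
            std::fwrite(rgb, 1, 3, f);
        }
    std::fclose(f);
}
```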
 
Very interesting! Thanks.

So in games that aren't using pixel shading effects, are the frames being created on the video card by conventional texturing and rasterization units, or by the same pixel shaders running in a sort of "dumb" mode?
 
I believe it essentially works like this: when the shaders aren't used, the drivers send the hardware a shader program that does the same job as the fixed-function path, whenever possible.

I'm pretty sure, however, that there are a fair number of fixed-function operations that do not translate properly to the pixel shader and are run on their own special hardware. This may change in the future as GPUs become more programmable (and may already be the case with DX9 GPUs).
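
As an illustration of what "a shader that does the same job as the fixed function" might look like, here is the classic fixed-function MODULATE texture stage (texture color times interpolated vertex color) written as a tiny pixel-shader-like C++ function. Purely a hypothetical sketch:

```cpp
// Fixed-function work expressed as a shader: the MODULATE combiner.
#include <cstdio>

struct Color { float r, g, b; };

Color modulate(const Color& tex, const Color& diffuse) {
    // Equivalent of the fixed-function MODULATE stage:
    // output = texture color * interpolated vertex color.
    return { tex.r * diffuse.r, tex.g * diffuse.g, tex.b * diffuse.b };
}

int main() {
    Color out = modulate({1.0f, 0.5f, 0.25f}, {0.5f, 0.5f, 0.5f});
    std::printf("%.3f %.3f %.3f\n", out.r, out.g, out.b);
}
```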
 
The rasterizer is basically the part of the rendering engine that computes the final color for a pixel. Everything that comes before it essentially deals with producing the data the rasterizer needs, such as texture coordinates.
 
The "raster" in rasterization is the 2D array of pixels that makes the framebuffer. So rasterization is the part that do the final convertion to "raster format". This is actually not only the pixel shader, but include a part before it, and a part after it.

Taking the per-vertex values and interpolating them over the triangle to get the input to the pixel shader (or fixed-function equivalent) is one part of rasterization.

Taking the output of the pixel shader (or fixed-function equivalent) and blending it with fog and/or the previous framebuffer value is also part of rasterization.
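
Here's a simplified CPU sketch of those steps together: interpolate the per-vertex values across the triangle, run the "pixel shader" on them, then blend the result with fog. All names and numbers are made up for illustration:

```cpp
// The three rasterization steps: interpolate, shade, blend.
#include <cstdio>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Barycentric weights of point p inside triangle (a, b, c).
void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                 float& wa, float& wb, float& wc) {
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    wa = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    wb = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    wc = 1.0f - wa - wb;
}

// Step 2: the "pixel shader" (here it just returns its interpolated input).
Color pixelShader(const Color& in) { return in; }

// Step 3: blend shader output with a fog color by a fog factor in [0,1].
Color fogBlend(const Color& c, const Color& fog, float f) {
    return { c.r + (fog.r - c.r) * f,
             c.g + (fog.g - c.g) * f,
             c.b + (fog.b - c.b) * f };
}

int main() {
    Vec2  pa = {0, 0},    pb = {10, 0},   pc = {0, 10};
    Color ca = {1, 0, 0}, cb = {0, 1, 0}, cc = {0, 0, 1};

    // Step 1: interpolate the per-vertex colors at one covered pixel.
    float wa, wb, wc;
    barycentric({3, 3}, pa, pb, pc, wa, wb, wc);
    Color in = { wa*ca.r + wb*cb.r + wc*cc.r,
                 wa*ca.g + wb*cb.g + wc*cc.g,
                 wa*ca.b + wb*cb.b + wc*cc.b };

    Color out = fogBlend(pixelShader(in), {0.5f, 0.5f, 0.5f}, 0.25f);
    std::printf("final pixel: %.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```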
 
So when a video card gives a certain polys/sec rating, can this base rating differ considerably depending on whether a game is using pixel shader output or conventional rasterizer output? Can a game use the output of both within a single frame, or does it typically use just one or the other? ...Or is this more determined by the T&L throughput, with fill rate being the more direct metric for the pixel shader or rasterizer?

So how does this correspond to a typical GF4 product? When they quote the 100-some-odd million polys/sec figure, is this in the context of conventional rasterizer output? Would the native output be much different (considerably less?) for pixel shader output?

THIS HAS BEEN A GREAT DISCUSSION FOR THE BEGINNER! :D
 
I'm reasonably certain that vertex computing power is utterly independent of the computing power of the rasterizer.

That said, there's still only a limited amount of bandwidth, and one part of the video card is going to remain idle much of the time (usually the vertex processing part, as almost any game is fillrate-limited... which will not change for high-end video cards until higher-order surfaces come into widespread use).

Now, the number of vertices/sec a GPU can process can vary wildly depending on how much processing it needs to do per vertex. And note that the proper term for describing the performance of a GPU is "vertices per second," since "triangles per second" varies widely depending on how those triangles are sent to the GPU. In fact, it is possible for a model to have roughly twice as many triangles as it has vertices; if the GPU can manage to cache all of those shared vertices and not recalculate any of them, its triangle processing rate will be double its vertex processing rate. Conversely, with no caching, the triangle rate could be as low as 1/3 of the vertex processing rate.

As a side note, for optimized meshes, I believe it is more normal for the triangle rate and the vertex rate to be roughly the same.
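
The arithmetic behind those ratios is easy to check. A regular grid mesh (my example, picked as a common well-optimized case) has about twice as many triangles as unique vertices:

```cpp
// Triangle rate vs. vertex rate for an n x n grid mesh.
#include <cstdio>

int main() {
    const int n = 100;                   // an n x n grid of vertices
    int vertices  = n * n;               // 10000 unique vertices
    int quads     = (n - 1) * (n - 1);   // 9801 grid cells
    int triangles = quads * 2;           // 19602 triangles

    std::printf("unique vertices: %d, triangles: %d\n", vertices, triangles);
    std::printf("tris per vertex, perfect cache: %.2f\n",
                (double)triangles / vertices);   // ~1.96, i.e. ~2:1
    std::printf("tris per vertex, no cache (triangle list): %.2f\n",
                1.0 / 3.0);                      // 3 vertices per triangle
}
```

Real vertex caches are small, so some shared vertices presumably get recomputed, which would pull the practical ratio back toward the roughly 1:1 figure mentioned above.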
 
How about lighting? Does lighting fit in a vertex shader the way texturing fits in a pixel shader? Or are there lighting calculations that don't simply alter/create vertex data, but have to do with the light sources themselves before vertices enter the ballgame at all? (So with PS 2.0 or 3.0 or whatnot, do you still need dedicated lighting hardware to "feed the vertex shader"? I'm asking because I don't really understand hardware lighting very well, if at all, and I'd like to understand, in a detailed but still layman way, why some chips handle multiple light sources better than others...) TIA for any clarifications!
 
Basically, the vertex shader replaces the entire fixed-function T&L pipeline (transformation, lighting, texture coordinate generation, ...). So if you are using vertex shaders, you must also do the lighting by hand. The way this industry spins, in the future (DX9) we will be doing more work at the per-pixel level (in pixel shaders) than at the per-vertex level (in vertex shaders). Vertex shaders will handle things like animation, transformation (from 3D space to 2D screen space), and texture coordinate generation (each texture coordinate is just a parameter that you send to the pixel shader) for per-pixel lighting. Pixel shaders will handle things like lighting, procedural textures, filtering, texture compression, ...
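
To sketch that division of labor (a hypothetical C++ stand-in for what would really be written in a shading language): the vertex shader transforms the vertex and packs quantities like the normal and light direction into "texture coordinate" outputs, and the pixel shader does the diffuse lighting math per pixel:

```cpp
// Vertex shader feeds parameters; pixel shader does per-pixel lighting.
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Vertex shader output: position plus two "texture coordinate" parameters.
struct Interpolants { Vec3 screenPos; Vec3 normal; Vec3 toLight; };

Interpolants vertexShader(Vec3 pos, Vec3 normal, Vec3 lightPos) {
    Interpolants out;
    out.screenPos = pos;  // a real shader would apply the full transform here
    out.normal    = normal;
    out.toLight   = { lightPos.x - pos.x,
                      lightPos.y - pos.y,
                      lightPos.z - pos.z };
    return out;
}

// Pixel shader: per-pixel diffuse lighting on the interpolated values.
float pixelShader(const Interpolants& in) {
    float d = dot(normalize(in.normal), normalize(in.toLight));
    return d > 0.0f ? d : 0.0f;   // clamp negative light to zero
}

int main() {
    Interpolants i = vertexShader({0,0,0}, {0,1,0}, {0,5,0});
    std::printf("diffuse intensity: %.2f\n", pixelShader(i)); // 1.00
}
```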
 
Lighting can be done in either the vertex shader or the pixel shader.

If the lighting is done in the vertex shader, it basically means the lighting is calculated at each vertex and interpolated in between. If the lighting is done in the pixel shader, it is calculated at each pixel.
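
A tiny numeric illustration of why the two look different (made-up numbers; the nonlinear function stands in for something like a specular power term): lighting at the vertices and interpolating the colors gives a different answer than interpolating the inputs and lighting per pixel:

```cpp
// Per-vertex vs. per-pixel lighting with a nonlinear lighting function.
#include <cstdio>
#include <cmath>

// A nonlinear "lighting" function, e.g. a specular power term.
float light(float x) { return std::pow(x, 8.0f); }

int main() {
    float a = 0.0f, b = 1.0f;   // lighting inputs at two vertices
    float t = 0.5f;             // midpoint of the edge between them

    // Per-vertex: light at the vertices, then interpolate the colors.
    float perVertex = (1 - t) * light(a) + t * light(b);  // 0.500
    // Per-pixel: interpolate the input, then light at the pixel.
    float perPixel  = light((1 - t) * a + t * b);         // 0.5^8 ~ 0.004

    std::printf("per-vertex: %.3f  per-pixel: %.3f\n", perVertex, perPixel);
}
```

Because the per-vertex result is just a linear blend of the vertex colors, sharp features like specular highlights get smeared across the triangle, while per-pixel lighting evaluates the function everywhere and keeps them sharp.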
 