DX, SM, OGL, and why you would bother.

Frank

I had to write a short article for work about why you should or shouldn't buy a certain graphics card and what the visual difference would be. I tried Google and Wikipedia first, of course, but no luck. Do other people besides us 3D geeks actually have any idea what it's all about? How would they know? A year ago, I didn't have any idea either, and I have a job in IT...

So, wouldn't it be a good idea to post a sticky to an article here that explains it all, so all those other people have some idea what it's all about? So they can participate in our discussions without feeling stupid.

I'll make a draft if you would like, if you all promise to give feedback and correct my mistakes.

And it might even convince people to buy nothing but SM2.0+ cards, so we would all see better graphics in the future if the percentage of people with an SM2.0+ capable chip rose a bit.

:D
 
This isn't the place for it--I don't think you get the kind of people who buy 256 meg 5200s here.
 
The Baron said:
This isn't the place for it--I don't think you get the kind of people who buy 256 meg 5200s here.

That depends. But I think it should be at least somewhere on this site.
 
First, let's start with some definitions.

Frames
A frame is a single picture that is displayed on the monitor.

fps
Frames Per Second, the number of frames you can generate in a second. Not FPS, which stands for First-Person Shooter!

Graphics rendering
That is the process by which you generate the frames.

OpenGL
This is the "Open" variant of the SGI (a pioneering company in 3D graphics) Graphics Language. It describes all the steps a graphics rendering engine has to take to produce nice 3D graphics, and a computer language to make them happen. The "open" part implies that the standard is open for anybody to implement.

DirectX
Actually Direct3D for 3D graphics, this is what Microsoft designed to make nice graphics on chips that use the SGI GL method. And as it is the most used graphics model, it is used to class graphics hardware as well.

IHV
Independent Hardware Vendor, generally the companies that make the chips, and to a lesser extent the boards.

Pipelines
A pipeline is like a factory: it performs a sequence of actions on something to create a product.

Pixel pipelines
A pixel pipeline is a pipeline that calculates the color (value) of a pixel. Most (all?) pixel pipelines nowadays are grouped into quads: a block of 2x2 pixels. Better graphics chips have up to 16 pipelines (four quads), so they can calculate 4*4=16 pixels at the same time.
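To get a feel for what those numbers mean, here is a back-of-the-envelope fill-rate sketch in Python. The 400 MHz clock is just an example figure, not any specific chip:

```python
def peak_fill_rate(pipelines, clock_hz):
    """Peak throughput: each pixel pipeline finishes one pixel per clock."""
    return pipelines * clock_hz

# A hypothetical 16-pipeline chip (four quads) running at 400 MHz:
pixels = peak_fill_rate(16, 400_000_000)
print(pixels)  # 6400000000, i.e. 6.4 billion pixels per second at best
```

Real chips rarely reach this peak, of course; it assumes every pipeline is busy every clock.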

Shaders
Shaders are the programs that run on small processors in a pipeline to calculate special effects, but the term is used to describe those processors as well. It would be better to speak of shader programs on the one hand, and processing elements or ALUs on the other, for the processors themselves.

Pixel shaders
Pixel shaders are the processors in pixel pipelines and/or the programs that are executed by them. They enable the programmer to calculate special effects for each pixel.

A wire frame
That is a set of points, connected by triangles or polygons, that makes up the outside of an object you want to display in 3D.

Geometry
The shapes that are made out of the wire frames.

A vertex
That is a point that is used in a wire frame. The plural is vertices.

Skinning
To be able to display solid-looking objects from wire frames, you need to pull something around them. That is called skinning.

Textures
They started out as the pictures you use to skin a wire frame. But nowadays they are used for many other purposes, as they represent one of the few ways to store data in a graphics chip.

A scene
That is the collection of all elements (vertices, textures and shader programs) that is needed to render a 3D frame.

Avatar
The virtual 3D representation of a person.

Transform and lighting
If you have your 3D objects, you need to put them in the scene. That requires rotating, scaling and placing them. But you might want to animate things as well, like moving the limbs and facial expressions of your avatars. And when that is done, you want to calculate the light values of the triangles that make up your 3D objects, to represent the lighting of the scene.
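The steps above can be sketched in a few lines of Python. The rotation axis, scale factor and lighting model here are invented just to show the order of operations (rotate, scale, place, then light):

```python
import math

def transform(v, angle, scale, offset):
    """Rotate a vertex around the Z axis, scale it, then translate it into the scene."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    rx, ry = x * c - y * s, x * s + y * c            # rotate
    rx, ry, z = rx * scale, ry * scale, z * scale    # scale
    dx, dy, dz = offset
    return (rx + dx, ry + dy, z + dz)                # place in the scene

def lambert(normal, light_dir):
    """Classic diffuse lighting: brightness = max(0, N . L)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)
```

A triangle facing the light (normal and light direction aligned) gets full brightness; one facing away gets none.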

Vertex shaders
They allow the programmers to manipulate the geometry according to a set of rules. While this offers many interesting possibilities, as long as there is no good way to use textures or to create new geometry (like smoothing an object by generating many more, smaller triangles that follow its outline), they are of limited use.
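A toy example of that kind of geometry manipulation, in Python: a made-up "vertex program" that displaces each vertex by a sine wave, the classic waving-flag effect. The wave frequency and amplitude are arbitrary values for illustration:

```python
import math

def wave_vertex_shader(vertex, time):
    """Toy vertex program: push each vertex up/down along a sine wave.
    On real hardware this would run once per vertex, every frame."""
    x, y, z = vertex
    return (x, y + 0.1 * math.sin(x * 4.0 + time), z)

# A flat strip of 11 vertices becomes a rippled one:
flag = [(x / 10.0, 0.0, 0.0) for x in range(11)]
waved = [wave_vertex_shader(v, time=0.0) for v in flag]
```

The key point is that the program only rearranges existing vertices; it cannot create new ones, which is the limitation described above.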

Shader model
The shader model (SM) describes the programmability of the hardware. The more programmability you have, the more life-like the graphics can look. It is the combination of the Vertex Shaders (VS) and Pixel Shaders (PS). This is mostly a Microsoft invention, as OpenGL doesn't make this distinction and only talks about functions.

In sequence:

DX 6 and lower
All these graphics chips are only able to generate pixels according to triangles, skinned with textures. It looks pretty bland and is not supported anymore. While they could do some nice tricks (dependent texture lookups, so you could use textures as general data storage), those weren't used at the time.

DX 7: Includes transform and lighting, the first model that allows any special effects. This is the bottom baseline nowadays.

DX 8.0: This includes a Hardware Abstraction Layer, so it was the first model that made it easy to program 3D graphics, and it introduced PS 1.1, which together with the ease of use made much better effects possible. But the programs were very tiny and had only a small range of possible values to work with, so the calculated colors tended to "jump" from one value to the next.

DX 8.1: The real start of programmable graphics hardware. PS 1.2 to 1.4 are introduced, which make those nice effects really possible.

PS 1.0: the first programmable pixel pipeline, never used in actual hardware.

PS 1.1: The DX 8.0 implementation from nVidia. It allows the developers some basic calculations in 8-bit integer values (+ 1 sign bit), which offers possibilities, but is too coarse (pixels "jump" from color to color) to make interesting effects possible.
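That "jumping" is easy to demonstrate with a small quantization sketch in Python. The bit counts are just examples to show the effect, not the exact PS 1.1 pipeline:

```python
def quantize(value, bits):
    """Snap a 0..1 value to the nearest level an integer of `bits` bits can hold."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# A smooth 0..1 gradient collapses into visible bands at low precision:
coarse = {quantize(x / 1000.0, 4) for x in range(1000)}  # only 16 distinct levels
fine   = {quantize(x / 1000.0, 8) for x in range(1000)}  # 256 levels, much smoother
```

With only a few bits, many nearby input values map to the same output color, so a gradient "jumps" from band to band instead of blending. Doing several shader operations in a row at low precision makes the banding worse with every step.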

PS 1.4: The DX 8.1 implementation from ATi. It allows a much finer granularity, and so is the first programmable model that is actually useful to create nice special effects.

PS 2.0+: This is the Good Stuff! Fully programmable, this allows just about any calculation. It makes truly superb, life-like graphics possible. The only catch is that you really want at least 30 fps for a game, which makes it impractical to render beautiful scenes that would take minutes per frame.

SM 2.0: This is the combination of VS 2.0 and PS 2.0+.

SM 3.0: At the moment, there doesn't seem to be much difference between the SM 2.0+ and 3.0 implementations, except that the SM 3.0 implementations allow longer shader programs. SM 3.0 theoretically allows conditional jumps, but the current implementations process pixels in batches of 1024, and both (all?) paths are calculated if the pixels in a batch take more than one, so it isn't used in practice.
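A toy model of that batching behaviour, in Python. The batch size and the per-path costs are illustrative numbers, not measurements of any real chip:

```python
def batch_cost(takes_branch, cost_a, cost_b):
    """Cost of evaluating a branch over one batch of pixels.

    If every pixel in the batch agrees, only one side of the branch runs;
    if they disagree, the hardware effectively runs both sides for the
    whole batch and throws away the unwanted results per pixel."""
    if all(takes_branch):
        return cost_a
    if not any(takes_branch):
        return cost_b
    return cost_a + cost_b

uniform   = batch_cost([True] * 1024, 10, 50)          # whole batch agrees: cost 10
divergent = batch_cost([True, False] * 512, 10, 50)    # mixed batch: cost 10 + 50
```

So a branch only saves time when all 1024 pixels in a batch happen to go the same way, which is why the feature sees little practical use.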
 
So, what do you all think? Shouldn't there be something like this on a site like this? Or is everyone who comes here required to know all that and more?
 
Umm...
DiGuru said:
OpenGL
This is the OpenSource variant of the SGI (the pioneering company in 3D graphics) Graphics Language. It describes all the steps a graphics rendering engine has to take to produce nice 3D graphics, and a computer language to make them happen.
OpenGL isn't open source by itself; the 'open' part merely implies that the standard is open for anybody to implement (you still need to pay for conformance testing, but there is no limitation on who might submit drivers for such tests).
Pixel pipelines
A pixel pipeline is a pipeline that calculates the color (value) of a pixel. Most (all?) pixel pipelines nowadays are the size of a quad: a rectangle of 2x2 pixels. Better graphic chips have up to 16 of them, so they can calculate 4*16=64 pixels at the same time.
Nope. The quad is normally considered a grouping of 4 pipelines rather than a pipeline in its own right; the top-end graphics chips still cannot draw more than 16 pixels per clock cycle.
DX 6 and lower
All these graphic chips are only able to generate pixels according to skinned triangles. Looks pretty bland and is not supported anymore.

DX 7: Includes Texture and lighting, the first model that allows any special effects. This is the bottom baseline nowadays.
Texturing has been around since DirectX3 and OpenGL 1.0.
DX 8.0: This is not much different from DX7, but it includes a Hardware Abstraction Layer, so it was the first model that made it easy to program 3D graphics, and therefore allows for better effects. It also (IIRC) allows the programmers more freedom with textures (like writing to them), so it makes more complex effects possible.
Pixel shaders have never been able to write to textures and most likely will never be. You can of course allocate an offscreen framebuffer and then use it as a texture after you are completely done rendering to it, but from your wording, that didn't seem to be what you had in mind.
DX 8.1: The start of programmable graphics hardware. PS 1.1 and 1.4 are introduced, the first allows for some basic special effects per pixel, the latter makes more complicated effects possible.
PS1.1 was introduced in DX 8.0, not 8.1
PS 1.0: the first programmable pixel pipeline, never used in actual hardware.

PS 1.1: The DX 8.1 implementation from nVidia. It allows the developers some basic calculations in 4 to 12-bit integer values, which offers possibilities, but is too coarse (pixels "jump" from color to color) to make interesting effects possible.

PS 1.4: The DX 8.1 implementation from ATi. It allows a much finer granularity, and so is the first programmable model that is actually useful to create nice special effects.

PS 2.0+: This is the Good Stuff! Fully programmable, this allows just about any calculation. It makes truly superb life-like graphics possible. The only catch would be, that you really want at least 30 fps for a game, which makes it impractical to generate beautiful scenes that would take minutes to generate a single frame.

SM 2.0: This is the combination of VS 2.0 and PS 2.0+.

SM 3.0: At the moment, there doesn't seem to be much difference between the SM 2.0+ and 3.0 implementations, except that the SM 3.0 implementations allow longer shader programs.
SM3.0 also introduces conditional jumps, which theoretically increases programmability greatly, but has seen little practical use so far.
 
As far as I have seen, the term 'shader' has been widely used both for the shader hardware itself and the program snippets that are executed on a per vertex/pixel basis - it is usually clear from context which one you intend to refer to.
 
arjan de lumens said:
As far as I have seen, the term 'shader' has been widely used both for the shader hardware itself and the program snippets that are executed on a per vertex/pixel basis - it is usually clear from context which one you intend to refer to.

Yes, I made it reflect that. But it is a good observation from Reverend, and something that should be explained I think.
 
arjan de lumens said:
As far as I have seen, the term 'shader' has been widely used both for the shader hardware itself and the program snippets that are executed on a per vertex/pixel basis - it is usually clear from context which one you intend to refer to.

Well, if you're going to have all three "shader" and "pixel shader" and "vertex shader" in the same dictionary...
 
DiGuru said:
So, what do you all think? Shouldn't there be something like this on a site like this? Or is everyone who comes here required to know all that and more?
The site has a glossary, available from the front page. Unfortunately, the glossary seems not to have been updated in a very long time, limiting its usefulness as far as discussing modern 3D architectures is concerned.
 
arjan de lumens said:
The site has a glossary, available from the front page. Unfortunately, the glossary seems not to have been updated in a very long time, limiting its usefulness as far as discussing modern 3D architectures is concerned.

Yes. And I think it would make more sense to do it step by step, than just providing a dictionary.
 
DiGuru said:
DX 6 and lower
All these graphic chips are only able to generate pixels according to triangles, skinned with textures. Looks pretty bland and is not supported anymore.
Not precisely true, D3D 6.0 had API support for an EMBM renderstate - the first dependent texture lookup mechanism in DX to my knowledge. Kyro and Radeon supported this, much later Geforce 3(+) did as well.
DX 8.0: This includes a Hardware Abstraction Layer, so it was the first model that made it easy to program 3D graphics
The HAL device is older than Direct3D 8. Typically most D3D implementations provide two rendering devices - a HAL (basically an interface to your video card) and RGB emulation (a software renderer).
PS 1.1: The DX 8.0 implementation from nVidia. It allows the developers some basic calculations in 4 to 12-bit integer values, which offers possibilities, but is too coarse (pixels "jump" from color to color) to make interesting effects possible.
The OpenGL documentation for register_combiners(2) gives the precision of combiner operations as 8 bits + 1 sign bit.
 
Having had a few beers :

Frames
A frame is a single screen that is displayed on the monitor.
This really sounds funny... a "single screen".

SGI (the pioneering company in 3D graphics)
A pioneering company...

Pixel shaders
Same as my comments about "shaders". We could be talking about hw or sw.

Hmm... should there be something about scenegraph?

Vertex shaders
They allow the programmers to manipulate the geometry according to a set of rules.
Hmm... now you're talking about sw when for pixel shaders you talked about hw...

Shader model
The shader model (SM) describes the programmability of the hardware. The more programmability you have, the more life-like the graphics can look. It is the combination of the Vertex Shaders (VS) and Pixel Shaders (PS).
May be good to say that this term is specific to DX (there really isn't any OGL equivalent) and iterations of it. Mostly abused by IHVs for marketing purposes :devilish:


Oh, and add in explanation for "IHV" :)
 
I am in no way an expert like most here, but I *thought* a feature of SM 3.0 is Geometry Instancing. This feature is also supported on certain Radeon cards (X800, X700?, 9700/9800, and I think the 95xx/9600 also). Correct me if I am wrong, but I thought that was a benefit of SM 3.0 and is one of those "+" features in the Radeons' SM 2.0+.

If I am wrong, Speedtree still looks cool! ;)
 