Direct3D programming question

Humus

Well, I usually do OpenGL, but I figure it could be useful to learn D3D too, so I've started to look into it. So far I've gotten a colored triangle on the screen :)
Anyway, do I have to use vertex buffers? Is there no equivalent to glVertex3f() or something like that? It seems like overkill and a waste of time to use vertex buffers for simple test apps. Also, the options for creating vertex buffers seem limited to me: only interleaved arrays with predefined datatypes and predefined ordering as far as I can tell. Even Glide was more flexible than that :/

Btw, does anyone know the rationale for why Direct3D has backface culling and lighting enabled by default? It seems like a good way to waste a beginner's time trying to figure out why nothing comes out on the screen. :rolleyes: Not until I changed the clear color to blue did I see that I actually got a triangle on the screen, but it was black instead of Gouraud shaded; I had to actively disable lighting for it to work.
 
If you don't want to use CreateVertexBuffer and all the complication that goes with it you can use DrawPrimitiveUP & DrawIndexedPrimitiveUP and pass an array of vertices to them.

Code:
struct vert_t {
   float x, y, z, rhw;
   DWORD color;
} vertices[3] = {
   {50, 50, 0, 1, 0xFF}, {100, 150, 0, 1, 0xFF00}, {150, 100, 0, 1, 0xFF0000},
};

pD3DDev->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
pD3DDev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

pD3DDev->SetVertexShader(D3DFVF_XYZRHW | D3DFVF_DIFFUSE);
pD3DDev->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 1, vertices, sizeof(vert_t));

This is about the simplest example I can come up with in Direct3D.

Note that I used post-transformed vertices: the coordinates are in screen coordinates, there's no need to disable lighting, and the diffuse color is used directly. Of course if you go to 3D it gets more complicated...
 
Humus said:
Also, the options for creating vertex buffers seem limited to me: only interleaved arrays with predefined datatypes and predefined ordering as far as I can tell. Even Glide was more flexible than that :/

DirectX 7 had a "strided mode" for user-supplied arrays, but it was removed in DirectX 8. You could have one array for the coordinates, one for the diffuse colors, and one for the texture coordinates.

The new option in DirectX 8 is using multiple vertex buffers for the same effect. They are called "vertex streams". Don't ask me why...

I'd suggest against using multiple vertex streams unless you really need them. (Like when doing tweening.)

You don't even have to use vertex buffers while you are experimenting, or when your program is fillrate limited anyway.

Humus said:
Btw, does anyone know the rationale for why Direct3D has backface culling and lighting enabled by default? It seems like a good way to waste a beginner's time trying to figure out why nothing comes out on the screen. :rolleyes: Not until I changed the clear color to blue did I see that I actually got a triangle on the screen, but it was black instead of Gouraud shaded; I had to actively disable lighting for it to work.

Don't look for logic in the Direct3D API. You'd only be wasting your time... :D

And be glad that DX8 is simple and straightforward compared to DX7 :)
 
Assuming a certain render state is set is never a good idea; the D3D Help isn't even clear about the defaults all the time. So to be sure, it's always best to explicitly set the render states as you want them. It's a pain, but it's the only way to play it safe.

K-
 
Humus said:
Anyway, do I have to use vertex buffers? Is there no equivalent to glVertex3f() or something like that? It seems like overkill and a waste of time to use vertex buffers for simple test apps.
I suppose that it's for efficiency; there's probably no need to supply two different ways of doing the same thing when one is already cheaper. (Besides, small test programs have a strange habit of growing into large ones, and it's easier to start the "right" way than to change it later.)

I suppose if you really wanted to, you could write your own "wrapper" macros.
 
Hyp-X said:
If you don't want to use CreateVertexBuffer and all the complication that goes with it you can use DrawPrimitiveUP & DrawIndexedPrimitiveUP and pass an array of vertices to them.

Code:
struct vert_t {
   float x, y, z, rhw;
   DWORD color;
} vertices[3] = {
   {50, 50, 0, 1, 0xFF}, {100, 150, 0, 1, 0xFF00}, {150, 100, 0, 1, 0xFF0000},
};

pD3DDev->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);
pD3DDev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

pD3DDev->SetVertexShader(D3DFVF_XYZRHW | D3DFVF_DIFFUSE);
pD3DDev->DrawPrimitiveUP(D3DPT_TRIANGLELIST, 1, vertices, sizeof(vert_t));

This is about the simplest example I can come up with in Direct3D.

Note that I used post-transformed vertices: the coordinates are in screen coordinates, there's no need to disable lighting, and the diffuse color is used directly. Of course if you go to 3D it gets more complicated...

Thanks for the reply. Yeah, DrawPrimitiveUP looks a little better, though still nowhere near as easy to use as glVertex3f() calls in OpenGL. I'm gonna check it out.


Hyp-X said:
Don't look for logic in the Direct3D API. You'd only be wasting your time... :)

I was afraid of that ...
 
Kristof said:
Assuming a certain render state is set is never a good idea; the D3D Help isn't even clear about the defaults all the time. So to be sure, it's always best to explicitly set the render states as you want them. It's a pain, but it's the only way to play it safe.

K-

Yeah, I know, but for the absolute newbie it isn't helpful to have to actively disable stuff to get something on the screen the very first time he tries the API. In OpenGL everything is set up by default in such a way that it helps the newbie get something on the screen: lighting is disabled, backface culling is disabled, the clear color is black while the vertex color is white, etc.
 
Simon F said:
Humus said:
Anyway, do I have to use vertex buffers? Is there no equivalent to glVertex3f() or something like that? It seems like overkill and a waste of time to use vertex buffers for simple test apps.
I suppose that it's for efficiency; there's probably no need to supply two different ways of doing the same thing when one is already cheaper. (Besides, small test programs have a strange habit of growing into large ones, and it's easier to start the "right" way than to change it later.)

I suppose if you really wanted to, you could write your own "wrapper" macros.

Well, if developer productivity matters, I think something like glVertex() should be there. Also, depending on the aim of the app, a glVertex call may actually be cheaper. If the app doesn't use many polygons it may save memory to use glVertex() instead of setting up buffers and filling them, say for D3D on a PDA/cellphone for instance (not sure if that exists?).

In 95% of the apps I've ever done, the overhead of using glVertex3f() is insignificant. I don't have many small test applications that have grown. Whenever I want to learn a new 3D technique I start a new project to implement it. When things are working I post a demo and probably never go back to that app; instead I rewrite it in some "real" app.

I'll do a wrapper though, shouldn't take too much time I hope.
 
Yeah, you can't beat OpenGL for letting you get a spinning triangle on the screen with a minimum of hassle. (You can still screw things up by pointing the camera in the wrong direction, though, as I keep reminding myself.)

DX has gotten a lot better over the years, though -- around DX5/DX6 you needed about 1500 lines of boilerplate code, now with DX8 you only need about 100 lines.

While this is stupid, and while it frustrates people who appreciate elegant APIs, it doesn't really get in the way of writing large projects.

Most developers I know just copy one of the how-to-make-a-spinning-triangle tutorials from the SDK and go from there.

I think fundamentally the DX team is only interested in supporting large applications, where the extra day of work and extra 100 lines of code are not significant barriers to entry.
 
That's true. For instance, we have a small library for handling Direct3D initialization, not much different from the nice GLUT for OpenGL. We also have some templates which ease the work of creating and using vertex buffers.

On the other hand, I think some of the D3DX library is quite useful.
 
"Are there no equivalent to glVertex3f() or something like that? It seams like overkill and a waste of time to use vertex buffers for simple test apps."

It's redundant functionality, and significantly less efficient. A lot of D3D's design philosophy appears to be about not letting you hurt yourself too much. Funnily enough, it's something they added back into the Xbox version of D3D; it's a lot easier to hack together little in-game debug visualisations with the immediate-mode syntax.

"Btw, do anyone know the rationale why Direct3D have backface culling and lighting enabled by default? It seams like a good way to waste the beginners time trying to figure out why nothing comes out on the screen. "

They're trying to provide a "useful" startup condition. All the states are "guaranteed" at startup (but I wouldn't bet on a particular driver getting it right). Initialisation used to be a real pain in the early days of D3D: you literally had to go through every state and explicitly set it to know what state your hardware was in. Most apps will probably want backface culling on, and most will probably want to use lighting, so they are set by default.
 
I would say that it's not very useful to have lighting enabled with no lights defined. It just makes sure everything comes out black regardless of whether you provide vertex colors or not. Having a default state is of course a good thing, but to me it makes much more sense to provide default states that help get stuff on the screen. If I need lighting I can enable it, but when a newbie just tries to get into the API it's a pain trying to figure out why his triangles didn't make it onto the screen (or so it appears). The same goes for backface culling.
 
Humus said:
I would say that it's not very useful to have lighting enabled with no lights defined. It just makes sure everything comes out black regardless of whether you provide vertex colors or not.

Well, this is not really true.

You can set the ambient color and you will see the objects without adding lights to the scene.

You can also use:
Code:
pD3DDev->SetRenderState(D3DRS_EMISSIVEMATERIALSOURCE, D3DMCS_COLOR1);
and you will see the vertex colors you set even without ambient or lights set.
 
Well, the point I was trying to make is that with the default state, triangles may either get culled away or drawn black (which is most likely the color most people will use for clearing the buffer), which really isn't helping the newbie. The point is that the API should be designed in such a way that the newbie can get some early feedback on the screen without needing to dig deeply into the different states. For the experienced user it doesn't matter much what the default states are, since he's going to set them up correctly anyway, but for the newbie it's essential.


Btw, I have to complain about another thing: AFAICS I have to provide vertex colors as four unsigned bytes, so why is there no support for floating-point colors? I'm required to use a specific datatype for everything; in OpenGL I can just choose whatever datatype I find suitable. Personally I almost always use floats for vertex colors unless I have a good reason not to, say for performance and memory savings on high-polygon-count models, but otherwise floats going from 0 to 1 are much more intuitive than unsigned bytes packed together into a word. For dynamic data it would also be faster to use floats since I won't need to do the clamping myself; it can then be performed by the hardware.
 
I'm not convinced by the "should be structured for the newbie" argument. After all, most of your users are only newbies once.
The "what is the best default state?" question is kind of irrelevant; as far as I'm concerned, as long as there is one, I don't care.

Basically you can pass colors in as floats, but you can't use FVF flags to do this. You need to call CreateVertexShader with a NULL vertex shader function; this allows you to specify the input stream format explicitly, and the NULL shader will cause it to send the data to the fixed-function pipeline.
Personally I think the coupling of vertex shaders and stream formats was a mistake in DX8, since it's common to want to call a vertex shader with a variety of stream types. They did rectify this in the Xbox version.
 
Well, as you said, you don't care about the default state, but the newbie (in this case me) does care. That's the point. Furthermore, it doesn't make any sense to have any fancy stuff enabled by default, not even backface culling or lighting (especially not with no lights defined).

Oh well, guess I'll get used to it ...
 
Here is a simple example using vertex format declaration:

Code:
struct my_vertex_t {
	float x, y, z;
	float r, g, b;
};
DWORD my_vertex_decl[] = {
	D3DVSD_STREAM(0),
	D3DVSD_REG(D3DVSDE_POSITION, D3DVSDT_FLOAT3),
	D3DVSD_REG(D3DVSDE_DIFFUSE, D3DVSDT_FLOAT3),
	D3DVSD_END()
};
DWORD my_vertex_handle;

pD3DDev->CreateVertexShader(my_vertex_decl, NULL, &my_vertex_handle, 0);

Then you can use it like this:

Code:
pD3DDev->SetVertexShader(my_vertex_handle);
pD3DDev->DrawPrimitiveUP(D3DPT_TRIANGLELIST, num_triangles, vertices, sizeof(my_vertex_t));

This will only work in HW T&L mode if the driver supports the DirectX 8 interface.
With HW T&L capable devices, check the MaxStreams member of the D3DCAPS8 structure, and don't enable HW T&L when it's zero.

Also be aware that handles returned by CreateVertexShader are lost at device reset.
 
OK, seems it's doable then at least :)
Not sure I like that stuff can get lost, though. How do I know when stuff gets lost?
 
It has been improved vastly in DX8 with regard to the device and surface lost issues.
Now you just have to check for a lost device (it can be done at Present, which returns D3DERR_DEVICELOST if the device is lost), and if your application currently has focus, Reset the device and reload every non-managed surface and other resource. Of course, a list of everything that can be lost would be handy for beginners.
 
Humus said:
OK, seems it's doable then at least :)
Not sure I like that stuff can get lost, though. How do I know when stuff gets lost?

It was a nightmare in DX7, but it's relatively easy now.
Start your "RenderScene" routine with the following code:

Code:
HRESULT res = pD3DDev->TestCooperativeLevel();
if (res == D3DERR_DEVICELOST)
	return;
else if (res == D3DERR_DEVICENOTRESET)
	ResetDevice();

ResetDevice should:
  1. Destroy any non-managed resources (VBs, IBs, textures)
  2. Reset the device ( pD3DDev->Reset() )
  3. Re-initialize render states / texture stage states
  4. Re-set lights
  5. Recreate non-managed resources, vertex shaders, and pixel shaders (or you can choose to recreate them on demand)

I recommend using managed resources, because they use video memory on demand and let you avoid all the fuss of step 1 at device reset.

Making ResetDevice a separate function is a good idea as you will want to call it from the OnSize handler of your window.

If the device gets lost during the render scene, all functions will fail silently (the big advantage in DX8), except pD3DDev->Present(), where you can simply ignore any errors. (You will handle the lost device in the next render scene, which can be any amount of time later...)

Debugging hint: if Reset fails, you likely forgot to free something up.
 