Recovering D3D after a task switch

Goragoth

Regular
I just searched GameDev and couldn't find anything relevant, so I'll ask here; surely someone knows: what does a D3D9 program have to do on a task switch? I remember checking for lost surfaces in DirectDraw and having a restoreAll() function that reloads all the game resources. What is the equivalent in D3D?

I need to know how to find out that stuff needs restoring and then what things probably need to be done (like reloading textures and so on). Any help is appreciated. I guess the samples have the code I need but I prefer not to dig through them if I can avoid it :)
 
On a task switch? Can an application even know when a task switch occurred, or are you talking about something else?

The only time you have to recreate anything AFAIK is when there's a change to the main rendering surface, such as when the display mode changes or you resize the window. If you make your resources managed you don't have to reload them. Most resources can be made managed. Render targets however have to be recreated.
 
Check whether the window is in focus in your loop.
I'm busy eating pizza, so you'll have to use Google for the search.

MSDN may have info, though I doubt it.
 
Humus, can you elaborate on having the resources managed? What I'm working on is a fullscreen exclusive mode D3D game. I haven't worked with D3D much, just DD a while back. Anyway, if I alt-tab out of the game and then alt-tab back into it, it's just frozen (the display mode switches since my desktop mode does not match the game mode, so that might be why?). As I said in my previous post, in DD you check for DDERR_SURFACELOST (or something like that) and if you get that you restore all the surfaces (sprites).

I guess I should clarify that the solution is more general than just task-switching, that's just the most likely cause of losing resources (if that's indeed what is happening).

Looking at the D3D samples I found this piece of code:

Code:
if( m_bDeviceLost )
{
    // Test the cooperative level to see if it's okay to render
    if( FAILED( hr = m_pd3dDevice->TestCooperativeLevel() ) )
    {
        // If the device was lost, do not render until we get it back
        if( D3DERR_DEVICELOST == hr )
            return S_OK;

        // Check if the device needs to be reset.
        if( D3DERR_DEVICENOTRESET == hr )
        {
            // If we are windowed, read the desktop mode and use the same
            // format for the back buffer
            if( m_bWindowed )
            {
                D3DAdapterInfo* pAdapterInfo = m_d3dSettings.PAdapterInfo();
                m_pD3D->GetAdapterDisplayMode( pAdapterInfo->AdapterOrdinal, &m_d3dSettings.Windowed_DisplayMode );
                m_d3dpp.BackBufferFormat = m_d3dSettings.Windowed_DisplayMode.Format;
            }

            if( FAILED( hr = Reset3DEnvironment() ) )
                return hr;
        }
        return hr;
    }
    m_bDeviceLost = false;
}
This looks like what I want; it's just somewhat messy the way the whole sample framework is (maybe not messy, but complex anyway). I'll try to work through it and figure it out. It shouldn't be too hard, but if anyone has a nice and simple way of explaining it all, that would be nice. :)
 
What Humus was talking about is that you can specify, at creation time, one of several memory pools to allocate from. The two I remember off the top of my head are D3DPOOL_MANAGED and D3DPOOL_DEFAULT.

-Evan
 
Thanks darkblu, that's exactly what I was looking for. I might have found that with a little digging but I wasn't even sure what to search for. 8)
 
All working now :D
The ID3DXSprite object gave me a bit of trouble for a little while (though I didn't know that's what it was at the time) because it needs a couple of calls when resetting the device, but it's all good now.
 
I wonder what the best, most elegant way is to handle this.
The best way I can think of is to create a callback for every non-managed resource, which will (re)create and initialize the resource, and register it with the engine so it can be called whenever required.
At first I thought it would be enough to just cache the creation flags for all non-managed resources, but then they would remain in an uninitialized state, which is not always desirable.

But I'm still not sure where the callback would actually belong. Does it belong in the object class or the mesh class? Do I need to create a wrapper around the D3D texture/vertex buffer/index buffer classes and put it there? Or should it just live outside any class?
So far I've opted for the outside option, since it was the least intrusive. But it's not very elegant.
 
While my game doesn't use non-managed resources (yet, anyway) and is very simple, my thinking is that I would probably have a reset/recover function at the object level and have it call similar functions on any lower-level classes (e.g. Mesh objects) it uses. Probably inefficient, but to me it seems the "cleanest" way. I don't actually like having any functions outside of classes at all (I've been using Java for the last year or so :? ).

On a related note, what is the best way to handle the recovery routine? I currently have a function that gets called when a Present() call fails (with device lost). The function checks if it can reset the device yet; if it can't, it sets a bool to false (to indicate that the app isn't active), does a Sleep(25) and then returns. While active is false, the main loop more or less just runs the message loop and then calls the recover function again. Once it finds that it can reset the device, it does so and then sets active to true again. Originally I was going to have the function stay in a loop, but that meant it didn't get any messages until it recovered, which wasn't very good; besides, the program might later have some stuff that it needs to continue to do even when the app isn't active (such as networking, for example). The Sleep(25) is there to make sure the program isn't hogging the system while it's inactive and just running through a "dry" loop. Does this seem about right? Any suggestions on the sleep value: is 25 good, or should it be longer, shorter, or not there at all?
 
Goragoth said:
On a related note, what is the best way to handle the recovery routine? [...] The Sleep(25) is there to make sure the program isn't hogging the system while it isn't active and just running through a "dry" loop. Does this seem about right? Any suggestions on the sleep value, is 25 good or should it be longer, shorter, not there at all?

Do you have an upper limit on the framerate? If you do, then you already have some loop-frequency capping mechanism, which you can also use to naturally limit the restore() invocations without the need for any deliberate sleep() in the restore() itself.

If you don't cap your framerate, then what you have now is the best thing to do: not putting any pause between consecutive attempts to restore the device would be pure CPU hogging, and the place that pause belongs is exactly at restore() failure.
 