Porting DirectX to MFC

deepavs

Newcomer
I ported my DirectX application to MFC using the app wizard. While creating the device, I turned VSync off as follows:

d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;

Now, when rendering in an infinite while loop, I get above 200 fps. But if I try to render using a timer with a 5 ms interval, it shows only 64 fps. Do I have to set any D3D parameters to get the maximum fps when using a timer?
 
Windows doesn't have very high timer granularity. You will get the same effect if you call Sleep(1) every frame, which should theoretically allow up to 1000 fps, but in reality gets stuck at 64 fps on XP. That's still better than it used to be, though. I recall it was something like 21 fps on Win98.
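For what it's worth, you can raise the scheduler's tick rate with the multimedia timer API. A minimal sketch (mine, not from the original posts; link against winmm.lib) that shows the difference:
Code:
#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod / timeEndPeriod / timeGetTime */
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    DWORD t0;
    int i;

    /* At the default tick (often ~10-15.6 ms), Sleep(1) rounds way up. */
    t0 = timeGetTime();
    for (i = 0; i < 50; i++) Sleep(1);
    printf("default:     %.2f ms per Sleep(1)\n", (timeGetTime() - t0) / 50.0);

    /* Request 1 ms resolution; Sleep(1) should now be close to 1 ms. */
    timeBeginPeriod(1);
    t0 = timeGetTime();
    for (i = 0; i < 50; i++) Sleep(1);
    printf("1 ms period: %.2f ms per Sleep(1)\n", (timeGetTime() - t0) / 50.0);
    timeEndPeriod(1);   /* always pair with timeBeginPeriod */
    return 0;
}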
 
You should use a high-resolution timer via the QueryPerformanceFrequency() and QueryPerformanceCounter() functions.
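To tie that back to the original question: rather than driving rendering off a 5 ms WM_TIMER, the usual pattern is to render whenever the message queue is empty and measure the frame time with QPC. A rough sketch (my own; Render() is a placeholder for your D3D drawing code):
Code:
#include <windows.h>

void Render(double dtSeconds);   /* placeholder for your D3D draw call */

void RunMessageLoop()
{
    MSG msg;
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&prev);

    for (;;)
    {
        /* Drain pending messages without blocking */
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                return;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        /* Queue is empty: render one frame, timed with QPC */
        QueryPerformanceCounter(&now);
        Render((double)(now.QuadPart - prev.QuadPart) / (double)freq.QuadPart);
        prev = now;
    }
}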
 
akira888: That's what I read too, but I couldn't duplicate it on my machine, using debug mode (release mode did some optimizations that made the numbers completely bogus, with Sleep(10) taking 1 ms... whatever). Here's my code:
Code:
#include <windows.h>
#include <stdio.h>

LARGE_INTEGER t1, t2, mT;   /* mT holds the counter frequency (ticks/sec) */
double dT;

void Init()
{
    QueryPerformanceFrequency(&mT);
}

void StartBenchmark()
{
    QueryPerformanceCounter(&t1);
}

void EndBenchmark()
{
    QueryPerformanceCounter(&t2);
    /* Elapsed seconds, then the average in ms over the 10 Sleep calls */
    dT = (double)(t2.QuadPart - t1.QuadPart) / (double)mT.QuadPart;
    printf("%f\n", dT * 1000.0 / 10.0);
}

int main()
{
    int i, x;
    Init();
    for (x = 0; x < 25; x++)
    {
        StartBenchmark();
        for (i = 0; i < 10; i++)
        {
            Sleep(x);   /* measure the real cost of Sleep(x) */
        }
        EndBenchmark();
    }
    return 0;
}
Results, copy-pasted from the console:
0.001676
1.919881
2.923109
3.897842
4.876877
5.846525
6.828941
7.805573
8.781535
9.758447
10.734688
11.711516
12.687953
13.666569
14.622612
15.617824
16.594902
17.569859
18.548307
19.523767
20.500901
21.477926
22.454111
23.431303
24.407208
I read a lot about this issue, but could never duplicate it on Visual C++ 2005 under Windows XP Professional 32-bit. I'd be curious what you get with that same code. The most likely explanation would be that the timer has a granularity of 1 ms on Pro and 15 ms on Home, but that's a little hard to believe, even more so considering I doubt all of you are on Home. For the record, I get similar results if the for(i) loop has only 1 iteration instead of 10 (so it isn't "avoiding" a 10-20 ms limit by merging the Sleeps).


Uttar
 
I'd be careful when trying to do timing benchmarks - especially when you're dipping into multiprogramming via Sleep(). Optimizations in release mode can, as you observed, screw things up... if the compiler can find a way of unrolling or simplifying the loop, it will - which can effectively change what you're timing ;)
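One cheap guard against that (a sketch of mine, not from Jack's post): seed the timed work with a runtime value and publish its result through a volatile, so the optimizer can neither constant-fold nor delete the loop you're measuring:
Code:
#include <windows.h>
#include <stdio.h>

volatile ULONG g_sink;   /* volatile write = observable side effect */

int main()
{
    LARGE_INTEGER freq, a, b;
    ULONG acc = (ULONG)GetTickCount();  /* runtime seed: no constant folding */
    int i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&a);
    for (i = 0; i < 1000000; i++)
        acc += i ^ (acc << 1);          /* the work being timed */
    QueryPerformanceCounter(&b);
    g_sink = acc;   /* publish the result so the loop can't be removed */

    printf("%f ms\n",
           1000.0 * (double)(b.QuadPart - a.QuadPart) / (double)freq.QuadPart);
    return 0;
}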

There are some other timing aspects under Windows (which can apply to other systems as well) that royally screw things up...

If you have multiple processors then it's possible to get synchronization issues when sending messages between threads. I forget the details, but Microsoft recently had to release a patch for dual-core CPUs because of some synchronization issue.

Also, the one that really makes my head hurt is the "speed step" technology: to use QPC you have to use QPF, and QPF is only sampled once at Windows startup (IIRC). With speed-step the actual frequency can change, yet the value reported by QPF won't. Thus your timing gets completely busted - and you can see examples of "slow-mo" in games where this isn't handled.
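A sketch of the mitigations people use for this (mine, roughly along the lines of the MSDN "Game Timing and Multicore Processors" advice; it doesn't cure a wrong QPF, but it stops glitches from blowing up the simulation): pin the timing reads to one core and clamp deltas that can't be real frames:
Code:
#include <windows.h>

/* Returns the seconds elapsed since the previous call, with two guards:
   reads happen on a single core, and implausible deltas are clamped. */
double FrameDelta()
{
    static LARGE_INTEGER freq, last;
    static int initialized = 0;
    LARGE_INTEGER now;
    double dt;

    if (!initialized)
    {
        /* Pin this thread's QPC reads to CPU 0 to dodge per-core drift */
        SetThreadAffinityMask(GetCurrentThread(), 1);
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&last);
        initialized = 1;
        return 0.0;
    }

    QueryPerformanceCounter(&now);
    dt = (double)(now.QuadPart - last.QuadPart) / (double)freq.QuadPart;
    last = now;

    /* Clamp deltas that can't be real frames (counter glitch, debugger
       break, laptop waking up...) so the game never leaps or crawls. */
    if (dt < 0.0)  dt = 0.0;
    if (dt > 0.25) dt = 0.25;
    return dt;
}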

There was a really good discussion of timing under windows on the DirectXDev mailing list a couple of months ago - have a look in the archives if you're interested.

hth
Jack
 