Dynamic Branching Benchmark

Is this thing CPU-bound? It appears to be, since it uses 100% of one of the four cores on my Opteron 285...

FX 4500 gives about 1100 with branching off and 1600 with it on.
 
X1800 XL:

533 without,
1401 with.

So nearly a 3x improvement.

Odd, though: the framerate starts at 30, then slowly winds down to 20, whereas without branching it stays fixed at 10.
 
I thought that was the case, though, at least in Task Manager.

I'm pretty sure my CPU shows 100% utilization whether I'm folding or gaming or whatever... as long as calls to the DX API are being made.
 
The CPU (thread) will be at 100% because most of the time you're running your game/engine/whatever loop as fast as you can; even if your loop hardly did anything, it would still be at 100%.
 
[Attachment: cpu-usage.png]

Try it yourself and see, if you think I've just taken a screenshot of CPU usage while making a forum post or something. It can be the case that code eats 100% of the CPU (even if it's doing very little useful work), but it certainly doesn't have to be.
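To illustrate the point, here's a hedged sketch (not taken from any of the demos discussed here): a loop that polls as fast as it can will show ~100% on one core in Task Manager even when each iteration does almost nothing, while the same loop sleeping a millisecond per "frame" will sit near idle.

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Count how many loop iterations complete within `ms` milliseconds.
// busy == true  : spin flat out (reads as 100% of one core);
// busy == false : yield ~1 ms per "frame" (CPU usage drops to near idle),
//                 so only ~1 iteration per millisecond completes.
std::uint64_t run_loop(int ms, bool busy) {
    using clock = std::chrono::steady_clock;
    const auto end = clock::now() + std::chrono::milliseconds(ms);
    std::uint64_t iterations = 0;
    while (clock::now() < end) {
        ++iterations;  // stand-in for a trivially cheap frame
        if (!busy)
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return iterations;
}
```

Over the same wall-clock window, the busy variant completes orders of magnitude more iterations while pinning a core; the sleeping variant completes roughly one per millisecond. The 100% figure measures how often the thread asked for the CPU, not how much useful work it did.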
 
I thought that was the case, though, at least in Task Manager.

I'm pretty sure my CPU shows 100% utilization whether I'm folding or gaming or whatever... as long as calls to the DX API are being made.

AndrewM said:
The CPU (thread) will be at 100% because most of the time you're running your game/engine/whatever loop as fast as you can; even if your loop hardly did anything, it would still be at 100%.

Yeah, that's what I thought too... Task Manager's CPU usage always seems to be maxed out.

Rys said:
Think about what you just wrote for a sec. Then pretend you never had :p

You know what I meant :p Granted I didn't do any scientific testing!
 
Well, in the name of science I ran this benchmark at 3GHz and at 4GHz, with basically a 5-point difference between the two CPU speeds...

Prescott 478 3GHz (at 4GHz via 267FSB)
1GB DDR A-Data RAM at 534MHz (1:1) 3-4-4-7
Gainward Bliss 7800GS+ AGP 512mb (24-pipe G71 GDDR3) at 525/1540

3GHz / 4GHz

1245 / 1249 - Branching off
1177 / 1182 - Branching on :cry:

Can't remember which ForceWare this is, but your app reports it as: 6.14.10.9728

Edit: Had the 3GHz and 4GHz results backwards... D'oh!
 
The CPU (thread) will be at 100% because most of the time you're running your game/engine/whatever loop as fast as you can; even if your loop hardly did anything, it would still be at 100%.
Wouldn't the loop block and sleep while waiting for the video card to finish rendering a frame? If it doesn't, that's a case of really poor OS design.
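For what it's worth, a loop doesn't have to depend on the driver or the OS to block for it; it can pace itself explicitly. A minimal, hypothetical sketch (plain std::this_thread, no D3D calls; the frame period is just an illustration):

```cpp
#include <chrono>
#include <thread>

// Sleep until the next frame deadline instead of spinning.
// `next` carries the deadline across frames so timing error does not
// accumulate; sleep_until hands the CPU back to the scheduler, which is
// what makes Task Manager's utilization figure drop.
void wait_for_next_frame(std::chrono::steady_clock::time_point& next,
                         std::chrono::steady_clock::duration period) {
    next += period;
    std::this_thread::sleep_until(next);
}
```

With a 10 ms period (~100 fps cap), ten frames take at least ~100 ms of wall time, nearly all of it spent sleeping rather than spinning. Whether Present itself blocks or spins, however, is up to the driver, as discussed below.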
 
The CPU (thread) will be at 100% because most of the time you're running your game/engine/whatever loop as fast as you can; even if your loop hardly did anything, it would still be at 100%.
You aren't "folding at home" by any chance :D
 
Wouldn't the loop block and sleep when waiting for the vidcard to complete rendering a frame? If it doesn't then it's a case of really crap OS design.

You would think so, but when I had a dual-monitor setup I routinely had Task Manager and RivaTuner running on the secondary monitor to check memory usage and GPU temps. IIRC the CPU was always maxed out.
 
I've seen cases where the video driver hogs the CPU to reduce latency (presumably to improve review benchmarks by some small amount), even if the app does the right thing. So whether you're at 100% CPU or not might very well depend on the driver you use.
 
Hmm, just to clarify a previous post: when I said I get 100% CPU utilization (according to Task Manager) when folding, I meant GPU folding. :smile:
 
Games (and graphics demos) usually show 100% CPU "utilization" because they never give their time back to the OS, even when they don't need it. A simple Sleep(0) call each frame would correct that, with little or no performance impact.
 
Games (and graphics demos) usually show 100% CPU "utilization" because they never give their time back to the OS, even when they don't need it. A simple Sleep(0) call each frame would correct that, with little or no performance impact.

Are you sure about that? In my experience Sleep(0) has no effect on CPU utilization. I just tried it again to make sure, and I'm definitely still seeing 100% CPU utilization with a Sleep(0) call thrown in every frame of a demo of mine.
 
Games (and graphics demos) usually show 100% CPU "utilization" because they never give their time back to the OS, even when they don't need it. A simple Sleep(0) call each frame would correct that, with little or no performance impact.

So how can Microsoft Outlook check for new emails in the background while I'm playing an intensive 3D game?
 
His post was phrased a bit incorrectly: an application can't decide not to give up CPU time. It can only request time, even if it's doing nothing with it.
 
I guess what I said was wrong, sorry everyone. I recall reading a tutorial about it some years ago and I could have sworn this would work :( I also ran a test just now, and Sleep(0) doesn't reduce the CPU load... at least not with Vista and .NET 2.0. What it does instead is trigger the execution of waiting threads, allowing better CPU utilization. Sleep(0) is something like "I am done for now, you should take care of the others". This is pretty useless, however, if your application is the only one running :)

My tests were run on an empty loop; I'm not sure it will behave the same in a 3D demo.

I also tried Sleep(1), but that reduced my performance by a factor of 3. However, this was an empty loop... Humus, if you have some spare time, could you try Sleep(1) in your demo and tell us the result? I'm curious how much it will reduce performance.
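The factor-of-3 result is consistent with how the two calls behave: Sleep(0) only yields to ready threads of equal priority and returns almost immediately when there are none, so the loop keeps spinning, while Sleep(1) actually cedes the rest of the timeslice and cannot return sooner than the system timer tick (by default ~15.6 ms on Windows unless timeBeginPeriod has raised it). A rough portable analogue of the empty-loop test, using std::this_thread rather than the Win32 Sleep (an assumption; exact granularity is OS- and configuration-dependent):

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Iterations completed in `ms` milliseconds when each iteration sleeps
// for `sleep_ms` milliseconds. A zero sleep behaves roughly like a
// yield: it returns near-immediately when no other thread is ready,
// so the loop still spins and CPU usage stays pegged. A 1 ms sleep
// caps the loop at roughly one iteration per timer tick.
std::uint64_t iterations_with_sleep(int ms, int sleep_ms) {
    using clock = std::chrono::steady_clock;
    const auto end = clock::now() + std::chrono::milliseconds(ms);
    std::uint64_t n = 0;
    while (clock::now() < end) {
        ++n;  // empty "frame"
        std::this_thread::sleep_for(std::chrono::milliseconds(sleep_ms));
    }
    return n;
}
```

If the empty loop was running at a few thousand iterations per second, inserting a sleep of at least one timer tick per iteration easily accounts for a 3x (or larger) slowdown, while the zero-length sleep changes almost nothing.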
 