When does a game become CPU/GPU bound (limited)?

Please correct me if I'm wrong.

CPU Bound (I take "bound" to mean "limited")

You have a PC that has a super-fast GPU but a slow CPU.
e.g. an NV40 GPU with a 1 GHz AMD CPU.

GPU Bound

You have a PC that has a super-fast CPU but a slow GPU.
e.g. an AMD64 3400+ with a GeForce DDR.

Is this correct? This is how I've understood CPU/GPU bound. Can someone explain this to me?

Thanks
US
 
Unknown Soldier said:
Please correct me if I'm wrong.

CPU Bound (I take "bound" to mean "limited")...

Those would be examples of hardware configs that would exaggerate a game being CPU or GPU bound.

CPU Bound:
The frame rate bottleneck in a game is the CPU. The GPU is waiting around for data to be fed to it.

GPU Bound:
The frame rate bottleneck in a game is the GPU. The CPU supplies (or could supply) data faster than the GPU can render it.

Oversimplified maybe, but that's the way I understand it anyhow.
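
To make the distinction concrete, here is a minimal C++ sketch. The simulateCpuFrame/simulateGpuFrame functions and their millisecond costs are invented stand-ins, and the two halves are serialized for clarity; a real engine would overlap them and time the GPU with timestamp queries rather than wall-clock sleeps.

```cpp
// Minimal sketch: time the CPU half and the GPU half of a frame and see
// which one dominates. The sleeps are stand-ins for real work.
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;
using Ms    = std::chrono::duration<double, std::milli>;

void simulateCpuFrame() { std::this_thread::sleep_for(Ms(12.0)); } // game logic, physics, draw-call setup
void simulateGpuFrame() { std::this_thread::sleep_for(Ms(7.0));  } // rendering time on the GPU

int main() {
    auto t0 = Clock::now();
    simulateCpuFrame();            // CPU prepares the frame
    auto t1 = Clock::now();
    simulateGpuFrame();            // GPU renders it
    auto t2 = Clock::now();

    double cpuMs = Ms(t1 - t0).count();
    double gpuMs = Ms(t2 - t1).count();
    std::printf("CPU %.1f ms, GPU %.1f ms -> %s bound\n",
                cpuMs, gpuMs, cpuMs > gpuMs ? "CPU" : "GPU");
}
```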
 
Unknown Soldier said:
Please correct me if I'm wrong.

CPU Bound (I take "bound" to mean "limited")...

This is correct, yes.
Not much to explain, really.
Different workloads lean more heavily on CPU power, GPU power, memory bandwidth, etc. than others.

Physics, for example, is done on the CPU, so if a game has highly advanced physics with a lot of objects on-screen then it will most likely be CPU bound rather than GPU bound.
A very pixel-shader-intensive game tends to be the opposite, since pixel shading has no effect on the CPU.
So you can have the fastest CPU and GPU available and you'll still be either CPU or GPU bound, depending on the application you're running. (Since 99% of all games have no hard-locked frame cap, you're always bound by one or the other.)
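
One practical way to tell which side you're on is the classic resolution test: pixel-shading cost scales with the number of pixels, CPU cost does not. Below is a toy C++ model of that test; the cpuMs and msPerMPixel cost constants are illustrative assumptions, not measurements.

```cpp
// Toy model of the resolution test. Pixel-shading cost scales with
// resolution, CPU cost does not: a flat frame rate across resolutions
// means the application is CPU bound.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpuMs       = 15.0;  // assumed fixed per-frame CPU cost
    const double msPerMPixel = 12.0;  // assumed GPU cost per million pixels

    const int widths[]  = {640, 1024, 1600};
    const int heights[] = {480, 768, 1200};

    for (int i = 0; i < 3; ++i) {
        double mpix    = widths[i] * heights[i] / 1e6;
        double gpuMs   = mpix * msPerMPixel;
        // Assume perfect CPU/GPU overlap: the slower side sets the frame time.
        double frameMs = std::max(cpuMs, gpuMs);
        std::printf("%4dx%-4d -> %5.1f fps (%s bound)\n",
                    widths[i], heights[i], 1000.0 / frameMs,
                    cpuMs >= gpuMs ? "CPU" : "GPU");
    }
}
```

If the frame rate stays flat as the resolution drops, the CPU is the bottleneck; if it climbs, the GPU was.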
 
Note that in some cases you can be limited by both the CPU and the GPU. For example, a program may spend some time on the CPU, then some time on the GPU, without much parallelism between the two. It's actually not uncommon. In these cases, either a faster CPU or a faster GPU can give better frame rates.
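
A quick worked example of that serialized case, with assumed 10 ms of CPU work and 10 ms of GPU work per frame: with no overlap the frame time is the sum of both parts, so speeding up either side improves the frame rate.

```cpp
// Serialized vs. pipelined CPU/GPU frame times. The 10 ms figures are
// assumptions for illustration only.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpuMs = 10.0, gpuMs = 10.0;

    double serialMs  = cpuMs + gpuMs;           // CPU finishes, then the GPU runs
    double overlapMs = std::max(cpuMs, gpuMs);  // fully pipelined: slower side wins

    std::printf("serialized: %4.1f ms (%5.1f fps)\n", serialMs, 1000.0 / serialMs);
    std::printf("pipelined:  %4.1f ms (%5.1f fps)\n", overlapMs, 1000.0 / overlapMs);

    // In the serialized case a faster CPU still helps, even though the GPU
    // is unchanged: 5 ms + 10 ms = 15 ms per frame.
    std::printf("faster CPU, serialized: %4.1f ms (%5.1f fps)\n",
                cpuMs / 2.0 + gpuMs, 1000.0 / (cpuMs / 2.0 + gpuMs));
}
```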
 
This reminds me of a concern I have had recently, particularly since the performance results of the NV40 were released:

Are CPUs keeping up?

Most reviews of the NV40 agree that it is a tremendous leap from the last generation of cards, but it also seems that most games tested could not properly show the true power of the card unless they were run at very high resolutions, because they were so CPU bound. A high resolution makes a game look better for sure, but let's face it: anything much higher than 1024x768 ventures into the realm of diminishing returns.

The point is that I don't think there have been leaps as large in the world of CPUs as there have been in graphics cards lately, and I don't think there will be anytime soon.

What do you folks think?
 
Well, if the CPU is not keeping up, it means we can go for better FSAA/AF and/or better pixel shader effects. The power is not going to be wasted.
 
Bohdy said:
This reminds me of a concern I have had recently, particularly since the performance results of the NV40 were released:

Are CPUs keeping up?...
Just some sidenotes first:

NV surely jumped very far from their NV3x starting point with this "new" generation. They had to do it sometime, and the fact that they've done it now just shows how much potential GPUs have when you think about the possibilities.

E.g. GPUs have made amazing jumps since ATI introduced their R300 GPU, but these are not linear and by no means "normal". NV40 is kind of an intermediate step that seems to take PS/VS 2.0 to an extreme level, but they also wanted to attract developers, so they included PS/VS 3.0, and I think it was a very good choice this time around.

Since ATI has pushed so hard since the R300, NV pushed even harder this time, and you can expect the R500/NV50 GPUs to be another great step from here.

Now your question:

Whether CPUs keep up is a very good question; it mainly depends on AMD's and Intel's commitment to bringing CPUs with very high IPC and all-round performance to market. At present it looks like GPUs have run quite a ways ahead, and even (game) engines can't keep up with that amazing pace.

Old engines are bottlenecked very heavily now, since they depend almost entirely on fillrate, and I don't think there will be a CPU in the near future that can solve this issue (since you need a fast CPU to feed those pipes).

New engines, however, will not only have heavy fillrate needs but will also require more shader power, especially arithmetic, and more complex architectures. Without even taking AI requirements and game physics into account, you can bet that you won't have enough CPU power to satisfy those needs in the next 5-10 years or so.

I very much hope that AMD introduces those dual-core CPUs soon (it seems to be happening at either 90nm or 65nm, which could be as early as 2006), since I think that will be a major improvement, especially for games (when they are finally coded that way).

Before that, I don't think CPUs will keep up with the power of GPUs, since those "guys" are parallel beasts, and only fast "girls" won't be enough :)
 
Tim Sweeney (lead programmer of the Unreal engine) had some interesting things to say regarding CPU and GPU "convergence" in the interview he gave recently to Beyond3D:
I think CPU's and GPU's are actually going to converge 10 years or so down the road. On the GPU side, you're seeing a slow march towards computational completeness. Once they achieve that, you'll see certain CPU algorithms that are amenable to highly parallel operations on largely constant datasets move to the GPU. On the other hand, the trend in CPU's is towards SMT/Hyperthreading and multi-core. The real difference then isn't in their capabilities, but their performance characteristics.

When a typical consumer CPU can run a large number of threads simultaneously, and a GPU can perform general computing work, will you really need both? A day will come when GPU's can compile and run C code, and CPU's can compile and run HLSL code -- though perhaps with significant performance disadvantages in each case. At that point, both the CPU guys and the GPU guys will need to do some soul searching!

Anyone think this likely?
 
Diplo said:
Tim Sweeney (lead programmer of the Unreal engine) had some interesting things to say regarding CPU and GPU "convergence" in the interview he gave recently to Beyond3D...

Anyone think this likely?

One day you will just have a cheap cube full of transistors doing everything today's computers do at unimaginable speeds.

And long before that, CPUs and GPUs will meet. :)
 
I don't think you'll ever see the problem that either the CPU or the GPU is too fast compared to the other. Games will adapt. Game developers will try to maximize performance by using both CPU and GPU to their max.
You're wasting processing power if you're running into limits on one without using the other to the fullest.

If you are very much CPU bound, why not add some really incredible shaders for graphical detail? They're more or less free anyway...

So games will tend toward a balance between CPU and GPU demands, regardless of the relative processing power available at the time.

What will happen is that differences in CPU and GPU power influence which direction games are heading. Implementing a very complex physics engine might not be very smart if you're already very much CPU bound... In that case, game developers will focus on fancier graphics.

Of course, old games quite often will not be optimized for new hardware... but when you're running 1600x1200 with 4xAA and 16xAF at 60 fps, you don't really care if it's CPU bound, do you? :)
 
It's a reasonable view. Note, though, that there are many reasons why an application might be VPU-bound. A simple example is that the game could be geometry bound or fill rate bound (often simplified to 'front end' and 'back end' bottlenecks).

Inside a single application, the bottleneck may shift from the CPU to the VPU front-end to the VPU back-end frequently - even several times in the same scene.

A particular point to note is that minimum and maximum frame rates often test different bottlenecks than average frame rates do.
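
To make the front-end/back-end distinction concrete, here is a toy C++ model that scales triangle count and resolution independently to see which knob actually moves the frame time; the per-triangle and per-pixel costs are assumptions, not measurements of any real VPU.

```cpp
// Toy model for isolating VPU front-end (geometry) vs back-end (fill rate)
// bottlenecks. The per-unit costs below are illustrative assumptions.
#include <algorithm>
#include <cstdio>

double frameMs(double millionTris, double millionPixels) {
    const double msPerMTri = 6.0;  // assumed vertex/triangle-setup (front-end) cost
    const double msPerMPix = 8.0;  // assumed fill-rate (back-end) cost
    // Whichever stage is slower stalls the pipeline and sets the frame time.
    return std::max(millionTris * msPerMTri, millionPixels * msPerMPix);
}

int main() {
    // Baseline scene: 2M triangles at roughly 1.3M pixels (1280x1024).
    std::printf("baseline:        %5.1f ms\n", frameMs(2.0, 1.31));
    // Halving the resolution changes nothing -> not fill-rate (back-end) bound...
    std::printf("half resolution: %5.1f ms\n", frameMs(2.0, 0.65));
    // ...but halving the geometry does -> this scene is front-end bound.
    std::printf("half geometry:   %5.1f ms\n", frameMs(1.0, 1.31));
}
```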
 