Next Generation Cards - will they need 4.5+ GHz CPUs?

g__day

What are folks' expectations regarding the next generation of GPUs, NV50 and R520? To avoid being CPU limited in titles, will they really need to be paired with a 4.5 GHz Intel or 3.5 GHz Athlon to give you a non-CPU-limited rig in the near future?

I am on a 2.3 GHz Athlon and a 9800 Pro at the moment, weighing whether to hold off buying a new X800 or 6800 until the next generation of GPUs appears. A 2.3 GHz Athlon may also hold back an R420 or NV40 class card.

This got me thinking about what happens when the NV50 and R520 appear - my CPU will be hopelessly too slow, so it's major upgrade time then.

So my two questions -

1) How fast a CPU do folks expect the next generation of cards will really require?

2) As GPUs gain performance much faster each year than CPUs do, what will be required in GPUs to keep future top-end systems balanced? I am supposing next generation GPUs will move even more of the processing load off the CPU and onto the video card, to alleviate the growing gap between the two critical pieces of our rigs.

Many thanks for your thoughts!
 
You might find ever more graphics processing being moved off the CPU and onto the GPU itself. With PEG giving high-speed transfer in both directions, VPUs will become less reliant on strong CPUs. VPUs will effectively become graphics co-processors.
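To make the "both directions" point concrete, here is a minimal sketch (plain OpenGL, all names mine) of the return half of that co-processor loop: pulling a rendered result back into system memory, which is exactly the path AGP throttled and PEG opens up.

```cpp
// Minimal sketch: a GPU used as a co-processor needs fast transfer *back*
// to system memory, not just upload. glReadPixels below is that return
// path -- the direction AGP made painfully slow and PCI Express makes
// symmetric.
#include <GL/gl.h>
#include <vector>

std::vector<float> readBackResults(int width, int height)
{
    // Assumes the GPU has just rendered its "computation" into the framebuffer.
    std::vector<float> results(width * height * 4);   // RGBA per pixel
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, &results[0]);
    return results;
}
```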
 
Bouncing Zabaglione Bros. said:
You might find ever more graphics processing being moved off the CPU and onto the GPU itself. With PEG giving high-speed transfer in both directions, VPUs will become less reliant on strong CPUs. VPUs will effectively become graphics co-processors.

Especially now that graphics cards are starting to enjoy such large memory capacities. I still have only 512 MB of RAM in my system, and within the year it sounds like we should start seeing video cards with the same amount.

Nite_Hawk
 
I wonder which parts of the 3D pipeline you could reasonably consider moving from the CPU to the GPU to better load-balance with the next generation of cards. For instance, could game physics be moved to the GPU, and would this be enough to help much?
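For what it's worth, here is a hedged sketch of how the "GPGPU" crowd already does simple physics on the GPU today: particle state lives in floating-point textures, and a fullscreen quad pass runs a fragment shader to do one integration step per particle. The uniform names are made up, and the card needs float-texture support.

```cpp
// Hedged sketch of GPU physics, 2004 GPGPU style: positions and velocities
// are stored in floating-point textures; rendering a fullscreen quad with
// this fragment shader performs one Euler integration step per particle.
// Uniform names are illustrative only.
const char* integrateFS =
    "uniform sampler2D u_positions;                            \n"
    "uniform sampler2D u_velocities;                           \n"
    "uniform float     u_dt;        // timestep in seconds     \n"
    "void main() {                                             \n"
    "    vec4 p = texture2D(u_positions,  gl_TexCoord[0].st);  \n"
    "    vec4 v = texture2D(u_velocities, gl_TexCoord[0].st);  \n"
    "    gl_FragColor = p + v * u_dt;   // new position        \n"
    "}";
```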
 
Nite_Hawk said:
Bouncing Zabaglione Bros. said:
You might find ever more graphics processing being moved off the CPU and onto the GPU itself. With PEG giving high-speed transfer in both directions, VPUs will become less reliant on strong CPUs. VPUs will effectively become graphics co-processors.

Especially now that graphics cards are starting to enjoy such large memory capacities. I still have only 512 MB of RAM in my system, and within the year it sounds like we should start seeing video cards with the same amount.

Nite_Hawk

Hopefully in the next year or so the price of 1 gig sticks will move down to 150 or less.

That is when system RAM will leap again.
 
I think another good question is when we might start seeing 3D games become more reliant on GPUs than on CPUs (less CPU limited than at present).
 
I guess I am really wondering: as game developers chart the next 6 to 18 months of predicted CPU/GPU developments, how do they alter their game engines to best utilise what they see coming?
 
Well, I think on both the GPU and CPU fronts developers should be targeting today's high-end parts as the baselines for new engines. So along with the specs for upcoming DirectX and OpenGL capabilities, current hardware performance should be a good guide.
 
trinibwoy - true, but we are starting to see scalable engines appearing (my bugbear for 3+ years). So the best game developers are targeting the high end and providing multiple levels of fallback for the wider audiences that are more hardware challenged :)

But that doesn't address the challenge: as GPUs scale much faster than CPUs, how do you change either the GPU or the game engine's algorithms to shift more of the load onto the faster-scaling component?
 
One of the priorities in DirectX Next/WGF is to reduce the driver overhead of draw calls.
This can be done in two ways: redesigning the API to better suit current hardware, and redesigning the hardware to require fewer calls (geometry instancing is a nice example, and we will have tessellation units in the future, which should make a single call even more powerful).
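For anyone who hasn't seen it, a rough sketch of what geometry instancing looks like under Direct3D 9 (the buffer names and counts are placeholders, but SetStreamSourceFreq and its flags are the real API):

```cpp
// Sketch of geometry instancing under Direct3D 9 (the SM3.0 path): a single
// DrawIndexedPrimitive call submits numInstances copies of a mesh, with
// per-instance data (e.g. a transform) fed from a second vertex stream.
// 'geometryVB', 'instanceVB' and the counts are placeholders.
device->SetStreamSource(0, geometryVB, 0, sizeof(MeshVertex));
device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);

device->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));
device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1);

// One call, many meshes: this is where the per-call CPU overhead is saved.
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                             numVertices, 0, numTriangles);

// Reset the frequencies afterwards so ordinary draws behave normally.
device->SetStreamSourceFreq(0, 1);
device->SetStreamSourceFreq(1, 1);
```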

So it seems the trend is to lower CPU load rather than to increase CPU requirements.
I suppose that eventually GPUs will be completely self-sufficient (perhaps then you write a 'shader' containing the draw calls for an entire frame, and upload it to the GPU once).

(What is OpenGL going to do? If they don't adapt quickly enough, this could well be the end of the line).
 
Scali said:
(What is OpenGL going to do? If they don't adapt quickly enough, this could well be the end of the line).
What exactly do you mean? Driver overhead is already quite low in OpenGL compared to D3D, GLSL is most likely comparable to WGF HLSL, and the superbuffers extension can still be expected before WGF.
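Per-call driver overhead is easy enough to estimate yourself, for what it's worth. A rough sketch (Windows timer, deliberately tiny batches, all names and setup illustrative) of the kind of micro-benchmark used to compare APIs on this point:

```cpp
// Rough sketch: issue many tiny draw calls and time the CPU side. With
// batches this small, nearly all the measured cost is driver/CPU overhead,
// which is exactly the figure being debated between OpenGL and D3D.
#include <windows.h>
#include <GL/gl.h>

double secondsPerDrawCall(int numCalls)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < numCalls; ++i) {
        // A single triangle per call: worst case for per-call overhead.
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }
    glFinish();   // wait for the GPU so the timing is honest
    QueryPerformanceCounter(&end);
    return double(end.QuadPart - start.QuadPart) / freq.QuadPart / numCalls;
}
```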
 
g__day said:
1) How fast a CPU do folks expect the next generation of cards will really require?
Um, video cards don't require any specific CPU for optimal performance. It's the games that require CPU power to keep the framerate high, and how much CPU power a game requires is really up to the developer of that game. For example, there are rendering algorithms that are more GPU-bound, yet could still deliver a lower framerate than a more CPU-bound algorithm would on a high-end CPU (a simple example is offloading per-vertex calculations, such as skeletal animation, to the vertex shader).
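To illustrate the skeletal animation case, here's a minimal sketch of a matrix-palette skinning vertex shader in HLSL (declarations and names are mine, not from any particular engine): the CPU uploads the bone matrices once per frame, and the per-vertex blending that used to burn CPU time runs entirely on the GPU.

```cpp
// Minimal sketch (names mine) of matrix-palette skinning in an HLSL vertex
// shader, targeting vs_2_0 or later, where indexing the constant array
// uses the address register.
const char* skinVS =
    "float4x4 bones[40];          // bone palette, set once per frame  \n"
    "float4x4 worldViewProj;                                           \n"
    "float4 main(float4 pos     : POSITION,                            \n"
    "            float4 weights : BLENDWEIGHT,                         \n"
    "            int4   indices : BLENDINDICES) : POSITION             \n"
    "{                                                                 \n"
    "    // Blend up to four bone transforms per vertex on the GPU.    \n"
    "    float4 skinned = mul(pos, bones[indices.x]) * weights.x       \n"
    "                   + mul(pos, bones[indices.y]) * weights.y       \n"
    "                   + mul(pos, bones[indices.z]) * weights.z       \n"
    "                   + mul(pos, bones[indices.w]) * weights.w;      \n"
    "    return mul(skinned, worldViewProj);                           \n"
    "}";
```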

So it really isn't a question about what kind of CPU a specific GPU requires (particularly since you can always increase the resolution and add AA and AF if you're CPU-limited), but more of a question of what kind of CPU a specific game requires.
 
What exactly do you mean? Driver overhead is already quite low in OpenGL compared to D3D, GLSL is most likely comparable to WGF HLSL, and the superbuffers extension can still be expected before WGF.

I mean that D3D might finally put OGL out of business (out of its misery, to be exact) if they get CPU usage down and add new features like topology processing. They already came quite close with shaders. If OGL requires that 4.5 GHz CPU while D3D can do the same with a 2 GHz one, not even Carmack is going to use OGL anymore, I hope.
 
Scali said:
If OGL requires that 4.5 GHz CPU while D3D can do the same with a 2 GHz one, not even Carmack is going to use OGL anymore, I hope.
If, yes. But I don't see a single reason to expect such a disparity.
 
I thought it was common knowledge that the OpenGL performance of ATi hardware (or pretty much any brand that is not NVIDIA) is considerably worse than its Direct3D performance, and that since this is obviously not due to the 3D hardware itself, it must come down to the CPU overhead in the driver.
I have actually seen OpenGL software that ran better on my GF2 GTS than on the Radeon 9600 Pro I have now.

You could blame ATi, but ATi is not the only one that has had problems with OpenGL drivers; it's just the only one that still has a big market share.
I say it is because OpenGL itself makes it too hard to write proper drivers for modern hardware. NVIDIA succeeded anyway, good for them, but their hardware works fine with Direct3D as well, so I see no reason to use OpenGL on Windows.

I would much prefer it if Direct3D developed further for realtime/game use, and if OpenGL went back to its original focus of professional workstation use instead of trying (and failing) to keep up with Direct3D. Then certain game developers wouldn't be tempted to use OpenGL for their games and bother me with slow and buggy software (alt-tab doesn't even work in Doom3; this should not be possible in 2004).
 
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL. So it is much more likely that OGL performance will improve compared to D3D. And of course, Direct3D only runs on Windows platforms.
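The reason to expect this: GLSL source is handed straight to the vendor's driver compiler, which can optimize for the actual chip, while D3D HLSL is first compiled to a fixed assembly profile that the driver then translates. A minimal sketch of the GL side (OpenGL 2.0 entry points shown; in practice on Windows you would fetch these through the extension-loading mechanism):

```cpp
// Sketch of the GLSL compile path: the raw source goes to the driver,
// which compiles it directly for the GPU it is driving -- no fixed
// intermediate assembly profile in between.
GLuint compileFragmentShader(const char* source)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &source, 0);  // driver sees the raw source
    glCompileShader(shader);                // vendor compiler, per-GPU codegen
    return shader;
}
```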
 
With OpenGL Slang, the code generated can be much better optimized than with Direct3D HLSL.

In theory, yes; we will have to see how it plays out in practice. We will also have to see whether DirectX Next/WGF introduces the same design (MS did announce that assembly language for shaders will be abandoned).
Besides, this has nothing to do with CPU usage.

So it is much more likely that OGL performance will improve compared to D3D.

Compared to the current generation, perhaps. But that generation is nearly 2 years old already. It's bad enough that OpenGL is still behind in terms of features and performance.

And of course, Direct3D only runs on Windows platforms.

If you can say that in theory GLSL is faster, I can say that in theory Direct3D runs on other platforms. In fact, it does so in practice as well: http://www.macdx.com
Besides, 'only' running on Windows can hardly be called a disadvantage with 95+% market share. Basically the entire target audience for games/realtime graphics runs Windows anyway.
And my point was that I don't want to be bothered by rubbish OpenGL code on Windows. I don't care what other platforms use, since I can choose not to use those platforms.
 