GPU vs Multi-Core CPU

epicstruggle said:
Just out of curiosity, would Intel be able to use multi-core CPUs to compete against GPUs? There was some info released that Intel plans to ship a 32-core CPU before the end of the decade; could it be used in desktop PCs to take over some graphics work?
It depends on what you mean by "some graphics work". If you're looking at a high-end gaming system then no, the CPU is likely never going to completely take over any stage of the graphics pipeline. Gamers will pay almost whatever it takes to guarantee the best visuals, and dedicated hardware will always be faster, even though the gap might be getting smaller.

But when looking at mid-end and low-end systems, the budget starts to have a decisive influence. Any system already has a CPU, and if it can do "some graphics work" adequately, there's no need to pay for an extra graphics chip. While for cutting-edge games enough is never enough, I see an evolution where more and more 3D applications can actually run just fine on a CPU. Take Google Earth for instance, or casual games with a 3D perspective. You shouldn't have to buy a graphics card to run things like this. Joe Average doesn't really care that it could run at 600 FPS on a GPU. Once you meet certain resolution and framerate requirements with software rendering, the benefit of buying a graphics card diminishes. There's little doubt that in several years you'll be able to play Oblivion on the CPU... Obviously they'll keep producing more realistic graphics which require a GPU, but more people will be content with what a CPU can produce than today.

Technically, I believe CPUs are finally starting to close the gap a little. GPU performance has grown faster than Moore's law because they not only increased parallelism but also caught up with process and design technology. CPUs have only just started the parallelism revolution, and have traded clock frequency for real performance as the primary goal. Future generations might even simplify structures like branch prediction, out-of-order execution and caches to increase the number of execution units and run more threads to hide latency. Some specialized instructions wouldn't hurt either. So while there might currently be a 100x performance difference between mid-end GPUs and mid-end CPUs, this might diminish to, say, 10x and make 3D graphics on the CPU a whole lot more interesting. For real gamers that 10x difference is still worth an expensive graphics card; others will be able to do without...
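To make the "more threads plus wide execution units" point a bit more concrete, here's a rough sketch of the kind of structure I mean: shade four pixels at a time with SSE and split scanlines across worker threads. The shading math is just a placeholder and none of the names refer to any real renderer; only the parallel structure matters.

```cpp
// Minimal sketch: SIMD across pixels, threads across scanlines.
#include <immintrin.h>
#include <thread>
#include <vector>
#include <cstdint>

void shadeRow(uint32_t* row, int width)
{
    for (int x = 0; x + 4 <= width; x += 4)
    {
        // Process 4 pixels per iteration with 128-bit SSE registers.
        __m128 r = _mm_set1_ps(1.0f);  // placeholder shading math
        __m128i c = _mm_cvtps_epi32(_mm_mul_ps(r, _mm_set1_ps(255.0f)));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(row + x), c);
    }
}

void renderFrame(uint32_t* framebuffer, int width, int height, int numThreads)
{
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t)
    {
        workers.emplace_back([=] {
            // Each thread takes every numThreads-th scanline (interleaved split).
            for (int y = t; y < height; y += numThreads)
                shadeRow(framebuffer + y * width, width);
        });
    }
    for (auto& w : workers) w.join();
}
```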

But that's just my very personal and probably biased opinion. ;)
 
Nick said:
Once you meet certain resolution and framerate requirements with software rendering, the benefit of buying a graphics card diminishes

This point is a bit blurry for me since much of 3D hardware processing nowadays is practically software rendering, just with a different ISA. We're seeing more and more dedicated hardware functions being absorbed by the shader core (there was one patent suggesting removing ROPs altogether). How long until a GPU is nothing but shaders and shader programs? And from there, how much of a leap is it to general programmability? I see the convergence as being more GPU->CPU than CPU->GPU at this point, since GPUs are becoming generally programmable faster than mainstream CPUs are gaining high throughput/TLP/FP muscle.
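To illustrate the ROP point, here's a hedged sketch in plain C++ (nothing vendor-specific, just illustrative names): a classic "source over" alpha blend, which is fixed-function ROP work today, is really only a few multiply-adds that a shader program could in principle do itself.

```cpp
// Source-over alpha blending expressed as ordinary arithmetic:
// result = src * src.a + dst * (1 - src.a), i.e. what the ROP does per pixel.
struct Color { float r, g, b, a; };

Color blendSourceOver(const Color& src, const Color& dst)
{
    Color out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}
```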
 
trinibwoy said:
This point is a bit blurry for me since much of 3D hardware processing nowadays is practically software rendering, just with a different ISA.
What I meant is software rendering on the CPU. It's certainly true that from a programming point of view GPUs are quickly getting almost as programmable as CPUs, but I don't think that changes the fact that CPUs are becoming more capable of running 3D applications every generation. Of course we're not at the point yet where people stop buying graphics cards and have their CPU connected directly to the monitor cable. But gradually the integrated graphics chips might get surpassed by software rendering, and their 3D functionality could shift to the drivers.
 
I think it's more likely that when we do get integrated graphics on a CPU, we'll have a DAC on the motherboard that reads from system memory for display, rather than having the pinout for the VGA/DVI port straight on the CPU socket.
 
Nick said:
But gradually the integrated graphics chips might get surpassed by software rendering, and their 3D functionality could shift to the drivers.

The only contention I have with that sentence is that I would change "might" to "will". Other than that, it's about as correct and concise an answer as we'll get.
 
Chalnoth said:
I think it's more likely that when we do get integrated graphics on a CPU, we'll have a DAC on the motherboard that reads from system memory for display, rather than having the pinout for the VGA/DVI port straight on the CPU socket.
I can see that too... leading to the inevitable possibility of motherboard manufacturers using cheap, low-quality RAMDACs to save a few cents. Of course, I believe it's more likely that on those platforms the north bridge would get an integrated RAMDAC/VGA core used for display, and the GPU/vector core would only be used for 3D rendering/Vista. If it's done right, the system would still work without the vector processor, it just wouldn't have any 3D acceleration.
 
Nick said:
What I meant is software rendering on the CPU. It's certainly true that from a programming point of view GPUs are quickly getting almost as programmable as CPUs, but I don't think that changes the fact that CPUs are becoming more capable of running 3D applications every generation. Of course we're not at the point yet where people stop buying graphics cards and have their CPU connected directly to the monitor cable. But gradually the integrated graphics chips might get surpassed by software rendering, and their 3D functionality could shift to the drivers.
Right now, I think there's a major problem, and that's the limited GPU programming model (DX9/10). IMO, once GPUs are down to having just a 10x or so brute-force performance advantage over CPUs, the generality of the CPU will become a significant factor, enabling non-incremental improvements that won't be at all possible with the limitation I mentioned.

There is also the matter of DirectX itself. We wouldn't want or need it for software rendering. You'd want to write your own software renderer, optimized and tweaked for the particulars of your game engine. By doing that, you'd be able to go way beyond the hardcoded, high-overhead DirectX feature set and the underlying SGI triangle rendering model.
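As a rough illustration of what "your own renderer" means at the lowest level, here is a minimal half-space triangle fill in plain C++ (the names and the flat-colour fill are purely illustrative, not taken from any engine): no API calls, no driver, just a loop the engine fully controls and can specialize.

```cpp
// Half-space (edge function) rasterizer for one flat-coloured triangle.
#include <algorithm>
#include <cstdint>

struct Vec2 { float x, y; };

static float edge(const Vec2& a, const Vec2& b, const Vec2& p)
{
    // Signed area of (a->b, a->p); the sign tells which side of the edge p is on.
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void fillTriangle(uint32_t* fb, int width, int height,
                  Vec2 v0, Vec2 v1, Vec2 v2, uint32_t colour)
{
    // Clip the bounding box of the triangle against the framebuffer.
    int minX = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
    int maxX = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int minY = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
    int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            Vec2 p{ x + 0.5f, y + 0.5f };
            // Inside if the pixel centre lies on the same side of all three edges
            // (assumes consistent counter-clockwise winding).
            if (edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 && edge(v2, v0, p) >= 0)
                fb[y * width + x] = colour;
        }
}
```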

Interestingly, this already happened this generation with sound on the Xbox360 and the PS3. The only sound hardware in the system is the DAC, and all of the interesting sound mixing algorithms are done in software.
 
Killer-Kris said:
The only contention I have with that sentence is that I would change "might" to "will". Other than that, it's about as correct and concise an answer as we'll get.
I don't believe it's an absolute certainty. If Intel decides to keep increasing the performance of its integrated graphics chips, then there's little room left for CPU-based rendering. And if we look at the laptop market, performance/Watt is of primary importance, so it makes sense to keep having dedicated and efficient graphics hardware.
 
Bastion said:
There is also the matter of DirectX itself. We wouldn't want or need it for software rendering. You'd want to write your own software renderer, optimized and tweaked for the particulars of your game engine. By doing that, you'd be able to go way beyond the hardcoded, high-overhead DirectX feature set and the underlying SGI triangle rendering model.
Personally, I don't expect DirectX to become deprecated. Don't forget that the design of DirectX 11 must already have started, and Microsoft is in very close contact with all the big hardware manufacturers to balance features. There's little doubt that the trend toward more programmability is going to continue. By the time CPUs are capable of doing some serious 3D processing, GPUs could very well have full scatter capabilities and a programmable setup engine, rasterizer and interpolators.

And while low-end and some mid-end systems might start using the CPU for rendering, gaming systems will absolutely keep using discrete graphics cards, and DirectX/OpenGL, for a long time. Game developers are not going to want to learn new APIs just for optimal rendering on the CPU.
Interestingly, this already happened this generation with sound on the Xbox360 and the PS3. The only sound hardware in the system is the DAC, and all of the interesting sound mixing algorithms are done in software.
I don't think it's really comparable. Sound processing (for games) is fairly simple and should take only a couple of days. But writing your own renderer is complicated and math-intensive. Avoiding numerical imprecision alone can be a nightmare, and implementing optimized anti-aliasing and anisotropic filtering isn't something you want to burden every game developer with.
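To show why I call game sound mixing "fairly simple", here's a hedged sketch of a basic software mixer (the Voice struct and sample layout are illustrative, not any console's actual API): it's essentially a scaled sum per output sample, with clamping.

```cpp
// Naive software mixer: sum the active voices, scale by volume, clamp to 16-bit.
#include <vector>
#include <algorithm>
#include <cstdint>
#include <cstddef>

struct Voice
{
    const int16_t* samples; // mono source data
    size_t length;
    size_t position;        // current playback cursor
    float  volume;          // 0.0 .. 1.0
};

void mixVoices(std::vector<Voice>& voices, int16_t* out, size_t frames)
{
    for (size_t i = 0; i < frames; ++i)
    {
        float acc = 0.0f;
        for (Voice& v : voices)
            if (v.position < v.length)
                acc += v.samples[v.position++] * v.volume;
        // Clamp the accumulated sum back into 16-bit range before writing it out.
        out[i] = (int16_t)std::clamp(acc, -32768.0f, 32767.0f);
    }
}
```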

What you need is a solid API and a trusted implementation. Even though DirectX might not be entirely ideal for software rendering yet, creating a proprietary API would not be successful. Besides, there's not all that much in a custom API that would specifically benefit a software renderer. All I can think of right now is raytracing, but that would need a whole new API anyway.
 
Uttar said:
Using CELL for Graphics doesn't make sense unless you use REYES imo, and don't even get me started on that...
Please, do go and get started on that. ;)

Seriously, how would it perform? I really want to know.
 
Alstrong said:
What makes Cell good for Reyes :?:
It's a completely different rendering method, one that is run on many CPUs, for starters.

But the main thing is that it skips all the "hard" points, like triangle setup and rasterization, and favors vector processing above anything else. The main problem is probably going to be memory latency, not processing speed.
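For those unfamiliar with REYES, here's a very rough, purely illustrative sketch of the dice-and-shade idea (the patch and shading functions are trivial stand-ins, and this is plain C++, not SPU code): the surface is split into a grid of roughly pixel-sized micropolygons and shaded at every grid vertex, with no conventional triangle setup or rasterization step.

```cpp
// Dice a parametric patch into a micropolygon grid and shade each vertex.
#include <vector>
#include <cmath>

struct Point3 { float x, y, z; };

// Hypothetical parametric patch: here just a curved sheet, standing in for any surface.
Point3 evalPatch(float u, float v)
{
    return { u, v, std::sin(u * 3.14159f) * std::sin(v * 3.14159f) };
}

// Hypothetical shading function: trivially grey-scales by height, stands in for a real shader.
Point3 shade(const Point3& p)
{
    return { p.z, p.z, p.z };
}

std::vector<Point3> diceAndShade(int gridRes)
{
    std::vector<Point3> shaded;
    shaded.reserve((size_t)(gridRes + 1) * (gridRes + 1));
    for (int j = 0; j <= gridRes; ++j)
        for (int i = 0; i <= gridRes; ++i)
        {
            // Dice: a uniform (u, v) grid, sized so each cell lands near one pixel
            // on screen (the "micropolygon" criterion).
            Point3 p = evalPatch(i / (float)gridRes, j / (float)gridRes);
            shaded.push_back(shade(p)); // Shade before any hiding/sampling step.
        }
    return shaded;
}
```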
 
Chalnoth said:
I've always thought Sweeney was a bit off in his predictions for future PC hardware. As an example, this cost him significantly with the original Unreal engine, which was designed for software rendering. He had properly anticipated the advance of CPUs, but completely underestimated the progression of GPUs.

Edit: Note that I do have great respect for the guy, and think he's especially excellent for creating UnrealScript, but I think Carmack has always been much better at visualizing the future of gaming hardware.

How did it cost him with Unreal, other than being a rather CPU-dependent engine? (Which the more modern Doom 3 engine also seems to be for its time.)

Anyhow, I'd imagine the rasterizer will always stay separate from the CPU. Brute-forcing everything with the CPU will probably never happen.

Maybe he was referring to raytracing, in which case raytracing hardware may not be much faster than a general-purpose CPU? When will raytracing ever be a superior method, though?

BTW, even if CPUs do become powerful enough to rival GPUs, there's still the software aspect. Unless Microsoft rewrites Direct3D to run on CPUs as well, it would be a very, very long time before the market could move away from GPUs.
 
Fox5 said:
How did it cost him with Unreal, other than being a rather CPU-dependent engine? (Which the more modern Doom 3 engine also seems to be for its time.)
Unreal was originally built around a software-rendering architecture. He saw the advancement of CPUs quite well, but vastly underestimated the advancement of 3D graphics hardware. In the end, this meant that the first Unreal engine had a very hard time running well on any 3D hardware. It was many months after release before a decent Direct3D renderer was written, and longer still before a good OpenGL renderer was available.

And even once those issues were sorted out, the engine placed extremely strict limits upon the polygon counts of the environment.

Maybe he was referring to raytracing, in which case raytracing hardware may not be much faster than a general-purpose CPU? When will raytracing ever be a superior method, though?
I don't buy it. Raytracing still has the nice property that every ray is independent of every other ray, and it would also benefit from some of the things that are currently accelerated in 3D graphics hardware, such as texture accesses. So it might not work well with current GPUs, but it may work well with future architectures.
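That independence is easy to see in code. Here's a hedged sketch (trace() is a stand-in for a real tracer, not anything from an existing library): the per-pixel loop of a ray tracer has no cross-pixel dependencies, so any split of the work across threads, or in principle across shader units, is valid.

```cpp
// Embarrassingly parallel primary-ray loop: each pixel is traced independently.
#include <thread>
#include <vector>
#include <cstdint>

// Placeholder: a real version would build a camera ray, intersect the scene,
// and shade the nearest hit. Here it just returns a gradient.
uint32_t trace(int x, int y, int width, int height)
{
    return 0xFF000000u
         | ((uint32_t)(255 * x / width) << 16)
         | ((uint32_t)(255 * y / height) << 8);
}

void renderImage(uint32_t* image, int width, int height, int numThreads)
{
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back([=] {
            // Rays never depend on each other, so any work split is correct.
            for (int y = t; y < height; y += numThreads)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = trace(x, y, width, height);
        });
    for (auto& w : workers) w.join();
}
```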
 
Alstrong said:
What makes Cell good for Reyes :?:
The fact that it sucks at texturing, maybe? As such, there doesn't remain much of anything but REYES for it to be good at. And considering its focus on sheer ALU horsepower, it does seem quite appropriate - not that it'd compete favorably with the image quality of a modern GPU, mind you!

Uttar
 
Fox5 said:
How did it cost him with Unreal, other than being a rather CPU dependent engine?
Unreal Engine 1 used techniques like portals and polygon sorting to minimize overdraw and avoid the use of a z-buffer. This was very beneficial for a software renderer, but for hardware rendering it's better to do some rough, fast visibility determination and rely on the z-buffer. By the time the engine was rewritten for the first generation of hardware, it wasn't ideal for the next generation. So the Unreal engines have always been known as rather CPU-limited.
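To spell out the contrast: with a z-buffer the engine can submit polygons in any order and let a per-pixel depth test resolve visibility, instead of sorting geometry up front to avoid overdraw. A hedged sketch of that per-pixel test (smaller depth is assumed to mean closer to the camera):

```cpp
// Per-pixel depth test: write only if the new fragment is closer than what's stored.
#include <cstdint>

inline void depthTestedWrite(float* zbuffer, uint32_t* colorbuffer,
                             int index, float depth, uint32_t color)
{
    if (depth < zbuffer[index])   // assuming smaller depth == closer
    {
        zbuffer[index]     = depth;
        colorbuffer[index] = color;
    }
}
```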
BTW, even if cpus do become powerful enough to rival gpus, there's still the software aspect. Unless microsoft rewrites Direct3d to run on cpus as well, it would be a very very long time before the market could move away from gpus.
That "very very long time" is actuall pretty short: SwiftShader. ;) At least for the software aspect...
 
While probably nowhere near the speed of SwiftShader, I'll point out that Mesa has recently added GLSL support (I haven't tried it yet, though I assume that since it's a first release the goals are more about accuracy than speed).
 