GPGPU, Upgrade Cycles, and CPU Dependence

nbohr1more

Newcomer
I've been wallowing in a little self-pity about my inability to get a wife-approved system upgrade... My lack of CPU horsepower has started me thinking about something fundamental about the concept of the "Upgrade".

At the dawn of 3D acceleration, the video card was replacing work that was normally done by the CPU. The whole point of the video card was to give PC gamers the ability to play games as if they had somehow traveled several years into the future and purchased a high-end CPU. Obviously, those early cards were still highly dependent on the CPU and platform they ran on, but the initial effect was still akin to a CPU upgrade. Hardware T'n'L came along with the promise of even more CPU independence, but by then game developers had decided to push the CPU along with the GPU, so simply upgrading the GPU was no longer an effective way to stave off a CPU upgrade. We've since been through some tumultuous platform cycles for both AMD and Intel where upgrading CPUs has essentially meant upgrading motherboards and RAM. Essentially, upgrades have become platform-wide (especially due to the AGP-to-PCIe transition).

Now DirectX 11 Compute Shaders are on the way. The promise, once again, is that the CPU will no longer be needed for certain types of calculation. This brings up the following questions:

1) Will game developers see Compute Shaders as a way to mitigate platform dependence (example: you must EITHER have CPU A or GPU B to run this game)?

2) If CPU dependence keeps increasing along with GPU dependence, why not simply put a CPU socket and SATA controller on the Video Card PCB (this approach would even make old PIII systems viable)?

3) Are there ANY current games that take advantage of PCIe's bi-directional communication advantages (other than SLI/CrossFire)?

Bonus Question: Will Microsoft or AMD promote Compute Shaders as a viable GPGPU approach for the Xbox 360?
 
I presumed that since Xenos is approximately DirectX 9.5-class hardware and the X1800 series has a Folding@Home client, some form of GPGPU can be done on it. I guess DirectX 9.0c-class hardware won't benefit from OpenCL or Compute Shaders...
 
Humus

1) Will game developers see Compute Shaders as a way to mitigate platform dependence (example: you must EITHER have CPU A or GPU B to run this game)?

I'd say unlikely. Short term, you'll see game developers using it for various kinds of optimization. Later on, for effects physics and similar. Long term, the compute shader will probably take over some tasks entirely. I don't think there will be many CPU fallbacks for the compute shader stuff at any point along that line.
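To make "effects physics on the GPU" concrete, here's a minimal sketch of the kind of task that could move to the GPU outright. It's written in CUDA rather than HLSL purely for brevity (the compute shader model is similar), and the kernel, particle layout, and constants are all illustrative, not taken from any real game:

```cuda
// Hypothetical sketch: the sort of "effects physics" a compute shader
// (here, a CUDA kernel) could take over from the CPU entirely.
#include <cstdio>
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

// One thread per particle: integrate gravity and a simple ground bounce.
__global__ void stepParticles(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y += -9.8f * dt;          // gravity
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;

    if (p[i].pos.y < 0.0f) {           // bounce off the ground plane
        p[i].pos.y = 0.0f;
        p[i].vel.y = -0.5f * p[i].vel.y;
    }
}

int main()
{
    const int n = 1 << 20;             // ~1M particles
    Particle* d_p;
    cudaMalloc(&d_p, n * sizeof(Particle));
    cudaMemset(d_p, 0, n * sizeof(Particle));

    // Simulate many frames entirely on the GPU: no per-frame bus traffic,
    // and no CPU fallback path.
    for (int frame = 0; frame < 100; ++frame)
        stepParticles<<<(n + 255) / 256, 256>>>(d_p, n, 1.0f / 60.0f);

    cudaDeviceSynchronize();
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_p);
    return 0;
}
```

The point is that after the initial allocation the CPU never touches the particle data again, which is exactly the no-CPU-fallback scenario described above.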

3) Are there ANY current games that take advantage of PCIe's bi-directional communication advantages (other than SLI/CrossFire)?

PCIe, while faster than AGP, is generally viewed as a bottleneck and not something to be taken advantage of. The less you need to use it, the better.
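To see why, here's a hypothetical micro-benchmark (CUDA again; buffer sizes and names are made up) contrasting a per-frame PCIe round trip with keeping the buffer resident on the GPU:

```cuda
// Hypothetical micro-benchmark: round-tripping a buffer over PCIe each
// frame vs. keeping it resident on the GPU.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cuda_runtime.h>

__global__ void touch(float* d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

static float timeLoop(float* d_buf, float* h_buf, int n, bool roundTrip)
{
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    for (int frame = 0; frame < 100; ++frame) {
        if (roundTrip) {  // CPU "owns" the data: copy down, compute, copy back
            cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);
            touch<<<(n + 255) / 256, 256>>>(d_buf, n);
            cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
        } else {          // GPU owns the data: no bus traffic per frame
            touch<<<(n + 255) / 256, 256>>>(d_buf, n);
        }
    }
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms;
    cudaEventElapsedTime(&ms, t0, t1);
    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    return ms;
}

int main()
{
    const int n = 1 << 24;  // 64 MB of floats
    float* d_buf;
    float* h_buf = (float*)malloc(n * sizeof(float));
    memset(h_buf, 0, n * sizeof(float));
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    printf("resident on GPU : %.1f ms\n", timeLoop(d_buf, h_buf, n, false));
    printf("PCIe round trip : %.1f ms\n", timeLoop(d_buf, h_buf, n, true));

    cudaFree(d_buf);
    free(h_buf);
    return 0;
}
```

The round-trip loop is typically dominated by the copies rather than the arithmetic, which is why developers try to keep data on the card and cross the bus as rarely as possible.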
 
nbohr1more

Newcomer

Humus, you never let me down! I recall e-mailing you way back about some DirectX 9 demo and getting a cordial response and everything (rather than being spam-filtered into oblivion)...

So the only real advantages of PCIe are A) quicker level loading, B) multi-GPU, and (if DX11 goes well) C) CPU-to-GPU communication... All this talk of a high-speed utopian CPU-to-GPU dialog is mostly hot air. The devs want to keep everything on the card nearly as much as I do. But you've also raised the idea that the card will only be used for things that make it seem YEARS ahead of the CPU, rather than for things where it can outdo the CPU only marginally... This implies that CPU requirements will keep increasing concurrently with GPU requirements, almost regardless of how "general" the GPU becomes... I kinda wish the upgrade cycles were more forgiving, but I won't disagree that having both rapid GPU and CPU improvements greatly accelerates innovation...
 