JF_Aidan_Pryde
Regular
Following up on the "Photoshop Filters on GPU" topic, the consensus is that this is feasible. And with PS3.0 it would be a pretty good idea too. This would mean that the GPU pretty much makes an excellent media processor. For anything SIMD-based and floating-point heavy, the GPU should have a huge advantage over a CPU.
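To make the SIMD point concrete, here's a rough sketch of a "Photoshop-style" brightness filter as a data-parallel kernel. I'm writing it in CUDA-style C for readability; the kernel name, image layout and launch parameters are just my own illustration, not anything from NVIDIA:

[code]
#include <cuda_runtime.h>

// Hypothetical per-pixel brightness filter: every pixel is one
// independent multiply-add, so thousands of them run in parallel.
__global__ void brighten(float* pixels, int n, float gain, float bias)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pixels[i] = pixels[i] * gain + bias;  // one FMA per pixel
}

int main(void)
{
    const int n = 1024 * 1024;  // a 1024x1024 greyscale image
    float* d_pixels;
    cudaMalloc(&d_pixels, n * sizeof(float));
    cudaMemset(d_pixels, 0, n * sizeof(float));

    // One thread per pixel: the same instruction stream applied to
    // every element is exactly the SIMD pattern GPUs are built for.
    brighten<<<(n + 255) / 256, 256>>>(d_pixels, n, 1.2f, 0.1f);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    return 0;
}
[/code]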
Does this really show that NVIDIA's vision (make CPUs redundant) of GPU + network processor is a better PC model than CPU + GPU + etc.?
Currently, the GPU is said to be 'Turing complete.' That is, by the computer science definition, it can 'compute' any task that current CPUs can. It may take a ridiculous number of cycles/passes for certain calculations, but it's possible. Once full branching is supported and loop limits are removed, the GPU should be able to do _everything_.
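For anyone wondering where those ridiculous cycle counts come from: without real flow control, you have to evaluate BOTH sides of every 'if' and then pick one result arithmetically (predication). A rough sketch of the trick, again in CUDA-style C with made-up names:

[code]
#include <cuda_runtime.h>
#include <stdio.h>
#include <math.h>

// Without real flow control, both branches of an 'if' get computed
// and the result is selected by arithmetic, so you always pay for the
// untaken path. Kernel and names are my own illustration.
__global__ void predicated_kernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float x = in[i];
    // Emulate: out = (x >= 0) ? sqrt(x) : -x;  without a jump.
    float cond    = (x >= 0.0f) ? 1.0f : 0.0f;  // a compare, not a branch
    float taken   = sqrtf(fabsf(x));  // fabsf keeps the "dead" path
    float untaken = -x;               // from poisoning the blend with NaN
    out[i] = cond * taken + (1.0f - cond) * untaken;  // pay for both sides
}

int main(void)
{
    float h_in[4] = { 4.0f, -2.5f, 9.0f, -1.0f }, h_out[4];
    float *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(h_out));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);

    predicated_kernel<<<1, 4>>>(d_in, d_out, 4);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

    for (int i = 0; i < 4; i++)
        printf("%g ", h_out[i]);  // prints: 2 2.5 3 1

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
[/code]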
I'd imagine that an NV40+/R400+ would have the (raw) horsepower to run WindowsXP + applications. It most likely won't happen, but it should be possible. What kind of case does this present for the GPU as a general processor?
At the current rate of convergence, Intel is really losing the battle to make the GPU obsolete. The problem is that the demand for more processing power in current PCs is driven by media-intensive tasks, and CPUs in this area are improving slowly compared to GPUs. Since GPUs are already at the 500MHz level, they will soon be able to DO everything the CPU can, albeit not as fast. But when the CPU tries to do everything the GPU can, the gap is not small - it's way below real time. Put more directly, a GPU can compute current CPU tasks at slow but 'acceptable' speeds, while a CPU is years away from doing GPU tasks (Doom3) in real time.
This looks to me like a compelling case for GPUs eventually making the PC less CPU-centric, since there's very little 'GENERAL' computing anyway if you look at it carefully. Even office applications (Excel) are data-centric. GPU parallelism should shine in every aspect. I can think of very few things a CPU can do better than a well-tuned, GPU-compiled program in the next generation.
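To illustrate the data-centric point: even summing a spreadsheet column is really a parallel reduction, which maps straight onto GPU-style hardware. Another rough sketch, with made-up names and sizes:

[code]
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Summing a "spreadsheet column" as a parallel reduction: each block
// adds 256 cells in shared memory, then the host adds the per-block
// partial sums. All names and sizes here are illustrative.
__global__ void column_sum(const float* cells, float* partial, int n)
{
    __shared__ float scratch[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    scratch[tid] = (i < n) ? cells[i] : 0.0f;
    __syncthreads();

    // Tree reduction: halve the number of active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            scratch[tid] += scratch[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        partial[blockIdx.x] = scratch[0];
}

int main(void)
{
    const int n = 65536, threads = 256, blocks = n / threads;
    float *d_cells, *d_partial;
    cudaMalloc(&d_cells, n * sizeof(float));
    cudaMalloc(&d_partial, blocks * sizeof(float));

    // Fill every cell with 1.0 so the expected total is n.
    float* ones = (float*)malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) ones[i] = 1.0f;
    cudaMemcpy(d_cells, ones, n * sizeof(float), cudaMemcpyHostToDevice);

    column_sum<<<blocks, threads>>>(d_cells, d_partial, n);

    float* partial = (float*)malloc(blocks * sizeof(float));
    cudaMemcpy(partial, d_partial, blocks * sizeof(float),
               cudaMemcpyDeviceToHost);
    float total = 0.0f;
    for (int b = 0; b < blocks; b++) total += partial[b];
    printf("sum = %g\n", total);  // prints: sum = 65536

    free(ones); free(partial);
    cudaFree(d_cells); cudaFree(d_partial);
    return 0;
}
[/code]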
Some misc. data that may interest you:
- A group is currently building a farm of 256 GeForce FXs to create a supercomputer for floating-point calculations.
- NVIDIA is working with the SETI@home group on many projects that will tap into the distributed power of desktop GPUs (presumably NV30+) for data analysis. They expect the total computational power of the installed base of NV30+ parts to eventually exceed that of CPUs.