Deano Calver
State-of-the-art GPUs are capable of some fairly general operations. What's different is the amount of threading that occurs: modern GPUs are massively parallel, and with chips that process hundreds of threads coming in the next couple of years, we'll start seeing a different way of doing things.
An example is smooth vertex normals; I've been doing some work on GPU vertex normals. Rather than just transforming them, you actually calculate them in real time. Using render-to-vertex techniques and the pixel shader hardware, you can easily process 16 at a time, at speed.
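To make the idea concrete, here is a minimal CPU-side sketch of the smooth-normal calculation itself (accumulate face normals, then renormalise); this is the per-element pass that render-to-vertex would map onto the pixel shaders. The types and function names are purely illustrative, not from any particular engine or API.

#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return (len > 0.0f) ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// One face normal per triangle, accumulated into every vertex it touches,
// then renormalised: exactly the kind of independent per-element work a
// GPU pass can chew through in parallel.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& positions,
                                const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(positions.size(), Vec3{ 0, 0, 0 });
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        Vec3 faceN = cross(sub(positions[i1], positions[i0]),
                           sub(positions[i2], positions[i0]));
        normals[i0] = add(normals[i0], faceN);
        normals[i1] = add(normals[i1], faceN);
        normals[i2] = add(normals[i2], faceN);
    }
    for (Vec3& n : normals) n = normalise(n);
    return normals;
}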
The combination of CPU multi-threading and the GPU is a big shift. Instead of one general computing unit and one graphics unit, we are now in a position of having loads of fairly general units and shifting the computation to the right place.
Want to calculate plane equations for physics? Do it on the GPU, which will parallelise the loop for maximum speed (a sketch of the per-triangle maths is below).
Want to do subdivision surfaces? Whack them on a CPU core, where you can easily have complex tree structures and branches.
Want to do advanced decompression? Maybe a CPU core linked with a GPU as a maths coprocessor.
Want to do influence-map AI? Try it on the GPU, using textures to modify the map in real time.
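As a rough illustration of the plane-equation item above, here is the per-triangle maths in plain C++. Each plane depends only on its own triangle, which is exactly why a GPU can churn through thousands of them in parallel; the struct and function names here are made up for the example.

#include <cmath>

struct Vec3  { float x, y, z; };
struct Plane { float a, b, c, d; };  // ax + by + cz + d = 0

Plane planeFromTriangle(Vec3 p0, Vec3 p1, Vec3 p2) {
    // Normal = (p1 - p0) x (p2 - p0)
    Vec3 u = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 v = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    Vec3 n = { u.y * v.z - u.z * v.y,
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    // d is chosen so that p0 lies on the plane.
    return { n.x, n.y, n.z, -(n.x * p0.x + n.y * p0.y + n.z * p0.z) };
}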
The permutations and optimisation strategies are immense; it's going to be a long time before we are really sure which bits go where. The blurring of which units do what will give us a lot of room to hang ourselves in the short term, but long term it should achieve some excellent results.
The problem for future CPUs and GPUs is now memory latency: we have lots of calculation available, but it takes forever to get the data into the processors. Perhaps next-next generation, gigaflops or teraflops won't be the numbers people wildly over-estimate, but memory latency...
"PS4 will kick XB3, because it only has 100,000 cycle memory latency" ;-)