GPUs doing more than just graphics (interesting video)

watupwidat

Newcomer
Don't know if this has been posted already, but I find this very interesting. This video came from a paper that was presented at Siggraph 2002.

Surgical Simulation video

Essentially, the researchers have exploited the programmability of GPUs to simulate real-time dynamic deformation with most of the work handled by the GPU itself rather than the CPU (i.e. they are using the GPU as a sort of physics accelerator). The graphics were rendered on a GeForce3 running at ~60 fps. The paper that accompanies the video can be had here:

DyRT: Dynamic Response Textures for Real Time Deformation Simulation with Graphics Hardware


"This is the first paper to show how to simulate geometrically complex, interactive, physically-based, volumetric, dynamic deformation models in real time with negligible main CPU costs. We do so with precomputed modal vibration models stored in graphics hardware memory and driven by a handful of inputs defined by rigid body motion"

I wonder what they will be able to do with NV30-class hardware.
 
There was also a class at Siggraph called Interactive Geometric Computations on Graphics Hardware or something like that. Half of the examples were non-graphics. I don't know if the notes are available on the web anywhere.
 
I imagine there is a market for GPUs anywhere you need a lot of matrix calculations. The first company to bring easy-to-use software that crunches economic/statistical/financial data on the GPU could be huge.
 
Interesting paper. Thanks for the linkage.

LittlePenny said:
I imagine there is a market for GPUs anywhere you need a lot of matrix calculations. The first company to bring easy-to-use software that crunches economic/statistical/financial data on the GPU could be huge.

Regarding applications - there are several fields that use calculations which could conceivably be run on GPUs. I haven't checked the actual capabilities of GPUs - their use for serious number crunching hasn't really taken hold in my mind. I'm sorry, but I'm going to be a wet blanket and outline why.

First off, those of us who would benefit from more specialized hardware already use it. Scientific codes that are vectorizable or parallelizable through the application of graduate-student man-hours already are. (Nobody properly explained to me just how plowing through 100,000 lines of FORTRAN would bring my PhD closer, but believe me, if such work might benefit a professor, it will be done. :) ) Smaller-scale parallel processors have been used for well over a decade for specialized problems. Some problems even have processors custom designed for them.

A glaring problem with using GPUs is the minuscule datasets they allow. Even the P10, with its flexible memory architecture, is in reality limited to the memory on the gfx card. As soon as you go over AGP to fetch your data, your bandwidth drops well below that of the host, and you would be better off using the host directly, since hosts are also bandwidth-limited for many (most?) codes suitable for vector processing. "Ah", you say, "but aren't there problems where you could bring the data over in chunks and then do the heavy crunching, using the gfx memory as a large cache?" Sure there are, and the technique has been used since the dawn of computing; it really does extend the number of problems you can attack. But even low-end computational systems such as IBM's POWER4 already use caches the size of gfx-card memories. Of course, an NV30 will hopefully be a lot cheaper, but I've been seeing specialized vector-processor add-ons for everything from IBM mainframes to PCs for two decades, and they have never amounted to more than tiny niche solutions. Why? Mostly due to difficulties in programmability and limitations in the memory architecture.
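To put rough numbers on the bandwidth point, here is a back-of-envelope comparison of shipping a chunk over AGP versus crunching it on the card. The figures are my own assumptions of roughly plausible values for this class of hardware, not measurements:

```python
# All figures below are assumed, order-of-magnitude values.
agp_4x_bandwidth = 1.0       # GB/s, host -> card over AGP 4x
chunk_gb = 0.05              # 50 MB chunk of matrix data
flops_per_byte = 2.0         # arithmetic intensity of the kernel (assumed)
card_gflops = 8.0            # assumed sustained throughput on the card

transfer_time = chunk_gb / agp_4x_bandwidth                          # seconds
compute_time = (chunk_gb * 1e9 * flops_per_byte) / (card_gflops * 1e9)

print(f"transfer {transfer_time * 1e3:.1f} ms vs compute {compute_time * 1e3:.1f} ms")
# With these numbers: ~50 ms to move the chunk, ~12.5 ms to process it.
# Unless the kernel reuses each byte many times, the AGP transfer dominates
# and the host could have done the work in the time it took to ship the data.
```

With those assumed figures the transfer takes several times longer than the arithmetic, which is exactly the memory-architecture limitation I mean.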

As far as I can see, the same holds true for using GPUs.
Of course, the distant future is always difficult to predict. But the concept is far from new, and never, ever have such solutions become "huge", even when the performance delta between the add-on processor and the host was far greater. Indeed, you could say that if such applications were sufficiently important, someone would already have built machines tailored to that need.

The use of GPUs for such calculations is possibly a good idea in the cases where the calculated data would be used for visualisation on the same video card, because in that case you avoid the AGP bottleneck rather than introduce it. There is one huge industry that might drive such use.
Games.

Entropy
 