SLI/CrossFire vs GPU parallelism in the industrial sector

It seems to me that Nvidia's SLI and ATI/AMD's CrossFire, their brands of GPU parallelism, are child's play compared to GPU parallelism in the industrial/commercial sector.

Silicon Graphics (despite their greatly diminished role in the industry) and Evans & Sutherland are the two companies I can think of that do this; I'm sure there are more. Their multi-$10k-to-$100k machines (UltimateVision, Prism, RenderBeast) get more of the full benefit of GPU parallelism, in that performance scales closer to 1:1, i.e. 16 GPUs gets you 16x the performance of one GPU. Obviously this is not the case with SLI or CrossFire in PCs. I guess that's due to a number of factors: high-end non-consumer systems have fully dedicated system resources for each GPU/card, whereas in the PC world, two or four GPUs/cards have to share the bandwidth of the PCI-Express bus, plus Windows overhead, drivers, etc.
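The gap between dedicated per-GPU resources and a shared bus can be sketched with a toy Amdahl's-law model. The parallel fractions below are illustrative assumptions, not measured figures:

```python
def speedup(n_gpus, parallel_fraction):
    """Amdahl's-law speedup for n_gpus when only part of the
    frame time actually parallelizes across them."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_gpus)

# Dedicated resources per GPU (big SGI/E&S boxes): nearly
# everything scales, so 16 GPUs gets close to the ideal 16x.
print(speedup(16, 0.99))   # ~13.9x

# Shared PCI-Express bandwidth, Windows and driver overhead
# (PC SLI/CrossFire): a larger serial fraction caps the benefit.
print(speedup(4, 0.80))    # 2.5x out of an ideal 4x
```

Even a modest serial fraction is enough to pull multi-GPU scaling well below 1:1, which matches what SLI/CrossFire benchmarks tend to show.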

I guess my question, then, is: is there anything on the horizon for the consumer/gamer market where GPU parallelism scales closer to 1:1, instead of the relative mess that is SLI and CrossFire?
 
It wouldn't cost nearly that much. SGI had a $40k machine with 8-16 R3xx GPUs (and many CPUs) a while back. Ever since SGI and E&S started using ATI GPUs originally built for gaming, the prices of their machines have come down compared to when that class of machine used custom proprietary tech.
 
I don't know about getting closer to a 1:1 multiplication of performance, but AMD-ATI's "Lasso" project may at least get us into the arena of more than two or four GPUs in a housing. Since literally nothing is known about the project, even that is questionable; but if they are going above two or four GPUs in one case, they're going to need much faster internal communication and probably controller processors, eventually. It starts to look a little like a miniature SGI Prism in concept, only in a slave mode.
 


Interesting, I hadn't heard of AMD-ATI's Lasso project. I'll put that on my list of things to Google tonight. Hopefully Lasso has some promise.
 

Depends how you look at it: the UltimateVision you mention can have up to 16 ATI FireGL cards, but they do not use CrossFire. So in order to combine them, you need to program for it, e.g. by using the Multipipe SDK. The only thing the big boxes bring is more GPUs per system, and with a Quadroplex you can get close (also in price).

Keep in mind that SLI/CrossFire is no magic bullet: it mostly scales fillrate, and your application is still single-threaded. There are cases where you are faster using the cards as separate GPUs in parallel (see my homepage ;) ).
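Driving the cards as independent GPUs is essentially what SGI-style sort-first rendering does: each GPU owns a region of the screen, renders it on its own, and the regions are composited into the final frame. A minimal sketch of the idea, with Python worker threads standing in for GPUs and a checkerboard pattern standing in for the rendered scene (all names and dimensions here are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sort-first decomposition: the frame is split into horizontal
# bands, each "rendered" independently by one worker (standing in
# for one GPU), then composited back into a full frame.
WIDTH, HEIGHT, N_GPUS = 8, 8, 4

def render_band(gpu_id):
    """Pretend-render the band of scanlines owned by this 'GPU'."""
    rows = HEIGHT // N_GPUS
    start = gpu_id * rows
    return [[(x + y) % 2 for x in range(WIDTH)]   # checkerboard "scene"
            for y in range(start, start + rows)]

with ThreadPoolExecutor(max_workers=N_GPUS) as pool:
    bands = list(pool.map(render_band, range(N_GPUS)))

frame = [row for band in bands for row in band]   # composite step
assert len(frame) == HEIGHT and all(len(r) == WIDTH for r in frame)
```

The point is that each band needs no communication with the others until the final composite, which is why this style of parallelism can scale much closer to 1:1 than lockstep SLI/CrossFire rendering.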

IMO, once the visualization software is able to use this parallelism, the hardware will follow.
 

Thanks for the informed answer, eile. Basically the kind of reply I was looking for :)
 