Which GPUs are most capable in triangle/vertex calculations

Smurfie

Newcomer
I work for a 3D CAD engine development company, and recently we've run into a hitch. Our machines are a little too slow, and our budget is too low to meddle with Realizms and Quadros. An alternative would be to get consumer cards instead and use them on our low-end workstations for wireframe work.

The question is: which of the consumer 3D cards are the most capable in terms of vertex and triangle calculations? Which GPUs have the most muscle for pumping out polygons? Fillrate is absolutely not important, although it's nice to have alongside. We mostly draw wireframes and rarely render in full colour.

Another important point: the card has to be stable and perform well in OpenGL. This is due to a very bad experience with ATI's 7x00 series cards a long while back. We would consider ATI's current offerings if their OpenGL drivers are rock solid (note: our benchmark is 3DLabs).

Thanks in advance for any replies here!
 
Re: Which GPUs are most capable in triangle/vertex calculations

Smurfie said:
I work for a 3D CAD engine development company, and recently we've run into a hitch. Our machines are a little too slow, and our budget is too low to meddle with Realizms and Quadros. An alternative would be to get consumer cards instead and use them on our low-end workstations for wireframe work.
I suspect that consumer cards are not optimised for wireframe because it's not really that important for the gaming community. <shrug>
 
Re: Which GPUs are most capable in triangle/vertex calculations

Simon F said:
I suspect that consumer cards are not optimised for wireframe because it's not really that important for the gaming community. <shrug>

Well, I suspect it's the consumer drivers that are not optimised for wireframe, since it's not really that important for the gaming community.

After all, the Quadros are just GeForce chips.
 
If you're going to buy a considerable number of cards, you may be able to get samples for testing. That'd probably be the best way to go -- get some cards, download the latest reference drivers, and see how well they work.
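
If you do get sample cards, a quick-and-dirty throughput test doesn't take much code. Here's a minimal sketch (purely hypothetical, with placeholder mesh size and window settings) in C with GLUT: it draws a dense indexed mesh in GL_LINE mode as fast as it can and prints a rough triangles-per-second figure. Disable vsync in the driver first, or you'll just be measuring your refresh rate.

/* Hypothetical quick-and-dirty wireframe throughput test, not any vendor's
 * official benchmark. Draws a dense indexed mesh in GL_LINE mode and
 * prints an estimated triangles/second figure. Assumes GLUT (e.g.
 * freeglut); build with: gcc wirebench.c -o wirebench -lglut -lGLU -lGL */
#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

#define GRID 512                       /* 512x512 quads = 524,288 triangles */
#define TRIS (GRID * GRID * 2)

static GLfloat *verts;                 /* (GRID+1)^2 x/y/z positions */
static GLuint  *tris;                  /* 3 indices per triangle     */
static int frames, t0;

static void build_mesh(void)
{
    int n = GRID + 1, x, z, q = 0;
    verts = malloc(sizeof *verts * 3 * n * n);
    tris  = malloc(sizeof *tris * 3 * TRIS);
    for (z = 0; z < n; ++z)
        for (x = 0; x < n; ++x) {      /* flat grid in the xz plane */
            GLfloat *v = verts + 3 * (z * n + x);
            v[0] = 2.0f * x / GRID - 1.0f;
            v[1] = 0.0f;
            v[2] = 2.0f * z / GRID - 1.0f;
        }
    for (z = 0; z < GRID; ++z)
        for (x = 0; x < GRID; ++x) {   /* two triangles per grid cell */
            GLuint a = z * n + x, b = a + 1, c = a + n, d = c + 1;
            tris[q++] = a; tris[q++] = c; tris[q++] = b;
            tris[q++] = b; tris[q++] = c; tris[q++] = d;
        }
}

static void display(void)
{
    int t;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDrawElements(GL_TRIANGLES, 3 * TRIS, GL_UNSIGNED_INT, tris);
    glutSwapBuffers();
    ++frames;
    t = glutGet(GLUT_ELAPSED_TIME);    /* milliseconds since glutInit */
    if (t - t0 >= 5000) {              /* report every five seconds   */
        printf("%.1f Mtri/s over %d frames\n",
               frames * (double)TRIS / (t - t0) / 1000.0, frames);
        frames = 0;
        t0 = t;
    }
    glutPostRedisplay();               /* render as fast as possible  */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("wireframe throughput");
    build_mesh();
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);  /* wireframe rendering */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glMatrixMode(GL_PROJECTION);
    gluPerspective(60.0, 640.0 / 480.0, 0.1, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0.0f, -0.4f, -1.8f);  /* tilt the grid into view */
    glRotatef(60.0f, 1.0f, 0.0f, 0.0f);
    t0 = glutGet(GLUT_ELAPSED_TIME);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Because the same static mesh is submitted from plain vertex arrays every frame, the CPU stays mostly out of the way, so the figure you get reflects the driver's vertex path and the chip's triangle setup rather than fillrate, which is the part of the pipeline that matters here.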

Otherwise, I'd say it depends on how much you're willing to pay. You should be able to find older-generation Quadros or FireGL cards for the price of current consumer graphics cards. I assume you don't want to try to mod consumer cards into Quadros.
 
Budget is very, very tight. We have a bit under $2000, and we are looking to improve five workstations. That works out to roughly $400 each, somewhere in the X800 Pro, 9800XT, 6800GT range? Of course, if anyone can recommend something that gives more for less, that's a bonus.

Not sure about samples. We were hoping to get P10 samples, but it never materialised. That would have helped heaps.
 
Then you don't need to find out which one is more capable, but which one will actually WORK with professional applications. ATI has always been a bit icky on this side, while Nvidia has provided more than decent support even for their non-workstation cards.

Using Maya as an example, Nvidia consumer cards are acceptable; Radeons just won't work properly most of the time.

Just my 2 pennies
 
It looks like we might go with the GeForce FX 5900XT, since it might be the closest to the Quadro FX 1000.

Does anyone have any good/bad opinions on the GeForce FX 5900XT and its OpenGL performance? I don't know much about the GeForce FX line, since I owned a Radeon 9600 before and recently got a GeForce 6800nu. Any opinions will be appreciated.
 
You can always try to softmod the GeForces into their respective Quadro versions. I said "try" as it seems it's not as easy as many people make it out to be. I softmodded my GeForce FX 5900U into a Quadro FX 3000. Then I forgot all about it, changed the drivers, and it went back to being a plain GeForce FX.
But as a Quadro, Maya worked much, much better than it did as a GeForce FX.

Try it. You can always switch back.
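(For what it's worth, the usual route, as far as I recall, is RivaTuner's NVStrap driver, which overrides the device ID the drivers see; and, as above, a driver update can quietly undo it.)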
 
Why would they cripple such a feature? Why not hype it up as a gaming card + a semi-workstation card too :?:
 
A couple of reasons. Professional graphics card drivers optimize more for vertex throughput, while consumer graphics card drivers tend to optimize for fillrate.
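To put rough, purely illustrative numbers on it: a 2-million-triangle assembly drawn as a wireframe at 30 fps needs 60 million triangles per second of transform and setup, yet its one-pixel-wide lines touch only a tiny fraction of the pixels a fully shaded game frame would. That kind of workload is almost entirely vertex- and setup-bound, which is exactly what the professional drivers tune for.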
 
Alstrong said:
Why would they cripple such a feature? Why not hype it up as a gaming card + a semi-workstation card too :?:

Because then all the studios would buy the £300 cards instead of the "real" workstation ones that can cost up to £3000. The cards are exactly the same; the only thing that differs is the driver.
 
Alstrong said:
:? :? :?


I sometimes hate capitalism :(
Why? In this case, GPU manufacturers seek to make a profit on selling workstation-level cards. To do so, they must offset the cost of developing the drivers by inflating the price of the cards. This basically has to be done because workstation-level cards see such relatively small circulation. If they couldn't sell the workstation GPUs for more than their consumer counterparts, they wouldn't bother at all.
 
Well, it's also for the hardware. Workstation hardware has somewhat different needs than gaming hardware, and so it has its own R&D. Even if nVidia shares the cost by making them the same die, they're still eating the R&D cost. Of course, since they save money by selling lots of the chips as gaming chips, nVidia's hardware cost is lower than that of a company that only sells workstation cards (like 3DLabs), and I think this is fairly well reflected in the cost of their parts compared to other workstation parts.
 