nVIDIA's "SLI" solution

nVidia says that they don't support SLI across different manufacturers; both boards have to be the same model from the same manufacturer.
 
Sage said:
nVidia says that they don't support SLI across different manufacturers; both boards have to be the same model from the same manufacturer.
Well, that's their recommendation. From what I've seen, the problem with mixing different boards is that you'll run into synchronization problems. It will probably still work, just not at optimal performance.
 
anaqer said:
Chalnoth said:
Huh? There's no way a normal fan design would work with two closely-packed video cards.
Neither would a blower. Air intake is severely reduced by the back of the other card in both designs.
The air doesn't come from the front of the card. It comes from the side.
 
trinibwoy said:
Why wouldn't you be able to overclock it? As long as you can get the same stable overclock on both cards it should be fine.

I think the overclockers will always find a way.

SLI 6800's will be perfect to run this beast from Apple

[image: Apple 30-inch Cinema HD Display]


30 inches! DVI input, 2560x1600 res.

http://www.apple.com/dk/displays/

With SLI you could run 4 of these screens = 60 inches, 16.4 megapixel, 5120x3200 (oops, fixed :oops: ).

Does a 6800 have the 2d fillrate capabilities to render at these resolutions??
 
Leto said:
trinibwoy said:
Why wouldn't you be able to overclock it? As long as you can get the same stable overclock on both cards it should be fine.

I think the overclockers will always find a way.

SLI 6800's will be perfect to run this beast from Apple

image snipped

30 inches! DVI input, 2560x1600 res.

http://www.apple.com/dk/displays/

With SLI you could run 4 of these screens = 60 inches, 16.4 megapixel, 10240x6400. :oops:

Does a 6800 have the 2d fillrate capabilities to render at these resolutions??

How do you get 10240x6400?

That would take 16 screens.
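
For what it's worth, the corrected arithmetic (assuming the four panels are tiled 2x2) works out like this:

Code:
# Sanity check on the multi-monitor numbers (2x2 grid of panels assumed).
panel_w, panel_h = 2560, 1600

combined = (2 * panel_w, 2 * panel_h)            # (5120, 3200)
megapixels = combined[0] * combined[1] / 1e6     # 16.384, i.e. the "16.4 megapixel" figure

# 10240x6400 would need a 4x4 grid, i.e. 16 panels:
panels_needed = (10240 // panel_w) * (6400 // panel_h)   # 16

print(combined, megapixels, panels_needed)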
 
Nite_Hawk said:
5120x1600

(doesn't each display need two dvi connectors?)

Nite_Hawk

Don't think so. If you read the little info bar about the GeForce 6800 (the one with the picture of a 6800), it says: "And even better - it supports two 30" displays ..."

Maybe you need 2 DVI connections to run it at the full 2560 resolution.
 
The gigantic Apple monitor has a native resolution (or did I get that the wrong way round?) of 2560*1600. Anything lower than the native resolution on TFT/LCD monitors isn't usually a good idea. Could it be that the Mac NV40 GPUs have a different RAMDAC? Last I checked, NV40s support a maximum of 2048*1536*32.
 
Well, the RAMDAC shouldn't be used for a DVI output, but I don't know if that means one can get a higher resolution out of DVI.
 
Apple said (http://www.apple.com/displays/digital.html):
The 30-inch Cinema HD Display requires the next level of DVI connectivity — "dual link" to drive the massive amount of pixels to the screen. And the NVIDIA GeForce 6800 Ultra DDL graphics card (available from the Apple Store) delivers, with the most advanced graphics engine available. This card, designed specifically to support the dual link DVI connection, delivers 2560 by 1600 resolution. Even better, it can drive two 30-inch Apple Cinema HD Displays, giving you the ultimate creative canvas. This card will be available for Mac only in August 2004.
It's probably the same as other high-res LCDs: dual- (or higher) link allows for a decent refresh rate, whereas driving two 30" displays will probably result in low refresh rates for both blingedy LCDs.
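
A rough pixel-clock estimate shows why a single DVI link isn't enough (the 60 Hz refresh and ~9% blanking overhead are assumptions, but the conclusion doesn't change much either way):

Code:
# Why 2560x1600 needs dual-link DVI: single link tops out at a 165 MHz pixel clock.
active_pixels = 2560 * 1600      # 4,096,000 pixels per frame
refresh_hz = 60                  # assumed refresh rate
blanking = 1.09                  # assumed ~9% overhead for reduced-blanking timings

pixel_clock_mhz = active_pixels * refresh_hz * blanking / 1e6
print(pixel_clock_mhz)           # ~268 MHz, well beyond the 165 MHz single-link limit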
 
nutball said:
That's completely false. BTX will make life worse for graphics cards. BTX is a solution to Intel's problem, and Intel don't give a flying fudge about the rest of the industry.

Let's see: graphics cards get 2-3x the current CFM of air and 2-3x the fin volume to play with, and that's going to make things worse? What planet are you from?

Aaron Spink
speaking for myself inc.
 
Scaling the geometry processing using multiple GPU cards shouldn't be too difficult if the cards use screen partitioning (as opposed to scan-line interleaving).

If the application's 3D engine uses efficient culling of geometry to the viewport (say, using hierarchical bounding volumes), the scaling happens automatically, since geometry outside a card's partition is efficiently culled before being processed.

If it were felt important to scale the geometry processing of applications that don't bother to cull geometry efficiently, it could be handled by having each card deterministically process half the vertices. A card would send any vertices it processed that fall in the other card's viewport to that card for rendering. As a possible optimization, you could minimize communication traffic by having each card track the vertices that were in its viewport on the last frame and render those first.
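
A toy sketch of that last idea, just to make the data flow concrete (two cards, top/bottom partitioning; every name here is invented for illustration, and a real driver would of course work on batches rather than Python lists):

Code:
def process_and_route(vertices, transform, screen_height):
    """Each card owns every other vertex (index parity). After vertex
    processing, results that land in the other card's half of the screen
    are counted as cross-card transfers."""
    per_card = {0: [], 1: []}        # card 0 renders the top half, card 1 the bottom
    transfers = 0
    for i, v in enumerate(vertices):
        owner = i % 2                # deterministic split of the vertex workload
        x, y = transform(v)          # vertex processing on the owning card
        target = 0 if y < screen_height / 2 else 1
        if target != owner:
            transfers += 1           # the cross-card traffic the last-frame ordering trick tries to hide
        per_card[target].append((x, y))
    return per_card, transfers

# Toy usage: identity "transform" on a few screen-space points, 1200-pixel-tall screen.
points = [(100, 200), (300, 900), (50, 1100), (700, 40)]
print(process_and_route(points, lambda p: p, 1200))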
 
Leto said:
Nite_Hawk said:
5120x1600

(doesn't each display need two dvi connectors?)

Nite_Hawk

Don't think so. If you read the little info bar about the GeForce 6800 (the one with the picture of a 6800), it says: "And even better - it supports two 30" displays ..."

Maybe you need 2 DVI connections to run it at the full 2560 resolution.
The card in question reportedly supports dual dual-link DVI. (!)
I remember a discussion we had here about bandwidth not too long ago, where some assumed that we would be satisfied with 1600x1200 for the foreseeable future. Doesn't look that way, does it? :)
Seriously, if Microsoft ever gets its ass in gear and produces a GUI that is resolution independent, the resolution limits can lie very much further out. Screens have crept from 72 dpi to 100 dpi, but that is not a limitation of the display technology per se - it can reach MUCH higher resolutions, and it would benefit type greatly, in addition to being generally useful. The incredibly restrictive DVI interface has to go, though.
 
BTW, in my opinion screen partitioning might prove to be a more efficient way of distributing processing across multiple pipelines on a single card as the number of pipelines continues to grow and triangle sizes continue to shrink.

It allows large numbers of vertices and pixels to be processed independently in parallel without potential frame buffer or resource conflicts.
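
A trivial illustration of the kind of mapping meant here (the tile size, pipeline count and striping scheme are all made up; this is not how any particular chip works):

Code:
def pipeline_for_pixel(x, y, tile_size=16, pipelines=8, tiles_per_row=40):
    """Assign pixels to pipelines by tiling the screen and striping whole
    tiles across pipelines, so no two pipelines ever touch the same
    framebuffer locations. tiles_per_row would be screen_width / tile_size."""
    tile_index = (y // tile_size) * tiles_per_row + (x // tile_size)
    return tile_index % pipelines

# Pixels in the same 16x16 tile always land on the same pipeline:
assert pipeline_for_pixel(3, 5) == pipeline_for_pixel(12, 10)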
 
SA said:
If the application's 3D engine uses efficient culling of geometry to the viewport (say, using hierarchical bounding volumes), the scaling happens automatically, since geometry outside a card's partition is efficiently culled before being processed.
It doesn't seem like that would help screen partitioning, as the application's not going to know to split the screen in two. The algorithms may be pretty much identical for deciding which geometry to send to which card, but I don't see how that helps the video card out.
 
Ailuros said:
The gigantic Apple monitor has a native resolution (or did I get that the wrong way round?) of 2560*1600. Anything lower than the native resolution on TFT/LCD monitors isn't usually a good idea.
An integer fraction should be ok, surely?
 
Simon F said:
Ailuros said:
The gigantic Apple monitor has a native resolution (or did I get that the wrong way round?) of 2560*1600. Anything lower than the native resolution on TFT/LCD monitors isn't usually a good idea.
An integer fraction should be ok, surely?

Some LCD displays scale better than others, but even that would be a problem for that Apple, as it's a bit of an odd resolution and many games seem to lack support for odd resolutions.
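
For what it's worth, the only clean integer fractions of that panel are also 16:10 modes that few games of the time listed (simple 1/n scaling assumed):

Code:
native_w, native_h = 2560, 1600
print([(native_w // n, native_h // n) for n in (1, 2, 4)])
# [(2560, 1600), (1280, 800), (640, 400)]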
 
AlphaWolf said:
Simon F said:
Ailuros said:
The gigantic Apple monitor has a native resolution (or did I get that the wrong way round?) of 2560*1600. Anything lower than the native resolution on TFT/LCD monitors isn't usually a good idea.
An integer fraction should be ok, surely?

Some LCD displays scale better than others, but even that would be a problem for that Apple, as it's a bit of an odd resolution and many games seem to lack support for odd resolutions.
Apple has games?
That LCD is not for games...
 