More SLI

Hurrah! How is that going to help us run games on UE3?

It may not, but virtualised systems are far from a "pipe dream" and very much within the specification for upcoming GUI and 3D APIs for Windows (and other platforms). However, having the bus bandwidth is hardly going to hinder upcoming engines either: if they do have requirements greater than current memory sizes, then having more bandwidth is always going to help.
 
DaveBaumann said:
Hurrah! How is that going to help us run games on UE3?

It may not, but virtualised systems are far from a "pipe dream" and very much within the specification for upcoming GUI and 3D APIs for Windows (and other platforms). However, having the bus bandwidth is hardly going to hinder upcoming engines either: if they do have requirements greater than current memory sizes, then having more bandwidth is always going to help.

Okie dokes. Guess 'pipe dream' was a bit harsh.
 
DaveBaumann said:
Hurrah! How is that going to help us run games on UE3?

It may not, but virtualised systems are far from a "pipe dream" and very much within the specification for upcoming GUI and 3D APIs for Windows (and other platforms). However, having the bus bandwidth is hardly going to hinder upcoming engines either: if they do have requirements greater than current memory sizes, then having more bandwidth is always going to help.

I'm imagining a future time (let's say in 18 months) when virtualised texture accesses are using all of the spare 16-lane PCI Express bandwidth. Page faults in demanding games with Ultra High Quality textures will be continuous, though the actual bandwidth used will only peak when extreme changes in the scene occur.

What I'm wondering is whether frame rate troughs induced by peaks in texture accesses will be as bad in an SLI configuration as on a single card, or worse.

Obviously it's early days yet, as far as GPU virtual memory is concerned, but I'm curious whether virtual memory will eradicate texture-access frame rate drops, and, if not, whether SLI will accentuate the performance drop.
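
To show what I mean by bursty uploads, here's a toy model - every name and number in it is made up, it's nothing like a real driver, just the fault-then-upload pattern:

Code:
#include <cstdio>
#include <vector>

const int kPageKB = 64;                     // assumed page granularity

struct Page { bool resident = false; };

struct VirtualTexture {
    std::vector<Page> pages;
    int faultsThisFrame = 0;

    explicit VirtualTexture(int pageCount) : pages(pageCount) {}

    // Simulate a texel fetch: a miss on a non-resident page costs a
    // bus transfer, which is where the PCI Express bandwidth goes.
    void Sample(int pageIndex) {
        if (!pages[pageIndex].resident) {
            pages[pageIndex].resident = true;   // "upload" over the bus
            ++faultsThisFrame;
        }
    }
};

int main() {
    VirtualTexture tex(1024);
    // First frame after the player spins the camera: many cold pages.
    for (int p = 0; p < 256; ++p) tex.Sample(p);
    std::printf("frame 1: %d faults = %d KB uploaded\n",
                tex.faultsThisFrame, tex.faultsThisFrame * kPageKB);
    // Next frame, same view: pages are warm, bandwidth use collapses.
    tex.faultsThisFrame = 0;
    for (int p = 0; p < 256; ++p) tex.Sample(p);
    std::printf("frame 2: %d faults\n", tex.faultsThisFrame);
    return 0;
}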

Jawed
 
Jawed said:
What I'm wondering is whether frame rate troughs induced by peaks in texture accesses will be as bad in an SLI configuration as on a single card, or worse.
Well, since the cards won't be working on the same portions of the scene, there's no a priori reason to believe that both graphics cards will need to do their uploading at the same time.
 
Chalnoth said:
Jawed said:
What I'm wondering is whether frame rate troughs induced by peaks in texture accesses will be as bad in an SLI configuration as on a single card, or worse.
Well, since the cards won't be working on the same portions of the scene, there's no a priori reason to believe that both graphics cards will need to do their uploading at the same time.

The reasons why one card would need to access new texture data are generally going to be valid for the other card, whether rendering split screen or alternate frame. That's because texture accesses are going to be required when the player moves or looks in a different direction.

It seems you didn't read my earlier posting:

http://www.beyond3d.com/forum/viewtopic.php?p=394189#394189

Jawed
 
Re: How do you squeeze in two Ultras?

Jawed said:
Is any SLI motherboard going to have enough space between (and around) the two "16x" PCI Express slots so that you can fit two 6800 Ultras in? Is this going to be an option, physically?

Will SLI be limited to single-slot sized graphics cards?

It seems to me that anyone with a 6800 Ultra is out of luck...

Jawed

Yes. And as an aside, the distance between the two boards doesn't need to be the distance you've seen using that little connector. There's a ribbon version too.

SLI using dual slot cards like 6800 Ultra shouldn't be a problem, given that.

Rys
 
Jawed said:
The reasons why one card would need to access new texture data are generally going to be valid for the other card, whether rendering split screen or alternate frame. That's because texture accesses are going to be required when the player moves or looks in a different direction.
Nope, because with AFR, the renderings are staggered, so if a new texture is needed by one card, it can also be sent to the other card before it's needed there, preventing additional stalls. Depending upon how this is handled, the driver may not even require the data to be sent from system memory twice (with nVidia's implementation, it is conceivable that it could be sent across the inter-card bus).

And with split-screen rendering the situation can be similar. For one, there's no reason why the two graphics cards need to be operating on the same geometry at the same time. So, if there's a page fault on one, the data could again be passed to the other before there's much chance of a page fault on the second card. Additionally, the chance is much higher that the needed data won't be needed by both cards at the same time, since they are operating on different parts of the screen.
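
To put that in completely invented pseudo-driver terms - none of this is nVidia's real interface, it just shows the broadcast-on-fault idea:

Code:
#include <cstdio>
#include <array>
#include <set>

struct Gpu {
    std::set<int> residentPages;
    int stalls = 0;
};

std::array<Gpu, 2> gpus;

// Upload the faulted page to the faulting card, and opportunistically
// push it to the sibling card too, ahead of its (staggered) frame.
void FaultAndBroadcast(int faultingGpu, int page) {
    for (int g = 0; g < 2; ++g) {
        if (!gpus[g].residentPages.count(page)) {
            gpus[g].residentPages.insert(page);    // one transfer each
            if (g == faultingGpu) ++gpus[g].stalls; // only the faulter waits
        }
    }
}

int main() {
    // Frame N on GPU 0 touches page 7; GPU 1 gets it pushed early.
    FaultAndBroadcast(0, 7);
    // Frame N+1 on GPU 1 touches page 7: already resident, no stall.
    if (gpus[1].residentPages.count(7))
        std::printf("GPU 1: page 7 prefetched, stalls = %d\n",
                    gpus[1].stalls);
    return 0;
}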
 
Re: How do you squeeze in two Ultras?

Rys said:
Yes. And as an aside, the distance between the two boards doesn't need to be the distance you've seen using that little connector. There's a ribbon version too.

SLI using dual slot cards like 6800 Ultra shouldn't be a problem, given that.
The "little connector" is for placing the cards two slots apart. How far apart the cards must be will depend upon the motherboards, though.
 
Re: How do you squeeze in two Ultras?

Chalnoth said:
Rys said:
Yes. And as an aside, the distance between the two boards doesn't need to be the distance you've seen using that little connector. There's a ribbon version too.

SLI using dual slot cards like 6800 Ultra shouldn't be a problem, given that.
The "little connector" is for placing the cards two slots apart. How far apart the cards must be will depend upon the motherboards, though.

I realise that. I was just pointing out that the fixed-width connector isn't your only option for connecting SLI hardware; there's a ribbon version too.

Most people don't seem to realise there are two connector options ;)

Rys
 
Do SLI cards have to be the same model? In other words, could I have a 6800 GT paired with a 6800 Ultra and receive the added performance gains from SLI?
 
pharma said:
Do SLI cards have to be the same model? In other words, could I have a 6800 GT paired with a 6800 Ultra and receive the added performance gains from SLI?

Several people have said that everything has to be the same - AIB/model/BIOS etc. Although I don't see why - if they're using some load balancing algorithm to determine the workload for each card, you should be able to mix and match board models with equal framebuffer size - e.g. GT/Ultra.
 
NVidia claims everything has to be the same, but that might be an artificial restriction that can be worked around.
 
trinibwoy said:
Several people have said that everything has to be the same - AIB/model/BIOS etc. Although I don't see why - if they're using some load balancing algorithm to determine the workload for each card, you should be able to mix and match board models with equal framebuffer size - e.g. GT/Ultra.

I should imagine this is why.

Can I mix and match?
No. NVIDIA doesn’t support SLI on two different models or from different vendors. SLI supports configurations with the same model (i.e. 6800 Ultra) from the same vendor (Vendor XYZ).
 
I think you'll have to use the same model (maybe even the same manufacturer/card). I'm not sure if it's due to an "SLI" hardware/software limitation, though; would it be too hard for "SLI" s/w to apportion screen space/percentage to each card based on its relative speed, modifying the default 50/50 formula (see the sketch at the end of this post)? I wouldn't think so, given that nV already says there's dynamic load balancing implemented.* It sounds more like common sense WRT app bottlenecks and PEG bandwidth distribution.

* Then again, I vaguely believe (not know or even hypothesize--this is mostly made up! ;^)) AFR may not be ideal for unbalanced card pairings, due to the unfavorable combination of relatively low framerates (people will probably use SLI to bump their settings as much as to boost framerates, keeping framerates <=60fps avg.) and symmetric PEG distribution (you'd think the card rendering more would need more bandwidth).
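
Here's the sketch I mean - the numbers and names are entirely made up, just to show how a driver could bias the split by measured speed; it's not NVIDIA's actual algorithm:

Code:
#include <cstdio>

// Fraction of the screen card 0 should render next frame, given each
// card's measured render time for the last frame. Purely hypothetical.
double RebalanceSplit(double msCard0, double msCard1, double current0)
{
    double speed0 = 1.0 / msCard0;
    double speed1 = 1.0 / msCard1;
    double target = speed0 / (speed0 + speed1);
    // Damp the adjustment so the split doesn't oscillate frame to frame.
    return current0 + 0.25 * (target - current0);
}

int main()
{
    double split = 0.5;   // the default 50/50 split
    // Pretend card 0 is an Ultra (12 ms/frame) and card 1 a GT (15 ms).
    for (int frame = 0; frame < 5; ++frame) {
        split = RebalanceSplit(12.0, 15.0, split);
        std::printf("frame %d: card 0 renders %.1f%% of the screen\n",
                    frame, split * 100.0);
    }
    return 0;
}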
 
I'm wondering if AFR will end up in the final drivers. Someone I know has mentioned it's not in the drivers they are looking at for SLI, and someone else mentioned that ATI has a patent on it (although I can't easily find that, and I don't know if it covers the difference between dual chips on a board or dual boards).

It will be interesting to see if AFR does make it into later SLI drivers.
 
I'm betting SLI with different boards etc. will be like dual Celerons on a BP6 - unsupported, but it will work in the majority of cases. I doubt totally different RAM sizes or chipset revs would work, though. By specifying absolutely identical everything, they cut down on support costs.

Sort of makes the "buy now - just whack one in later for double the performance" selling point a little dubious, though.
 
I should imagine that one of the reasons is just to keep the load balancing simpler. Remember that you are not just load balancing the final pixels, but also other, off-screen, ops such as render-to-texture. I would guess that the load balancing here is more a case of trying to distribute different off-screen ops to different boards (i.e. if there are two render-to-texture ops, then have one board render one and the other board render the other). If this is the case and you have a slow-board/fast-board scenario, then one board may end up waiting longer while the other is still doing its off-screen ops.
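
As a purely hypothetical illustration of that waiting cost (invented numbers, not anything NVIDIA has described):

Code:
#include <cstdio>
#include <vector>

int main()
{
    // Four off-screen render-to-texture passes, costs in milliseconds.
    std::vector<double> rttCostMs = {2.0, 2.0, 1.5, 1.5};
    double boardTime[2] = {0.0, 0.0};

    // Deal the off-screen ops out round-robin between the two boards.
    for (int i = 0; i < (int)rttCostMs.size(); ++i)
        boardTime[i % 2] += rttCostMs[i];

    // With identical boards, both finish together. Scale board 1 by
    // 1.3x to model a slower card: board 0 now sits idle waiting.
    boardTime[1] *= 1.3;
    std::printf("board 0: %.2f ms, board 1: %.2f ms, idle wait: %.2f ms\n",
                boardTime[0], boardTime[1], boardTime[1] - boardTime[0]);
    return 0;
}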
 
Inter-card bus? How? In fact, having one card do less work than the other is likely to alleviate the work for the bus since (assuming the slower card is the slave) there will be less screen drawn by the slave and less data to pass across.
 