SLI with asymmetrical PCI-E?

Does this (http://theinquirer.net/?article=17058) sound possible to anyone? Is nvidia's SLI implementation flexible enough to work in a scenario where one board employs a 16x PCI-E port and the other something slower?

In that article a hypothetical 8x PCI-E interface is proposed, but I don't believe any standard has been set for such a port.

In any case, unless an SLI implementation can work in *some* sort of asymmetrical configuration, it's unfortunate to think that nvidia's next chipset won't support dual 16x PCI-E. Surely one would want at least a couple of 1x PCI-E ports, after all.
 
I think that would seriously slow it down. Aside from that, I don't see why it couldn't work in a PCI-E 1x port. There was talk a while back of MAXX-style tech with a bridge chip connecting one board in an AGP slot to one in a PCI slot.
 
Hmm, is it even possible? I was thinking the physical slot sizes on the motherboard would have to be different, no? And I don't think any graphics card would fit in a 1x slot.
 
Hmm, Intel's E7525 chipset also supports an 8x port (or alternatively two 4x ports) in addition to its 16x. I believe this will be a common feature. As far as I know the standard allows links to be built from 1, 2, 4, 8, 12, 16 or 32 lanes.

As for the effect on the performance of the SLI'd cards, it is too early to tell. But considering that the AGP bus doesn't actually appear to be a significant bottleneck in any current games, the 8x limit may not impact actual performance much at all, except for synthetic tests.
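
For reference, here is a rough sketch of the raw per-direction numbers for the common link widths, assuming PCIe 1.x signalling at 2.5 GT/s per lane with 8b/10b encoding (i.e. 250 MB/s per lane per direction); the figures are theoretical peaks, not measured throughput:

```python
# Theoretical peak bandwidth per direction for common PCIe 1.x link widths.
# Assumes 2.5 GT/s per lane and 8b/10b encoding (250 MB/s per lane per direction).

PER_LANE_GBPS = 0.25   # GB/s per lane, one direction

for width in (1, 4, 8, 16):
    print(f"x{width:<2}: {width * PER_LANE_GBPS:.2f} GB/s per direction")
```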
 
Temporary Name said:
Does this (http://theinquirer.net/?article=17058) sound possible to anyone? Is nvidia's SLI implementation flexible enough to work in a scenario where one board employs a 16x PCI-E port and the other something slower?
Yes, it is. In fact, it has been demonstrated on a board with two physical x16 slots, of which one only had 8 lanes connected. AFAIK x16 PCIE cards have to be able to auto-negotiate whether to run in x16, x8, or x4 mode.

In that article a hypothetical 8x PCI-E interface is proposed, but I don't believe any standard has been set for such a port.
The PCI-E specification defines x1, x4, x8 and x16 interfaces.

In any case, unless an SLI implementation can work in *some* sort of asymmetrical configuration, it's unfortunate to think that nvidia's next chipset won't support dual 16x PCI-E. Surely one would want at least a couple of 1x PCI-E ports, after all.
I don't think x16 + x8, or even x8 + x8, would be a real bottleneck.
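
As a rough illustration of that auto-negotiation: both ends advertise the link widths they support and the link trains to the widest width common to both. A minimal sketch (the real link-training protocol is more involved, and the width lists below are just example values):

```python
# Minimal sketch of PCI-E link-width negotiation: both ends advertise the
# widths they support, and the link trains to the widest width common to both.
# The real link-training state machine is considerably more involved.

def negotiate_width(card_widths, slot_widths):
    common = set(card_widths) & set(slot_widths)
    if not common:
        raise ValueError("no common link width, link cannot train")
    return max(common)

# Example: an x16 card that can also fall back to x8, x4 and x1,
# plugged into a physical x16 connector with only 8 lanes wired up.
card = [1, 4, 8, 16]
slot = [1, 4, 8]
print(f"Link trains at x{negotiate_width(card, slot)}")   # -> x8
```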
 
If "implementing" means just to solder a x16 connector on the mainboard instead of a x8 connector, so that x16 cards still fit into the slot, this isn't much of a waste.
 
According to Inquirer...

nVidia's chipset can apparently be expanded up to 32 PCIe lanes, so would it be possible to run more than two graphics cards in SLI mode (one x16 PCIe plus more than one x8 PCIe)? I heard nVidia will introduce more than two depending on market demand.

I would also like to know how the bandwidth compares between AGP, PCIe x8 and PCIe x16. I'd guess AGP 8x and PCIe x8 have about the same bandwidth, and that even PCIe x16 won't make your graphics card any faster. That's my noob guess...

Correct me if I'm wrong. :oops:
 
I wonder if two x8s wouldn't be more desirable. All the solutions seen so far only have 16 lanes in the northbridge itself; the other lanes come from the southbridge, which may slow communication a bit, since commands for the secondary board go from CPU -> Northbridge -> Southbridge -> graphics. Texturing from system RAM would also suffer: as far as I know, the highest northbridge-southbridge transfer rates are on the Intel solution, at 2GB/s (although in truth, I'm not sure about the various AMD solutions), and that link will also have input devices and sound travelling across it.
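
To put a rough number on that, here is a back-of-the-envelope sketch of how a secondary slot fed through the northbridge-southbridge interconnect ends up capped by that shared link. The 2GB/s figure is the one quoted above; the other traffic figure is a purely illustrative assumption:

```python
# Back-of-the-envelope model: bandwidth left for a graphics slot hanging off
# the southbridge, limited by the northbridge <-> southbridge interconnect
# it shares with other devices. Figures are illustrative, not measured.

def effective_slot_bandwidth(slot_gbps, interconnect_gbps, other_traffic_gbps):
    """Bandwidth available to the slot: capped by both the slot's own link
    and whatever share of the interconnect is not used by other devices."""
    leftover = max(interconnect_gbps - other_traffic_gbps, 0.0)
    return min(slot_gbps, leftover)

# Hypothetical numbers (GB/s, one direction):
PCIE_X8 = 8 * 0.25          # x8 link at 250 MB/s per lane = 2.0 GB/s
INTERCONNECT = 2.0          # the 2 GB/s northbridge <-> southbridge link quoted above
OTHER = 0.3                 # assumed USB/audio/disk traffic sharing that link

print(effective_slot_bandwidth(PCIE_X8, INTERCONNECT, OTHER))  # -> 1.7
```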
 
I would have to say there is definitely no reason for the cards not to use x8, considering AGP 8x isn't even needed yet, and PCI-E x8 has the bandwidth of AGP 8x while also being able to transfer in both directions at once, of course.
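
A quick check of the raw numbers behind that comparison, using the standard figures (AGP 8x at 66 MHz x 32 bits x 8 transfers per clock, PCIe 1.x at 250 MB/s per lane per direction after 8b/10b encoding):

```python
# Rough theoretical peak bandwidth comparison, AGP 8x vs PCIe 1.x links.
# AGP is half-duplex; PCIe moves data in both directions simultaneously.

AGP_8X = 66.66e6 * 4 * 8 / 1e9          # 66 MHz, 32-bit (4-byte) bus, 8x data rate ~= 2.1 GB/s
PCIE_LANE = 0.25                        # GB/s per lane per direction (2.5 GT/s, 8b/10b encoding)

pcie_x8_per_dir  = 8 * PCIE_LANE        # 2.0 GB/s each way
pcie_x16_per_dir = 16 * PCIE_LANE       # 4.0 GB/s each way

print(f"AGP 8x:   {AGP_8X:.1f} GB/s (shared between directions)")
print(f"PCIe x8:  {pcie_x8_per_dir:.1f} GB/s per direction")
print(f"PCIe x16: {pcie_x16_per_dir:.1f} GB/s per direction")
```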
 
I would. PEG x16 was specced at that width for good reasons; x8 was actually proposed as the graphics standard but was ousted. I'd rather the spec was adhered to than diluted. The upstream and downstream bandwidth will become useful as more apps are coded for it, and the minimum specification of x16 should set the target.
 
Yeah, I would expect x16 will be most important once:

1) people start writing non-graphics code to run on GPUs
2) true "virtualized" texturing implementations appear.
 
DaveBaumann said:
I wonder if two x8s wouldn't be more desirable. All the solutions seen so far only have 16 lanes in the northbridge itself; the other lanes come from the southbridge, which may slow communication a bit, since commands for the secondary board go from CPU -> Northbridge -> Southbridge -> graphics. Texturing from system RAM would also suffer: as far as I know, the highest northbridge-southbridge transfer rates are on the Intel solution, at 2GB/s (although in truth, I'm not sure about the various AMD solutions), and that link will also have input devices and sound travelling across it.

VIA PCIE northbridges have 20 PCIE lanes and can handle 3 devices.
 
Tumwater supplies 20 native lanes on the MCH too.

Rys
 
There was a presentation of the SLI solution in France. There are three options for SLI: Auto Select, Split Frame Rendering and Alternate Frame Rendering (à la Rage Fury MAXX). The platform wasn't completely stable, so some bugs popped up from time to time.

On Mother Nature (3DMark03), the improvement was around 70%, and Far Cry and Painkiller at 1280x1024 with 8x FSAA and 8x AF were running smoothly :)

More information over at Hardware.fr
 
What is the maximum we will see PCI-E running at? Could we see it at 32x and maybe 64x?


As for the 8x/8x, it's nice to see :) I would love to see an X800 XT PE MAXX though, with temporal AA working. Each chip doing every other frame would be really nice at 6x.
 
PC hosts are antiques as far as architecture is concerned: lousy memory bandwidth and equally lousy processing power, and advanced process technology can only make up for so much. For most game developers there is no great need for high bandwidth; I doubt they would be held back by x8.
 
Tumwater has 24 PCI-E lanes total on the MCH, but if you want PCI-X you have to give up four to the PCI-E <-> PCI-X bridge.

[Attachment: Tumwater.gif]
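
A quick tally of that lane budget (24 lanes on the MCH, four given up to the PCI-E <-> PCI-X bridge, x16 reserved for the primary PEG slot), just to make the arithmetic explicit:

```python
# Lane budget for the Tumwater MCH as described above.
TOTAL_MCH_LANES = 24
PCIX_BRIDGE_LANES = 4      # given up to the PCI-E <-> PCI-X bridge (PXH)
PRIMARY_PEG_LANES = 16     # the x16 graphics slot

remaining = TOTAL_MCH_LANES - PCIX_BRIDGE_LANES - PRIMARY_PEG_LANES
print(f"Lanes left for a second PEG slot: x{remaining}")   # -> x4
```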
 
That explains why the 2nd PEG slot on my motherboard is 4X, since there's a PXH ASIC feeding the rest of the I/O.

Makes sense now. I just added up 16 + 4.

Rys
 