More SLI

Nvidia and others were accusing 3dfx of trying to bundle together less powerful chips to compete with single-core GPUs. If it took two 6800GTs to equal one X800XT, it would be legitimate to criticize Nvidia's SLI solution, or any multi-core GPU solution, as less desirable.

There is a difference between producing a single core that is competitive and which maxes out almost everything (bus size, memory used, transistor budget) and bundling two of those together in an SLI configuration, versus producing a chip which on its own is lackluster and trying to make up for it by bundling two of them together. This was during the Voodoo5 era, not the Voodoo2 era.
 
IgnorancePersonified said:
What I don't get is: why does "SLI" need a proprietary adapter? Latency?
Makes more sense to use a separate communications bus to get the final rendering of one card over to the other. That is, you wouldn't want to tax the PCIe bus, and using pass-through like the Voodoo2's, beyond degrading image quality, would require an additional connector on all boards sold.
 
Chalnoth said:
IgnorancePersonified said:
What I don't get is: why does "SLI" need a proprietary adapter? Latency?
Makes more sense to use a separate communications bus to get the final rendering of one card over to the other. That is, you wouldn't want to tax the PCIe bus, and using pass-through like the Voodoo2's, beyond degrading image quality, would require an additional connector on all boards sold.

I'm wondering if ATI's much-rumoured chipset will include an option for a "PCI Express router" separate from the northbridge. I think this is the crux of Kaleidoscope.

Since PCI Express is point to point, I'm hypothesising that it's possible that this chip takes the 16 lanes destined for the 16-lane socket and acts as a secondary "northbridge". This would enable Kaleidoscope to support two 16-lane PCI Express sockets operating in 16-lane bidirectional mode, while leaving the mobo's own northbridge thinking that it has a single 16-lane connection to the (single or double) GPU.

Kaleidoscope effectively offers each of the two GPUs in the Multi Rendering configuration a bidirectional 16-lane interconnect, to be used however the driver/GPUs desire. No need for a separate link.

Anyone have an idea of the bandwidth of the SLI link board?

It seems logical to me to have a dedicated chipset that acts as a 16-lane node (i.e. as a GPU "ghost") which creates a 32-lane private PCI Express network so that the twin GPUs can talk amongst themselves without impacting the mobo's northbridge.
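
Here's a minimal sketch of the topology I'm imagining, with every name and lane count purely my own illustration (nothing about Kaleidoscope is confirmed):

Code:
# Purely hypothetical sketch of the "ghost GPU" router topology described
# above; all names and numbers are illustrative, nothing is confirmed.

# Each entry is one point-to-point PCI Express link: (end_a, end_b, lanes).
# There is no shared bus - every link connects exactly two endpoints.
LINKS = [
    ("northbridge", "router", 16),  # the NB sees a single x16 device
    ("router", "gpu0", 16),         # private x16 into the router
    ("router", "gpu1", 16),         # private x16 into the router
]

def lanes_between(a, b):
    """Width of the direct link between two endpoints, or 0 if none exists."""
    return sum(n for x, y, n in LINKS if {x, y} == {a, b})

# The mobo's northbridge still thinks it has one x16 connection...
assert lanes_between("northbridge", "router") == 16
# ...while the GPUs talk amongst themselves over a private 32-lane network.
assert lanes_between("router", "gpu0") + lanes_between("router", "gpu1") == 32
# The GPUs have no direct link to the NB at all - the router is the go-between.
assert lanes_between("northbridge", "gpu0") == 0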

Jawed
 
I didn't think there was any inherent limit to the number of 16-lane PCI Express connections a northbridge could have under the PCI Express standard.
 
Additionally, this PEG-specific chip becomes CPU-type independent. One chip will work with AMD or Intel CPUs, leaving the design of the mobo's own northbridge to be tailored as normal.

Jawed
 
DaveBaumann said:
The costs may not be insignificant - either on a silicon or pin requirement level.
Oh, I agree. I just think it'd be a much easier (and probably cheaper) solution than the workaround Jawed mentioned.

And Jawed:
More chips on the motherboard? Not a good thing.
 
What if the architecture of Kaleidoscope includes IGP?

The same pin-out (note I'm not saying the same chip) functions either as the ghost-GPU/32-lane PEG router or as a pure IGP.

Just thinking aloud...

Jawed
 
Jawed said:
What if the architecture of Kaleidoscope includes IGP?

The same pin-out (note I'm not saying the same chip) functions either as the ghost-GPU/32-lane PEG router or as a pure IGP.
I don't see why this has any relevance. The IGP would typically be within the chip, and so would require no external lanes.
 
Chalnoth said:
Jawed said:
What if the architecture of Kaleidoscope includes IGP?

The same pin-out (note I'm not saying the same chip) functions either as the ghost-GPU/32-lane PEG router or as a pure IGP.
I don't see why this has any relevance. The IGP would typically be within the chip, and so would require no external lanes.

Because when you plug in the add-in card, you want to retain the IGP (for a third monitor). Therefore it might be sensible to route the 16-lane PEG from northbridge via IGP to the PEG socket.

That way the IGP provides intelligent multi-monitor functionality (in collaboration with the add-in card).

Anyway, Baumann's keeping quiet, which basically means we'll have to wait and see... I'm just thinking about the possibilities of Kaleidoscope as an architecture that integrates IGP and Multiple Rendering in a mobo layout that is common to both functions, with the mobo's northbridge being untouched.

Jawed
 
There's a better way than that. (But it depends on some support from the GPU.)

The nForce4 SLI motherboards use a special slot where you put a "routing card" to route lanes 8-15 from the northbridge to either lanes 8-15 of PEG slot 1, or to lanes 0-7 of PEG slot 2. Why not do the same thing, but use PEG slot 2 as the routing slot?

I.e.:
Lanes 0-7 from the NB go to lanes 0-7 on PEG slot 1.
Lanes 8-15 from the NB go to lanes 0-7 on PEG slot 2.
Lanes 8-15 on PEG slots 1 and 2 are connected to each other.

If you only want to use one card, then put a "routing card" in PEG 2. This means that all lanes are routed to PEG 1.
If you want to go SLI, then you've got 8 lanes to each card. And you've got 8 extra lanes between the cards.

This is actually a bit better than the first impression suggests, because it means that each part (NB, GPU1, and GPU2) has an x16 interface to either of the other two, as long as there's no communication with the third part. (8 lanes directly, and 8 lanes routed through the third part.)

And it doesn't stop there. Data could be broadcast to both cards at full x16 speed to each card.

So you could do the exact same thing as a (1+2)x PCIe bridge chip could do, but without the bridge, and even with a less complex motherboard than the current nForce4 SLI (removing the "routing slot").
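
To sanity-check the wiring, here's a little model of my own in Python (the three x8 traces are exactly the connections listed above; none of this is a real board's schematic):

Code:
# Modelling the three hard-wired x8 traces listed above (my own sketch,
# not any real board's schematic).
TRACES = [
    ("NB",   "PEG1"),  # NB lanes 0-7    -> PEG1 lanes 0-7
    ("NB",   "PEG2"),  # NB lanes 8-15   -> PEG2 lanes 0-7
    ("PEG1", "PEG2"),  # PEG1 lanes 8-15 <-> PEG2 lanes 8-15
]
LANES_PER_TRACE = 8

def direct_lanes(a, b):
    """Lanes wired directly between two parts."""
    return sum(LANES_PER_TRACE for x, y in TRACES if {x, y} == {a, b})

# Single-card mode: a passive routing card in PEG2 loops its lanes 0-7 back
# out on its lanes 8-15, so the card in PEG1 ends up with the full x16.
assert direct_lanes("NB", "PEG1") + direct_lanes("NB", "PEG2") == 16

# SLI mode: every pair of parts has x8 directly, and could reach x16 by
# also routing x8 through the idle third part.
for a, b in [("NB", "PEG1"), ("NB", "PEG2"), ("PEG1", "PEG2")]:
    assert direct_lanes(a, b) == 8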


Problems:
I don't know if there's a big hardware difference between a PCIe master and a PCIe slave. If there is, you'd need to add master functionality to lanes 8-15 of the GPU.

While the added support for lanes 8-15 would be an obvious thing to put into an SLI-enabled GPU, it might not be so obvious for non-SLI GPUs (which you might want to use for a multi-monitor setup). So I hope the PCIe standard has some way to tell cards to ignore lanes.


This is, by the way, a design I proposed when I first read about PCIe. I think this idea is obvious enough that I'm surprised to see designs like the nForce4 SLI "router slot" instead.
 
It's been suggested to me by a couple of people outside of NVIDIA that the link connector may actually be a PCIe x1 lane anyway.
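
For scale, a quick back-of-the-envelope using the standard PCIe 1.x spec figures (the x1 link itself is just the rumour above, not anything confirmed):

Code:
# Back-of-the-envelope bandwidth for PCIe 1.x links, to put a rumoured
# x1 bridge link in perspective.  Spec figures: 2.5 GT/s per lane per
# direction, 8b/10b encoding (10 bits on the wire per 8 bits of data).
raw_gbit = 2.5
efficiency = 8 / 10
mb_per_lane = raw_gbit * efficiency * 1000 / 8   # = 250 MB/s per direction

x1_link = 1 * mb_per_lane     # 250 MB/s  - the rumoured bridge link
x16_slot = 16 * mb_per_lane   # 4000 MB/s - the PEG slot itself

# Even x1 moves a fair number of finished frames: a 1600x1200, 32-bit
# frame is about 7.7 MB (decimal megabytes).
frame_mb = 1600 * 1200 * 4 / 1e6
print(f"x1: {x1_link:.0f} MB/s, x16: {x16_slot:.0f} MB/s, "
      f"~{x1_link / frame_mb:.0f} frames/s over x1")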
 
Jawed said:
Because when you plug in the add-in card, you want to retain the IGP (for a third monitor). Therefore it might be sensible to route the 16-lane PEG from northbridge via IGP to the PEG socket.
This has nothing to do with adding an additional 16-lane PCI express connection to the chipset. Anyway, I really don't see what you're getting at here.
 
Chalnoth said:
Jawed said:
Because when you plug in the add-in card, you want to retain the IGP (for a third monitor). Therefore it might be sensible to route the 16-lane PEG from northbridge via IGP to the PEG socket.
This has nothing to do with adding an additional 16-lane PCI express connection to the chipset. Anyway, I really don't see what you're getting at here.

PCI Express is point to point. It is not a bus.

Jawed
 
Chalnoth said:
IgnorancePersonified said:
What I don't get is: why does "SLI" need a proprietary adapter? Latency?
Makes more sense to use a separate communications bus to get the final rendering of one card over to the other. That is, you wouldn't want to tax the PCIe bus, and using pass-through like the Voodoo2's, beyond degrading image quality, would require an additional connector on all boards sold.

Didn't V2 Single/SLI image degradation come from using a VGA passthrough cable to the 2D card and not the SLI 80-pin (?) connector?

Btw, that reminds me, didn't Dave say nVidia's SLI connector will not be bundled with the cards but with nForce4 mobos only? If that's true then I think nVidia is shooting itself in the foot. Not too much, of course, since they will probably charge more for nForce chipsets anyway, but it would put a stop to anyone else's SLI hopes.

And what are the chances of a third-party connector? Any legal issues with that?
 
Mordenkainen said:
Btw, that reminds me, didn't Dave say nVidia's SLI connector will not be bundled with the cards but with nForce4 mobos only? If that's true then I think nVidia is shooting itself in the foot. Not too much, of course, since they will probably charge more for nForce chipsets anyway, but it would put a stop to anyone else's SLI hopes.
Apparently the connectors are manufactured separately by each motherboard company, so I don't see why they couldn't build them for non-NF4 SLI boards.
 
What I'm thinking is: it's limiting in its appeal unless all (Nvidia-based) cards have this option. If the PCB for an SLI-capable card is sold as some sort of "Ultra" version of a chip, i.e. a manufacturer who uses it has to pay extra and pass that on to the consumer, then it sounds like a bit of a gamble in some ways for the manufacturer, depending on the amount of money involved. If every card is SLI capable out of the box, has the connectors, the adapters are freely available, and the 6800GT SLI is the same price as the 6800GT, then it's a moot point. I won't be "into it" this time around, but if there is significant market penetration and the price is right then it would be an option in the future.

If it's a 1-lane connection then it does sound like a technology that is... as Mike Magee has put it so well, a "marchitecture" rather than an architecture, unless there is some killer technical aspect to it. :?:
 
Fodder said:
Apparently the connectors are manufactured separately by each motherboard company, so I don't see why they couldn't build them for non-NF4 SLI boards.

That's good to know, thanks.
 