SLI Thought

PCIe allows every pair to have a different trace length. Only the two traces within a pair need to be the same length.

I can certainly see that as being the case for lanes going to separate connectors (i.e. two different x1 lanes can have different trace lengths), as they are just serial connections. However, is that still the case when there are multiple lanes going to a single connector/device?

I'm not sure if I remember this correctly, but I had thought that multi-lane slots operate by "spilling over" into the next lane if the bandwidth requires it - i.e. in an x16 connector lane 1 will always be used, lane 2 will be used if the quantity of data to be transmitted is greater than a single lane can handle, lane 3 will be used if the data is greater than two lanes can handle, and so on. In that case I could see it being important for the trace lengths on multi-lane connectors to be the same, otherwise the timing when the signals reach the device at the other end would be a little funky.

[edit] - well, having another look at a graphics card, it does appear that the visible traces go more or less straight to the core, regardless of whether they are in the middle of the connector (short traces) or at the outer edges (longer traces). I wonder why it appears that all 16 lanes are routed through the SLI switch connector on an SLI nForce then.
 
DaveBaumann said:
PCIe allows every pair to have a different trace length. Only the two traces within a pair need to be the same length.

I can certainly see that as being the case for lanes going to separate connectors (i.e. two different x1 lanes can have different trace lengths), as they are just serial connections. However, is that still the case when there are multiple lanes going to a single connector/device?

Yes it is.

PCIe16BoardLayout.PNG


Part of the Intel reference design for the connection of an x16 PCIe slot. You can see the different lengths of the yellow lines.

DaveBaumann said:
I'm not sure if I remember this correctly, but I had thought that multi-lane slots operate by "spilling over" into the next lane if the bandwidth requires it - i.e. in an x16 connector lane 1 will always be used, lane 2 will be used if the quantity of data to be transmitted is greater than a single lane can handle, lane 3 will be used if the data is greater than two lanes can handle, and so on. In that case I could see it being important for the trace lengths on multi-lane connectors to be the same, otherwise the timing when the signals reach the device at the other end would be a little funky.

Maybe I will find the description of the right behavior for this case in the Spec. But it is a large document...
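
In case it helps, my understanding (hedged - I haven't gone back to the spec to check the exact wording) is that a multi-lane link doesn't "spill over" lane by lane: the bytes are striped across all configured lanes, and the receiver deskews the lanes before reassembling the stream, which is why the lanes don't all have to be the same length. A toy Python sketch of that model (the function names and the skew model are my own illustration, not anything taken from the spec):

Code:
# Toy model (my own illustration, not the actual PCIe link-layer framing):
# the transmitter stripes bytes round-robin across all configured lanes, and
# the receiver buffers ("deskews") each lane until they line up again, so a
# modest per-lane skew from unequal trace lengths doesn't corrupt the stream.

IDLE = None  # stand-in for idle symbols while a slower lane's data is in flight

def stripe(data: bytes, num_lanes: int) -> list:
    """Transmitter side: distribute bytes round-robin across the lanes."""
    lanes = [[] for _ in range(num_lanes)]
    for i, b in enumerate(data):
        lanes[i % num_lanes].append(b)
    return lanes

def add_skew(lanes, skew_per_lane):
    """Channel model: a longer trace delays its lane by a few symbol times."""
    return [[IDLE] * skew + lane for lane, skew in zip(lanes, skew_per_lane)]

def deskew_and_merge(skewed_lanes) -> bytes:
    """Receiver side: absorb each lane's skew (the deskew buffer in real
    hardware), then interleave the lanes back into a single byte stream."""
    aligned = [[b for b in lane if b is not IDLE] for lane in skewed_lanes]
    out = []
    for i in range(max(len(lane) for lane in aligned)):
        for lane in aligned:
            if i < len(lane):
                out.append(lane[i])
    return bytes(out)

payload = bytes(range(64))
lanes = stripe(payload, 16)                                  # all 16 lanes carry data at once
skewed = add_skew(lanes, skew_per_lane=[i % 4 for i in range(16)])
assert deskew_and_merge(skewed) == payload                   # skew absorbed, data intact

The point being that as long as the receiver's deskew budget covers the worst-case lane-to-lane difference, the lanes don't have to arrive in lockstep - hence the relaxed length matching between pairs.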
 
A PCIe x16 connector has 16 lanes.
A current routing connector has 24 lanes (if my pic is right).
If all lanes went through the routing card, the connector would need 40 lanes.
I think 24 lanes would fit the description "about the same number of connections on the two, if not more on the connector", and 40 would not.
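
Working through the numbers as I read them (the breakdown below is my own assumption, based on only the back 8 lanes being switched, as described further down):

Code:
# Assumed pin-count arithmetic for the routing card (my reading, not spelled
# out above): the paddle card only switches the back 8 lanes of the x16 link.
switched = 8
current_card = switched + switched + switched   # MCP in + slot 1 + slot 2 = 24
all_through  = 16 + 16 + 8                      # full x16 in + x16 to slot 1 + x8 to slot 2 = 40
print(current_card, all_through)                # 24 40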


Demirug is right, a big benefit of serial comms is that the traces can have different lengths. Sadly he's also right that the maximum trace length might be a problem with my version.
 
DaveBaumann said:
I wonder why it appears that all 16 lanes are routed through the SLI switch connector on an SLI nForce then.

Only the back 8 of the 16 lanes are remapped with nForce4 (only 16 pass through - take a look at the back of the mainboard near the MCP and at the top where the traces route to the peripheral slots).
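
To make that concrete, here's how I picture the remapping (an assumption on my part, inferred from the description above rather than from any nForce4 documentation):

Code:
# Assumed lane remapping done by the nForce4 paddle card (my inference):
# lanes 0-7 are hard-wired to slot 1; only lanes 8-15 change destination.
single_mode = {mcp_lane: ("slot1", mcp_lane) for mcp_lane in range(16)}

sli_mode = {mcp_lane: ("slot1", mcp_lane) for mcp_lane in range(8)}
sli_mode.update({mcp_lane: ("slot2", mcp_lane - 8) for mcp_lane in range(8, 16)})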

There's a diagram I showed you in November last year - I'm not sure if you remember it - which shows a continuity connector layout. I can resend it if you like (it's got some other info relevant to this discussion on it too).
 
The big difference I can see between the current method and the proposed ones is that with the current method the PCI Express interface on each card only needs to speak to one controller.

The proposed methods with links between the cards would mean that each PCI-E controller has to negotiate with and address one (in single mode) or two independent controllers.
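
A rough way to picture the difference (my own modelling of the two topologies, not anything from a spec or datasheet):

Code:
# Current: each card's PCI Express port has exactly one upstream link partner.
# Proposed: a direct card-to-card link means each port must train and address
# one (single-card mode) or two independent controllers.
current = {
    "GPU0": ["chipset"],
    "GPU1": ["chipset"],
}
proposed = {
    "GPU0": ["chipset", "GPU1"],   # chipset plus a peer link
    "GPU1": ["chipset", "GPU0"],
}
for name, topo in (("current", current), ("proposed", proposed)):
    for gpu, partners in topo.items():
        print(f"{name}: {gpu} talks to {len(partners)} controller(s): {partners}")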

I don't think this would be a significant problem if we could convince GPU manufacturers to include a stripped-down PCI Express switch on each GPU. Considering how similar Nvidia's SLI connector is to an x1 connector, this may already partially be the case.

The other issue would be getting mainboard manufacturers to eat the cost of more complicated tracing for a niche product, though current SLI boards already have more complicated routing as-is.
 
3dilettante said:
The big difference I can see between the current method and the proposed ones is that with the current method the PCI Express interface on each card only needs to speak to one controller.

The proposed methods with links between the cards would mean that each PCI-E controller has to negotiate with and address one (in single mode) or two independent controllers.

I see building a multi-port switch on each graphics card as very unlikely due to increased cost. You are increasing the cost of every card just in case it might be used in a multi-card configuration.

Of course NVidia's SLI cards already have increased cost due to the on-die SLI support and the SLI connector on each card.

It would make more sense to concentrate the complexity (hence cost) of a multi-card configuration in the motherboard (or hardware that comes with it, such as Basic's idea of a PCI-Express switching card).

So, in the end I think it's better to have a motherboard that can do dual-PEGx16 switching (in addition to supporting other PCI Express x1 slots) with graphics cards that have no specialised components to support dual-GPU operation.

Jawed
 
I just want to point out that my earlier comments with regard to an unofficial "SLI" motherboard (which turns out to be MSI's K8N Neo4 Platinum) were a complete misapprehension on my part. Total fuck-up, in other words.

This is the motherboard:

http://www.hexus.net/content/reviews/review_print.php?dXJsX3Jldmlld19JRD05NTc=

Amongst other things I was under the impression that the SLI connector joining the two graphics cards directly wasn't required, but I was mistaken.

Also, instead of being PEGx8 + PEGx8 as in a conventional AMD-based SLI motherboard, it's PEGx16 + PEGx4, as far as I can tell.

Finally, I think NVidia has changed the Forceware drivers to prevent MSI's solution from working - rather than, as I said before, "changed the BIOS".

Jawed
 