Is the Intel i7-5820K (28 PCIe lanes) enough for 980 Ti SLI and PCIe storage?

It depends on how the motherboard breaks out the PCIe lanes. Some boards impose odd limitations when the 5820K is installed (e.g. the second x16 slot drops to x4, or other PCIe/M.2 slots are disabled entirely when two x16 slots are occupied). Technically speaking, 28 PCIe lanes should be plenty (x8/x8 for the GPUs leaves 12 lanes for storage and everything else), but to avoid any potential headaches you'll probably want to step up to the 5930K.
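Quick back-of-the-envelope on the lane budget (just a sketch; the slot widths below are one assumed layout, actual routing depends on the board):

cpu_lanes = 28                      # i7-5820K CPU PCIe lanes
allocation = {
    "GPU 1": 8,                     # x8
    "GPU 2": 8,                     # x8
    "PCIe/M.2 SSD": 4,              # x4
}
used = sum(allocation.values())
print(f"{used} of {cpu_lanes} lanes used, {cpu_lanes - used} left over")
# -> 20 of 28 lanes used, 8 left over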

If you decide to try the 5820K, be sure to do your homework when selecting the mobo. The PCIe lane configuration isn't always clearly documented by every board maker, especially when it comes to the 5820K.
 
In the long run, I'm not sure it's wise to skimp on GPU<->CPU bandwidth.

Intel knew very well what they were doing when they cut the PCIe bandwidth to the GPUs in half. For starters, you're effectively limited to nVidia solutions (as long as they keep using dedicated bridges for SLI; bridgeless CrossFire has to push its inter-GPU traffic over PCIe instead).
For the current straight-up AFR methods, x8 Gen3 may be enough. But for future DX12 titles where rendering is distributed differently between GPUs, GPU<->CPU communication could become much more important.
Not to mention HSA, which could also be an important factor.
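For a rough sense of scale, ballpark figures only (assuming roughly 985 MB/s per PCIe 3.0 lane after encoding overhead):

PER_LANE_GEN3_MB = 985              # ~MB/s per PCIe 3.0 lane, one direction
for width in (8, 16):
    print(f"x{width} Gen3: ~{width * PER_LANE_GEN3_MB / 1000:.1f} GB/s each way")
# -> x8 Gen3: ~7.9 GB/s each way
# -> x16 Gen3: ~15.8 GB/s each way

Whether ~7.9 GB/s per GPU actually becomes a bottleneck depends entirely on how a given title moves data around.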
 
Intel knew very well what they were doing when they cut the PCIe bandwidth to the GPUs in half. For starters, you're effectively limited to nVidia solutions (as long as they keep using dedicated bridges for SLI; bridgeless CrossFire has to push its inter-GPU traffic over PCIe instead).

Are you sure about that? A number of review websites state they use the i7-5820k in their articles on R9 290X 8GB CrossfireX performance.
 
Are you sure about that? A number of review websites state they use the i7-5820k in their articles on R9 290X 8GB CrossfireX performance.

Sorry, I didn't mean to say that bridgeless CrossFire can't work with only x8 PCIe. It's just that performance will suffer when the PCIe bandwidth is halved, and even more so once DX12 multi-adapter games start using methods other than AFR.
 
An option is the Xeon E5-1620 v3: it's basically a quad-core i7 with 40 PCIe lanes, and it's cheaper than the 5820K.

Skylake is the "worst" here, as it has just 16 PCIe lanes (but bandwidth to the chipset is doubled compared to the 4770K/4790K, so it's fine for dual x8 cards plus a PCIe SSD connected to the chipset).
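Ballpark figures for that chipset link, if it helps (DMI 2.0 behaves roughly like a PCIe 2.0 x4 link, DMI 3.0 like a PCIe 3.0 x4 link):

GEN2_LANE_GB = 0.5                  # ~GB/s per PCIe 2.0 lane
GEN3_LANE_GB = 0.985                # ~GB/s per PCIe 3.0 lane
dmi20 = 4 * GEN2_LANE_GB            # Haswell-era DMI 2.0 -> ~2 GB/s
dmi30 = 4 * GEN3_LANE_GB            # Skylake DMI 3.0 -> ~3.9 GB/s
print(f"DMI 2.0: ~{dmi20:.1f} GB/s, DMI 3.0: ~{dmi30:.1f} GB/s")
# Everything hanging off the chipset (SSD, SATA, USB, NIC...) shares that uplink.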

In theory the 5820K gives you one GPU at x16, one at x8, and x4 left over for the SSD.
I'm not sure whether all the options are fine and the difference is just psychological. "Psychologically" I'd go for the Xeon, but its clocks are locked (or you can squeeze out about +5% on the BCLK).
 
Wouldn't some board makers put an extra PCIe controller on the board,
like they do with SATA controllers? My ASUS board had three (Intel/JMicron/Silicon Image).
Or is that not possible with PCIe?
 
Those are PCIe-to-PCIe bridges, and they're fairly common: for example, a bridge with x16 on one side and two x16 links on the other. That gives you SLI at "x16" on a high-end Z97 motherboard, for instance; such a bridge is also used on dual-GPU cards like the GTX 690.

(More common still is a bridge that takes one x1 lane and makes two x1 lanes out of it, if you simply need one more lane for some controller in your design.)
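A quick sketch of why a bridge multiplies lanes but not bandwidth (using the x16-upstream, dual-x16-downstream example from above):

GEN3_LANE_GB = 0.985                      # ~GB/s per PCIe 3.0 lane, one direction
upstream_lanes = 16                       # bridge's link back to the CPU
downstream = {"GPU 1": 16, "GPU 2": 16}   # what the slots advertise

upstream_cap = upstream_lanes * GEN3_LANE_GB
peak_demand = sum(downstream.values()) * GEN3_LANE_GB
print(f"Upstream cap ~{upstream_cap:.1f} GB/s vs "
      f"~{peak_demand:.1f} GB/s if both GPUs burst toward the CPU at once")
# Each slot negotiates "x16", but simultaneous CPU-bound traffic shares ~15.8 GB/s.
# (Traffic between the two GPUs can stay inside the bridge, though.)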
 
So does that mean the OP doesn't need to worry about how many lanes the 5820K has,
because board makers will add more?
 
Well, no: if one PCIe link is split between two devices, the two devices obviously won't be able to operate at full speed at the same time.
 
So you're saying you can't add PCIe lanes (links?) via an add-on controller (like you can add SATA ports),
but only split the existing lanes?
I do know some boards have PCIe 3.0 slots and PCIe 2.0 slots. Would the PCIe 2.0 slots be supplied by a third-party controller, and would they be good enough for storage?
 
So you're saying you can't add PCIe lanes (links?) via an add-on controller (like you can add SATA ports),
but only split the existing lanes?
This is basically the same thing, just different-shaped ports between the devices. :) Whether you split a PCIe lane into multiple SATA devices with a SATA controller, or split multiple PCIe lanes into other multiples of PCIe lanes for M.2 cards or whatever those juicy fruit bubblegum SSD cards are called these days, there isn't much of a difference.

Also, bus contention usually isn't that big an issue, especially in a well-designed system, because multiple storage devices generally aren't operating at the exact same time.
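A toy example of that contention point, with made-up numbers (a hypothetical x4 Gen2 uplink feeding a controller, with a couple of SATA drives and an NVMe drive behind it):

uplink_gb = 4 * 0.5                 # hypothetical x4 Gen2 link to the controller: ~2 GB/s
devices = {                         # made-up peak throughputs, in GB/s
    "SATA SSD": 0.55,
    "SATA HDD": 0.18,
    "M.2 NVMe": 1.50,
}
busiest_single = max(devices.values())
all_at_once = sum(devices.values())
print(f"One device at a time: {busiest_single} GB/s -> fine under a {uplink_gb} GB/s uplink")
print(f"All three flat out: {all_at_once:.2f} GB/s -> "
      f"{'still fine' if all_at_once <= uplink_gb else 'now you hit contention'}")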
 
It's not the same thing. For example, my Silicon Image SATA controller adds two SATA ports; it does not split the SATA ports provided by the JMicron controller.
I am talking about adding PCIe lanes, not splitting the existing lanes.
Before CPUs contained PCIe controllers, PCIe lanes were provided by the chipset. I am asking if the chipset can still provide PCIe lanes in addition to those provided by the CPU.
 
I am talking about adding PCIe lanes, not splitting the existing lanes.
You can't do that; you can only split existing lanes, whether they're provided by the CPU or by the chipset. In most recent generations of (at least) Intel hardware, CPU lanes have typically been PCIe 3.0 and chipset lanes 2.0. As of Skylake (not sure about Broadwell, but I think not), chipset lanes are also 3.0, seemingly because of the upgrade to the DMI link between CPU and chipset.

I suppose in the past there wasn't much point in 3.0 lanes on the chipset, if the link back to the CPU would just choke up...
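Rough numbers on that, using Z97's eight chipset PCIe 2.0 lanes as the example (lane counts vary by chipset, so treat this as illustrative):

DMI20_GB = 2.0                      # ~GB/s, roughly a PCIe 2.0 x4 link
chipset_lanes = 8                   # e.g. Z97 exposes 8 chipset PCIe lanes
as_gen2 = chipset_lanes * 0.5       # ~4.0 GB/s aggregate, as they actually are
as_gen3 = chipset_lanes * 0.985     # ~7.9 GB/s aggregate, had they been Gen3
print(f"Gen2 chipset lanes oversubscribe DMI 2.0 by ~{as_gen2 / DMI20_GB:.0f}x")
print(f"Gen3 lanes would oversubscribe it by ~{as_gen3 / DMI20_GB:.0f}x")
# Either way the uplink chokes first; faster chipset lanes only pay off
# once the DMI link itself is upgraded, which is what Skylake did.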
 
You can't do that; you can only split existing lanes,
Well, that was what I was asking.
So it wouldn't be possible for the CPU to supply lanes to two PCIe graphics slots, and another controller to provide lanes to, for example, two x4 slots and an x1 slot?
And if not, is that just a limitation of CPUs that have onboard PCIe controllers?
Would CPUs that have no PCIe controller and supply no PCIe lanes be immune from this?
 
So it wouldn't be possible for the CPU to supply lanes to two PCIe graphics slots, and another controller to provide lanes to, for example, two x4 slots and an x1 slot?
Well, in a word, "no", because how would that other controller connect to the system? :) ...Through PCIe, in today's PCs, which would eat some lanes coming off of your CPU.

And if not, is that just a limitation of CPUs that have onboard PCIe controllers?
Would CPUs that have no PCIe controller and supply no PCIe lanes be immune from this?
Not really a limitation, per se. In the past, as you may remember, x86 CPUs had no peripheral I/O at all (and before the Athlon 64 they didn't even have an integrated memory controller); they connected through a bus or a port to the chipset, where all the I/O was located. So you could theoretically pile on a lot of I/O there, but to actually get any use out of it, all of that data had to travel back and forth through that one link to the CPU, which created a bottleneck and added latency to the system.

Today's integrated PCIe is actually superior to the way things used to be: not only do we have faster I/O right on the CPU than the old connection to the chipset provided, we still have a connection to the chipset on top of that, adding even more I/O capacity! :D Back in the Core 2 generation and before, a lot of peripheral I/O was still old parallel PCI, which as you may remember was a shared bus. One device doing I/O blocked all other devices on that bus, and theoretical throughput was hard-capped below even a single PCIe 1.0 x1 link... So today's point-to-point PCIe is a decided upgrade from just about every standpoint.
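For reference, the rough numbers behind that comparison (classic 32-bit/33 MHz PCI vs a single first-generation PCIe lane):

pci_shared_mb = 33e6 * 4 / 1e6      # 32-bit bus @ 33 MHz -> ~132 MB/s for the whole bus
pcie1_x1_mb = 250                   # ~250 MB/s each way, per device, point to point
print(f"Classic PCI: ~{pci_shared_mb:.0f} MB/s shared by every device on the bus")
print(f"PCIe 1.0 x1: ~{pcie1_x1_mb} MB/s per direction, per device")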
 