aaronspink
Veteran
P2P or not, there's a possibility of flooding when you consider that all the data being transferred has to pass through the interface of the destination video card (the one with the monitor cable attached to it). That's the weak link in the chain, to say nothing of any possible limitations in the internal bandwidth of the central hub.
Not anything close to a weak link compared to the existing solution, which is at best a 1 GB/s connection over a bridge link.
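To put that ~1 GB/s bridge figure in perspective, a back-of-the-envelope sketch (the uncompressed 4-bytes-per-pixel framebuffer and the halving for 2-way AFR are my assumptions, not figures from this thread):

```python
# Rough bandwidth needed to ship finished frames across the inter-GPU link,
# assuming an uncompressed 4-byte-per-pixel framebuffer.
def frame_traffic_gb_s(width, height, hz, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * hz / 1e9

# In 2-way AFR only the slave card's frames cross the link, i.e. half of them.
for w, h, label in [(1920, 1080, "1080p"), (2560, 1600, "1600p")]:
    total = frame_traffic_gb_s(w, h, 60)
    print(f"{label}@60Hz: {total:.2f} GB/s total, {total / 2:.2f} GB/s over the link")
# 1080p@60Hz: 0.50 GB/s total, 0.25 GB/s over the link
# 1600p@60Hz: 0.98 GB/s total, 0.49 GB/s over the link
```

Both of those fit under a ~1 GB/s bridge; 4K is where it starts to get tight.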
What measurements? With what number of video cards running CrossFire, at what screen resolution, and at what refresh rate? Go 120 Hz (which many hardcore gamers are very eager to do) and you again double the bandwidth requirements over the current de facto standard. The bandwidth ceiling's gonna be flying towards your head at that rate.
BeHardware, among others, has done extensive performance testing of the impact of PCIe bandwidth. Last year they tested a variety of PCIe configurations, with and without CF, across a wide variety of workloads.
As far as max res at 4K goes, there's a nice limiter for that: the cards themselves.
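For the 4K numbers in question, a quick sketch (uncompressed framebuffer again; the ~985 MB/s per PCIe 3.0 lane is the usual post-128b/130b-encoding figure):

```python
# 4K frame traffic at 60 Hz vs 120 Hz, against PCIe link ceilings.
bytes_per_frame = 3840 * 2160 * 4            # ~33.2 MB per uncompressed 4K frame
for hz in (60, 120):
    print(f"4K@{hz}Hz: {bytes_per_frame * hz / 1e9:.2f} GB/s")
# 4K@60Hz: 1.99 GB/s
# 4K@120Hz: 3.98 GB/s  (the doubling the post above worries about)

lane = 0.985                                 # GB/s per PCIe 3.0 lane
print(f"PCIe 3.0 x16: {16 * lane:.1f} GB/s, x8: {8 * lane:.1f} GB/s")
# PCIe 3.0 x16: 15.8 GB/s, x8: 7.9 GB/s
```

Even halved to x8, the slot has several times the headroom the old bridge offered.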
Also consider that of Intel's current CPUs, only Sandy Bridge-E and Ivy Bridge-E offer full PCIe x16 interfaces when running more than one board. Few people buy such systems to game on, due to their rather massive cost. That halves the available bandwidth for CrossFire, i.e. a problem for 4K/60 Hz at least.
You'll only run into a problem if you are running multiple 4K monitors in Eyefinity. And if you are doing that...
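Putting a number on that extreme case (the two- and three-head counts are my examples):

```python
# Multiple 4K monitors in Eyefinity, the only case where it gets tight.
per_4k_head = 3840 * 2160 * 4 * 60 / 1e9     # ~1.99 GB/s per 4K@60Hz head
for heads in (2, 3):
    print(f"{heads} x 4K@60Hz: {heads * per_4k_head:.1f} GB/s")
# 2 x 4K@60Hz: 4.0 GB/s
# 3 x 4K@60Hz: 6.0 GB/s  (still under a PCIe 3.0 x8 link's ~7.9 GB/s)
```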
Really? That'd be extremely surprising. There are a lot of pins in those connectors, particularly on AMD cards; enough for at least 4 differential signalling links, I should think.
You might want to go look at an x1 PCIe interface sometime.
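For scale on that "4 differential links" estimate, here's what four links deliver at standard PCIe per-lane rates (the per-generation figures are standard; the 4-link count is just the estimate quoted above):

```python
# Bandwidth of 4 differential links at standard PCIe per-lane rates.
lane_gb_s = {"PCIe 1.x": 0.25, "PCIe 2.0": 0.5, "PCIe 3.0": 0.985}
for gen, rate in lane_gb_s.items():
    print(f"{gen}: 4 lanes = {4 * rate:.2f} GB/s")
# PCIe 1.x: 4 lanes = 1.00 GB/s  (roughly the old bridge's ballpark)
# PCIe 2.0: 4 lanes = 2.00 GB/s
# PCIe 3.0: 4 lanes = 3.94 GB/s
```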