Crossfire = 1 Master to multiple slaves? 3+ cards....

I just noticed while reading Tom's preview that he had something no one else had, which was this pic:

MultiVPU Status
http://graphics.tomshardware.com/graphic/20050602/ati_crossfire-06.html

It shows options of up to 4 Requested Slaves.....

Would it be possible to have 1 Master connected to 2 or more Slaves?

You'd have to have a MB like Gigabyte's prototype with 4 16x PCIe slots, and you'd need a new dongle that daisy-chains the 3+ cards together, but do you guys think ATi would allow 3 or more cards working on the same image? AFR & Supertiling certainly wouldn't be affected; Scissor mode I don't know.

It seems more possible for ATi's Crossfire to do it than nVidia's SLI........

Just thinking out loud....... :)
 
If you want to connect more than 2 cards you will IMHO need additional master cards, because each master card can only combine its own image with one external image.

3 card config:

1: Slave -> Image Part 1 Output
2: Master -> Image Part 1 Input / Image Part 1+2 Output
3: Master -> Image Part 1+2 Input / Image Part 1+2+3 Output
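
To make the idea concrete, here's a toy sketch (Python, purely hypothetical names, nothing from ATi) of the daisy-chained compositing described above, where each master only ever merges its own image with the single external image arriving on its input:

def render(card_id, frame):
    # Each card renders only its assigned part of the frame.
    return {f"part_{card_id}": f"frame {frame} pixels from card {card_id}"}

def composite(own_image, incoming_image):
    # A master's compositing engine: merge its own output with the one
    # external image arriving on the dongle input.
    merged = dict(incoming_image)
    merged.update(own_image)
    return merged

def chain(num_cards, frame):
    image = render(1, frame)                    # card 1 is the slave
    for card_id in range(2, num_cards + 1):     # cards 2..N are masters
        image = composite(render(card_id, frame), image)
    return image                                # the last master drives the screen

print(chain(3, frame=0))                        # parts 1, 2 and 3 combined, as in the 3-card config above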
 
Demirug said:
If you want to connect more than 2 cards you will IMHO need additional master cards, because each master card can only combine its own image with one external image.

3 card config:

1: Slave -> Image Part 1 Output
2: Master -> Image Part 1 Input / Image Part 1+2 Output
3: Master -> Image Part 1+2 Input / Image Part 1+2+3 Output

So do you think that the image compiler chip (that's how I think of it anyway) on each Crossfire Master card would also allow another Master to control it? (i.e., Super-Master card)

All I'm looking for is more than 2 cards on one image.............
 
Karma Police said:
So do you think that the image compiler chip (that's how I think of it anyway) on each Crossfire Master card would also allow another Master to control it? (i.e., Super-Master card)
Why would that card need to "control" the composition chip?
That chip is set up by the driver and then simply does its work, without any external communication except incoming image data.
 
All the ATI diagrams have the monitor output looking like it comes off the bottom DVI of the master, not the dongle. Why have a dongle output then? Because otherwise the master can't run dual-DVI. If that's the case, then multiple cards are easy - you just daisy-chain them. All you need then is for the software to accommodate it, by telling each card to render less of each scene or what have you (supertiling = fewer tiles, scissor = a smaller proportion of the screen, plus the ability to render just the middle bits, AFR = every third/fourth frame).
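
Just to illustrate the arithmetic of that split (made-up tile and screen numbers, nothing official):

def workload_share(num_cards, mode, total_tiles=64, screen_height=1200):
    # Hypothetical per-card share for each of the three modes mentioned above.
    if mode == "supertiling":
        return f"~{total_tiles // num_cards} of {total_tiles} tiles per card"
    if mode == "scissor":
        return f"a ~{screen_height // num_cards}-pixel-high band per card"
    if mode == "afr":
        return f"every {num_cards}th frame per card"
    raise ValueError(mode)

for mode in ("supertiling", "scissor", "afr"):
    print(mode, "->", workload_share(4, mode))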

[edit] Duh, doesn't matter which output you use, cos they're both DVI anyway. Either way, software aside, the only big deal is the latency of the compositor, as you'd have to wait for the image to pass through three of them, and the end card would have to wait for two before it could move on, unless there's room for buffering it all.
Also, that'd give you, uh, 28xAA?
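
Back-of-envelope on that latency worry, assuming (hypothetically) each compositing hop adds a fixed delay and nothing gets buffered:

t_hop_ms = 1.0                                  # assumed delay per compositing hop, not a real figure
for cards in (2, 3, 4):
    print(f"{cards} cards: {(cards - 1) * t_hop_ms} ms extra output latency")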
 
I agree that you would need one slave and 3 masters to be able to connect 4 cards together like this:

Slave
\
Master
\
Master
\
Master
|
Screen ;)
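
Counting the parts that layout implies for N cards (just the arithmetic, not anything ATi has confirmed):

def chain_parts(num_cards):
    # One slave at the top of the chain; every other card must be a master,
    # and every link in the chain needs its own dongle connection.
    return {"slaves": 1, "masters": num_cards - 1, "dongle_links": num_cards - 1}

print(chain_parts(4))   # {'slaves': 1, 'masters': 3, 'dongle_links': 3}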
 
mashie said:
I agree that you would need one slave and 3 masters to be able to connect 4 cards together like this:

Slave
\
Master
\
Master
\
Master
|
Screen ;)
Just think about all the dongles! :oops: ;)
 
You'd need to have a motherboard that would support this first. I'd be highly surprised if anybody did.
 
Karma Police said:
incurable said:
Just think about all the dongles! :oops: ;)
I think there would only be a two card dongle, three card dongle, etc...
While I guess you could make a single dongle that could do that, it would still only daisy-chain the various master cards' DMS in/outs, with each "compositing engine" combining its own card's output with the input from the previous one(s).

Chalnoth said:
You'd need to have a motherboard that would support this first. I'd be highly surprised if anybody did.
Wasn't Gigabyte showing a dual x8+x8 PCIe lane motherboard recently? Then again, it was based on two nForce 4 SLI chipsets (one Intel, one AMD), so it lacked the ATi Crossfire "magic sauce".

Anyways, a quad-R480/R520 setup would be quite interesting to see, if only for a publicity stunt like a new über-3DMark05 record. The number of evaporator cooling units and/or LN2 tanks alone would make it a sight to remember. :eek: :oops:
 
Yeah, was about to say. 4x8 mobos exist already. Additionally, how hard would it be, just out of interest, to make a chip which added additional lanes? Like, for example, one with HT links on both sides that'd just sit between either chip and the SB, or between the NB and the SB, and give you potentially as many extra lanes as you want? Presumably, with the stuff that's normally on an SB, the extra latency wouldn't be a huge deal, and it'd let you do all kinds of extra crazy stuff without waiting for full chipset support.
 
Charmaka said:
Yeah, was about to say. 4x8 mobos exist already. Additionally, how hard would it be, just out of interest, to make a chip which added additional lanes? Like, for example, one with HT links on both sides that'd just sit between either chip and the SB, or between the NB and the SB, and give you potentially as many extra lanes as you want? Presumably, with the stuff that's normally on an SB, the extra latency wouldn't be a huge deal, and it'd let you do all kinds of extra crazy stuff without waiting for full chipset support.
Not hard at all, the chips you need already exist.

The nForce Pro 2200 by itself gives you 20 PCIe lanes. However, it is designed to work with up to three nForce Pro 2050 companion chips, each adding another 20 PCIe lanes.

So now, if any kind mobo manufacturer could give us a board with one 2200 and three 2050s, we would have 80 PCIe lanes to spread out over 4 PCIe x16 slots, plus another 16 lanes for RAID controllers, NICs and a PPU ;)
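
Spelling out that lane budget (the per-chip lane counts are the ones quoted above; the 4 x16 + 16 split is just one possible allocation):

LANES_2200 = 20                                 # lanes from the nForce Pro 2200
LANES_2050 = 20                                 # lanes per 2050 companion chip
total = LANES_2200 + 3 * LANES_2050             # 80 lanes in total
graphics = 4 * 16                               # four full x16 slots = 64 lanes
leftover = total - graphics                     # 16 lanes left for everything else
print(total, graphics, leftover)                # 80 64 16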
 
Works for me :) Is the 2200 suitable for a gaming rig? I mean, obviously it's gonna cost a bomb, but are there problems with latencies, for example?
 
Both the 2200 and the 2050 chips hang off the CPUs on the HTT bus. It is, after all, a workstation/server chipset based on nForce4, so I would be surprised if it was any worse.
 
mashie said:
Both the 2200 and the 2050 chips hang off the CPUs on the HTT bus. It is, after all, a workstation/server chipset based on nForce4, so I would be surprised if it was any worse.
I'd be surprised if it wasn't worse. The typical usage that workstation/server chipsets are put through is completely different from that of home systems, and they put a much higher emphasis on stability.
 
Yeah, ECC all over again - that was what prompted me to ask. Well, that and the fact that if that was an easier solution Gigabyte probably would've used it.
 
Charmaka said:
Yeah, ECC all over again - that was what prompted me to ask. Well, that and the fact that if that was an easier solution Gigabyte probably would've used it.
One reason is that nForce Pro is designed for Opteron CPUs and not A64s. Also, I have a feeling it is cheaper to "glue" together two consumer-grade chips.
 
IIRC, the nForce 4 PRO (the 2200+) can run Opterons WITHOUT ECC RAM. It's one of the things that attracted me in the first place.

Mind you, I read that over at AnandTech more than a while ago now. A month is a year in the computer industry, so that might have changed by now, but it would be nice if it was still true.
 