Opinions needed on this Interview

What's the price differential on an ATI mobo w/IG vs a mobo without? That could lead to some increased interest in regular ATI mobos as well. I hope the B3D review will try out this combo to see what kind of performance gains it produces. If you can get an extra, say, 20% performance this way for not a lot of cash by having an IG mobo, that might be an attractive route, particularly for the more price/performance minded.
 
The Inq is reporting something similar. What's the desktop equivalent of the integrated chip? I wonder if the master slave thing is just to allow the use of older cards or if it's going to be like that with next gen dual-gpu setups as well.
 
One of the more interesting things I saw in the Hexus piece is the implication that the R480 or higher is required to be a master. Given that the whole "availability refresh" logic turned out to be... uhh, not all that compelling ;) ...did they actually slip some hardware enabling into R480, and was that as much the point as anything else?

Edit: This part "Apparently the first solution available will be based on ATi RADEON X850 XT" in my mind also decouples the necessity to have the R520 and MVP launches be simultaneous, for whatever that means. Could R520 launch later than Computex now?
 
My interpretation of the master/slave thing is that one card needs the hardware to shunt data to the second card and then reassemble the complete image for output, but the second card basically just acts as normal as far as it's concerned. In other words, the driver sends the data to the "master" card, which does the chopping up and passes the "slave's" share along to it over the PCIe bus. The second card treats the incoming data as it would standalone, rendering a picture with a bunch of black squares. This is then output as normal (probably over DVI, I'd imagine) into the dongle-thing, which is somehow patched back into the "master" card (do all X850s have video in on the backplate, or is it going to need to be plugged into an internal port?). The master, knowing which bits were rendered by which card, can then patch the image back together and output it to the monitor.

Presumably PCIe is a requirement, which gives a legitimate reason for waiting this long and explains why you need a PCIe card (duh) (for full-duplex ability as much as anything else? Not sure this would be much fun on a half-duplex bus). Otherwise the slave card doesn't have to do anything different from normal, except possibly to know to cull everything in tiles it's not doing before it starts, and the master card just needs the splitting-up and reassembling hardware. In all other respects they're both acting as normal.

This would also suggest that this setup doesn't require anything special on the mobo other than two PCIe slots of the correct size (will it work in 8x? I'd assume so), as there's nothing particularly special going on (unless integrated graphics are involved?) or at the driver end, apart from just remembering to send data to the master only.

[edit] Whoa, paragraphs.
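The tile-splitting idea described above can be sketched roughly like this. Purely illustrative: the tile size, the checkerboard assignment, and all names here are my own assumptions, not anything ATI has confirmed.

```python
# Hypothetical sketch of a master/slave tile split: the screen is divided
# into fixed-size tiles and alternate tiles are assigned to the "master"
# and "slave" cards in a checkerboard pattern. Each card would render only
# its own tiles (the rest stay black), and the master would recombine them.

TILE = 32  # assumed tile edge in pixels (illustrative, not ATI's value)

def assign_tiles(width, height, tile=TILE):
    """Return a dict mapping (tile_x, tile_y) -> owning card."""
    assignment = {}
    for ty in range((height + tile - 1) // tile):
        for tx in range((width + tile - 1) // tile):
            # Checkerboard: even tiles go to the master, odd to the slave.
            assignment[(tx, ty)] = "master" if (tx + ty) % 2 == 0 else "slave"
    return assignment

tiles = assign_tiles(1920, 1200)
master_share = sum(1 for owner in tiles.values() if owner == "master") / len(tiles)
print(f"{len(tiles)} tiles, master renders {master_share:.0%}")
```

A static checkerboard like this would split the pixel count almost exactly in half, which fits the idea that neither card needs to know much about the other beyond which tiles to skip.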
 
Charmaka said:
My interpretation of the master/slave thing is that one card needs to have the hardware to shunt data to the second card and then reassemble the complete image for output, but that the second card basically just acts as normal as far as it's concerned. In other words, the driver just sends the data to the "master" card, which does the chopping up stuff and passes the stuff to be done by the "slave" along to it over the PCIe bus.

This bit doesn't add up for me. Both cards will need access to all screen assets so the master card won't need to 'decide' what to send to the slave board. The dongle is probably mostly used to feed the master board and maybe in the other direction as well to send completed render target buffers to the slave card.
 
...fair point, completely failed to consider that. Drivers will have to send full data to both cards. Guess that removes one additional mod to the master. As to the dongle, I still think it makes more sense for it to be a simple uni-directional video feed, with master -> slave data being handled by the PCIe bus. If they were going to have a separate bidirectional connection between both cards it'd make more sense to have it internally. To me it only makes sense as an external connection from the backplate if it's just passing video across to be assembled on the master. This configuration also makes some sense of ATI's comments about Nvidia's SLI connector - such an arrangement would remove the need for any special connections on either card (video in on the master aside).
 
Oh good point. Sending data from the master to slave via the VGA dongle doesn't make sense. Wonder why the dongle is only needed for certain combinations of cards...
 
I have a feeling these new unannounced "master cards" have DVI IN connectors. It would be the logical solution to get the frame buffer over from slave to master without signal degradation. More or less all ATI cards since the 9700PRO have had DVI out.

The DVI bandwidth is quite a bit if I got the values correct:

1920 x 1200 x 60Hz x 32bit colour / (1024 x 1024 x 1024) gives you a nice 4.11GBit/s uplink from the slave (equivalent of PCIe 16x in one direction)...

Oh hi btw, I think this is my first post here. :D
 
mashie said:
The DVI bandwidth is quite a bit if I got the values correct:

1920 x 1200 x 60Hz x 32bit colour / (1024 x 1024 x 1024) gives you a nice 4.11GBit/s uplink from the slave (equivalent of PCIe 16x in one direction)...

Oh hi btw, I think this is my first post here. :D
Single link DVI has 165MHz pixel clock, transmitting 24 bit per pixel (coded as 30 bits for error correction/transition minimization): 495 MB/s. PCIe x16 is 8 times that, don't confuse bits with bytes.

But what matters is, it's enough to send the framebuffer from one card to another. A "SLI bridge" connector seems more elegant to me, though.
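The arithmetic above is easy to sanity-check. A quick calculation under the stated assumptions (single-link DVI, 165 MHz pixel clock, 24 bits of colour per pixel, first-generation PCIe at 250 MB/s per lane per direction):

```python
# Verify the single-link DVI vs PCIe x16 bandwidth figures quoted above.

pixel_clock_hz = 165_000_000   # single-link DVI pixel clock
bits_per_pixel = 24            # colour payload per pixel on the wire

# DVI payload bandwidth: 165 MHz x 24 bits = 3.96 Gbit/s = 495 MB/s.
dvi_bytes_per_s = pixel_clock_hz * bits_per_pixel // 8
print(dvi_bytes_per_s / 1_000_000, "MB/s")       # 495.0 MB/s

# First-generation PCIe x16: 16 lanes x 250 MB/s, per direction.
pcie_x16_bytes_per_s = 16 * 250_000_000
print(round(pcie_x16_bytes_per_s / dvi_bytes_per_s, 1), "x")  # ~8.1x

# And 1920x1200 at 60Hz needs 138.24 Mpixels/s, comfortably under 165 MHz.
print(1920 * 1200 * 60 / 1_000_000, "Mpixels/s")
```

So the link really does carry 495 MB/s of pixel data, roughly an eighth of a one-direction PCIe x16 pipe, which is plenty for shipping a finished framebuffer across.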
 
Xmas said:
But what matters is, it's enough to send the framebuffer from one card to another. A "SLI bridge" connector seems more elegant to me, though.

Yes, but then you would be limited to two cards that include said SLI bridge. Particularly when you are introducing SLI, being able to leverage already existing hardware on the market would be a nice feature.
 
Joe DeFuria said:
Xmas said:
But what matters is, it's enough to send the framebuffer from one card to another. A "SLI bridge" connector seems more elegant to me, though.

Yes, but then you would be limited to two cards that include said SLI bridge. Particularly when you are introducing SLI, being able to leverage already existing hardware on the market would be a nice feature.

AMR doesn't do a much better job of this since it's limited to PCIe but you do have a point. Can't wait to see how all this turns out.
 
TBH I think the "only high-end" thing is just Hexus speculating why they're going with a hardware connection at all, given their criticism of the SLI bridge. As to elegance, I think ATI's system wins in many ways, particularly if they do just have a DVI input on the masters. It certainly wins in practical terms.
 
I don't think either is more elegant. Nvidia requires custom logic to support SLI. ATI requires custom logic on the master board. Nvidia requires a connector across the cards. ATI requires a connector across the cards (although this may not be required in all card combos). ATI adds another dimension with dedicated master and slave hardware, though, and I'd like to see how the marketing for this works.
 
Charmaka said:
TBH I think the "only high-end" thing is just Hexus speculating why they're going with a hardware connection at all given their criticism of the SLI bridge. As to elegance, in many ways I think ATI's system wins in many ways, particularly if they do just have a DVI input on the masters. It certainly wins in practical terms.
Why? It's certainly nice to use available bandwidth that would otherwise be wasted, but the internal solution is still faster, and bidirectional. And it's more aesthetically pleasing ;)
 
Xmas said:
Charmaka said:
TBH I think the "only high-end" thing is just Hexus speculating why they're going with a hardware connection at all given their criticism of the SLI bridge. As to elegance, in many ways I think ATI's system wins in many ways, particularly if they do just have a DVI input on the masters. It certainly wins in practical terms.
Why? It's certainly nice to use available bandwidth that would otherwise be wasted, but the internal solution is still faster, and bidirectional. And it's more aesthetically pleasing ;)

Sure, it's faster, and it looks good if you prefer angles to curves (a nicely-implemented DVI-DVI connector would look sweet IMO), but it loses in terms of design/implementation elegance IMO. Nvidia's setup requires load-balancing in the drivers, identical cards, custom PCBs, precise distance between cards, a reasonably expensive bridge component etc. ATI's just needs the driver to understand to send the data to both cards, what's presumably a fairly small tweak to the actual chip to work out which tiles should be rendered by each card, another larger core tweak to recombine (unless it's on a separate chip on the PCB), and a cheap cable to connect the two. Moreover, it doesn't need identical cards, works backwards with cards people already own, probably works seamlessly with onboard graphics and should scale in all directions with ease. That's the way I see it, anyway.
 
Xmas said:
Single link DVI has 165MHz pixel clock, transmitting 24 bit per pixel (coded as 30 bits for error correction/transition minimization): 495 MB/s. PCIe x16 is 8 times that, don't confuse bits with bytes.

But what matters is, it's enough to send the framebuffer from one card to another. A "SLI bridge" connector seems more elegant to me, though.
Doh! Will try to post when fully awake next time ;)
 
ATI does have one up on Nvidia for sure when it comes to flexibility in combining different hardware. But other than that they are equal. Nvidia has claimed to have zero driver overhead for load balancing (highly unlikely) but it's naive to think that AMR will be performed completely in hardware. I don't think anyone with an SLI/AMR setup cares how the cards are connected as long as the performance is there.
 
trinibwoy said:
Charmaka said:
and should scale in all directions with ease.

Do you know something we don't? ;)

Heh, not in the slightest. Probably should make clear that my "contacts in the industry" amount to reading the intarweb, and mainly B3D when it comes to the nitty gritty. I know less than you :p I'm just speculating on the implications of this if Hexus' info is correct. If it's card-independent, only requires a hard connection between cards for transferring the actual image, supports more than two chips and slave capability is available in every card in the line, there doesn't seem to be much holding it up really.
 