ATI 'Crossfire' practical issues

geo said:
Would certainly be one of the great con jobs if they do it.
I'm starting to expect that out of ATi, they've gotten a whole lot better at being sneaky and actually pulling off some surprises. (I blame it all on Huddy...he's sneaky, ya gotta watch him. ;) )
 
digitalwanderer said:
geo said:
Would certainly be one of the great con jobs if they do it.
I'm starting to expect that out of ATi, they've gotten a whole lot better at being sneaky and actually pulling off some surprises. (I blame it all on Huddy...he's sneaky, ya gotta watch him. ;) )

Well, a couple weeks ago everybody and their brother knew ATI was launching R520 at Computex. Maybe some senior somebody at ATI turned to some other senior somebody and said "go mindf**k 'em".

8)
 
{Sniping}Waste said:
Geeforcer said:
I think the reason {Sniping}Waste is asking about it is that we are all trying to figure out how an R420-R520 setup would work in an SM3.0 game.

With all the info out now, in-the-box thinking is getting nowhere, so I'm thinking outside the box to look for answers with the info out there. The strange thing is that with an SLI setup there is no master or slave: both cards are treated the same. But from the info coming out now, it looks like ATI's setup requires a master card for it to work. There is no master/slave arrangement with SLI, so what is ATI doing? I think what they're coming out with is not SLI but something different. If they're making a master card, then what is it controlling?

All the talk is about SLI, but nobody has considered that it might not be SLI but something else. It's strange that in this forum, with all the people and knowledge here, there has been no talk about it being something else.

SM3.0 code is still SM3.0 code; even if it doesn't use any SM3.0-specific features, it is still compiled as SM3.0.
Look at the new SC: it has only two modes, SM3.0 and SM1.3.
You can't run the SM3.0 mode on SM2.0 hardware, even if the hardware could easily run those shaders.
For a driver-based solution you'd need to recompile the shader code, inside the driver, into something compatible that the older hardware can chew on, which would be a slow and painful process.
But with support from the API, the OS, and the app, you could run two or more paths and send the relevant data to the right card.
IMHO this level of co-processing will be impossible to reach, or at least nowhere near practical.
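The profile-tagging point above can be sketched in a few lines of Python. All names here (`ShaderBinary`, `gpu_accepts`, `recompile_for`) are invented for illustration; they are not any real D3D or driver API:

```python
from dataclasses import dataclass

# Ordering of pixel-shader compile targets
PROFILE_LEVEL = {"ps_1_3": 13, "ps_2_0": 20, "ps_3_0": 30}

@dataclass
class ShaderBinary:
    profile: str  # the target profile the compiler stamped on the binary
    source: str   # high-level source (a real driver rarely has this!)

def gpu_accepts(binary, hw_max_profile):
    # The runtime checks the stamped profile, not the instruction mix:
    # a trivial shader compiled as ps_3_0 is still rejected by SM2.0 hardware.
    return PROFILE_LEVEL[binary.profile] <= PROFILE_LEVEL[hw_max_profile]

def recompile_for(binary, profile):
    # What a driver-level solution would have to do: go back to a
    # higher-level representation and re-target it. Slow, and only possible
    # when the shader uses no SM3.0-only features.
    return ShaderBinary(profile=profile, source=binary.source)

simple = ShaderBinary("ps_3_0", "mul r0, v0, c0")  # SM2.0-level math
print(gpu_accepts(simple, "ps_2_0"))                           # False
print(gpu_accepts(recompile_for(simple, "ps_2_0"), "ps_2_0"))  # True
```

The rejection happens purely on the stamped profile, which is the poster's point: the binary format, not the math inside it, decides where it can run.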
 
Is it possible to have one card working on SM3.0 code and have the other card that's not capable of SM3.0 working on something else?
 
egore said:
Is it possible to have one card working on SM3.0 code and have the other card that's not capable of SM3.0 working on something else?

Thank you for pointing out what DOGMA1138 missed. What you posted is exactly what I posted, which is exactly what DOGMA1138 missed.
 
egore said:
Is it possible to have one card working on SM3.0 code and have the other card that's not capable of SM3.0 working on something else?
Not in a way that is faster than having the SM3.0 card doing all the work alone in the first place.
 
In fairness to Dogma, he may be trying to imply that it would be impossible to send the SM3.0 stuff to the master card and just have the slave deal with SM2.0 and below at a driver level.

Me? I have no clue if it's really possible at that level, but it seems like it should be possible at the shader compiler/driver level. :|
 
Maybe the slave card in that situation won't do any shading work at all and will only assist with AA or a z-only pass for stencil shadowing.
 
Xmas said:
egore said:
Is it possible to have one card working on SM3.0 code and have the other card that's not capable of SM3.0 working on something else?
Not in a way that is faster than having the SM3.0 card doing all the work alone in the first place.
Why should it do all the work alone? They can both share an SM2.0 path.

digitalwanderer said:
In fairness to Dogma, he may be trying to imply that it would be impossible to send the SM3.0 stuff to the master card and just have the slave deal with SM2.0 and below at a driver level.

Me? I have no clue if it's really possible at that level, but it seems like it should be possible at the shader compiler/driver level. :|
Yes, it seems to be possible if you recompile the code at the driver level,
which is going to be dreadfully slow.

Basically, what in theory you'd need to do is this:
If we're talking about pixel shaders, you'd send all the info to the "master" card; when a shader program that can run under a lower shader model is found, you'd need to identify all the pixels related to it, somehow mark them, and send them to the secondary card; recompile the shader code into something the secondary card can process; rasterize the pixels; send them back to the primary card; and place them in the final image. And since there is a large overdraw ratio, you'd need to do this a lot.
Frankly, I don't see this happening, do you?

egore said:
Maybe the slave card in that situation won't do any shading work at all and only assists in AA or z-only pass for stencil shadowing.
AA and stencil shadowing are not as stressful as shader processing; you won't gain much from it.
Yes, all the things you suggested are possible in some way or another, but you are missing the point:
IT'S NOT WORTH IT. Managing such things in the driver will not be even close to worth the effort, and I won't be going too far if I say you'd get the performance of software rendering in most cases. It's oh so much simpler and better to force the app to use paths that both cards can process. That's how MVP's mix'n'match will work.

Note to self: do not make long posts at 6-7am after 36+ hours without sleep :oops:
I can barely speak my own language at the moment; I'll fix all the grammar and spelling mistakes some other time.
I'm going to pass out on the way to my bed. Good night.
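The pipeline described in that post (recompile lowerable shaders, render them on the secondary card, ship the pixels back, repeat per overdraw) can be turned into a toy cost model. Everything below is invented for illustration; the cost units are arbitrary, not measurements:

```python
# Toy cost model of the driver-level split described above: every batch
# whose shader can be lowered to SM2.0 pays a recompile cost, plus a
# readback-transfer cost multiplied by the overdraw ratio.

def split_cost(batches, overdraw=3.0):
    # made-up units: shading = pixels/1000, recompile = 50 per shader,
    # readback transfer = (pixels * overdraw)/100
    render = 0.0
    overhead = 0.0
    for pixels, lowerable in batches:
        render += pixels / 1000              # nominal shading cost
        if lowerable:
            overhead += 50.0                 # re-target the shader in the driver
            overhead += pixels * overdraw / 100  # ship shaded pixels back, per overdraw
    return render, overhead

# one frame: four batches of 100k pixels, three of them SM2.0-lowerable
frame = [(100_000, True), (100_000, True), (100_000, True), (100_000, False)]
render, overhead = split_cost(frame)
print(render, overhead)  # 400.0 9150.0 -> overhead dwarfs the shading it offloads
```

The exact numbers are meaningless; the shape of the result is the poster's argument: with realistic overdraw, the transfer and recompile overhead swamps whatever shading work the secondary card takes over.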
 
{Sniping}Waste said:
The R520 will do all the SM3.0 and long SM2.0 shaders, passing the standard SM2.0 and lower shaders to the R420. The same with vertex shaders.
This is about as possible as a RIVA TNT suddenly jumping up and running SM3 code faster than a GeForce 6800 Ultra.
 
DOGMA1138 said:
Yes, it seems to be possible if you recompile the code at the driver level,
which is going to be dreadfully slow.

Basically, what in theory you'd need to do is this:
If we're talking about pixel shaders, you'd send all the info to the "master" card; when a shader program that can run under a lower shader model is found, you'd need to identify all the pixels related to it, somehow mark them, and send them to the secondary card; recompile the shader code into something the secondary card can process; rasterize the pixels; send them back to the primary card; and place them in the final image. And since there is a large overdraw ratio, you'd need to do this a lot.
Frankly, I don't see this happening, do you?
Of course that won't happen, because it would be slower than having the master card do the work alone, which is precisely what I wrote above.
 
Besides all the aforementioned, I don't think such an awkward compilation scheme would even work if the two GPUs don't have the same frequencies.

This whole thing is silly anyway; anyone with the money for a dual-PEG motherboard, a suitable PSU, and an R520 would just sell the R4xx, fetch another R520, and call it a day.
 
Ailuros said:
Besides all the aforementioned, I don't think such an awkward compilation scheme would even work if the two GPUs don't have the same frequencies.

This whole thing is silly anyway; anyone with the money for a dual-PEG motherboard, a suitable PSU, and an R520 would just sell the R4xx, fetch another R520, and call it a day.

What would be interesting is the future, when WGF 2.0 comes out.


Say you buy a WGF 2.0 ATI card, and then two or three years later you need more speed. You buy a third-generation WGF 2.0 ATI card and use that as the master and the other as the slave. This way, if the feature set didn't change much, you'd get a lot of performance gain.

It would be better than having to stick with the same generation of card, though I don't know how likely this is.
 
The problem, as I see it, is that the lower-performance card won't add much to the performance of the high-end card, so it'd be better to either use the old card in a different machine or sell it off.

There's the added difficulty that any sort of AFR scheme just won't work with two different cards (consistently alternating one fast frame and one slow frame would look horrible).
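That pacing objection can be made concrete with a toy calculation. The frame times below are invented for illustration, and real AFR pipelining is more complicated, but the oscillation is the point:

```python
# AFR with mismatched cards: frames alternate between a fast and a slow
# GPU, so presentation intervals oscillate even though the average frame
# rate looks healthy on paper. Times in milliseconds, invented numbers.

fast_ms, slow_ms = 10.0, 20.0  # e.g. new card vs old card

intervals = [fast_ms if i % 2 == 0 else slow_ms for i in range(8)]
avg_fps = 1000.0 * len(intervals) / sum(intervals)
print(intervals)              # 10, 20, 10, 20, ... -> visible judder
print(round(avg_fps, 1))      # 66.7 "fps" on average, but every other frame is late
```

A benchmark reporting only the average would call this 66.7 fps; the viewer sees the 10/20 ms alternation as stutter.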
 
jvd said:
Ailuros said:
Besides all the aforementioned, I don't think such an awkward compilation scheme would even work if the two GPUs don't have the same frequencies.

This whole thing is silly anyway; anyone with the money for a dual-PEG motherboard, a suitable PSU, and an R520 would just sell the R4xx, fetch another R520, and call it a day.

What would be interesting is the future, when WGF 2.0 comes out.

Say you buy a WGF 2.0 ATI card, and then two or three years later you need more speed. You buy a third-generation WGF 2.0 ATI card and use that as the master and the other as the slave. This way, if the feature set didn't change much, you'd get a lot of performance gain.

It would be better than having to stick with the same generation of card, though I don't know how likely this is.

GPU refreshes or next-generation cards are, in the majority of cases, clocked higher and also carry higher-specced memory (rarely lower frequencies, thanks to other improvements). As I said, in order to get two GPUs to work in either SFR or AFR, their frequencies would have to be the same.

Would you theoretically be able to get an R360 at 360MHz to work with an R420 at 520MHz? Even if it were possible, you'd logically have to find a level where both operate at the same clock, say both at 400MHz; but then why would you even buy an R420 for such a config in the first place?

Besides, any feature change would be a culprit for such a combination: fundamentally change just one algorithm in a future generation and a whole can of worms is opened. Even if it were possible, I doubt any IHV would waste resources on such a project.
 
Would you theoretically be able to get an R360 at 360MHz to work with an R420 at 520MHz? Even if it were possible, you'd logically have to find a level where both operate at the same clock, say both at 400MHz; but then why would you even buy an R420 for such a config in the first place?

What I'm thinking of is more like an R300 and an R360 used together.


Say ATI comes out with the Radeon 1, a full WGF 2.0 card clocked at 500MHz with 16 pipelines. Now, WGF 2.0 is meant to last as long as DX9 (which is at almost four years now), so say three years down the road ATI has an 800MHz card with 32 pipelines and the full WGF 2.0 feature set. You could hook the old card up to the new one. While it's not going to give you the same bang as two 800MHz cards, it will give you a better bang than a single 800MHz card.


Or even just the difference of a year, between an R300 and an R360. They both support the same features, so why not let them work together? I don't see why they couldn't make a dynamic load balancer.
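A dynamic load balancer of the kind suggested here could, in principle, work like a feedback loop on the split-frame boundary. The sketch below is a toy simulation with invented per-scanline costs and gain, not how any real driver does it:

```python
# Toy dynamic load balancer for split-frame rendering: after each frame,
# move the screen split line toward the GPU that finished its half faster.

def rebalance(split, time_top, time_bottom, gain=0.05, height=1080):
    # split = number of scanlines assigned to the top GPU
    imbalance = (time_top - time_bottom) / max(time_top, time_bottom)
    split -= int(gain * imbalance * height)
    return max(1, min(height - 1, split))

# GPU A is twice as fast as GPU B: cost per scanline 1 vs 2 (arbitrary units)
split = 540  # start with an even split of a 1080-line frame
for _ in range(50):
    time_top = split * 1               # GPU A renders the top part
    time_bottom = (1080 - split) * 2   # GPU B renders the bottom part
    split = rebalance(split, time_top, time_bottom)
print(split)  # settles near 720, i.e. roughly a 2:1 split matching the speed gap
```

The equilibrium is where both halves take equal time (split × 1 = (1080 − split) × 2, i.e. 720 lines), which is the intuition behind letting a faster card take a proportionally larger slice of the frame.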
 
It seems most of you have missed the most useful combinations of different cards.

1) Adding a card similar to the one you've already got. Since it's an old card by now, it's cheaper, and it's a good thing that you don't have to find the exact same card. This is of course not possible if the card you've got can't act as a master.

2) Combining your old high-end card with a new mid-range card, the two having roughly the same performance. Again, a cheap upgrade.
 
http://www.theinquirer.net/?article=23522

Boy, I sure hope this is true, and if anything understated. But I'll refrain this time from playing my broken record about SLI/MVP being uninteresting to me until it lets you crank up the IQ beyond current single-card solutions. Oh, wait, too late. . . :)
 
jvd said:
Say ATI comes out with the Radeon 1, a full WGF 2.0 card clocked at 500MHz with 16 pipelines. Now, WGF 2.0 is meant to last as long as DX9 (which is at almost four years now), so say three years down the road ATI has an 800MHz card with 32 pipelines and the full WGF 2.0 feature set. You could hook the old card up to the new one. While it's not going to give you the same bang as two 800MHz cards, it will give you a better bang than a single 800MHz card.

Assuming the same feature set:

16 × 500MHz = 8 GPixels/s
32 × 800MHz = 25.6 GPixels/s

Overclock the first to, say, 550MHz and underclock the second to the same level:

16 × 550 + 32 × 550 = 26.4 GPixels/s

Notice anything?
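The fill-rate arithmetic can be double-checked in a few lines of Python, using pipelines × clock as a stand-in for peak pixel fill rate:

```python
# Peak fill rate in GPixels/s: pipelines x clock (MHz) / 1000

def gpix(pipes, mhz):
    return pipes * mhz / 1000

print(gpix(16, 500))                              # 8.0  -> old card at stock
print(gpix(32, 800))                              # 25.6 -> new card at stock
print(round(gpix(16, 550) + gpix(32, 550), 1))    # 26.4 -> both locked to 550MHz
```

Which is the "notice anything?": the matched-clock pair barely beats the new card running alone at its stock 800MHz.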

That's a weird theory anyway, since there's no guarantee that features won't change at all during the WGF 2.0 era. Assume one IHV decides to implement an entirely new antialiasing algorithm, with not only higher sample densities but a lower framebuffer/memory footprint than before. It would be idiotic in such a case to dump the advanced GPU back onto the old algorithm, and AA wouldn't end up any faster than on a dual-board config.

Identical frequencies have to be a precondition; you can't have two GPUs sharing a frame, or rendering alternating frames, when they run at different frequencies.

Or even just the difference of a year, between an R300 and an R360. They both support the same features, so why not let them work together? I don't see why they couldn't make a dynamic load balancer.

Because the hardware obviously wasn't laid out for it. Both SLI and MVPU use specialized hardware; GPUs have to be either "SLI-" or "MVPU-ready".

SLI isn't just a bridge between the cards plus a driver, and neither is MVPU just a cable between the GPUs plus a driver.
 
geo said:
http://www.theinquirer.net/?article=23522

Boy, I sure hope this is true, and if anything understated. But I'll refrain this time from playing my broken record about SLI/MVP being uninteresting to me until it lets you crank up the IQ beyond current single-card solutions. Oh, wait, too late. . . :)

There was a rumor about this a few weeks ago,
though I wonder what these 10x and 14x modes are; I only expected 8x and 12x modes (and maybe 4x and 2x modes). Most probably it's about combining two rendered images to do 2x supersampling.
 