ATI's unannounced SLI solution compatible with nforce4?

Razor1 said:
That's a possibility: they wouldn't hurt ATI's SLI, but would probably improve the performance of their own cards, as the nForce boards already do this.
Actually, I have yet to see any evidence of this in practice. Do you know of any benchmarks that highlight, for instance, a greater performance ratio of NV to ATI on an nForce vs. Intel, VIA, or some other motherboard chipset?
 
Chalnoth said:
Actually, I have yet to see any evidence of this in practice. Do you know of any benchmarks that highlight, for instance, a greater performance ratio of NV to ATI on an nForce vs. Intel, VIA, or some other motherboard chipset?
Actually, somebody did do that test; I'm not sure if it was TechReport or not. They took an ATI card and an NV card and tested the performance on a VIA, an nForce, and an Intel board. The NV card gained much more performance on the nForce motherboard. This was a while back, but I'm pretty sure that's what it said.

epic
 
Chalnoth said:
Actually, I have yet to see any evidence of this in practice. Do you know of any benchmarks that highlight, for instance, a greater performance ratio of NV to ATI on an nForce vs. Intel, VIA, or some other motherboard chipset?

Chalnoth, come on, you know better.........

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2004

Since nVidia claims that nForce3-250 performs best with nVidia's latest graphics cards, benchmarks will also compare performance of an nVidia 5950 Ultra, 9800 XT, and our standard ATI 9800 PRO on the nF3-250 chipset.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2009&p=4

You can compare these results to others that have been published to see the expected results with 5950U versus ATI 9800. There certainly does seem to be a synergy in the 5950U/nF3-250Gb combo that yields performance that is a bit better. The ATI 9800 XT performs just as well in nForce3-250 as it does in other Athlon 64 Socket 754 boards, but 5950U is a little faster.

It seems that nVidia truly has something extra for their own video cards when used with their chipsets....... So what makes you think that the same won't apply to SLI....? While I wouldn't describe this as anything like damning proof, it does show that they are willing to put in that extra bit of work for their own cards. Given the way they responded to ATI's clear lead during the 5xxx series, is it far-fetched to think that, given a similar scenario, they wouldn't do whatever it takes to make themselves look better? Especially IF they control the majority of the chipset market......

Many here complain about how little ethics matter in this industry, yet have pretty much given nVidia a pass for past transgressions. Does anyone here actually believe that nVidia has turned into a different company now that they are competitive? Can anyone here say with certainty that, given a similar lead by ATI in SLI, nVidia wouldn't return to the cheating ways of the past?
 
martrox said:
It seems that nVidia truly has something extra for their own video cards when used with their chipsets....... So what makes you think that the same won't apply to SLI....?
Do you see any difference between purposely crippling Radeon performance on their motherboards and making the nF+GF pair work better together (though I'm not THAT sure about Anand's results; I should check it out for myself...)? Those are VERY different things to me.

The first is illegal and just plain stupid for business. The second is actually good, IF the other vendor's cards show the same performance as on other vendors' chipsets, BUT the GF shows some gains over other vendors' chipsets.
 
martrox said:
It seems that nVidia truly has something extra for their own video cards when used with their chipsets....... So what makes you think that the same won't apply to SLI....?
Ah, but you see, the comparison you posted was with different AGP cards. From what I understand, PCIe works rather differently from AGP in that the performance should be much less dependent upon motherboard drivers. So I'd be much more interested if you found similar performance discrepancies with the nForce4.

And, finally, the only possible difference here is between an nForce board running an nVidia card vs. an ATI card. It's just not realistically possible for SLI to add anything new into the mix. Once again, I'm not just saying nVidia won't do it; I'm saying they can't.
 
DegustatoR said:
martrox said:
It seems that nVidia truly has something extra for their own video cards when used with their chipsets....... So what makes you think that the same won't apply to SLI....?
Do you see any difference between purposely crippling Radeon performance on their motherboards and making the nF+GF pair work better together (though I'm not THAT sure about Anand's results; I should check it out for myself...)? Those are VERY different things to me.

The first is illegal and just plain stupid for business. The second is actually good, IF the other vendor's cards show the same performance as on other vendors' chipsets, BUT the GF shows some gains over other vendors' chipsets.

Well, comparing an ATI card running on an nForce board, it still runs faster than on other boards, or right around where other boards land with the same ATI card.

But nVidia's cards just run plain faster. That's OK and acceptable business, since you are not hurting ATI's cards on their motherboards, just making sure they're equivalent or a little faster on the nForce boards. It could just be better compatibility between the nForce and GeForce driver sets. Anyway, I don't see why nVidia would hurt ATI's performance if they do make an SLI-compliant card. It's just not smart business.

Edit:

Better compatibility, because nVidia is able to tweak both sides of the software, not just one.
 
True, but there is always some margin. During engine development I would love to have access to the way caching is handled in a graphics card, but this is kept very secret by the graphics companies, so I use a trial-and-error process. That takes time, time we usually don't have, lol. But if I did have access to this information, I'm sure I could get at least a 10% increase in speed for different chipsets.

I don't know much about driver development, but something similar might be possible there as well. Now with PCI-E it's more of a standard system, since information can go both ways and both companies will be optimizing for it, so as you said, that dependence should decrease.
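
Something like this, as a very rough sketch of what I mean by trial and error: time the same bulk copy with different chunk sizes and keep whichever wins on the machine at hand. The buffer and chunk sizes here are arbitrary placeholders, not values from any real driver or engine.

Code:
#include <chrono>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    const size_t kTotal = 64 * 1024 * 1024;        // 64 MB of dummy "texture" data
    std::vector<char> src(kTotal, 1), dst(kTotal);

    size_t bestChunk = 0;
    double bestMs = 1e9;

    // Candidate chunk sizes -- in a real engine these might instead be upload
    // strategies, alignment choices, or staging-buffer sizes.
    for (size_t chunk = 4 * 1024; chunk <= 4 * 1024 * 1024; chunk *= 2) {
        auto t0 = std::chrono::steady_clock::now();
        for (size_t off = 0; off < kTotal; off += chunk)
            std::memcpy(&dst[off], &src[off], chunk);
        auto t1 = std::chrono::steady_clock::now();

        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("chunk %7zu bytes: %6.2f ms\n", chunk, ms);
        if (ms < bestMs) { bestMs = ms; bestChunk = chunk; }
    }
    std::printf("fastest on this system: %zu-byte chunks\n", bestChunk);
    return 0;
}

The tuning loop is the same idea either way: you can't see inside the card's caching, so you measure every variant you can think of and ship the winner per chipset.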
 
Razor1 said:
True, but there is always some margin. During engine development I would love to have access to the way caching is handled in a graphics card, but this is kept very secret by the graphics companies, so I use a trial-and-error process. That takes time, time we usually don't have, lol. But if I did have access to this information, I'm sure I could get at least a 10% increase in speed for different chipsets.
Sure, with AGP. From what I understand, some of the texture memory management was handled by the motherboard. This meant that the motherboard drivers could have a large impact on video card performance, and it also meant that the optimal way of doing the management would be different for different graphics cards.

But, from what I understand, PCIe is much more a "dumb interface," where there just aren't the settings available to tweak to get better performance.
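
For what it's worth, the cleanest way to check how much the motherboard still matters would be to time texture uploads with the same card on different boards. A minimal sketch, assuming GLUT just for context creation; the texture size and iteration count are arbitrary.

Code:
#include <GL/glut.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    // GLUT is only used here to get a current GL context; nothing is drawn.
    glutInit(&argc, argv);
    glutCreateWindow("upload-bench");

    const int kSize = 1024;                        // 1024x1024 RGBA texture (~4 MB)
    std::vector<unsigned char> pixels(kSize * kSize * 4, 128);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, kSize, kSize, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glFinish();                                    // finish setup before timing

    const int kIters = 100;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, kSize, kSize,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glFinish();                                    // wait for all uploads to land
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("%d uploads: %.1f ms total (%.1f MB/s)\n", kIters, ms,
                (double)kIters * kSize * kSize * 4 / (ms * 1000.0));
    return 0;
}

Run the identical binary with the same card on a VIA, an Intel, and an nForce board; if PCIe really is a "dumb interface," the numbers should come out close.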
 