Will ATi jump on the SLI bandwagon?

trinibwoy said:
PatrickL said:
Or more likely capturing an empty niche :)

If it works the niche will have at least one member - ME!! :LOL: According to the INQ, Nvidia is reporting 7229 in 3DMark05 with dual 6800 Ultras. Now if the INQ is also right about a 5000 score with a single Ultra, then that is nowhere near double the performance on the SLI setup.

Heh, $500 for 1.3 times the performance of the X800 XT PE, or 1.44 times the performance of the 6800 Ultra.

Hopefully with more mature drivers they'll get the performance up to around 1.7x that of a single card. Otherwise double the cost isn't all that great.
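For what it's worth, those ratios can be sanity-checked from the scores quoted above. A quick Python sketch, taking the INQ's 7229 and 5000 at face value and back-deriving a hypothetical ~5560 for the X800 XT PE from the 1.3x claim (that last number is an assumption, not a reported score):

```python
# Rough value check of the SLI numbers quoted in this thread.
# 7229 (dual Ultra) and 5000 (single Ultra) are the INQ figures;
# the X800 XT PE score is hypothetical, inferred from the 1.3x claim.
sli_score = 7229       # dual 6800 Ultra, per the INQ
single_ultra = 5000    # single 6800 Ultra, per the INQ
x800xt_pe = 5560       # assumed, not confirmed

print(f"SLI vs single Ultra: {sli_score / single_ultra:.2f}x")               # ~1.45x
print(f"SLI vs X800 XT PE:   {sli_score / x800xt_pe:.2f}x")                  # ~1.30x
print(f"Scaling vs two ideal Ultras: {sli_score / (2 * single_ultra):.0%}")  # 72%
```

So even on Nvidia's own numbers, the pair is delivering about 72% of what two perfectly scaling Ultras would.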
 
jvd said:
trinibwoy said:
PatrickL said:
Or more likely capturing an empty niche :)

If it works the niche will have at least one member - ME!! :LOL: According to the INQ, Nvidia is reporting 7229 in 3DMark05 with dual 6800 Ultras. Now if the INQ is also right about a 5000 score with a single Ultra, then that is nowhere near double the performance on the SLI setup.

Heh, $500 for 1.3 times the performance of the X800 XT PE, or 1.44 times the performance of the 6800 Ultra.

Hopefully with more mature drivers they'll get the performance up to around 1.7x that of a single card. Otherwise double the cost isn't all that great.

That's assuming that 7229 isn't limited by the 3.4GHz Pentium CPU in the box. When we start getting high-res, high-AA/AF comparisons, then I'll be able to make my decision :D
 
That's assuming that 7229 isn't limited by the 3.4GHz Pentium CPU in the box. When we start getting high-res, high-AA/AF comparisons, then I'll be able to make my decision

How do you know? Have you tested 3DMark05 with SLI? Can you back up your claims?
 
jvd said:
That's assuming that 7229 isn't limited by the 3.4GHz Pentium CPU in the box. When we start getting high-res, high-AA/AF comparisons, then I'll be able to make my decision

How do you know? Have you tested 3DMark05 with SLI? Can you back up your claims?
Damn dude, what claims!? All he said was he'd have to wait for high res AA/AF tests to find out if the SLI setup is CPU limited in 3DMark05.
 
Fodder said:
Damn dude, what claims!? All he said was he'd have to wait for high res AA/AF tests to find out if the SLI setup is CPU limited in 3DMark05.

Don't bother man. This guy sees what he wants to. To summarize:

trinibwoy said:
I think SLI is a viable concept and if Nvidia delivers I'll consider building a dual-GPU rig

jvd said:
I think SLI is crap and will never work so let me jump into every thread and beat people over the head with my negative vibes
 
Don't forget that these 3DMark05 scores might be at 1024x768 without AA or AF. SLI will be of most benefit in situations where the resolution, AA, and/or AF settings are bumped up.
 
jimmyjames123 said:
Don't forget that these 3DMark05 scores might be at 1024x768 without AA or AF. SLI will be of most benefit in situations where the resolution, AA, and/or AF settings are bumped up.

I'd have thought it safe to say that NV will have quoted the best numbers possible in the press release - it would be daft for them not to. If anything, this would indicate to me that SLI doesn't provide more of a benefit with AA/AF etc., and could even provide less of a benefit when these features are applied.

An alternative is that their SLI software support isn't up to scratch yet. Even if the score was to increase a decent amount with future drivers, this could be a 'panic' press release because R420 'out-3DMarks' NV40 at the moment.
 
SLI is a backwards-looking technology. Nvidia's argument against 3dfx's SLI tech was that 1) it was inefficient, and 2) their one-chip solutions would be cheaper and faster.

It'll be interesting to see if Nvidia is able to keep pace with ATi's one-chip solutions despite taking engineering side roads like SLI.

The whole argument for SLI revolves around the two-card setup being the most powerful out there. If there's a one-card solution that is as fast or faster, SLI suddenly looks as ugly as heck, as it did in the 3dfx days.
 
duncan36 said:
The whole argument for SLI revolves around the two-card setup being the most powerful out there. If there's a one-card solution that is as fast or faster, SLI suddenly looks as ugly as heck, as it did in the 3dfx days.

If there's a one-card solution that is as fast or faster than two NV?? cards, then Nvidia has a lot bigger problems than the viability of SLI.
 
Mariner said:
I'd have thought it is safe to say that NV will have quoted the best numbers possible in the press release - it would be daft for them not to.
1024x768 0xAA/0xAF is pretty much the standard for 3DMark though, so sure, they could put out a press release saying "we get 4000 at 1600x1200/8xAA/16xAF", but your average reader will just see the 4000 and relate it directly to the X800 XT's 5000+.
 
I think it's telling that there is no test configuration information at all in Nvidia's press release. For all we know that score could be at 640x480, or using overclocked boards, or with post-processing turned off. They gave all the details on the system configuration, so they could easily have specified the test settings as well, or at least said something like "default".

I guess that's the benefit of "benchmark tests performed by Nvidia". You can press release a score achieved with unknown settings on an unavailable product that no one can confirm. I suppose that's the only response they could come up with for getting soundly beaten by ATI on every existing product.

The risk, of course, is that they've let slip that SLI probably isn't going to achieve the kinds of performance gains they were projecting earlier (i.e. 1.8x). At 7229, that means a $1000 6800 Ultra SLI setup (probably closer to $1500 if you count the motherboard and PS upgrades that will almost certainly be necessary) is less than 25% faster than a $500 X800 XTPE. And it pretty much kills the argument for using SLI on anything slower than a 6800 GT. Especially when you account for the fact that 3DMark05, being a graphics-focused benchmark, is probably going to be less CPU bound than a typical game (which has AI, collision detection, physics, sound, networking, etc. to worry about).
 
The less CPU bound, the better SLI will perform. SLI is not backwards looking. Both ATI and NV are building cards that can operate in clustered mode. For the very highest end, e.g. DCC and offline, it is forward looking. And of course, there are always mad people who will want the ultimate system and are willing to pay for it.

I think the ultimate SLI realization will be to make supersampling perform well. 8xS and 16xS will probably be feasible.
 
DemoCoder said:
The less CPU bound, the better SLI will perform. SLI is not backwards looking. Both ATI and NV are building cards that can operate in clustered mode. For the very highest end, e.g. DCC and offline, it is forward looking. And of course, there are always mad people who will want the ultimate system and are willing to pay for it.

I think the ultimate SLI realization will be to make supersampling perform well. 8xS and 16xS will probably be feasible.

I remember specifically the arguments Nvidia used to attack 3dfx's SLI. They seemed legitimate because, as we all know, Nvidia won and 3dfx lost.

The real argument is efficiency; SLI is inefficient as all hell. Yes, you can arrange a massive core-melting array of SLI cards and it will be powerful, or you can use that engineering time to better arrange cores on die, which is more efficient, more cost-effective, and just all-around better.

I agree with old Nvidia and disagree with new Nvidia.
 
This speculation is getting tiring. Why won't Nvidia just release their damn SLI bridge chip and let everybody see what the thing can do in actual games?
 
Maybe because it's not working as advertised yet? We're now 3 months and counting since the announcement, and as far as I know there hasn't been a single independent test result posted anywhere.
 
GraphixViolence said:
Maybe because it's not working as advertised yet? We're now 3 months and counting since the announcement, and as far as I know there hasn't been a single independent test result posted anywhere.

Maybe they got scared into an early announcement fearing that the Alienware SLI would steal their thunder? That just sums up Nvidia for the last couple of years - they are totally reactive to what others are doing. They are no longer a proactive, leading company.
 
Bouncing Zabaglione Bros. said:
Maybe they got scared into an early announcement fearing that the Alienware SLI would steal their thunder?

I think that is most definitely the case considering the expected timeline for availability of dual-PEG boards.
 
duncan36 said:
I remember specifically the arguments Nvidia used to attack 3dfx's SLI. They seemed legitimate because, as we all know, Nvidia won and 3dfx lost.
You're wrong, since you refer to NV's arguments against 3dfx's _multi-chip boards_, not _SLI_ itself. Current NV SLI is more like Voodoo 2 SLI, where you could _add_ another _card_ to the one you already owned and get a nice speed bump from it. I do hope you remember how Voodoo 2 SLI demolished Riva TNT in every speed test back in 1998, or I'd say that your memory is somewhat fragmented :rolleyes:
 
It's my understanding that 3DMark05 will not be as GPU-limited as 3DMark03 is. In that case, if I wanted to waste that much money on a dual-GPU setup, there would theoretically be little to nothing holding me back from also going for a dual-CPU config.

Whether it makes any sense or not is an entire story of its own.

I think the ultimate SLI realization will be to make supersampling perform well. 8xS and 16xS will probably be feasible.

8xS is already feasible at 1280*960, at least in many flight/racing/space sims where one really doesn't need excessive framerates.

However, it's usually a no-go for demanding FPS games, and then only at mediocre to lower resolutions. Now if a single solution gets, let's say, 25fps with 8xS + some AF at 16*12 and the dual setup gets 50fps, then honestly I don't see much point in it either. It still makes far more sense to use an MSAA+AF combination and, if possible, crank up the resolution even more.

On the other hand, I don't think 16x has anything more to offer than 8xS, other than the added 2*1 SSAA; I can't get FSAA Viewer to work with 16x to see exactly what the sampling grid looks like.

That said, for a flight simulator fan, as an example, where cost is not an issue, it will be the best solution he can get.

The risk, of course, is that they've let slip that SLI probably isn't going to achieve the kinds of performance gains they were projecting earlier (i.e. 1.8x). At 7229, that means a $1000 6800 Ultra SLI setup (probably closer to $1500 if you count the motherboard and PS upgrades that will almost certainly be necessary) is less than 25% faster than a $500 X800 XTPE. And it pretty much kills the argument for using SLI on anything slower than a 6800 GT. Especially when you account for the fact that 3DMark05, being a graphics-focused benchmark, is probably going to be less CPU bound than a typical game (which has AI, collision detection, physics, sound, networking, etc. to worry about).

I recall estimating, when I first heard that 80% figure, that it should be considered the upper threshold and NOT the average.

Average performance predictions are hard to make due to the complicated way load balancing gets handled. Drivers expose three available modes so far:

1. The scene gets split up into strips, with each GPU rendering a group of strips.

2. Alternate frame rendering in combination with option (1) above.

3. Auto mode where the driver itself decides whether (1) or (2) is better.
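To make the distinction between the modes concrete, here's a toy sketch of how an "auto" mode might choose between the two strategies. This is purely illustrative guesswork, not Nvidia's actual driver logic; the function name and the heuristics are invented for the example:

```python
# Toy illustration of the SLI balancing modes listed above.
# NOT Nvidia's driver logic - all names and heuristics here are invented.

def pick_mode(frame_depends_on_previous: bool, load_is_uneven: bool) -> str:
    """Crude 'auto mode' heuristic choosing a rendering strategy."""
    if frame_depends_on_previous:
        # Reusing last frame's render-to-texture output would stall
        # alternate-frame rendering, so split the current frame instead.
        return "split-frame"
    if load_is_uneven:
        # A strip split can be rebalanced every frame; AFR cannot.
        return "split-frame"
    # Independent, evenly loaded frames pipeline nicely across two GPUs.
    return "alternate-frame"

print(pick_mode(frame_depends_on_previous=True, load_is_uneven=False))   # split-frame
print(pick_mode(frame_depends_on_previous=False, load_is_uneven=False))  # alternate-frame
```

The point of the sketch is just that the best mode depends on the workload, which is exactly why a single average scaling number is hard to quote.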

DemoCoder had a point though; hybrid AA modes will most likely scale by a factor of 1.8, or sometimes even more. The question remains, though, under exactly which situations.

I personally would love to have a solution where I could use 8xS with 8xAF at 16*12 in games like Rallisport Challenge; yet the cost is way too high for me.
 