Nvidia went SLI because they can't compete?

You're not stepping on SLI toes. If your arguments had merit, many of us would listen. I see no trolls being irked. ATI has SLI coming. Can't wait to hear your statements on that. On my end, I am interested in ATI's solution as well. But not because it is from ATI. Rather because I am building a new gaming rig. You, on the other hand, are still stuck in your anti-Nvidia trolling, and you try to hide it with long, drivel-driven posts claiming you have legitimate reasons for your prejudice against Nvidia's solution.

Fact is, we aren't stupid Walt. Just stop talking already.

Oh, and by the way, you call me stupid yet you don't explain your two-Hummer-to-video-card analogy. Care to finally clear that up for us? Us 'stupid' people would love to be shown how your amazing comparisons work.
 
trinibwoy said:
ROFL!!! Now we need giant monitors for SLI !!?? HAHAHAHAHA!! You are hilarious!!! :LOL: :LOL:

Just how valuable is SLI at 800x600, or 1024x768? Clearly, in the review SLI is most cost-effective in terms of the frame-rate improvement it provides at 1600x1200. But this presents somewhat of a dilemma to people with smaller CRT monitors (especially 17" monitors), since at 1600x1200 it's often difficult to see what you're doing on a smaller monitor.

Myself, I find my 21" CRT is barely adequate for 1600x1200 gaming (just my preference in screen size, of course), but *I don't use* 16x12 for another reason: refresh rate. At 16x12 my 21" CRT does a max refresh of 85Hz, which means, of course, that running with vsync on caps my maximum frame rate at 85. If you know how refresh rate interacts with frame rate, you'll understand that with vsync on any frame the GPU can't finish within one refresh interval has to wait for the next one, so the 85Hz cap will often result in a max frame rate of ~42fps, or even less in some situations. But in all cases with vsync on it can never exceed 85 fps.
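
To make that concrete, here's a rough sketch of the vsync arithmetic, assuming plain double buffering (no triple buffering); the render times are purely illustrative:

```python
import math

# Illustrative only: effective frame rate under vsync with double buffering.
# A frame that misses a refresh has to wait for the next one, so the rate
# snaps to refresh_hz / 1, / 2, / 3, ...
def vsync_fps(refresh_hz, render_time_ms):
    refresh_interval_ms = 1000.0 / refresh_hz            # ~11.8 ms at 85Hz
    intervals_needed = math.ceil(render_time_ms / refresh_interval_ms)
    return refresh_hz / intervals_needed

print(vsync_fps(85, 10))    # GPU keeps up -> 85 fps
print(vsync_fps(85, 13))    # just misses a refresh -> 42.5 fps
print(vsync_fps(85, 25))    # misses two refreshes -> ~28.3 fps
print(vsync_fps(100, 13))   # same workload at 100Hz -> 50 fps
```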

The way around all of that of course is to turn vsync off. But then that is also unsatisfactory for me because I can then see a great deal of visual tearing at 16x12, which is even more distracting to my game play than a lower frame rate. So...on my 21" monitor I prefer gaming at 1280x1024 (which maxes at 100Hz refresh) or else 1152x864 (100Hz) to 1024x768 (120Hz) with a decent level of FSAA and vsync on so that I don't see the tearing I see with vsync off, while getting a better frame-rate than is possible at 16x12.

So since by far the maximum benefit in terms of frame rate for SLI is found at 16x12, and the SLI frame-rate benefit diminishes the lower you go in resolution in comparison to single-card operation, it doesn't seem at all odd to me to think that owning a monitor which does 16x12 at 100Hz or higher would be very desirable with an SLI system.

Next, what about the people who own 1024x768 or 1280x1024 LCDs as their primary monitor? If they should stick with their present monitors when buying SLI then they *will never see* the maximum frame rate advantage it supplies since they can never under any circumstances run at 16x12 at all.

In a sense, SLI presents an opposite situation from stand-alone 3d in terms of resolution. With single 3d cards the frame-rate performance generally declines the higher the resolution. But with SLI the advantage over single-card operation *increases as resolution increases*, to the point where the most cost-effective resolution for SLI, in terms of the frame-rate increase it provides, is the maximum resolution of 1600x1200. Perhaps you do not think a bigger, better monitor is required (than the one most people already own), but for these reasons I think that getting the most of the frame-rate advantage SLI provides over a single card means you have to consider your monitor in the mix as well, since to get the most SLI can provide you *have to* run at 1600x1200.

So where are your 'balanced' comments on the pros of SLI? Or are there none?

Other than raw frame-rate increases above 1024x768 when contrasted to single-card operation, which I do not dispute at all, what other advantages are there? The question for single-card users is not so much whether SLI provides a frame-rate increase above 1024x768 over their current single card; the primary question, it seems to me, is whether or not current users of single-card 3d find their frame rates above 1024x768 *too slow* at the moment with their state-of-the-art 3d cards. If so, then depending on their resolution preferences SLI might well be desirable--but if not, then it doesn't seem to me that SLI would provide them with much advantage at all.

OTOH, the disadvantages I would list as follows:

1) Cost (you can buy a lot of other stuff--like faster cpus, faster/more ram, faster/more hard drives, or a *bunch* of software--for the cost premium SLI demands over a single-card configuration). If you need a larger, better monitor that has to be considered, too.

2) Heat, noise, and power consumption (which may or may not require a different case, additional funds for water cooling or silencer coolers, a beefier PSU, etc., depending on your individual needs and preferences).

3) SLI is not transparent--games must be specifically supported by nVidia to run under SLI, and in certain modes under SLI, else the game will run on a *single card* (the master) and the slave card will not be used even though it is present and operating in the system. If configured incorrectly under SLI, the system may actually degrade a game's performance to below that of a single card--despite the fact that both cards are present and operating--due to the cpu overhead SLI requires. Some games will *never* receive any benefit from SLI for as long as the SLI system is owned. I.e., the situation isn't remotely like the difference between replacing a GF4 with a 6800U, or replacing an R8500 with an X800 XT.

In short, the point to me about SLI is that there is a whole lot more to it to consider than just the frame-rate increase it affords in some cases at resolutions above 1024x768. I think that's a very equitable, fair and accurate statement to make about it.
 
WaltC said:
You're funny--test results which you don't like you call "anomalies"--and of course the test results which support your pre-determined point of view I would imagine you'd call "representative." Sorry, but I missed the part of the review in which Dave said the heat issues were mere "anomalies" which we could all pretend didn't exist in reality and which we should ignore. Mind quoting where he said that as I surely didn't read it?...:/

LOL. So when three SLI configurations (6800, GT and Ultra) show no signs of excessive temperature and one (6600) does, and I point it out, you have the gall to accuse ME of using selective data to prove my point? ROFL. The fact remains: most SLI systems tested do not show excessive heat. You can twist all you want and latch onto the 6600 results as long as you like, but the data is there for everyone to see. I know you are probably incapable of answering a simple yes or no question, but let me try anyway: is a system operating at 36 degrees Celsius "hotter than hell"? Yes or no, Walt?


How in the name of all that is rational might you conclude that *TWO* 6800GT's running their fans at full speed might only be as loud as ONE 6800GT running its fan at full speed...? Heh...;) The human capacity for self-deceit never ceases to amuse and amaze, and of course what you say about Dave's graph and what Dave says about Dave's graph simply do not match, do they? Since Dave did the graph and the testing I'd suggest you substitute his interpretation of the numbers in it for your own...;)


It's not my fault, dear Walt, that your knowledge of acoustics is so dismal. Adding a single identical sound source results in only a marginal increase in noise, both perceived and measured. Just for reference, it would take a 10-fold increase in the number of sources for the perceived loudness to double (the scale is logarithmic, remember?). Dave's quote does not diverge from the results; the results diverge from your assertion (that an SLI configuration is excessively noisy compared to a non-SLI configuration). His comments (and data) show that a single card is already noisy. The problem is thus not limited to SLI.
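
For reference, a quick sketch of that decibel arithmetic (the 50 dB figure below is a placeholder, not a measurement from the review): two identical, uncorrelated sources add about 3 dB to the measured level, while a perceived doubling of loudness corresponds to roughly +10 dB, i.e. ten such sources.

```python
import math

# Combined sound pressure level of independent, incoherent sources (in dB).
def combine_spl(levels_db):
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

one_card  = 50.0                        # placeholder fan-noise level
two_cards = combine_spl([50.0, 50.0])   # ~53.0 dB: only +3 dB measured
ten_cards = combine_spl([50.0] * 10)    # ~60.0 dB: +10 dB, roughly "twice as loud"
print(one_card, round(two_cards, 1), round(ten_cards, 1))
```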
 
WaltC,

Regarding LCDs, they already have enough issues when it comes to gaming without using SLI as some kind of detractor. For best IQ they need to be run at their native res, and then there is considerable ghosting on all but the best LCDs. And what about single-card configurations that can do 16x12 but are limited by the native res of the LCD? This restriction is not specific to SLI; it also applies to the PEs and Ultras out there. I have a friend who bought a vanilla 6800 because his monitor is best at 10x7. You have to realize that many of your arguments apply to all levels of GPU hardware - not just high-end vs SLI.

When it comes to CRT's you must have very sensitive eyes. The only games I have ever experienced tearing on are Mafia and Freedom Fighters. I have NEVER seen it on an FPS so vsync-off is my default setting. I'm sure most others here would agree.

We also don't know how the emergence of powerful dual-GPU solutions will affect developers' high-end targets for in-game settings. Sure, the lower settings will remain for the mid-range cards. But if they start putting extreme settings in there that are much more GPU-limited than today's games, then the lower resolutions will start seeing more gains with SLI. I think that's a reasonable possibility given IHV and developer relations.

Your evaluation of the pros and cons of SLI is relatively accurate but for one thing - that single pro outweighs all the cons for many people and they are the ones purchasing SLI systems today. I find it strange that you can rail against SLI so much when it's actually selling quite well in the market. What SLI needs is a killer app that brings single GPU's to their knees - this generation is just so powerful that all the games out there are being chewed up by a single high-end card.
 
I didn't bother to read the whole thread, just the last couple of pages, so please forgive me if it has already been done ...

I can, in a way, agree with the title, but I don't exactly see a case of being unable to compete.

However, multi-chip solutions IMO are an act of desperation. They always incur inefficiencies and redundancies on top of the added hardware and software complexity required to support themselves in the first place.
A 32x1/6 single-chip NV40 derivative running at ~400MHz would stomp any and all NVIDIA SLI solutions into the ground, would work with and scale well in every application that works with current 16x1/6 NV40 chips, not just a few dozen, would require no extra platform and software support etc. The obvious problem is manufacturing.
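
A back-of-envelope sketch of that comparison, using only the pipe counts and ~400MHz clock mentioned above (theoretical peak fillrates only; real SLI loses more to synchronization, duplicated geometry work, and per-game mode restrictions, while the single wide chip would deliver its peak in every application):

```python
# Theoretical peak pixel fillrate in Gpixels/s = pipes * clock (MHz) / 1000.
def fillrate_gpix(pipes, clock_mhz):
    return pipes * clock_mhz / 1000.0

single_nv40  = fillrate_gpix(16, 400)   # one 16x1 chip:        6.4 Gpix/s
sli_two_nv40 = 2 * single_nv40          # two chips, SLI peak: 12.8 Gpix/s
hypothetical = fillrate_gpix(32, 400)   # single 32x1 chip:    12.8 Gpix/s
print(single_nv40, sli_two_nv40, hypothetical)
```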

You go multi-chip because you're out of ideas for elegant performance improvements, not because you can.

NVIDIA certainly isn't that desperate for performance boosts. For them it's just an extra ultra-high end segment. For a more drastic example, look at the XGI V8 Duo. They couldn't compete with a single-chip solution. So they went multi-chip, opened a can of worms in the process ... and still couldn't compete. I'm positive they would have had a better performing product and a lot less problems if they built a single-chip "V16" instead.
 
zeckensack said:
I didn't bother to read the whole thread, just the last couple of pages, so please forgive me if it has already been done ...

I can, in a way, agree with the title, but I don't exactly see a case of being unable to compete.

However, multi-chip solutions IMO are an act of desperation. They always incur inefficiencies and redundancies on top of the added hardware and software complexity required to support themselves in the first place.
A 32x1/6 single-chip NV40 derivative running at ~400MHz would stomp any and all NVIDIA SLI solutions into the ground, would work with and scale well in every application that works with current 16x1/6 NV40 chips, not just a few dozen, would require no extra platform and software support etc. The obvious problem is manufacturing.

You go multi-chip because you're out of ideas for elegant performance improvements, not because you can.

NVIDIA certainly isn't that desperate for performance boosts. For them it's just an extra ultra-high end segment. For a more drastic example, look at the XGI V8 Duo. They couldn't compete with a single-chip solution. So they went multi-chip, opened a can of worms in the process ... and still couldn't compete. I'm positive they would have had a better performing product and a lot less problems if they built a single-chip "V16" instead.

I don't agree with the slant (title) of this thread or everything you say. I agree about the 32 pipe GPU being superior in every which way, but that has very little to do with reality.

You can't possibly suggest Nvidia should launch a product with twice the performance of the competition, even if they could. ATI would have to respond, and they would both suffer by gulping down appetizer, main course, and dessert before they would normally be ordering their martinis.

It can be an act of desperation or a poor design decision, but only when you rely on the multi-chip solution to compete. The VSA-100 is a good example. A single unit could not compete, but a dual could. Imagine if a single VSA-100 could have been competitive with a Geforce 2 GTS. Then the V5 would have been very impressive. This, however, is muddling history and reality.

The way I see it, Nvidia is competitive on a single-GPU basis. I would more openly state that ATI is the one needing a new VPU to compete, but I fear the wrath such a comment would bring upon me. I refer to the feature set here.

SLI is offering to increase performance where a single unit is already competitive. I hardly view this as desperation. Perhaps desperation to appear so significantly more performant as to quickly regain market share and mindshare. Not desperation to keep up in terms of performance.

Your example of the XGI V8 Duo is definitely in the same category as Voodoo 5 vs. GeForce 2. The V8 just isn't competitive, and the Duo seems like a disastrous attempt to measure up. Maybe they were hoping that the thought of dual GPUs would help market it, not worrying so much about actual performance.

Again, back to the basic claim. You can't just jump the gun on the natural progress of technology and performance without taking a significant hit in the pocketbook. Done effectively with an aligned strategy - mostly amounting to annihilation of the competition - you can make net gains, especially looking forward (once the competition is destroyed and you can slow down and demand premiums), but I think Nvidia and ATI benefit so much from each other being around that neither can afford that strategy.

PS. The NV40 can be a 32x1/6 single-chip running at 400MHz under certain circumstances. (or should that be written as 16x2x1?)
 
trinibwoy said:
So that monstrosity is a real consumer product!! :oops: Good luck Asus.

Yeah, one of the advantages of dual-card sli is there is a relatively small r&d investment in the hardware and not another separate card that will sell small volume. I mean, selling 50k discrete 2-chip cards would seem like nothing. . .selling *an additional* 50k of your already-existing top end card (to be the second card in a SLI) seems like a much better deal for the manufacturer to me. . .
 
geo said:
Yeah, one of the advantages of dual-card sli is there is a relatively small r&d investment in the hardware and not another separate card that will sell small volume. I mean, selling 50k discrete 2-chip cards would seem like nothing. . .selling *an additional* 50k of your already-existing top end card (to be the second card in a SLI) seems like a much better deal for the manufacturer to me. . .

Agreed. I'm just wondering how they're going to fit that in an ATX case.
 
Here's an interesting question (to me, at least). Assume you weren't going to go SLI as a manufacturer, but set as a target the performance level of current 6800U SLI for a single-gpu card. . .could you ("you" being either major IHV) design, manufacture, and sell it at $1,000 at a profit? Assume the same volume as current SLI market for GT/U. Also assume the same manufacturing process as current 6800U. Or is SLI the *only* way you can reach these performance levels at a profit at that price point? i.e. is leveraging your existing r&d and manufacturing investment the only way to make these performance levels possible at a profit at this price? Because if the answer is "yes, this is the only way", then the only room you have to complain about SLI is in class-warfare terms.

OTOH, if the answer is "no, it could be done single-gpu if I knew a market of that size was there willing to pay $1,000", then that might have future implications as NV and ATI prove there is a significant market at $1,000. One thing to consider in evaluating this is there is some pricey memory in SLI not being fully utilized compared to the way a single-gpu implementation could utilize it. . .(to balance against the r&d cost and manufacturing cost)
 
Well, the problem is, if you could design a single card that could perform to those levels, what's preventing your competitor from doing the same, and then offering SLI technology on top of that?
 
Chalnoth said:
Well, the problem is, if you could design a single card that could perform to those levels, what's preventing your competitor from doing the same, and then offering SLI technology on top of that?

Because now it is a $2,000 price point and you have to prove the market all over again. . .maybe at the end of the day the most valuable thing SLI will provide is proof of the market size at $1,000 at relatively low risk/cost to the IHV's in proving it exists and at what size. Tho if I were them I'd want to see it for at least a couple years before I started trying to design a single-gpu solution to fill it.
 
I don't see how it could be considered an act of desperation whatsoever. They introduced SLI at the same time they introduced the 6800 series. And we know the 6800 series is one hella performing card. If the card were slow and it took SLI to compete with ATI, then it might be considered desperation. But the 6800 surely competes in every way with ATI. We just so happen to get a choice of additional performance at an extra cost. We as consumers have a choice, which is always what we want. ATI will do the same thing. The way I see it, it is good for us.
 
geo said:
Here's an interesting question (to me, at least). Assume you weren't going to go SLI as a manufacturer, but set as a target the performance level of current 6800U SLI for a single-gpu card. . .could you ("you" being either major IHV) design, manufacture, and sell it at $1,000 at a profit? Assume the same volume as current SLI market for GT/U. Also assume the same manufacturing process as current 6800U. Or is SLI the *only* way you can reach these performance levels at a profit at that price point? i.e. is leveraging your existing r&d and manufacturing investment the only way to make these performance levels possible at a profit at this price? Because if the answer is "yes, this is the only way", then the only room you have to complain about SLI is in class-warfare terms.

That's one of my dreams: go to the future, grab some kick-ass technology, and then come back to the present to sell that future technology at 10 times the price of the current one.
 
LeGreg said:
That's one of my dreams: go to the future, grab some kick-ass technology, and then come back to the present to sell that future technology at 10 times the price of the current one.

Yeah but then you mess with the future and the technology you 'stole' will be old by then :)
 
A major benefit of SLI for nVidia is that it is a unique selling point tied to their chipsets. I wouldn't be the least surprised if the decision to develop SLI was seen to be at least as strategically important for their core-logic business as for their gfx cards.
 
Has anyone figured out why SLI doesn't provide any useful performance boost for the HDR rendering style in Far Cry?

If FC worked with AFR, would SLI make the HDR rendering mode work faster?

Does the HDR in SC:CT run at faster frame rates with SLI?

In other words, is SLI actually "compatible" with floating point blended HDR?

If not, then isn't SLI a bit of a dead end? Is there a driver fix coming? Maybe it's something in the hardware? Surely it's fixable...

Jawed
 
trinibwoy said:
When it comes to CRT's you must have very sensitive eyes. The only games I have ever experienced tearing on are Mafia and Freedom Fighters. I have NEVER seen it on an FPS so vsync-off is my default setting. I'm sure most others here would agree.
I wouldn't. I see tearing all the time on my CRT at 1024x768 @ 100Hz if I don't have v-sync enabled. :?
 
wireframe said:
I don't agree with the slant (title) of this thread or everything you say. I agree about the 32 pipe GPU being superior in every which way, but that has very little to do with reality.

You can't possibly suggest Nvidia should launch a product with twice the performance of the competition, even if they could. ATI would have to respond, and they would both suffer by gulping down appetizer, main course, and dessert before they would normally be ordering their martinis.
In a way they have done that by introducing SLI. As long as SLI is expensive, niche and high margin, this doesn't look like destroying the market. And they did put themselves under pressure. I'd expect the next new high-end chip (NV50 probably) to outperform or at least be very close to two SLIed NV40s. If it doesn't, high-end nuts won't buy it, because they already have SLI rigs ...

Whatever, I think the hypothetical single-chip 32x1 NV40 could probably run for 1kUSD, too. Its existence, feasible or not, wouldn't alter the current market segmentation that much IMO. Of course it would hurt other players' mindshare, but NVIDIA-exclusive SLI does just the same.
wireframe said:
It can be an act of desperation or a poor design decision, but only when you rely on the multi-chip solution to compete. The VSA-100 is a good example. A single unit could not compete, but a dual could. Imagine if a single VSA-100 could have been competitive with a Geforce 2 GTS. Then the V5 would have been very impressive. This, however, is muddling history and reality.
I avoided using 3dfx chips as examples because nowadays graphics chips do a lot more than just rasterizing spans. A pure raster chip w/o any geometry processing, culling, or setup hardware is a perfect base architecture for SLI. No redundancy. You waste a little memory but otherwise get the full bang. Modern GPUs are different. That's why XGI's solution utterly failed. For the very same reason there are multiple SLI modes selectable in the NVIDIA drivers, and even despite that choice, relatively few applications get those large boosts.
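
As a toy illustration of why those per-game modes (and the uneven gains) exist at all -- this is a deliberately simplified model, not NVIDIA's actual driver logic -- alternate-frame rendering only approaches 2x when consecutive frames are independent; something as common as reusing last frame's render-to-texture output forces data to cross between the cards:

```python
# Toy model only (not NVIDIA's driver): two GPUs under alternate-frame
# rendering (AFR). Independent frames scale ~2x; a frame that reads a
# render target produced by the previous frame must first pull it from
# the other GPU's memory, which eats into the theoretical speedup.
def afr_schedule(num_frames, reuses_previous_frame):
    schedule = []
    for frame in range(num_frames):
        gpu = frame % 2                      # alternate whole frames
        needs_copy = reuses_previous_frame and frame > 0
        schedule.append((frame, gpu, needs_copy))
    return schedule

for frame, gpu, needs_copy in afr_schedule(4, reuses_previous_frame=True):
    note = "  <- wait for copy from the other GPU" if needs_copy else ""
    print(f"frame {frame}: GPU{gpu}{note}")
```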
wireframe said:
The way I see it, Nvidia is competitive on a single-GPU basis. I would more openly state that ATI is the one needing a new VPU to compete, but I fear the wrath such a comment would bring upon me. I refer to the feature set here.

SLI is offering to increase performance where a single unit is already competitive. I hardly view this as desperation. Perhaps desperation to appear so significantly more performant as to quickly regain market share and mindshare. Not desperation to keep up in terms of performance.
I also find NVIDIA to be competitive, to say the least. I'm not interested in NVIDIA-vs-ATI pissing contests and didn't mean to make it sound as if I was.

Re "desperation": they apparently wanted to achieve higher performance, otherwise there'd be no point in NVIDIA introducing SLI. My assertion is that they couldn't, for whatever reason, at the time, get the performance bump they wanted out of a single chip. Two choices:
1) Put up a sad face and forget about it. Wait for "natural" process and manufacturing advances. Eventually it becomes feasible to go wider -- IIRC there's a 24 "pipe" NV40 derivative in the works, so this may have already happened.

2) Decide that it's worth it to go multi-chip. Accept the drawbacks and run with it because you really want more oomph right now.

Doing something "right now, at all cost" IMO clearly is desperation. No, I don't think it'll ruin NVIDIA. I don't even think it was necessary to compete nicely. But that has little bearing on my interpretation of NVIDIA making that move.
wireframe said:
Your example of the XGI V8 Duo is definitely in the same category as Voodoo 5 vs. GeForce 2. The V8 just isn't competitive, and the Duo seems like a disastrous attempt to measure up. Maybe they were hoping that the thought of dual GPUs would help market it, not worrying so much about actual performance.
XGI wanted high-end performance for the enthusiast mindshare. I think they underestimated the complexity of the dual-chip approach and the performance gotchas (render-to-texture, anyone?). Scaling on these things is a joke. The Duos are EOLed anyway. Good riddance.

VSA100 OTOH had nothing (T&L ...) that would have made SLI ineffective in any way. Looking at VSA100 and pretending that modern GPUs can scale just the same is a mistake that's made much too frequently. Perhaps XGI made it ...
wireframe said:
Again, back to the basic claim. You can't just jump the gun on the natural progress of technology and performance without taking a significant hit in the pocketbook. Done effectively with an aligned strategy - mostly amounting to annihilation of the competition - you can make net gains, especially looking forward (once the competition is destroyed and you can slow down and demand premiums), but I think Nvidia and ATI benefit so much from each other being around that neither can afford that strategy.
SLI sits above the "old" GPU market, price-wise, and so would a 32 pipe single-chip monster. AFAIK SLIed NV40s are in a price range that previously just didn't exist. Performance-wise this is jumping that gun. But I personally don't see the GPU market dying off because of it.

wireframe said:
PS. The NV40 can be a 32x1/6 single-chip running at 400MHz under certain circumstances. (or should that be written as 16x2x1?)
Yeah, I know about the ROP subtleties. I meant traditional "complete" pixels, i.e. color+depth.
 