Nvidia went SLI because they can't compete?

DemoCoder,

Any chance of getting my question answered (if you know the answer, of course)?

---------------------------------------------------------------------------------

As for the rest, I'd say we had better get used to multi-chip/multi-board configs from now on, regardless of which of the two IHVs they come from.

There are and will be products for every wallet out there, and I'm sure some of the extreme enthusiasts are going to be happy that they can buy setups previously found only in, say, professional simulation systems. Something for those that used to drool over simFusion or Independence systems.

I'm asking the specific question in my former post for a reason: suppose you can use an SLI setup as Quadros and you can actually scale geometry as expected. Now compare the cost of a single Quadro vs an SLI setup and imagine the performance a professional could squeeze out of such a system. The same will obviously go for ATI's future setups, if it's theoretically possible there too. Assuming those presuppositions are met, such setups could be of more than just good use to a professional; even more so if each GPU has dual DVI (x2 GPUs).


ANova,

I'm sure they are developing a high-end next-gen card; the question is when we will see it.

I have severe doubts that NVIDIA intends to compete with ATI's next high end offering with just NV40 SLi.

Apart from that, Microsoft delaying the Longhorn release in the past (and, in conjunction, the follow-up API to DX9.0) didn't exactly help any IHV stick to their original roadmaps. There have obviously been shifts in roadmaps because of the above, and I expect to see the next batch of high-end cards whenever availability of higher-specced GDDR3 reaches an adequate level. All IMHO as always.
 
Ail, if your question was about geometry scaling, then it will scale in AFR mode in non-CPU-limited situations (since the GPUs are working on entirely different frames). In SFR mode transformation/vertex shading won't scale, but everything from the clipping stage onward will (since screen space is known from that point).
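A rough way to picture it (illustrative pseudocode only, not actual driver logic):

```cpp
// Sketch of why geometry work scales in AFR but not in SFR.

struct Frame { int index; };

// AFR: each GPU renders entire alternate frames, so vertex AND pixel work
// are split between the chips -- assuming the CPU can keep both fed.
int PickGpuAFR(const Frame& f, int gpuCount) {
    return f.index % gpuCount;   // frame N goes to GPU (N mod gpuCount)
}

// SFR: every GPU still has to transform/vertex-shade every triangle, because
// screen-space position is only known after transformation. Only from the
// clipping stage onward can a GPU reject work for the part of the screen it
// doesn't own, so only the per-pixel half of the pipeline scales.
bool GpuOwnsFragmentSFR(int gpuId, float screenY, float splitY) {
    return (gpuId == 0) ? (screenY < splitY)    // GPU 0: top portion of the frame
                        : (screenY >= splitY);  // GPU 1: bottom portion
}
```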
 
I have severe doubts that NVIDIA intends to compete with ATI's next high end offering with just NV40 SLi.

I wouldn't qualify the R520 as the next high-end offering for ATI; it's more of an in-between. I think ATI decided to go this route because of what you mentioned about Longhorn being delayed for at least another year and a half. Naturally it makes sense for ATI to introduce their own SM3 card, since it will be lasting them quite a while, and I doubt we'll be seeing more than a 30% speed improvement. I don't expect anything from nvidia except maybe a higher-clocked NV40 with 512 MB of RAM.
 
Pete said:
350,000 SLI nForce chipsets for the first revision alone doesn't sound like a "horrible waste of money" to me.
If those numbers are correct, I think it wasn't a terribly successful product, either. At least not in the consumer electronics space.

If the chipset was selling for $50-75, then 350,000 units works out to very roughly $17-26 million in revenue, so it might have paid for its R&D, but I doubt that.
 
ANova said:
I don't expect anything from nvidia except maybe a higher-clocked NV40 with 512 MB of RAM.

It seems reasonable to me to expect that there are inherent advantages in having hundreds of thousands of users (and a nice chunk of developers) banging away at your architecture in the real world for nearly a year, so far as tweaking the next rev goes. I will be very interested to see if ATI's v1 shot at SM3.0 is as polished, balanced, and stable as NV's v2. Architecturally ATI has been largely resting on their laurels (fairly and impressively won with R3xx) the last couple of years -- time to renew the street cred, in my book.
 
RussSchultz said:
If those numbers are correct, I think it wasn't a terribly successful product, either. At least not in the consumer electronics space.

Russ, I think those numbers are for the SLI variant alone, not the whole of the nForce4 platform processor line.
 
DemoCoder said:
The fact is, both ATI and NVidia are milking their previous architectural generations.

Moving to 90 nm lithography may allow some more substantial changes, either in performance or features, or both.
Will DX be updated for new features soon though? What architectural tweaks will the current APIs allow given the larger transistor budget?

Personally, my hope for 90 nm is that the IHVs only provide slightly tweaked cores with substantially reduced power draw. The performance improvements of the last 3 years have come at the cost of an increase in power draw by a factor of 3-4 (twice that in the case of SLI). This is unsustainable, and if the IHVs have any sense, they will reduce power draw for their first generation of 90 nm parts, reserving headroom for when they need to create market movement another year or two down the line while still confined to the same process. Sooner or later they will have to deal with performance levelling off due to the power-draw hangover, and the best time to do so is at a major lithographic step.
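A back-of-the-envelope check of why I call it unsustainable (my own illustrative numbers, assuming a hypothetical ~100 W high-end card as the starting point):

```cpp
// If power draw rose 3-4x over ~3 years, what is the implied yearly growth,
// and where does a hypothetical 100 W card end up 5 years further on?
#include <cmath>
#include <cstdio>

int main() {
    const double years = 3.0;
    for (double factor : {3.0, 4.0}) {
        double annual  = std::pow(factor, 1.0 / years);   // compound growth per year
        double watts5y = 100.0 * std::pow(annual, 5.0);   // 100 W part, 5 years later
        std::printf("%.0fx over 3 years -> ~%.0f%% per year -> ~%.0f W after 5 more years\n",
                    factor, (annual - 1.0) * 100.0, watts5y);
    }
}
```

That lands somewhere around 600-1000 W for a single card, which is exactly the kind of extrapolation I don't believe the platform can absorb.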
 
DaveBaumann said:
Heh. I'm thinking at the very high end there won't be any reduction in power draw!

I agree for this and not-so-distant generations...(Scotty impression: "More power to the PCIe slots, captain!")

Still, what Entropy is saying is correct...it's just not sustainable to keep things going the way they are. It will be interesting to see who "flinches" first wrt drawing the line on exotic heat removal / power draw, etc.
 
Joe DeFuria said:
(Scotty impression: "More power to the PCIe slots, captain!")
:LOL:


Shouldn't it be more like:

Kirk: "Scotty, I need more power to the PCIe slots NOW man!"

Scotty: "But Cap'n, me poor bonny rails canna bear such a load, we'll blow the mobo fer sure, sir!"

Kirk: "Damn it man, I need that power now!"

:LOL: :LOL: :LOL:
 
ANova said:
SLI would be good if it weren't for the fact that a single next-gen card will equal or best a pair of 6800Us and have more features at an overall lower cost, while working with all games.

In which case, you'd pair up two next-gen cards. Why the strawman comparison of two old-gen SLI cards vs a next-gen card? For me, the comparison is two next-gen cards vs a single next-gen card.

As for features, are you sure ATI's next card refresh is going to offer any features beyond DX9, or any that will be exposed and taken advantage of? I'm expecting SM3.0, in which case they'd be on par with the previous gen. Just about the only new features I can see them offering beyond that would be more framebuffer formats and AA, such as antialiased HDR framebuffers, anisotropic/trilinear-filtered HDR textures, FP, etc.
 
Why the strawman comparison of two old-gen SLI cards vs a next-gen card? For me, the comparison is two next-gen cards vs a single next-gen card.

Because some people are thinking about what they can afford?
 
DaveBaumann said:
Heh. I'm thinking at the very high end there won't be any reduction in power draw!
I think you're right. The battle for mind share is probably too intense for any backing off. However, this may come back to haunt the IHVs a year or two down the line when they have to stimulate the market somehow and their power and transistor budgets were already spent in their first generation. GDDR4 may come in as a saviour though. But...

Reflecting a bit, Intel has already run into these problems, and while they have only been able to increase performance very slightly since the 3.06 GHz P4, this hasn't really hurt them in the marketplace, and they seem a lot better positioned now with their new outlook for the future. The idea mainly being that new functionality, better ergonomics and the possibility of new form factors are better industry drivers than hiking power, and thus performance, another small notch. (If you're interested in the future of gfx on the desktop, some of the more forward-looking IDF slides were quite intriguing.) I think this is true for gfx as well. At some point the gains won by allowing power draw to sky-rocket will have to be amortized as far as public expectations are concerned.

(* shrugs *)

Or maybe not.
I just don't know anyone who has found the X800s or 6800s appealing enough to buy, and my friends can typically afford whatever cards they want, so my personal and social bias is admittedly strong on these issues. Intel seems to agree, though.
 
As far as power is concerned, I think we should look to the CPU world to see what is likely to happen with graphics. Peak power draw for the high end will continue to go up, but we'll get better and finer-grained power management capabilities that adjust more to the demands of the system. This is already happening to a small degree: even the high-end parts are now receiving the same mobile power technologies, as these chips are being leveraged into "mobile" DTRs, and it's going to be important that we don't have all kinds of fans whirring and power being sucked up when we are running Longhorn's desktop interface.
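To sketch the sort of thing I mean (the states and thresholds here are purely illustrative, not any vendor's actual scheme):

```cpp
// Finer-grained GPU power management: pick a clock/voltage state from recent
// load instead of running flat out all the time. Hypothetical values only.
struct PowerState { int coreMhz; int memMhz; float voltage; };

static const PowerState kStates[] = {
    {150,  300, 0.95f},   // 2D / desktop compositing (think Longhorn's UI)
    {300,  600, 1.05f},   // light 3D
    {425, 1100, 1.20f},   // full 3D load
};

PowerState PickState(float recentGpuUtilization) {
    if (recentGpuUtilization < 0.20f) return kStates[0];
    if (recentGpuUtilization < 0.60f) return kStates[1];
    return kStates[2];
}
```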
 
If I were going to dabble in SLI, I'd have bought the mobo and stuck a single 6800GT in it, waited a year, and added the second one. Wait a year, start over with a single-card current gen, rinse, repeat. I would be very surprised if developers get to the point where they require a dual top-end GPU solution to get acceptable frame rates with all (or almost all) of the eye candy on at a decent res (1280x1024).

Now, if the IHVs really want to pique my interest in SLI, they'll figure out a way to make certain IQ-enhancement features available but workable only on an SLI system... like, say, 16x AA, 128x AF, something striking along those lines (no nitpicking, please, I mean conceptually).
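To make the 16x AA idea concrete, here's the sort of thing I'm imagining (pure speculation on my part, not a description of any shipping driver feature): each GPU renders the frame with its own jittered multisample pattern, and the two results are blended, roughly doubling the effective sample count.

```cpp
// Conceptual SLI-only AA: two GPUs render the same frame with offset sample
// patterns (e.g. 2 x 8x), and the outputs are averaged for an effective ~16x.
struct Color { float r, g, b; };

Color BlendSliAA(const Color& gpu0Pixel, const Color& gpu1Pixel) {
    return { (gpu0Pixel.r + gpu1Pixel.r) * 0.5f,
             (gpu0Pixel.g + gpu1Pixel.g) * 0.5f,
             (gpu0Pixel.b + gpu1Pixel.b) * 0.5f };
}
```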
 
DaveBaumann said:
As far as power is concerned, I think we should look to the CPU world to see what is likely to happen with graphics. Peak power draw for the high end will continue to go up, but we'll get better and finer-grained power management capabilities that adjust more to the demands of the system. This is already happening to a small degree: even the high-end parts are now receiving the same mobile power technologies, as these chips are being leveraged into "mobile" DTRs, and it's going to be important that we don't have all kinds of fans whirring and power being sucked up when we are running Longhorn's desktop interface.

Yup.
However, as long as we allow those high peak power levels, we still have to supply an underlying architecture and form factor that is able to deal with them on a continuous basis. That is a pretty severe restriction. The PC industry is very strongly ruled by inertia, but I really don't see this limitation as desirable in the marketplace.
So in my book this is the most likely scenario for the next decade:
The market will fragment further. It is already split into "desktops" and "portables", with portables gaining ground rapidly, and I'd predict that the desktop part of the market will further see a clearer split between the classic upgradeable box and small/cool/quiet/heavily integrated systems. Given how the market looks, portables and these smallish systems will reduce the typical classic upgradeable box to an initially fairly large but ultimately shrinking niche. It will probably still be large enough to have a thriving ecosystem for the foreseeable future. But CPU development, for instance, will be driven by volume needs. GPUs are less obvious, since their parallelism is easier to exploit to make products adequate for different markets.

There are other scenarios possible. (I just deleted a looong description of such scenarios and their consequences for the gfx IHVs.) But one scenario that isn't possible is taking the data from the last five years or so and extrapolating forward. I can't see how anyone can believe those power trends are sustainable.
 
Do you think the designers of the PCIe standard are surprised/unhappy that right out of the gate top-end cards are requiring external power (the x16 slot itself only supplies around 75 W)? That seems like either not very forward-looking thinking, a major upward spike in historic trends catching them by surprise, or some pretty serious technical hurdle (read $$) to providing more power than they did.
 
geo said:
If I were going to dabble in SLI, I'd have bought the mobo and stuck a single 6800GT in it, waited a year, and added the second one. Wait a year, start over with a single-card current gen, rinse, repeat. I would be very surprised if developers get to the point where they require a dual top-end GPU solution to get acceptable frame rates with all (or almost all) of the eye candy on at a decent res (1280x1024).

Now, if the IHVs really want to pique my interest in SLI, they'll figure out a way to make certain IQ-enhancement features available but workable only on an SLI system... like, say, 16x AA, 128x AF, something striking along those lines (no nitpicking, please, I mean conceptually).

:oops: You can afford to buy (and desire to purchase) a new $400-500 video card every year? Just to "keep up"? Incredible.
 
I think some are confusing their own purchasing decisions with what others decide to purchase. If someone has $800 to spend on two video cards, then let them. No need to tell them they're wrong for doing it. I personally like the idea of ATI and Nvidia using SLI. At some point, I intend to treat myself to one of these new rigs, and a couple of high-end cards punchin' out mad frame rates at the highest quality settings should keep me happy for, say... 6 months :LOL:

Something else I would like to note: SLI offers us consumers one more thing we can never have too much of, and that is choice.

I think I also read that Nvidia has updated their SLI profiles. Anyone know of any benchies for these newly updated SLI profiles/drivers? I would be curious to see how some of those other games are benefiting from SLI (if they are at all).
 