GF100 evaluation thread

Whatddya think?

  • Yay! for both

    Votes: 13 6.5%
  • 480 roxxx, 470 is ok-ok

    Votes: 10 5.0%
  • Meh for both

    Votes: 98 49.2%
  • 480's ok, 470 suxx

    Votes: 20 10.1%
  • WTF for both

    Votes: 58 29.1%

  • Total voters
    199
  • Poll closed.
Nvidia's strategy is to have the FASTEST SINGLE GPU in single-GPU comparisons. They will eventually release a dual-GPU card designed to take on the 5970. This insistence that the 480 was meant to take on the 5970 is just completely insane.
The majority of customers don't care if it's a single, dual, or whatever card, as long as it delivers. Let's say for argument's sake you are right, and nVidia only cares about the fastest single-GPU card and launches a dual after the refresh. Do you realize this strategy automatically abandons the highest-end market? Because AMD launches duals at launch (or near it) each generation, while nVidia launches 6-12 months later (after the refresh), when AMD has its own refresh incoming.
 
And that is relevant for potential buyers in...? Right...absolutely nothing.
Well, due to the die size the product is six months late, power hungry, loud and more expensive. Right... maybe typical potential buyers of these products are already accustomed...
50% more than the 5870, exaggerate much? Try 20-25% more, as I don't see the MSRP being in the $600-700 range in any review. The current average cost for a 5870 is in the $390 range, a 480 is about $530. Yeah, 50%.
It's 50% here... http://forum.beyond3d.com/showthread.php?p=1415043#post1415043
 
Which you knew was only temporarily unique to AMD hardware. The difference is that you know CUDA and PhysX won't be running on AMD's stuff anytime soon.

Yes, I knew that. I just didn't find those features more compelling than DX11 at that time. And I didn't want to wait :).
 
Funny how the 5970 being an unfair comparison is the argument being thrown around now. Silent Buddha already caught XMAN's double standard (see sig). Funny how trini was talking about that moving-goalposts thingy; maybe ordering a mirror is in order before anything else! :eek:
 
Wow, deja vu and all that. I do believe you've been thoroughly punked on this specific topic for your posts when the 5870 launched. There are things called search engines, you know, so you might want to work on your consistency when debating relative performance expectations.

No, sorry. I clarified it then. Comparing a dual-GPU card to a single-GPU card is nuts. But if one wants to see how well a single GPU of a new/refresh generation does against a previous-gen dual-GPU card, fine, but don't use the dual-GPU card as a baseline for performance comparison. SLI/XFire cards/setups have their own market segment and should be left there to compete against one another. And back when the 5870 launched, I said it was not worth an upgrade from a GTX 285/295/4870X2 as the performance did not truly warrant the price or the upgrade. I still find this to be true, although much less so in terms of the 285 due to driver improvements to the 5870.
 
Funny how the 5970 being an unfair comparison is the argument being thrown around now. Silent Buddha already caught XMAN's double standard (see sig). Funny how trini was talking about that moving-goalposts thingy; maybe ordering a mirror is in order before anything else! :eek:

He caught nothing, as he misunderstood what I was saying/getting at. See my above post; it states pretty much what I said then, to clarify my position.
 
Comparing a dual-GPU card to a single-GPU card is nuts

There are many ways one can compare cards (pure performance, certain feature set, pure price, a variation of the previous three, etc). Therefore, making a blanket statement like the one above is what I would call "nuts".
 
Well, due to the die size the product is six months late, power hungry, loud and more expensive. Right... maybe typical potential buyers of these products are already accustomed...

Well, I certainly agree that the die size didn't help production. But that's not the point I was making. I was merely saying that the die-size has absolutely no relevance to a potential customer, since it's not a factor in the buying decision.
 
Will nVidia be able to release a card for each market segment akin to AMD? I'm amazed at the speed with which they're releasing their Cypress-based chips right now. They have all their bases covered, whereas nVidia has... two. Not being an expert on the subject, I can't make any claims about how easily nVidia can pull a similar move off.

Either way, it's an interesting situation. AMD is making money and has (I suspect) sunk its development costs, meaning they're in pure profit mode until a refresh or new product has to come down the pipe. By the time nVidia starts releasing products to compete more competently against AMD in all market segments (without using re-branded GT200 chips), AMD might come out with their refresh, nullifying nVidia's catch-up efforts. Obviously nVidia isn't gonna die if they can't pull it off, but it might mean they'll have to wait a while to make a decent profit. Why? They might have to initiate their refresh process much earlier than if they'd launched earlier, to compete effectively.
 
The majority of customers don't care if it's a single, dual, or whatever card, as long as it delivers. Let's say for argument's sake you are right, and nVidia only cares about the fastest single-GPU card and launches a dual after the refresh. Do you realize this strategy automatically abandons the highest-end market? Because AMD launches duals at launch (or near it) each generation, while nVidia launches 6-12 months later (after the refresh), when AMD has its own refresh incoming.

Actually, a lot do, as they don't care for dual solutions or simply don't like AFR, so they stay away from them.
 
It's not. It's just a tagline in that smear campaign you alluded to earlier. I'm baffled as to how a 20% higher price for 15-20% higher performance on a flagship part is a travesty. Goalposts keep shifting.

[edit] Sorry, I was being too generous. HD 5870s are going for $429 now, so the performance gain is even higher than the price increase. I would love to hear the arguments for why it has poor price/perf.

I realise this has been pointed out earlier in the thread, but here in the UK at least the GTX 470 goes head to head in price with the HD 5870, and the GTX 480 is listed at almost a 50% price premium.

Hexus GTX 4x0 UK launch pricing.

5870 in stock for £295

It will be interesting to see how prices develop over the next few weeks and months, but as things stand at the moment Nv's lineup looks overpriced here.
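For anyone sanity-checking the premiums being thrown around, the arithmetic is a one-liner. A minimal sketch using the street prices posters quoted above ($390 and $429 for the HD 5870, ~$530 for the GTX 480, £295 for the UK 5870); these are the thread's figures, not verified prices:

```python
def premium(base, price):
    """Percent premium of `price` over `base`."""
    return 100.0 * (price - base) / base

# Street prices as quoted in the thread (not verified).
print(f"GTX 480 $530 vs HD 5870 $390: {premium(390, 530):.0f}% premium")
print(f"GTX 480 $530 vs HD 5870 $429: {premium(429, 530):.0f}% premium")
print(f"UK: a ~50% premium over the £295 HD 5870 implies a GTX 480 near £{295 * 1.5:.0f}")
```

On these numbers the US premium lands in the mid-20s to mid-30s percent, so the UK listing is the outlier rather than the rule.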
 
There are many ways one can compare cards (pure performance, certain feature set, pure price, a variation of the previous three, etc). Therefore, making a blanket statement like the one above is what I would call "nuts".

Did you stop there or did you even bother to finish what I posted?
 
They will eventually release a dual-GPU card designed to take on the 5970. This insistence that the 480 was meant to take on the 5970 is just completely insane.
Remember the pricing of GTX 280 at launch? $649? Do you believe that a more costly chip like Fermi wouldn't have been priced in a similar territory had Hemlock not been there?
 
Did you stop there or did you even bother to finish what I posted?

Yep

but don't use the dual-GPU card as a baseline for performance comparison. SLI/XFire cards/setups have their own market segment and should be left there to compete against one another

Equally nuts. People can compare cards however they want. If you want to leave R800 out when comparing R8x0 to gt300, that's fine. But don't make blanket statements telling other people when/how they should compare cards.
 
Funny how trini was talking about that moving-goalposts thingy; maybe ordering a mirror is in order before anything else! :eek:

Because my comment on prices is obviously relevant to the discussion of dual-GPU vs single GPU? At least you could try to be a little less transparent :LOL:

Let's say for argument's sake you are right, and nVidia only cares about the fastest single-GPU card and launches a dual after the refresh. Do you realize this strategy automatically abandons the highest-end market?

Nope, it just means they will keep exchanging the performance crown, as it's always been:

3870X2 -> 9800GX2 -> 4870X2 -> GTX 295 -> HD 5970 -> GTX 4xx ?

Yes, I knew that. I just didn't find those features more compelling than DX11 at that time. And I didn't want to wait :).

Yep, and you don't have to justify that to anybody either :smile:
 
Remember the pricing of GTX 280 at launch? $649? Do you believe that a more costly chip like Fermi wouldn't have been priced in a similar territory had Hemlock not been there?

I'm sure it would have, and it would sit on the shelves. Nvidia needs to figure out how to either do chips the way you guys (ATI) do, or how to make their GPUs and professional cards separate from each other. Because if they don't, it's gonna kill them.
 
Fact is that AMD decided to go dual-chip to compete with Nvidia's top end. Nvidia poured scorn on them for "giving up" on the giant monolithic chip. Now AMD's top end squashes the best that Nvidia can produce, and there's nothing out from Nvidia to match it, so their fans spin this fantasy: that Nvidia only wants the single fastest chip, that the 480 was never supposed to go up against dual-chip cards, that Fermi was always supposed to be so cheap it could go up against a much smaller chip, that losing the 512-processor SKU is somehow okay, that Nvidia will bring out a dual-chip version when a single 480 already pulls 300 watts.

It's all nonsense from those who can't handle that their delusions haven't been met.
 
Yep



Equally nuts. People can compare cards however they want. If you want to leave R800 out when comparing R8x0 to gt300, that's fine. But don't make blanket statements telling other people when/how they should compare cards.

And people who think that two high-end chips WON'T be faster than a single high-end chip from a competitor are insane. Unless MGPU scaling falls off the charts sometime in the future, this should almost never change. I stand by what I said: using an mGPU setup as a baseline for measuring single-GPU cards against is nuts and stupid.
 