GF100 evaluation thread

Whatddya think?

  • Yay! for both: 13 votes (6.5%)
  • 480 roxxx, 470 is ok-ok: 10 votes (5.0%)
  • Meh for both: 98 votes (49.2%)
  • 480's ok, 470 suxx: 20 votes (10.1%)
  • WTF for both: 58 votes (29.1%)

  Total voters: 199
  Poll closed.
Fact is that AMD decided to go dual chip to compete with Nvidia's top end. Nvidia poured scorn on them for "giving up" on the giant monolithic chip. Now AMD's top end squashes the best that Nvidia can produce, and there's nothing out from Nvidia to match it. So their fans spin this fantasy that Nvidia only wants the single fastest chip, that the 480 was never supposed to go up against dual-chip cards, that Fermi was always supposed to be so cheap it could go up against a much smaller chip, that losing the 512-processor SKU is somehow okay, and that Nvidia will bring out a dual-chip version when a single 480 already pulls 300 watts.

It's all nonsense from those who can't handle that their delusions haven't been met.

Ah, Nvidia's strategy has always been the fastest single GPU; please enlighten us all as to when they changed that one. Others might be OK with the 480-SP part, but I for one think they tossed in the towel. And this won't be the first time that Nvidia had to wait for a refresh to do a dual-GPU card to compete with a dual-GPU card: GT100 to GT200 for a card to compete with R700, or did you forget about that one? Nvidia's concern has always been single GPUs, and until they change their strategy and mindset on GPUs to something more in line with ATI's line of thinking, it will stay that way.
 
Bla bla, BS from me... Using an mGPU setup as a baseline for performance to measure single-GPU cards against is nuts and stupid.
It's not about the GPUs or chips etc., it's about the fastest CARD in ONE slot. If I as a buyer want the fastest CARD, I would have to buy a 5970. No BS from your green loony rants will ever change that, and I am sure you're on many ignore lists now. BUT it IS about the fastest card that one can buy and put in ONE slot in a motherboard. All the rest of your rants are FAIL.
 
It's not about the GPUs or chips etc., it's about the fastest CARD in ONE slot. If I as a buyer want the fastest CARD, I would have to buy a 5970. No BS from your green loony rants will ever change that, and I am sure you're on many ignore lists now. BUT it IS about the fastest card that one can buy and put in ONE slot in a motherboard. All the rest of your rants are FAIL.

Except that AFR doesn't always work and it certainly doesn't scale 100% when it does, so fastest single GPU on a card still holds a lot of merit.
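To illustrate the scaling point, here's a toy model only; the efficiency number is my own illustrative assumption, not anyone's benchmark:

    # Toy model of AFR scaling: each extra GPU adds only a fraction of a GPU's worth of frames
    def afr_fps(single_gpu_fps, n_gpus, efficiency=0.75):
        """Effective frame rate with imperfect alternate-frame-rendering scaling."""
        return single_gpu_fps * (1 + efficiency * (n_gpus - 1))

    print(afr_fps(60, 1))                  # 60.0  - single GPU
    print(afr_fps(60, 2))                  # 105.0 - dual GPU at 75% scaling, not 120
    print(afr_fps(60, 2, efficiency=0.0))  # 60.0  - when AFR doesn't engage, you're back to single-GPU speed

So a dual-GPU card only wins by whatever fraction AFR actually delivers in a given game, which is why the fastest single GPU still matters.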
 
It's not about the GPUs or chips etc., it's about the fastest CARD in ONE slot. If I as a buyer want the fastest CARD, I would have to buy a 5970. No BS from your green loony rants will ever change that, and I am sure you're on many ignore lists now. BUT it IS about the fastest card that one can buy and put in ONE slot in a motherboard. All the rest of your rants are FAIL.

Good for you; that card is still MULTI FREAKIN' GPU ENABLED, and Nvidia doesn't design for mGPU cards, they design for sGPU-based cards. If the design allows for an mGPU card, they do it, or they wait to see what a refresh brings. They have done this in the past, and it won't change until they change how they make the GPU.

G80-G92, GT100-GT200, GF100-???
 
Be careful, your boy over at SA was proven wrong before with similar prognostications ;) Why so sour though? Thought you would be happy with the outcome?

I don't have a "boy" or know what "SA" is. I don't see how you are going to get two Fermis onto a single card and stay within PCI specs when a single chip barely does.

I guess all the willful blindness is annoying me. Fermi in its current form is an extremely flawed product: cut-down, over-volted, late to the party, expensive, hot and noisy. Yet we have all these people saying how wonderful it is because all the previous ridiculousness has given it a 0-15 percent benchmark lead on ATI's second (not first) tier product.

I'd rather Fermi were a great product than have all this rah-rah nonsense from the usual suspects ignoring its serious shortcomings.
 
It's not about the GPUs or chips etc., it's about the fastest CARD in ONE slot. If I as a buyer want the fastest CARD, I would have to buy a 5970. No BS from your green loony rants will ever change that, and I am sure you're on many ignore lists now. BUT it IS about the fastest card that one can buy and put in ONE slot in a motherboard. All the rest of your rants are FAIL.

This is not true for many buyers. It is not true for me.

Remember that a CrossFire solution on one board is still a CrossFire solution. These cards scale differently from single-GPU cards, they have different bottlenecks, and you are still limited in the number of GPUs you can have in parallel configurations regardless of how many of those GPUs you put on a single card.

For example, for practical reasons you are limited to 4 GPUs for SLI/CrossFire solutions. So you could use two 5970s or four GTX 480s; either would provide the same number of GPUs, even if the motherboard supports 4 cards. Adding another card to either configuration causes issues. You can push the total number of GPUs up to 8 under some circumstances, but pushing it past 4 is troublesome for both manufacturers. So when considering how to get the best performance, I do need to consider the number of GPUs on a card. Of course, there are benefits to 2-GPU cards as well. For instance, compare the cost of a four-480 solution to a two-5970 solution and one of those benefits will become glaringly obvious.
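For what it's worth, here is the "glaringly obvious" part in rough numbers. A minimal sketch; the prices are approximate launch MSRPs from memory, so treat them as assumptions:

    # Rough cost of a 4-GPU setup: single-GPU cards vs. dual-GPU cards
    GTX480_MSRP = 499   # assumed launch price, one GPU per card
    HD5970_MSRP = 599   # assumed launch price, two GPUs per card

    four_gtx480 = 4 * GTX480_MSRP   # 4 cards, 4 slots, 4 GPUs -> 1996
    two_hd5970  = 2 * HD5970_MSRP   # 2 cards, 2 slots, 4 GPUs -> 1198
    print(four_gtx480, "vs", two_hd5970)

Half the cards and half the slots for the same GPU count, which is the main appeal of dual-GPU boards.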

There are also other issues with SLI and CrossFire for gaming. After having run an SLI setup for 2 years, I went back and made sure I purchased the fastest single-GPU card I could. Unless I see some major changes with SLI and CrossFire, this will continue to be my strategy. So when I look for the fastest single CARD, I am really looking for the single fastest GPU. I just do not like the issues that come with running dual-GPU cards or dual-GPU solutions. So I will not compare the 5970 to the 480; if I wanted to compare, it would be dual GPU to dual GPU, or single GPU to single GPU. I am certain other people feel differently. But to blanketly assume that buyers looking for the fastest card treat dual GPU the same as single GPU is not really a good assumption, especially in the enthusiast market.

Just a note so I don't get lumped into a fanboy category: I won't buy a 480 right now even though I was interested in upgrading my 285. I didn't feel that the 5870 was worth the upgrade 6 months ago, so I definitely don't feel the 480 is worth the upgrade now. There are not really any games out right now that I'm hurting for not playing because of my card, and I am resolution-limited by my monitor anyway, so performance isn't an issue. I think Nvidia's biggest problem will be consumers like me. We were just getting interested in DX11 and considering upgrading. For those who buy cards today, the 5870 seems like a better deal. For many of us, we'll wait another 6 months to see what hits the shelves game-wise and card-wise. I think Fermi may still turn out to be a good thing in the long run, but only if they can get power usage down and performance up. We'll see.
 
I don't have a "boy" or know what "SA" is. I don't see how you are going to get two Fermis onto a single card and stay within PCI specs when a single chip barely does.

Well, that's precisely what was said about GT200, no?

I guess all the willful blindness is annoying me. Fermi in its current form is an extremely flawed product: cut-down, over-volted, late to the party, expensive, hot and noisy. Yet we have all these people saying how wonderful it is because all the previous ridiculousness has given it a 0-15 percent benchmark lead on ATI's second (not first) tier product.

Hmmm, even if they had hit target clocks and had all SMs enabled, do you honestly think it could make a run at the 5970 in anything but a few specific scenarios? Nvidia isn't going to compete with AMD on that front unless AMD starts spending/wasting transistor budget on other things. That's not going to happen as long as they have a CPU division.

We don't have people saying it's wonderful or trying to convince anyone of its awesomeness. Have you seen a single person in this thread suggest to someone else that they should buy a Fermi based part? What we have is a cabal that's curiously expending a vast amount of energy trying to convince anyone who will listen that it's a POS that shouldn't be given the time of day. And all they can come up with is that it has a loud fan. /shrug
 
Well, I certainly agree that the die size didn't help production. But that's not the point I was making. I was merely saying that the die-size has absolutely no relevance to a potential customer, since it's not a factor in the buying decision.
I can agree that die size isn't an important factor for customers, but why are you talking about customers?

It was nVidia's decision to build the second-largest GPU die ever made. For sure they didn't design a mammoth die just to outperform a competitor some 200 mm² smaller by 15%.

I also think it's quite silly to believe that nVidia's performance target wasn't ATi's high-end but 15% over ATi's sweet-spot/midrange product, at the expense of a 6-month delay and all the other problems. It's quite obvious that Fermi with all clusters active and decent clocks would be competitive with the HD 5970. But nVidia overestimated the 40nm process, didn't reach decent clocks and simply failed to get GF100 to the targeted performance levels.

Is it really easier to believe that nVidia's target was to outperform ATi's $399 single chip by 15% at all costs, or that they wanted to compete with ATi's high-end but failed? Just like ATi failed with R600 a few years ago? I don't understand why it is so difficult for some people to admit that.
 
I think that GF100 is designed for much more than gaming and it shows in its size and lower efficiency per size for gaming. That's pretty clear IMO. ATI's chips don't seem nearly as extreme in the GPGPU direction.

If manufacturing wasn't quirky, it would be an awesome chip though. That's what it comes down to. ATI built a smaller GPU that took the process issues to heart, NVIDIA did not. NVIDIA probably has no choice because they are trying to compete with AMD and Intel CPUs with their GPUs.

And I'm sure this has been said 100 times already, but these threads are inherently circular so whatever. ;)
 
Well, that's precisely what was said about GT200, no?



Hmmm, even if they had hit target clocks and had all SMs enabled, do you honestly think it could make a run at the 5970 in anything but a few specific scenarios? Nvidia isn't going to compete with AMD on that front unless AMD starts spending/wasting transistor budget on other things. That's not going to happen as long as they have a CPU division.

We don't have people saying it's wonderful or trying to convince anyone of its awesomeness. Have you seen a single person in this thread suggest to someone else that they should buy a Fermi based part? What we have is a cabal that's curiously expending a vast amount of energy trying to convince anyone who will listen that it's a POS that shouldn't be given the time of day. And all they can come up with is that it has a loud fan. /shrug

Anand already has the GTX 480 using 14 watts more under load than the GTX 295, which has two GPUs on it.

http://www.anandtech.com/video/showdoc.aspx?i=3783&p=19

Hell, the GTX 470 is already only 12 watts behind the 5970.

I can see GTX 470 SLI on a single card, maybe in a few months with binned parts or another spin. Certainly not with what they are getting now.
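To put rough numbers on that, a minimal back-of-envelope sketch. The TDP is the published board power for the shipping GTX 470, the connector limits are the standard PCIe figures, and the rest is just my arithmetic:

    # Can a dual-GF100 card fit the usual power envelope?
    SLOT_W      = 75    # PCIe x16 slot
    SIX_PIN_W   = 75    # one 6-pin PEG connector
    EIGHT_PIN_W = 150   # one 8-pin PEG connector
    spec_limit  = SLOT_W + SIX_PIN_W + EIGHT_PIN_W   # 300 W, the usual ceiling

    GTX470_TDP = 215                 # published board power
    dual_470   = 2 * GTX470_TDP      # 430 W before any binning or downclocking
    print(dual_470, ">", spec_limit) # hence "binned parts or another spin"

So it would take much better bins and/or lower clocks to get two of them under that line.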
 
1. flawed product
2. cut-down
3. over-volted
4. late to the party
5. expensive
6. hot
7. noisy
8. yet we have all these people saying how wonderful it is because all the previous ridiculousness has given it a 0-15 percent benchmark lead on ATI's second (not first) tier product.

1. Are you a GPU designer? If so, in what way is it flawed?
2. Yep, I think they threw in the towel too.
3. How do you know it is over-volted? See 1, and supply info on where it should have been lower than 1 V.
4. Yep.
5. The 470 is about right, but the 480 is overpriced.
6. Yep.
7. Only relevant to the end user. I have been using a nice set of stereo noise-reduction headphones for about 5 years now, so I don't hear my case with its 2x GTX 260s, fans set to 100%, and six 120mm fans.
8. The overall average lead is 15% with launch drivers (are the review drivers going to be the shipping ones?) compared to a much more mature driver set for the 5870. I'd call that not too shabby in my book.
 
Using an mGPU setup as a baseline for performance to measure single-GPU cards against is nuts and stupid.

Aw, c'mon, you can't seriously argue that. Sure, mGPU cards are simply not an option for me at the moment, and others might have their own reasons for excluding them from the equation. But that doesn't mean that everyone who looks at such cards is 'nuts and stupid'. Cost/benefit/features/disadvantages: let people figure it out for themselves.
 
Uh huh, you just said:



This 'launch' is clearly not as strong as it could've been and I see wide consensus there. But crowing and slinging fanboi accusations around like you did on top of that takes it quite a step further, and it's probably good you're weaseling out now instead of standing by that silliness.

I agreed with the general view of his post.

If there is a fanboi here, it is you.

And please keep up with your personal attacks. Attacking me is the only thing left for you to do, since there is nothing you can do to make the 480 look better given the facts.
 
I don't have a "boy" or know what "SA" is. I don't see how you are going to get two Fermis onto a single card and stay within PCI specs when a single chip barely does.

They could always just pull an Asus ROG Ares, of course: two 8-pins and a 6-pin. Haven't actually seen many complaints about the PCIe specs there from the other usual suspects.
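(Back-of-envelope with the standard connector ratings: the slot supplies 75 W, a 6-pin 75 W and an 8-pin 150 W, so 8+8+6 gives 150 + 150 + 75 + 75 = 450 W of board power to play with, well past the 300 W the spec nominally allows.)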
 
I can agree that die size isn't an important factor for customers, but why are you talking about customers?

Because we were talking about things like price/performance, and that's a customer-oriented thing now, isn't it? Then the die size was introduced as well, and I couldn't see the relevance for the customer...

no-X said:
It was nVidia's decision to build the second-largest GPU die ever made. For sure they didn't design a mammoth die just to outperform a competitor some 200 mm² smaller by 15%.

I also think it's quite silly to believe that nVidia's performance target wasn't ATi's high-end but 15% over ATi's sweet-spot/midrange product, at the expense of a 6-month delay and all the other problems. It's quite obvious that Fermi with all clusters active and decent clocks would be competitive with the HD 5970. But nVidia overestimated the 40nm process, didn't reach decent clocks and simply failed to get GF100 to the targeted performance levels.

NVIDIA's focus is graphics without a doubt, but GF100 isn't for graphics only and the chip's die will obviously reflect that.

I too don't get how you reach that conclusion. I remember that a few months back, when I said something like "by SP number alone, we could see over GT200 x 2 in performance", I was greeted with "so NVIDIA's units scale perfectly?" Yet now you assume that NVIDIA, which surely didn't want to be late (i.e. wanted to release their products in October 2009), was aiming at HD 5970 performance levels with a single GPU that, based on theoreticals alone, was going to have a hard time beating the HD 5870? And still we see the GTX 480, a cut-down chip with mild clocks and a slight advantage in memory bandwidth, beating it, and by a higher percentage than the majority expected.

Obviously things went wrong and performance should be even higher than it is with a fully enabled chip and decent clocks (and not such a big power draw), but it makes absolutely no sense to think that those were the performance targets of GF100.
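For reference, the crude arithmetic behind the "by SP number alone" point looks like this. A minimal sketch using the shipping GTX 285 / GTX 480 shader clocks; it ignores GT200's co-issued MUL and every other architectural difference:

    # Crude "SP count x shader clock" comparison, nothing more
    gt200_sps, gt200_hot_clk = 240, 1476   # GTX 285
    gf100_sps, gf100_hot_clk = 480, 1401   # GTX 480 as shipped (512 on a full die)

    ratio = (gf100_sps * gf100_hot_clk) / (gt200_sps * gt200_hot_clk)
    print(round(ratio, 2))   # ~1.9x GT200 by raw unit count x clock

So roughly 1.9x on paper for the cut-down part; only a full 512-SP chip at decent clocks gets the "over 2x" figure out of this kind of arithmetic.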
 
I think that GF100 is designed for much more than gaming and it shows in its size and lower efficiency per size for gaming. That's pretty clear IMO. ATI's chips don't seem nearly as extreme in the GPGPU direction.

If manufacturing wasn't quirky, it would be an awesome chip though. That's what it comes down to. ATI built a smaller GPU that took the process issues to heart, NVIDIA did not. NVIDIA probably has no choice because they are trying to compete with AMD and Intel CPUs with their GPUs.

And I'm sure this has been said 100 times already, but these threads are inherently circular so whatever. ;)
Agreed.

Anand already has the GTX 480 using 14 watts more under load than the GTX 295, which has two GPUs on it.

http://www.anandtech.com/video/showdoc.aspx?i=3783&p=19

Hell, the GTX 470 is already only 12 watts behind the 5970.

I can see GTX 470 SLI on a single card, maybe in a few months with binned parts or another spin. Certainly not with what they are getting now.
Trini is a lost cause now, I realised it very late and wasted time responding to him before finally putting him on ignore.
 
I agreed with the general view of his post.

Ah, that must be why you used the word "exactly" then.

If there is a fanboi here, it is you.

Tssk tssk, there you go again.

And please keep up with your personal attacks. Attacking me is the only thing left for you to do, since there is nothing you can do to make the 480 look better given the facts.

Heh, I honestly don't care what you think about the value of this card. You seem really distraught about the fact that not everyone agrees with you though, to the point that you start accusing people of being unreasonable and biased. Do you experience these feelings of insecurity more often?
 
Agreed.


Trini is a lost cause now, I realised it very late and wasted time responding to him before finally putting him on ignore.

Haha, what a cop-out. You don't have much tolerance for opinions different from yours, eh? :LOL:
 