GF100 evaluation thread

Whatddya think?

  • Yay! for both: 13 votes (6.5%)
  • 480 roxxx, 470 is ok-ok: 10 votes (5.0%)
  • Meh for both: 98 votes (49.2%)
  • 480's ok, 470 suxx: 20 votes (10.1%)
  • WTF for both: 58 votes (29.1%)

Total voters: 199. Poll closed.
1. If ATI disregarded power and heat, we would have the same performance and a smaller chip to boot.

2. Time matters. This chip arrives 6 months after Cypress.

3. I can argue some people will prefer PhysX while others will prefer Eyefinity.

1) What do you mean? [oh I get it, you mean they should up the volts and clock instead of being relatively low power]

2) yup

3) probably heavily shifted towards PhysX. They aren't really related IMO, and Eyefinity is definitely going to be very niche. I'd think most people going that route would rather have a seamless huge TV.
 
Again, if ATI disregarded power and heat, we would have the same performance and a smaller chip to boot.

I have listed some of the considerations that I would apply if I were buying a high-end GPU right now. Other people may have yet other reasons to prefer one vendor to another. Personally, whether a chip is smaller is not terribly important to me. Point is that there is now a choice, and fortunately it's not up to you to decide that this choice shouldn't exist.

Edit: nice stealth edit there compres.

Ok point 2: I would say that that is completely irrelevant to someone who is buying a GPU in the coming months.

Point 3: Eyefinity is really cool, but so what? If you want that, go and buy that. Someone else might want Nvidia's 3D version. I suspect that most people just don't have the space for 3 screens though, or would be embarrassed to have them on their desks, or dislike bezels, or prefer one larger screen or a projector, or whatever. CF/SLI seems to be in use by, oh, what was it at the last Steam count, 2.5% or something? I'm guessing Eyefinity/Nvidia Surround will end up around that scale of uptake as well, but hey.
 
Haven't we been there for a while now already?

btw, edit button no worky for you?

Sometimes we are at +300%, sometimes <+100%. I just deleted my post because I changed my mind about wanting to discuss it after looking at some charts. ;) But be my guest.
 
1) What do you mean?

MSI Lightning, Asus Matrix, etc.


yup.

3) probably heavily shifted towards PhysX. They aren't really related IMO, and Eyefinity is definitely going to be very niche, assuming anyone prefers that kind of setup over a huge TV.

They aren't related. And there is no data to prove or disprove that people will take one over the other. But they are features that are not available from both lines, and they can factor into purchase criteria.
 
Point is that there is now a choice, and fortunately it's not up to you to decide that this choice shouldn't exist.

Your sweeping statement that every objective person must agree that these cards shouldn't exist is something completely different, of course.

I notice you keep lying about me saying the product should not exist. You have been spinning and spinning what I said:

Exactly how everyone without bias will see it. Guess now we have a rough estimate of the percentage of users with nVidia bias (on B3D) from the poll.

The fact that they still show up here arguing is quite telling; at least others have remained silent, as they should.


In response to:

After having read through many reviews and giving it some time, my final take on Fermi is this:

There's not much about Fermi worth giving a thumbs up over. You can say "oh, but the performance is there!" but that comes at a price. The card punishes its users to extract that performance. Severe heat build-up, huge power draw and a shrieking fan are what one must readily endure to experience the performance.

Nvidia disregarded practicality for a benchmark victory. Not a great one at that, considering this is the least they could do against competition which has had their products out on the market for 6+ months. All this and they still have the nerve to charge 25% more than the competition. It's just a poor offering all around.

Perhaps future iterations of Fermi will be more reasonable from a price, performance and practical usage standpoint. This version, however, should have stayed in the labs...
 
I notice you keep lying about me saying the product should not exist. You have been spinning and spinning what I said:




In response to:

What's the difference between 'should've stayed in the labs' and 'product shouldn't exist'?
 
What's the difference between 'should've stayed in the labs' and 'product shouldn't exist'?

The lab guys would get to play with it!

Sometimes we are at +300%, sometimes <+100%. I just deleted my post because I changed my mind about wanting to discuss it after looking at some charts. ;) But be my guest.

Which games did you see less than +100% gains in? There probably aren't direct comparisons but you could use the GTS 250 as a substitute.
 
Games don't run at 24fps anymore like they did in the 3dfx days, when nerds first started caring about average frame rates. They run at 80fps, and now there is a more important number: the minimum framerate, which represents how badly your experience is ruined under the worst conditions. Built into that number is also how often your experience is ruined by drops under 30fps or 60fps.
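Something like this is what I mean, for anyone sitting on a frametime log (FRAPS can dump one); the frame times below are made up purely for illustration:

```python
# Sketch: why average FPS hides the ruined moments.
# Input: per-frame render times in milliseconds (e.g. a FRAPS frametimes dump).
frame_times_ms = [12.5, 13.1, 40.0, 41.7, 12.9, 20.0, 35.0, 12.8]  # made-up data

avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)  # frames over elapsed seconds
min_fps = 1000.0 / max(frame_times_ms)                          # the single worst frame

# Time spent below the thresholds where the experience gets ruined.
below_30 = sum(t for t in frame_times_ms if t > 1000.0 / 30) / 1000.0
below_60 = sum(t for t in frame_times_ms if t > 1000.0 / 60) / 1000.0

print(f"avg {avg_fps:.1f} fps, min {min_fps:.1f} fps")
print(f"{below_30:.3f}s spent under 30 fps, {below_60:.3f}s under 60 fps")
```

On that toy trace the average comes out around 43fps while the minimum is 24fps, which is exactly the gap the bar charts below are showing.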

So what statistics are you looking at?

http://pcper.com/images/reviews/888/gtx480/metro-1920-bar.jpg
http://pcper.com/images/reviews/888/gtx480/wic-1920-bar.jpg
http://pcper.com/images/reviews/888/gtx480/batman-1920-bar.jpg
http://images.anandtech.com/graphs/nvidiageforcegtx480launch_032610115215/22165.png

Anandtech: "[In Crysis] the GTX 400 series completely tramples the 5000 series when it comes to minimum framerates, far more than we would have expected. [...] GTX 480 still enjoys a 33% lead in the minimum framerate, and the GTX 470 is well ahead of the 5850 and even slightly ahead of the 5870."

Across those four titles the minimum-framerate advantage the GTX 480 has over the 5870 is some 40% on average, and if more "journalists" measured anything but average frame rate using built-in tools, you might find the minimum across 10 more games would still be 30%.

Yet some baby einsteins are still quoting 13% as the only performance difference between the cards.

The costly power drain is a problem. The noisy fan is a problem. But the performance advantage over a 5870 is not so small as 13%. Dropping to 33fps versus 25fps is a significant advantage because the second number is the only one you notice.
 
Heat is noise. The point being that inside a case the temps go up, resulting in higher fan speeds, resulting in more noise. And stock cases have minimal SLF/NRC/DLF.

Yes, but how many people use their cards out in the open or with the case covers off? I'd venture to say far fewer than those who buy these things. Put the setup in a case, close it up, place it in the two places most people stick it (on the floor or on the desk), and measure the heat and noise output then. I don't know about you, but I don't play games with my freakin' ears 6" from my cards.
 
The costly power drain is a problem. The noisy fan is a problem. But the performance advantage over a 5870 is not so small as 13%. Dropping to 33fps versus 25fps is a significant advantage because the second number is the only one you notice.

But none of those charts show how long those dips last. If one card dips more than the other, but only for one frame, it's not important, and it actually skews those charts. Graphs that actually show the changing frame rate are more useful. A stable, consistent frame rate is more important than one that changes dramatically, whether to peaks or dips.
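A rough sketch of the distinction, again with a made-up frametime trace: a one-frame hiccup and a sustained dip both drag the minimum down equally, and only the run lengths tell them apart.

```python
# Sketch: how long do the dips actually last?
# Groups consecutive slow frames into runs, so a single-frame hiccup
# can be told apart from a sustained dip.
from itertools import groupby

frame_times_ms = [13, 14, 45, 13, 13, 38, 40, 42, 39, 13, 14]  # made-up trace
threshold_ms = 1000.0 / 30  # anything slower than 30 fps counts as a dip

runs = []
for is_dip, group in groupby(frame_times_ms, key=lambda t: t > threshold_ms):
    if is_dip:
        frames = list(group)
        runs.append((len(frames), sum(frames)))

for n, dur in runs:
    print(f"dip: {n} frame(s), {dur:.0f} ms")
# A min-FPS bar scores the 1-frame hiccup (45 ms) and the 4-frame,
# ~160 ms dip almost identically; only the run data separates them.
```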
 
1. If ATI disregarded power and heat, we would have the same performance and a smaller chip to boot.

I think this is a key point. If ATI wants to, they can open up the power envelope, add another 50W and likely win the majority of the benchmark battles.
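Back-of-the-envelope only; the 188 W figure is the HD 5870's quoted board power, and the scaling exponents are textbook approximations rather than anything measured:

```python
# Back-of-the-envelope: how much clock headroom does +50 W buy?
# Dynamic power scales roughly as P ~ f * V^2; if voltage must rise
# about linearly with frequency, that becomes P ~ f^3. If the stock
# voltage already supports the higher clock, P ~ f (best case).
board_power_w = 188.0  # HD 5870 quoted board power
extra_w = 50.0

ratio = (board_power_w + extra_w) / board_power_w  # ~1.27x power budget

clock_gain_same_volts = ratio - 1.0                  # P ~ f
clock_gain_volts_scale = ratio ** (1.0 / 3.0) - 1.0  # P ~ f^3

print(f"+{clock_gain_same_volts:.0%} clocks if voltage can stay put")
print(f"+{clock_gain_volts_scale:.0%} clocks if voltage scales with frequency")
```

That's somewhere between roughly +8% and +27% on clocks depending on how much voltage has to come along for the ride, so "win the majority of the benchmark battles" seems plausible at the optimistic end and marginal at the pessimistic end.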
 
But none of those charts show how long those dips last. If one card dips more than the other, but only for one frame, it's not important, and it actually skews those charts. Graphs that actually show the changing frame rate are more useful. A stable, consistent frame rate is more important than one that changes dramatically, whether to peaks or dips.

I disagree, having played several games in the past where dips into the sub-25fps range have caused death or, even worse, an unpleasant gaming experience.
 
Yes because Nvidia cards can't be OC'd?

I dare anyone to try. They are already right at the edge of the power and thermal envelope. They already carry the biggest coolers and the most powerful fans. They already generate the most heat and noise. There's no room for overclocking, even on the cherry-picked review parts.

Did we see any review tell us about overclocking potential? I suspect it was banned by the Nvidia reviewer guidelines.
 
Yes, but how many people use their cards out in the open or with the case covers off? I'd venture to say far fewer than those who buy these things. Put the setup in a case, close it up, place it in the two places most people stick it (on the floor or on the desk), and measure the heat and noise output then. I don't know about you, but I don't play games with my freakin' ears 6" from my cards.

Put it in the case, and the fan has to run at a higher RPM to maintain the same temperature. Unless you have an aftermarket case or one of the very few cases designed with sound deadening, your case actually makes very little difference as far as noise reduction goes.

Your complaints about the videos are humorous at best. It is a perfectly reasonable test setup. Maybe the issue is that Nvidia had to use a fan that can go upwards of ~65 dB in order to cool their design.
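For what it's worth, SPL figures don't add linearly, which is why a loud GPU fan swamps everything else no matter what case it sits in; a quick sketch of the standard log-sum, with a made-up 40 dB baseline for the rest of the system:

```python
# Sketch: decibels add logarithmically, not linearly.
# Sources at L1 and L2 dB combine to 10*log10(10^(L1/10) + 10^(L2/10)).
import math

def combine_db(levels):
    return 10.0 * math.log10(sum(10.0 ** (lv / 10.0) for lv in levels))

case_fans_db = 40.0  # made-up baseline for the rest of the system
gpu_fan_db = 65.0    # the ~65 dB figure quoted above

total = combine_db([case_fans_db, gpu_fan_db])
print(f"combined: {total:.1f} dB")  # ~65.0 dB -- the GPU fan dominates outright
# Rule of thumb: every +10 dB is roughly a perceived doubling of loudness.
```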
 
Yes because Nvidia cards can't be OC'd?

no, because Nvidia cards are ALREADY BASICALLY AT THE POWER WALL! Nvidia has already admitted the quoted TDP isn't actually the TDP! In other words, the 480 is certainly using more than 250W. OTOH, ATI's TDP for the 4870 has so far proven to be conservative, which means they certainly have significant power headroom if they wanted to trade off power for performance in a standard vendor part. Nvidia has already made the power/performance trade-off and doesn't really have any margin left to increase power for performance.
 
Aaron, if that's so, why aren't any OEMs selling such OC cards? (are there any?) You'd think it would be a good market differentiator.

It seems like it shouldn't be that hard to prove this theory by simply OCing some cards and doing the benchmarks. I'm skeptical that the results will scale as linearly as you might think, since it appears that Fermi is winning some benchmarks by having a higher minimum FPS. If that's the case, the bottleneck could be architectural, and simply overclocking by 10-20% might not result in 10-20% improvements on the averages. We've seen some funny figures of Fermi winning at low resolutions, which suggests that amping up the ALU frequencies and TUs might not help as much.
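If anyone does run that experiment, the interesting number is the scaling efficiency rather than raw FPS; a sketch of what I mean, with placeholder benchmark results rather than real data:

```python
# Sketch: check whether performance actually scales with the overclock.
# If the bottleneck is architectural (setup, bandwidth, CPU), the
# efficiency ratio will come out well under 1.0.
clock_gain = 0.15  # e.g. a 15% core/ALU overclock

# (stock_fps, oc_fps) per benchmark -- placeholder numbers
results = {
    "Game A": (60.0, 68.0),
    "Game B": (45.0, 47.5),
    "Game C": (80.0, 91.0),
}

for name, (stock, oc) in results.items():
    perf_gain = oc / stock - 1.0
    efficiency = perf_gain / clock_gain
    print(f"{name}: +{perf_gain:.1%} perf, scaling efficiency {efficiency:.2f}")
```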
 
aaronspink said:
I think this is a key point. If ATI wants to, they can open up the power envelope, add another 50W and likely win the majority of the benchmark battles.
I doubt it. I saw a review with an HD 5870 overclocked to ~1GHz, and it was still getting beaten by the GTX 480 at default clocks.
 