GF100 evaluation thread

Whatddya think?

  • Yay! for both: 13 votes (6.5%)
  • 480 roxxx, 470 is ok-ok: 10 votes (5.0%)
  • Meh for both: 98 votes (49.2%)
  • 480's ok, 470 suxx: 20 votes (10.1%)
  • WTF for both: 58 votes (29.1%)

  • Total voters: 199
  • Poll closed.
IMO it is very justified.

The debate is happening because some claim that the ridiculous power and heat levels are acceptable.
To some they probably are acceptable. I'd rather not deal with it myself, but I wouldn't begrudge anybody who felt differently. Even if you don't think the heat/power levels are acceptable, that doesn't mean others will necessarily agree with you.
 
:LOL:

The deaf can't hear the fan, therefore it is not noisy for anyone. Classic.

:D
I was making a funny

And while we're at it, the graphics performance wouldn't be an issue for the blind, and the price wouldn't be an issue for the rich! I could go on all day!!



On a more serious note, I haven't seen much coverage of the new AA modes. I'd really like an in-depth look at how they've upgraded alpha-to-coverage to take into account the extra coverage samples when using CSAA.
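For anyone who wants to poke at this themselves, here's a rough sketch of the kind of setup I mean. It assumes an OpenGL app with GLEW and the GL_NV_framebuffer_multisample_coverage extension; the function and the 16/4 sample split are just my example, not anything from a review. The open question is what the hardware does with the shader's alpha value across those extra coverage samples once alpha-to-coverage is on.

```cpp
// Sketch: request a CSAA render target (16 coverage / 4 color samples) and
// enable alpha-to-coverage. Requires GL_NV_framebuffer_multisample_coverage.
#include <GL/glew.h>

void setupCsaaWithAlphaToCoverage(GLsizei width, GLsizei height)
{
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &colorRb);
    glGenRenderbuffers(1, &depthRb);

    // 16x CSAA: 16 coverage samples backed by 4 color/depth samples.
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorageMultisampleCoverageNV(GL_RENDERBUFFER, 16, 4,
                                               GL_RGBA8, width, height);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorageMultisampleCoverageNV(GL_RENDERBUFFER, 16, 4,
                                               GL_DEPTH_COMPONENT24, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    // Alpha-to-coverage: the fragment's alpha output is turned into a coverage
    // mask. Whether the hardware dithers it across all 16 coverage samples or
    // only the 4 real color samples is exactly what I'd like reviews to test.
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
}
```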
 
Of course, it's hard to predict the future, but it does seem to be the case that film-quality 3D rendering makes beautiful pictures by rasterizing huge numbers of tiny polygons. If we believe that the future of real-time graphics is to approach film rendering, then GF100's emphasis on geometry over pixel shaders seems justified.
I've been rather disappointed with the way normal maps and such create lumpy plastic worlds, so any change there is for the better IMO. But it's not like ATI isn't pushing geometry. Both ATI and NV have messed with tessellation all the way back to GeForce 3 and Radeon 8500. R600 on up (and the 360) have hardware tessellation that was ignored until now. And the industry's move to unified shaders immediately allowed for more flexibility in geometric complexity.
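Just to be concrete about what the DX11/GL4 style of hardware tessellation buys you on the application side, here's a tiny host-side sketch (OpenGL 4.0, with the shader program and VAO names made up by me): the app only submits coarse control points and picks tessellation factors in the control shader, and the chip expands them into the tiny triangles, which is exactly the kind of geometry load GF100's front end is built around.

```cpp
// Sketch: drawing with hardware tessellation in OpenGL 4.0+. Assumes "program"
// already links vertex, tessellation control/evaluation and fragment shaders,
// and "patchVao" holds the coarse control-point mesh (illustrative names).
#include <GL/glew.h>

void drawTessellatedPatches(GLuint program, GLuint patchVao, GLsizei vertexCount)
{
    glUseProgram(program);
    glBindVertexArray(patchVao);

    // Each patch is a triangle of control points; the tessellation control
    // shader decides how finely to subdivide it (e.g. based on screen size).
    glPatchParameteri(GL_PATCH_VERTICES, 3);

    // The GPU expands each coarse patch into many small triangles on-chip,
    // instead of the CPU having to feed it a dense mesh.
    glDrawArrays(GL_PATCHES, 0, vertexCount);
}
```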
 
To some they probably are acceptable. I'd rather not deal with it myself, but I wouldn't begrudge anybody who felt differently. Even if you don't think the heat/power levels are acceptable, that doesn't mean others will necessarily agree with you.

It goes like this:

1: A says: I only care about performance, I will tolerate the power and heat. GTX480 is great.
2: B says: you can get that level of performance from an overclocked HD5870 at those heat/power levels, or the HD5970 already outperforms it.
3: A spins to infinity and beyond about dual GPU vs single GPU, min frame rates, PhysX, etc.

Bottom line is the chip is late and did not deliver.
 
It goes like this:

1: A says: I only care about performance, I will tolerate the power and heat. GTX480 is great.
2: B says: you can get that level of performance from an overclocked HD5870 at those heat/power levels, or the HD5970 already outperforms it.
3: A spins to infinity and beyond about dual GPU vs single GPU, min frame rates, PhysX, etc.

Bottom line is the chip is late and did not deliver.
Considering an HD 5970 is much more expensive, and overclocking is never guaranteed, 2) isn't a given.
 
True, but trini also shouldn't discount that power and heat come into play.

When did I do that? I'm not pumping up Fermi at all or dismissing its shortcomings. The loud fan and high power consumption were acknowledged back on page 1. Since then there's been a concerted effort to paint the thing as useless as a result.

The debate is happening because some claim that the ridiculous power and heat levels are acceptable.

May I ask why you think you or anyone else should dictate to other people what they should consider acceptable? Isn't that a wholly subjective issue? :eek: Tell that to these guys. :LOL:
 
Considering an HD 5970 is much more expensive, and overclocking is never guaranteed, 2) isn't a given.

Because we are comparing MSRP versus MSRP?

It seems to me people think the GTX480 is immune to market conditions; the current inflated HD5970 prices (€619) are ~13% over the current 480 pre-release prices (€549).
 
Because we are comparing MSRP versus MSRP?
I looked at Newegg's prices. Things may be different later, but for now the GTX is around $500-$530, while the 5970 is around $700-$750.

It seems to me people think the GTX480 is immune to market conditions; the current inflated HD5970 prices (€619) are ~13% over the current 480 pre-release prices (€549).
I wasn't sure of a good European site to check.
 
The HD 5970 is more comparable to the GTX 480 than it is to any SLI or Crossfire setup.

The HD 5970 and the GTX 480 are actually similar on almost all counts. They both use similar power, they both have a single PCB, similar size, shape and noise, and the only difference is whether someone has any negative feelings towards multi-GPU setups for whatever reason.

However, between the HD 5970 and other multi-GPU solutions there are a few caveats which can't be overlooked when thinking about the overall market.

1. Does the person have multiple PCI-E slots and the right power supply?
2. Does the person run on a platform amenable to SLI/Crossfire? (Note: the 1156-pin P55 platform does not have enough PCI-E lanes, Nvidia boards do not do Crossfire and AMD boards do not do SLI, which leaves only the 1366-pin X58 platform open to both.)
3. Is the case big enough, with enough airflow, for 400W worth of graphics cards? 300W is bad, but 400W is pushing most enthusiast cases quite hard, especially when that 125W CPU may also be pushing north of 150W overclocked.

If you can run an HD 5970 you can run a GTX 480, and vice versa. Arguably the only difference is personal preference in relation to how multi-GPU setups perform.
 
And people who think that two high-end chips won't be faster than a single high-end chip from a competitor are insane. Unless sometime in the future multi-GPU scaling falls off the charts, this fact should almost never change. I stand by what I said. Using a multi-GPU setup as a performance baseline to measure single-GPU cards against is nuts and stupid.
FYI, many people think your opinion is nuts and stupid. :smile:
 
One thing to keep in mind about the 480 running so hot and its fan so hard is that all that heat is being dumped back into the case. Thus your case temps are increasing, which will likely have other fans in your system kick it up a notch to compensate. Then it won't just be a noisy video card you have to contend with.

I didn't see an external vent for the GTX480, so please correct me if I'm wrong and I'll hang my head in shame :(
 
One thing to keep in mind about the 480 running so hot and its fan so hard is that all that heat is being dumped back into the case. Thus your case temps are increasing, which will likely have other fans in your system kick it up a notch to compensate. Then it won't just be a noisy video card you have to contend with.

I didn't see an external vent for the GTX480, so please correct me if I'm wrong and I'll hang my head in shame :(

http://www.hardocp.com/images/articles/126962492671BZgJ5ZxI_1_8_l.jpg
 
One thing to keep in mind about the 480 running so hot and its fan so hard is that all that heat is being dumped back into the case.
That's, um, not true. Some is, yes, but definitely not all. It does have a vent out the back for a reason :p
 
Their other angles with Tesla and Quadro sort of demand a big chip strategy though. AFR performance isn't relevant there. From a purely academic standpoint I admire what they're trying to do with product differentiation and growth of the business. It's execution that's killing them.

Why does the Tesla approach require a big chip strategy? Their larger Tesla offerings are already multi-chip, so it shouldn't make much difference if the individual chips were smaller. Then the workload can be split between the different chips, set up as a deep pipeline with each chip taking care of one part of the processing, or whatever fits the problem space.

And when using a single chip, a smaller architecture shouldn't make too much difference either, apart from running slower, since the same problem should be able to run on 256 cores as well as on 512.
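To illustrate the "just split it across more, smaller chips" argument, here's a back-of-the-envelope sketch using the plain CUDA runtime plus cuBLAS (nothing vendor-roadmap-specific; the function and buffer names are mine): one big SAXPY gets partitioned over however many devices are visible, and nothing in the code cares whether those devices are two 512-core chips or four 256-core ones.

```cpp
// Sketch: splitting one large SAXPY (y = alpha*x + y) across all visible CUDA
// devices. Error checking omitted; a real version would use streams and async
// copies so the devices run concurrently instead of one after another.
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <algorithm>
#include <vector>

void multiGpuSaxpy(const float* hostX, float* hostY, int n, float alpha)
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount == 0) return;

    int chunk = (n + deviceCount - 1) / deviceCount;   // slice per chip
    std::vector<float*> dX(deviceCount), dY(deviceCount);
    std::vector<cublasHandle_t> handles(deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        int offset = dev * chunk;
        int count  = std::min(chunk, n - offset);
        if (count <= 0) break;

        cudaSetDevice(dev);                            // each chip gets its own slice
        cublasCreate(&handles[dev]);
        cudaMalloc(&dX[dev], count * sizeof(float));
        cudaMalloc(&dY[dev], count * sizeof(float));
        cudaMemcpy(dX[dev], hostX + offset, count * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dY[dev], hostY + offset, count * sizeof(float), cudaMemcpyHostToDevice);

        cublasSaxpy(handles[dev], count, &alpha, dX[dev], 1, dY[dev], 1);

        cudaMemcpy(hostY + offset, dY[dev], count * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dX[dev]);
        cudaFree(dY[dev]);
        cublasDestroy(handles[dev]);
    }
}
```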

I think that GF100 is designed for much more than gaming, and it shows in its size and its lower gaming efficiency per unit of die area. That's pretty clear IMO. ATI's chips don't seem nearly as extreme in the GPGPU direction.

Agreed. Looking at the performance of the 480 GTX in SmallPT GPU, starting at post #283, it is really interesting. So for those of us with an interest in both games and GPGPU this really isn't a shabby card.

So this silent version is a bit tempting, if very pricey and probably extremely hard to get..
 
Looks like if you can stomach the heat and power, SLI is quite formidable:

http://www.maingearforums.com/entry.php?24-So-You-Want-To-Buy-A-GeForce-Part-2

Is this legit? Fantastic scaling.

Edit: just checked Nvidia on Facebook, and a lot of people raging about power and heat there as well LOL.

ATI still has a ways to go with their Crossfire. Frankly, at this point it's not excusable. SLI tends to scale well with most games, whereas Crossfire still proves to be hit or miss.
 
Why does the Tesla approach require a big chip strategy? Their larger Tesla offerings are already multi-chip, so it shouldn't make much difference if the individual chips were smaller.

Yes, it's true that they already have multi-board Tesla configurations. However, in the end the sales pitch boils down to what a single chip can do. Memory capacity is also a major selling point and the wider bus is critical there. I don't think they need to be making 500 mm² monstrosities, but the "sweet-spot" strategy isn't really applicable in those markets.

Nice video :D

Yeah I know. And I always thought people went water for the quiet :LOL:
 