GF100 evaluation thread

Whatddya think?

  • Yay! for both

    Votes: 13 6.5%
  • 480 roxxx, 470 is ok-ok

    Votes: 10 5.0%
  • Meh for both

    Votes: 98 49.2%
  • 480's ok, 470 suxx

    Votes: 20 10.1%
  • WTF for both

    Votes: 58 29.1%

  • Total voters: 199
  • Poll closed.
Not this again...Why do some feel the need to try and justify that guy's wrongs, by twisting everything into a "reality" that doesn't exist ?

The GTX 480 was never meant to compete with the HD 5970. Never did NVIDIA counter a dual GPU with a single GPU, and this time is no different. Even more so because of the delays.

You do realise that just because nVidia says it's not competing against the 5970, that doesn't necessarily make it true? It's all about price and performance. If the price is more in the 5970 region, it will compete against that card.
 
Not this again...Why do some feel the need to try and justify that guy's wrongs, by twisting everything into a "reality" that doesn't exist ?

The GTX 480 was never meant to compete with the HD 5970. Never did NVIDIA counter a dual GPU with a single GPU, and this time is no different. Even more so because of the delays.
You mean R600 was meant to compete with the 8800GTS, and not the 8800GTX? ;) Same here (no way nVidia's goal was to face their fastest card against the opponent's lower-speed card); the only difference is AMD learned from their mistake (just remember nVidia's "to make such a huge chip is f**** hard") and left the high-end to dual-GPUs, while the 5800 series is the best the mid-level has to offer, like the 8800GTS in its day. nVidia still hasn't learned from the "sweet-spot" strategy, but it's only a question of time till they do, IMO.
 
What if those were titles you actually wanted to play?

Well, all I can tell you is that I had both 285 and 5870 at one point (for over 2 months), and I did play Batman AA. Sure the effects were nice, but weren't enough to change my decision on which card to sell.
 
You do realise that just because nVidia says it's not competing against the 5970, that doesn't necessarily make it true? It's all about price and performance. If the price is more in the 5970 region, it will compete against that card.

Either that or Nvidia is admitting they've got nothing to compete with 5970 as the fastest card. I don't think they'd publicly admit they are ceding the fastest product to AMD, and that this was intentional.
 
Was the FX really louder? Its YS-Tech cooler was rated at 46.5 dBA. The GTX480 is louder than the HD2900XT, but I can't find any comparison with the FX...

Tom's Hardware managed to measure it once. Inside the case, it was louder than a jet engine, IIRC.
 
Interesting, what makes them think they can get a higher price for it after launch? Probably figure nobody will be paying attention by then.
I have no idea - maybe it's reviews. The launch price is quoted in reviews, and many sites also publish price/performance analyses. Low prices will stay in the reviews forever - a great form of advertisement. It's similar to the local situation with the GF8800GT: reviews quoted great price/performance and customers bought it despite a significantly higher street price (in fact, worse price/performance than the GTS640 offered).
 
If it was meant to compete with 5870, then why is it 58% larger?

LOL, so should we start looking at GPUs solely on their die-sizes ? I really never understood the die-size fixation of some (well actually I do for some...:))...Everyone should start going to stores and asking the clerk for a graphics card with a chip of a certain size :LOL:

Seriously now, one thing has nothing to do with the other. First, because they don't design a chip based on their assumption of what the die size of the competition will be. And second, because it's preposterous to expect that, just because of the delays, the recently launched cards must beat a card with two GPUs in it. They wanted to release this in 2009, probably right after Windows 7, i.e. before the HD 5970 was out. Or are you somehow saying that NVIDIA wanted to be late (which makes even less sense) ?

And where are these die-size measurements ? I really must've missed that tidbit from the reviews, because I haven't seen any confirmation of the die-size yet.
 
You mean R600 was meant to compete with the 8800GTS, and not the 8800GTX? ;) Same here (no way nVidia's goal was to face their fastest card against the opponent's lower-speed card); the only difference is AMD learned from their mistake (just remember nVidia's "to make such a huge chip is f**** hard") and left the high-end to dual-GPUs, while the 5800 series is the best the mid-level has to offer, like the 8800GTS in its day. nVidia still hasn't learned from the "sweet-spot" strategy, but it's only a question of time till they do, IMO.

Different situation, isn't it ? Did NVIDIA have a dual GPU card based on G80 at that time ? No, so everyone was expecting the HD 2900 XT to compete with the 8800 GTX, which it didn't, so it had to be priced lower.
If NVIDIA had had an 8800 GX2 before the HD 2900 XT was released, the situation would be similar. Just because of the delays, no one should be expecting the late single GPU to be faster than the dual GPU. ATI didn't want R600 to be that late either, just like NVIDIA didn't for GF100.
 
About the GTX480's diminishing advantage once it reaches 2560x1600 resolution: what are you guys suggesting ? Is it an architectural problem, or a driver problem ?

Neither. It has superior geometry performance, but closer/same/lower pixel performance (be it ALU or texture) - so as you move up in resolution the focus shifts and you become pixel bound.
 
You do realise that just because nVidia says it's not competing against the 5970, that doesn't necessarily make it true? It's all about price and performance. If the price is more in the 5970 region, it will compete against that card.

I couldn't care less about what NVIDIA says (though I would like to see where they've said that on record, just so I see it for myself).
It's about common sense. NVIDIA didn't want to be late. They wanted to release these cards at least around the time Windows 7 launched, i.e. before the HD 5970. Just because it's late, you are somehow thinking that the chip needs to magically gain performance it was never designed to have.

But of course, you can compare it with whatever you want, even if the GTX 480 is not "more in the 5970 region".

GTX 480 MSRP is $499
HD 5870 is at best $390-400
HD 5970 costs much more than $600

How exactly is the GTX 480 more in the HD 5970 region ?
 
LOL, so should we start looking at GPUs solely on their die-sizes ?
We can look at it from the die-size perspective => 60% larger than RV870
...or from price-segment perspective => 50% more expensive than HD5870
...or from price/performance perspective => same league as HD5970
only the performance perspective situates it at the 5870's level

The overclocker's perspective is also interesting - at the end of fall a GTX260-216 cost nearly a third of what the GTX470 does now, and a good OC makes it only 10-15% slower than the GTX470 in many games...
 
LOL, so should we start looking at GPUs solely on their die-sizes ? I really never understood the die-size fixation of some (well actually I do for some...:))...Everyone should start going to stores and asking the clerk for a graphics card with a chip of a certain size :LOL:

You are free to shop for your card whichever way you like. In case you haven't noticed so far, the people who design GPUs care. The people who make B3D the best online technical forum care about die sizes, perf/mm and perf/W. A lot. I am not expecting you to care about these things. But please excuse me if I (and possibly others) don't share your indifference to die sizes, perf/mm and perf/W.

Seriously now, one thing has nothing to do with the other. First, because they don't design a chip based on their assumption of what the die size of the competition will be.
I hope Intel makes you lead architect for LRB2. :LOL:

And second, because it's preposterous to expect that, just because of the delays, the recently launched cards must beat a card with two GPUs in it.
If it heats up like a duck, and it is about as big as a duck, costs almost as much as a duck, then......

And where are these die-size measurements ? I really must've missed that tidbit from the reviews, because I haven't seen any confirmation of the die-size yet.
Haven't seen it either.
 
Different situation, isn't it ? Did NVIDIA have a dual GPU card based on G80 at that time ? No, so everyone was expecting the HD 2900 XT to compete with the 8800 GTX, which it didn't, so it had to be priced lower.
If NVIDIA had had an 8800 GX2 before the HD 2900 XT was released, the situation would be similar. Just because of the delays, no one should be expecting the late single GPU to be faster than the dual GPU. ATI didn't want R600 to be that late either, just like NVIDIA didn't for GF100.
The situation is strikingly similar (the R600 and Fermi launches), and yes - nVidia knew AMD's fastest new-gen card would be dual, while they insisted on making a massive single-die high-end card to compete with it... and it didn't go so well.

If nVidia had planned to launch with an X2, their chip strategy would have been different. We can expect an X2 after the refresh, but Fermi2 will be single-die again, unless they learn from their mistake. And by the way, Fermi2 will face a dual NI card, and will probably get beaten again. As you can see, AMD's strategy pays off not only in mid-level card margins; eventually they took over the top card as well, and it seems for a long time.
 
Tom's Hardware managed to measure it once. Inside the case, it was louder than a jet engine, IIRC.
That surely would depend on the distance you measure. At the same distance - a definite no.
I don't know if the GTX480 is louder than the FX or not. At the very least I think it has much better fan management (ht4u says the GTX480 is the loudest graphics card they ever measured, but they didn't measure the dustbuster). Though obviously, with a cooler like the GTX480 has, the FX could probably be cooled very quietly - the power draw doesn't really compare...
About the GTX480's diminishing advantage once it reaches 2560x1600 resolution: what are you guys suggesting ? Is it an architectural problem, or a driver problem ?
I doubt it's any problem, or that it's going to change; it just means it's more efficient at lower resolutions relative to other cards. Well, I didn't do the math, but as long as fps times pixel count (i.e. pixels rendered per second) doesn't get lower at higher resolutions (and I strongly doubt it does) there is no problem anywhere - just the other cards, which have bottlenecks in geometry handling or whatever, catching up as their bottleneck shifts to pixel load.
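
If it helps, here's that check spelled out as a quick sketch. The frame rates below are made-up placeholders purely to illustrate the reasoning, not numbers from any review.

Code:
# Illustrative only: placeholder frame rates to show the "fps x pixel count"
# check described above. If pixels rendered per second don't drop at the
# higher resolution, the card hasn't hit a new bottleneck there - the
# relative lead shrinks only because other cards catch up.
resolutions = {"1920x1200": 1920 * 1200, "2560x1600": 2560 * 1600}
fps = {"1920x1200": 60.0, "2560x1600": 35.0}  # placeholder values

for res, pixels in resolutions.items():
    mpix_per_s = fps[res] * pixels / 1e6
    print(f"{res}: {mpix_per_s:.0f} Mpix/s")

With those placeholder numbers the pixel throughput doesn't fall at 2560x1600, which is all the argument above needs.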
 
We can look at it from the die-size perspective => 60% larger than RV870

And that is relevant for potential buyers in...? Right...absolutely nothing!

no-X said:
...or from price-segment perspective => 50% more expensive than HD5870

What do you mean by this "price-segment" ? 50% more expensive in die-size ? Again, how does that affect any potential buyer's decision ?
If it's not die-size, then do explain what you mean.

no-X said:
...or from price/performance perspective => same league as HD5970
only the performance perspective situates it at the 5870's level

How do you figure ? How does 15-20% more performance for 25% more money compared to the HD 5870 put it in the same league as the HD 5970 ? :oops: The GTX 480 is priced considerably lower than the HD 5970...
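
Just to spell the arithmetic out (a minimal sketch: the prices are the MSRPs quoted earlier in the thread, and the 1.18x lead is simply the mid-point of the 15-20% range above, not a measured review number):

Code:
# Price/performance sketch using the thread's own numbers.
hd5870 = {"price_usd": 400, "relative_perf": 1.00}
gtx480 = {"price_usd": 499, "relative_perf": 1.18}  # assumed ~18% lead over the 5870

extra_perf = gtx480["relative_perf"] / hd5870["relative_perf"] - 1
extra_price = gtx480["price_usd"] / hd5870["price_usd"] - 1
print(f"GTX 480 vs HD 5870: {extra_perf:.0%} more performance for {extra_price:.0%} more money")

for name, card in (("HD 5870", hd5870), ("GTX 480", gtx480)):
    print(f"{name}: {1000 * card['relative_perf'] / card['price_usd']:.2f} perf per $1000")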
 
Performance, DX11? Although I agree that for most people it might not be worthwhile, as it's not worth it to upgrade from Cypress to Fermi.

Luckily for Nvidia, not everyone upgraded to a Cypress-based card in the last six months. I didn't even realize we were considering Cypress owners here - I agree it would be silly for them to upgrade (or downgrade, according to some people here :))

Out of curiosity - what features do you use (not have) that made your 285 irreplaceable thus far?

Let's take the ever-popular PhysX. By switching I eliminate that option, and the decision is made for all titles - the upcoming Mafia II included, for example. What each person should do is discount their future perceived value of all PhysX effects in all the games they will play during the lifetime of the card. For some that will be zero. For others it will be higher, maybe significantly so. I for one own Batman, and though I haven't played it yet, I have seen what the additional PhysX effects bring, and whenever I get around to it I plan to have those effects enabled.

Same goes for CUDA. Thus far its only use to me has been running benchmarks, demos, playing with the SDK, contributing to OpenCL/CUDA comparisons, messing around with Just Cause 2 settings etc. Nothing practically useful but still something I have access to and have used in the past and therefore is of value to me as someone interested in the technology. I don't have to just read about it on the internet.

Now the argument on the other side of the coin is that all of that is useless and I should give it up for the quieter fan. Not very convincing....
 
The situation is strikingly similar (the R600 and Fermi launches), and yes - nVidia knew AMD's fastest new-gen card would be dual, while they insisted on making a massive single-die high-end card to compete with it... and it didn't go so well.

So you are stating (like some before) that NVIDIA wanted to be late ? Ok...that makes no sense, but if that's what you want to believe, then go for it...

Harison said:
If nVidia had planned to launch with an X2, their chip strategy would have been different. We can expect an X2 after the refresh, but Fermi2 will be single-die again, unless they learn from their mistake. And by the way, Fermi2 will face a dual NI card, and will probably get beaten again. As you can see, AMD's strategy pays off not only in mid-level card margins; eventually they took over the top card as well, and it seems for a long time.

Single die ? Maybe you wanted to say big-die, but I would love to see those inside sources. Must be the same ones that state what NI will be, its performance against undefined competition, and also that everything will go smoothly with it :LOL:
 