GF100 evaluation thread

Discussion in 'Architecture and Products' started by rpg.314, Mar 27, 2010.

Whatddya think?

Poll closed Apr 6, 2010.
  1. Yay! for both

    13 vote(s)
    6.5%
  2. 480 roxxx, 470 is ok-ok

    10 vote(s)
    5.0%
  3. Meh for both

    98 vote(s)
    49.2%
  4. 480's ok, 470 suxx

    20 vote(s)
    10.1%
  5. WTF for both

    58 vote(s)
    29.1%
  1. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    If it was meant to compete with 5870, then why is it 58% larger?
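(For what it's worth, the 58% figure is just the ratio of the die areas commonly cited at the time. A quick sketch of the arithmetic; both areas are the vendors' round numbers, so treat them as approximate:)

```python
# Commonly cited die areas in mm^2 -- vendor round numbers from the
# period, not independently measured values.
GF100_MM2 = 529    # widely cited figure for GF100
CYPRESS_MM2 = 334  # AMD's figure for Cypress (RV870)

ratio = GF100_MM2 / CYPRESS_MM2
print(f"GF100 is {ratio:.2f}x Cypress, i.e. {(ratio - 1) * 100:.0f}% larger")
# -> GF100 is 1.58x Cypress, i.e. 58% larger
```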
     
  2. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    It's highly unlikely they knew the die size of the 5870 at the time the die size of the GTX 4x0 was decided upon.
     
  3. air_ii

    Newcomer

    Joined:
    May 2, 2007
    Messages:
    134
    Likes Received:
    0
You do realise that just because nVidia says it's not competing against the 5970, that doesn't necessarily make it true? It's all about price and performance. If the price is more in the 5970 region, it will compete against that card.
     
  4. Harison

    Newcomer

    Joined:
    Mar 29, 2010
    Messages:
    195
    Likes Received:
    0
You mean R600 was meant to compete with the 8800 GTS, and not the 8800 GTX? :wink: Same here (no way was nVidia's goal to pit their fastest card against the opponent's slower card). The only difference is that AMD learned from their mistake (just remember nVidia's "to make such huge chip is f**** hard") and left the high end to dual-GPUs, while the 5800 series is the best the mid-level has to offer, like the 8800 GTS in its day. nVidia still hasn't learned from the "sweet-spot" strategy, but it's only a question of time till they do, IMO.
     
  5. air_ii

    Newcomer

    Joined:
    May 2, 2007
    Messages:
    134
    Likes Received:
    0
Well, all I can tell you is that I had both a 285 and a 5870 at one point (for over 2 months), and I did play Batman AA. Sure, the effects were nice, but they weren't enough to change my decision on which card to sell.
     
  6. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
Either that, or Nvidia is admitting they've got nothing to compete with the 5970 as the fastest card. I don't think they'd publicly admit they're ceding the fastest product to AMD, and that this was intentional.
     
  7. I.S.T.

    Veteran

    Joined:
    Feb 21, 2004
    Messages:
    3,174
    Likes Received:
    389
    Tom's Hardware managed to measure it once. It was louder in the case than a jet engine, IIRC.
     
  8. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
I have no idea - maybe it's reviews. The launch price is quoted in reviews, and many sites also publish price/performance analyses. Low prices will stay in the reviews forever - a great form of advertisement. It's similar to the local situation with the GF8800GT: reviews quoted great price/performance and customers bought it despite a significantly higher street price (in fact worse price/performance than the GTS640 offered).
     
  9. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
LOL, so should we start looking at GPUs solely by their die sizes? I really never understood the die-size fixation of some (well, actually I do for some... :))... Everyone should start going to stores and asking the clerk for a graphics card with a chip of a certain size :lol:

Seriously now, one thing has nothing to do with the other. First, because they don't design a chip based on an assumption of what the competition's die size will be. And second, because it's preposterous to expect that, just because of the delays, the recently launched cards must beat a card with two GPUs in it. They wanted to release this in 2009, probably right after Windows 7, i.e. before the HD 5970 was out. Or are you somehow saying that NVIDIA wanted to be late (which makes even less sense)?

And where are these die-size measurements? I really must've missed that tidbit from the reviews, because I haven't seen any confirmation of the die size yet.
     
  10. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
Different situation, isn't it? Did NVIDIA have a dual-GPU card based on G80 at that time? No, so everyone was expecting the HD 2900 XT to compete with the 8800 GTX, which it didn't, so it had to be priced lower.
If NVIDIA had had an 8800 GX2 before the HD 2900 XT was released, the situation would be similar. Just because of the delays, no one should expect the late single-GPU card to be faster than the dual-GPU one. ATI didn't want R600 to be that late, just like NVIDIA didn't for GF100.
     
  11. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    746
    Likes Received:
    41
    Location:
    Copenhagen
Neither. It has superior geometry performance, but comparable or lower pixel performance (be it ALU or texture) - so as you move up in resolution the focus shifts and you become pixel-bound.
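(One way to see why the bottleneck shifts: per-frame geometry cost is roughly resolution-independent, while pixel cost scales with pixel count, so frame time is set by whichever is larger. A toy model; every constant below is invented for illustration, not measured from any real card:)

```python
# Toy bottleneck model: frame time is set by the slower of two stages.
# Geometry cost is (roughly) resolution-independent; pixel cost scales
# with the number of pixels. All constants are invented.

def frame_time_ms(pixels, geom_ms, ms_per_mpixel):
    pixel_ms = (pixels / 1e6) * ms_per_mpixel
    return max(geom_ms, pixel_ms)

# Hypothetical card A: strong geometry. Hypothetical card B: weaker
# geometry, same pixel throughput.
for w, h in [(1280, 1024), (1920, 1200), (2560, 1600)]:
    px = w * h
    a = frame_time_ms(px, geom_ms=2.0, ms_per_mpixel=1.5)
    b = frame_time_ms(px, geom_ms=4.0, ms_per_mpixel=1.5)
    print(f"{w}x{h}: A={a:.1f} ms, B={b:.1f} ms")
# At low resolution A's geometry edge shows; at high resolution both
# become pixel-bound and converge.
```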
     
  12. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
I couldn't care less about what NVIDIA says (though I would like to see where they said that on record, just so I can see it for myself).
It's about common sense. NVIDIA didn't want to be late. They wanted to release these cards at least around the time Windows 7 launched, i.e. before the HD 5970. Just because it's late, you somehow think the chip needs to magically gain performance it was never designed to have.

    But of course, you can compare it with whatever you want, even if the GTX 480 is not "more in the 5970 region".

    GTX 480 MSRP is $499
    HD 5870 is at best $390-400
    HD 5970 costs much more than $600

How exactly is the GTX 480 more in the HD 5970 region?
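(Whichever side of this one takes, the underlying comparison is just performance divided by price. A sketch using the prices quoted above plus the rough performance ratios being thrown around in this thread; the HD 5970 entries in particular are assumptions, not measurements:)

```python
# Perf-per-dollar from the thread's own round numbers (HD 5870 = 1.0).
# Prices are those quoted in the post above; performance ratios are
# the rough deltas cited in this thread. The HD 5970 price and
# CrossFire scaling are assumptions, not benchmark results.
cards = {
    "HD 5870": {"price": 400, "perf": 1.00},
    "GTX 480": {"price": 499, "perf": 1.175},  # midpoint of "15-20% faster"
    "HD 5970": {"price": 620, "perf": 1.45},   # assumed price and scaling
}
for name, c in cards.items():
    print(f"{name}: {1000 * c['perf'] / c['price']:.2f} perf per $1000")
```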
     
  13. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
We can look at it from the die-size perspective => ~60% larger than RV870
...or from the price-segment perspective => 50% more expensive than the HD 5870
...or from the price/performance perspective => same league as the HD 5970
Only the performance perspective situates it at the 5870's level.

The overclocker's perspective is also interesting - at the end of fall a GTX 260-216 cost nearly a third of what a GTX 470 does now, and a good OC makes it only 10-15% slower than a GTX 470 in many games...
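(The overclocking aside works out as the same kind of division. Here it is with the post's own round numbers; the 1/3 price and the 10-15% deficit are the poster's figures, not benchmarks, and the $349 GTX 470 MSRP is taken from launch coverage:)

```python
# The overclocking comparison above, using the post's own round numbers.
GTX470_PRICE = 349.0              # GTX 470 launch MSRP in USD
GTX260_PRICE = GTX470_PRICE / 3   # "nearly a third of the price"

gtx470_perf = 1.00
gtx260_oc_perf = 0.875            # midpoint of "10-15% slower" when OC'd

ratio = (gtx260_oc_perf / GTX260_PRICE) / (gtx470_perf / GTX470_PRICE)
print(f"An OC'd GTX 260-216 gives ~{ratio:.1f}x the perf/$ of a GTX 470")
# -> ~2.6x under these assumed numbers
```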
     
  14. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
You are free to shop for your card whichever way you like. In case you haven't noticed so far, the people who design GPUs care. The people who make B3D the best online technical forum care about die sizes, perf/mm and perf/W. A lot. I am not expecting you to care about these things. But please excuse me if I (and possibly others) don't share your indifference to die sizes, perf/mm and perf/W.

    I hope Intel makes you lead architect for LRB2. :lol:

If it heats up like a duck, is about as big as a duck, and costs almost as much as a duck, then...

    Haven't seen it either.
     
  15. Harison

    Newcomer

    Joined:
    Mar 29, 2010
    Messages:
    195
    Likes Received:
    0
The situation is strikingly similar (the R600 and Fermi launches), and yes - nVidia knew AMD's fastest new-gen card would be a dual, while they insisted on making a massive single-die high-end card to compete with it... and it didn't go so well.

If nVidia had planned to launch with an X2, their chip strategy would have been different. We can expect an X2 after the refresh, but Fermi2 will be single-die again, unless they've learned from their mistake. And by the way, Fermi2 will face a dual NI card, and will probably get beaten again. As you can see, AMD's strategy pays off not only in mid-level card margins; eventually they took the top card as well, and it seems for a long time.
     
  16. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
That surely depends on the distance at which you measure. At the same distance - a definite no.
I don't know whether the GTX480 is louder than the FX or not. At the very least I think it has much better fan management (ht4u says the GTX480 is the loudest graphics card they have ever measured, but they didn't measure the dustbuster). Though obviously, with a cooler like the GTX480's, the FX could probably be cooled very quietly - the power draw doesn't really compare...
I doubt it's any problem or that it's going to change; it just means it's more efficient at lower resolutions relative to other cards. I didn't do the math, but as long as fps per pixel doesn't get lower at higher resolutions (and I strongly doubt it does) there is no problem anywhere - it's just that the other cards have bottlenecks in geometry handling or whatever (meaning, if the bottleneck shifts to pixel load, they catch up).
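(That sanity check can be made concrete: multiply fps by pixel count at each resolution; if the product doesn't fall as resolution rises, per-pixel throughput isn't degrading. A sketch with invented fps values, not review data:)

```python
# Sanity check described above: track fps * pixel count across
# resolutions. If the product holds or rises with resolution, the
# card's per-pixel throughput isn't degrading -- it was simply ahead
# at low resolutions where other cards were geometry-bound.
# The fps values below are invented for illustration, not review data.
results = [
    ((1280, 1024), 120.0),
    ((1920, 1200), 70.0),
    ((2560, 1600), 40.0),
]
for (w, h), fps in results:
    mpix_per_s = fps * w * h / 1e6
    print(f"{w}x{h}: {fps:5.1f} fps -> {mpix_per_s:.0f} Mpix/s shaded")
```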
     
  17. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Well, if NVidia/Fermi is "Intel" in this story, it doesn't necessarily bode well for AMD. :)
     
  18. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
And that is relevant for potential buyers in...? Right... absolutely nothing!

What do you mean by this "price segment"? 50% more expensive in die size? Again, where does that affect any potential buyer's decision?
If it's not die size, then do explain what you mean.

How do you figure? How does 15-20% more performance for 25% more money compared to the HD 5870 put it in the same league as the HD 5970? :shock: The GTX 480 is priced considerably lower than the HD 5970...
     
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,062
    Likes Received:
    3,119
    Location:
    New York
Luckily for Nvidia, not everyone upgraded to a Cypress-based card in the last six months. I didn't even realize we were considering Cypress owners here - I agree it would be silly for them to upgrade (or downgrade, according to some people here :))

Let's take the ever-popular PhysX. By switching, I eliminate that option, and the decision is made for all titles - the upcoming Mafia II included, for example. What each person should do is discount the future perceived value of all PhysX effects in all the games they will play during the lifetime of the card. For some that will be zero. For others it will be higher, maybe significantly so. I for one own Batman, and though I haven't played it yet, I have seen what the additional PhysX effects bring, and whenever I get around to it I plan to have those effects enabled.

Same goes for CUDA. Thus far its only use to me has been running benchmarks and demos, playing with the SDK, contributing to OpenCL/CUDA comparisons, messing around with Just Cause 2 settings, etc. Nothing practically useful, but still something I have access to and have used in the past, and therefore of value to me as someone interested in the technology. I don't have to just read about it on the internet.

Now the argument on the other side of the coin is that all of that is useless and I should give it up for the quieter fan. Not very convincing...
     
  20. Silus

    Banned

    Joined:
    Nov 17, 2009
    Messages:
    375
    Likes Received:
    0
    Location:
    Portugal
So you are stating (like some before you) that NVIDIA wanted to be late? OK... that makes no sense, but if that's what you want to believe, then go for it...

Single-die? Maybe you meant big-die, but I would love to see those inside sources. They must be the same ones that state what NI will be and how it will perform against as-yet-undefined competition, and also that everything will go smoothly with it :lol:
     