GF100 evaluation thread

Discussion in 'Architecture and Products' started by rpg.314, Mar 27, 2010.

Whaddya think?

Poll closed Apr 6, 2010.
  1. Yay! for both

    13 vote(s)
    6.5%
  2. 480 roxxx, 470 is ok-ok

    10 vote(s)
    5.0%
  3. Meh for both

    98 vote(s)
    49.2%
  4. 480's ok, 470 suxx

    20 vote(s)
    10.1%
  5. WTF for both

    58 vote(s)
    29.1%
  1. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    114
    Location:
    New Zealand
    I don't believe the rated typical design power fits the current crop of tested and released GTX cards. I believe they reflect the wafers put through this year: even with tweaks, the early wafers still exhibit most of the TSMC 40nm problems, which have subsequently been resolved. Once fresh batches of wafers come through, I'm pretty sure the power draw seen for those cards will be close to their respective rated TDPs.
     
  2. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    On the flip side, ATI does have the multi-monitor gaming support that nVidia promised but didn't deliver, and it isn't frying cards with hastily released drivers either.

    It should be interesting to see how hard nVidia's driver teams are taxed by SLI across multiple screens in Eyefinity-style gaming.

    Looking at the whole situation makes you realise how hard ATI has been trying, and how far behind nVidia still is.
     
  3. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    Fixed-function pipelines are on borrowed time. Once the GPU reaches ~80% ALUs (or whatever the number turns out to be), fixed-function stuff will be on the cusp of disappearing altogether, I reckon.

    So, the fun bit is: when does that happen?

    I guess Fermi's in the region of 40% - it's hard to separate the ALUs from the other stuff in the GPCs, though, so it could be 30%. Cypress got substantially bigger than expected, so I expect it'll be lower - RV770 is around 29% (though some ALU control/scheduling hardware is prolly missing from that assessment, and I've got no way of refining it).

    Generally ALUs aren't limited by bandwidth, so while bandwidth curbs things like ROP/TMU area increases - with the caveat that bandwidth-efficiency measures cost extra in area - ALUs are relatively free to breed like rabbits.

    GF100 apparently ditches some "fixed function" stuff by generalising buffers within L1 and L2. A lot of what's described as "fixed function" in the PolyMorph Engine looks ripe for running as a kernel in my view. I'm mystified why they aren't kernels in GF100.

    I'm not convinced the "geometry focus" is going to win NVidia any significant medium- or long-term advantage. This isn't like PCF and fundamentally there's no reason to expect ATI will continue to scrape the barrel. And if Larrabee works some magic within the next couple of years, well, fixed function stuff is going to look super-dated.

    Jawed
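    A rough back-of-the-envelope in the spirit of the area fractions above; the per-block mm^2 figures in this minimal Python sketch are illustrative assumptions chosen to match the quoted percentages, not die measurements:

        # ALU share of total die area; all mm^2 inputs below are assumptions.
        dies = {
            # name: (total die area in mm^2, assumed ALU area in mm^2)
            "RV770": (256, 74),   # ~29% per the estimate above
            "GF100": (529, 212),  # ~40% guess; could be nearer 30%
        }
        for name, (total, alu) in dies.items():
            print(f"{name}: ALUs ~{alu / total:.0%} of die")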
     
  4. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Wouldn't it then require a context switch?
     
  5. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
    Tex Samplers will remain. Will likely always remain. Their operation and requirements for the majority of workloads are just too different from the computation side.
     
  6. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    505
    Likes Received:
    189
    I 100% agree about the impending death of fixed-function units; in fact, this assumption makes me more confident GF100 will end up looking rather prescient, as it is a more flexible architecture than Evergreen, specifically due to its memory subsystem. For AMD to fix its geometry performance, I think they're going to have to ditch their fixed-function tessellator and move to a more general-purpose solution - probably including caches similar to GF100's, to avoid storing and loading huge amounts of tessellated geometry to and from DRAM. My point is that when AMD makes that change, their perf/W and perf/mm^2 in DX10 games will probably drop.

    All of this is just to justify why I'm not judging GF100's architecture based on DX9/10/early DX11 benchmarks. I think the future of gaming workloads looks rather different from DX10 rendering, and I think GF100 has made some expensive tradeoffs to be ahead of the curve. Time will tell.

    (Although as I said earlier, I don't buy a card on guesses of how advanced its architecture might seem in a few years... :wink:)
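    To put a rough number on the DRAM-traffic point above - a minimal Python sketch, where the triangle count, vertex size and frame rate are all illustrative assumptions:

        # DRAM traffic if post-tessellation geometry is streamed out to memory
        # and read back, instead of staying in on-chip caches. All inputs are
        # illustrative assumptions, not measurements of any real title.
        tris_per_frame = 8_000_000   # assumed post-tessellation triangle count
        verts_per_tri = 3            # worst case: no vertex reuse
        bytes_per_vertex = 32        # assumed: position + normal + UVs, packed
        fps = 60

        bytes_per_frame = tris_per_frame * verts_per_tri * bytes_per_vertex
        roundtrip_gbps = 2 * bytes_per_frame * fps / 1e9  # written, then re-read

        print(f"~{roundtrip_gbps:.0f} GB/s of round-trip geometry traffic")

    Against the ~150 GB/s a top card of the day has to share with everything else, it's easy to see why keeping that traffic on chip would be worth some silicon.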
     
  7. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, I definitely think that the GF100 is ahead in terms of tech compared to ATI's DX11 parts, but they clearly made some mistakes in implementation that have cost them.
     
  8. XMAN26

    Banned

    Joined:
    Feb 17, 2003
    Messages:
    702
    Likes Received:
    1
    It's not flawed at all. If he has some engineering reason why it is flawed, he should point to where and why; otherwise he's just assuming it's flawed because of the high heat and power.
     
  9. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    If GF100 were 50% smaller and drew 100W less, do you think it would still be equal in Crysis? Or how far behind would it be in BF2?

    As a pure gaming GPU, it's not even close. I'm not sure what you think it has that puts it ahead in terms of tech. What does it have that ATI couldn't do a lot better with 50% more area and 100W more power to play with?
     
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    The main mistake appears to have been with the memory controller. They may also have done better with a smaller part built for power consumption, relying on a dual-GPU card for maximum performance.
     
  11. FrameBuffer

    Banned

    Joined:
    Aug 7, 2005
    Messages:
    499
    Likes Received:
    3
    LOL, Silustroll makes me LOL IRL... really.

    "That's for another thread clearly..." Yes, that's for another thread, except you are the one who brought the initial point up, with the post: "I guess the 'other' campaign that tried to make GF100 worse than it turned out to be, has nothing to do with the bleak expectations for GF104?"

    "Also, noisier than anything ever made? I'm guessing you forgot the GeForce FX's cooler?" Well, the claim to the noisiest cooler ever may be up for some debate (until someone posts decibel tests for both the FX and the GTX), but either way the crown clearly belongs to NVidia.

    "Availability was already known to be pushed to April, so what's your point?" You mean that HARD launch you proclaimed the GTX 400 series was going to be? Apparently someone still hasn't figured out the difference between soft, hard and paper launches yet.

    "Also source on it 'being planned to compete with dual-GPU part'. Is it you guessing or was this ever confirmed?" IIRC, NV themselves said the Fermi-based GeForce would be the fastest video card EVER.

    "As far as I remember, most rumors hinted at a dual GPU Fermi based card to be released as well, and that's the one that would take on the HD 5970." Yeah, I guess that would be the dual-Fermi product that was supposed to launch "shortly after the GF100 initial launch" back in November, in time for the holiday season... unless they meant THIS upcoming holiday season.

    "Given the power constraints, that certainly won't happen soon." Oh yeah, those supposed power constraints - the ones you and your fellow greenies claimed a certain other less-than-accurate website had totally wrong (despite its numbers being almost on the dot according to most results so far). And don't you dare plug two displays into one card unless you have a certified wind-tunnel case, unless you like 80C+ idle temps. Just imagine what 3D Surround will bring...

    ...and so on, and so on. Like shooting Fermis in a barrel.
     
  12. FrameBuffer

    Banned

    Joined:
    Aug 7, 2005
    Messages:
    499
    Likes Received:
    3
    Yeah, I guess you were right... and NV never meant to say stuff like this:

    Facebook (google cache): "NVIDIA The wait is almost over! The world's most anticipated--and fastest--PC graphics gaming technology ever created will be unveiled at PAX 2010"

    "GF100 the world’s fastest and most innovative consumer graphics ever built."

    http://www.donanimhaber.com/Nvidia_Fermi_mimarisi_ile_en_hizli_grafik_kartlarini_sunacagiz-17028.htm
    " Fermi based GPU (GF100) will be fastest GPU solution in the industry"

    IIRC, NV said that the GF100 was to be twice as fast as the GTX 285, which would have put it within range of the 5970. It was around October (most notably right after the 5970 launch) that NV's tune of "fastest graphics card ever" suddenly changed to "fastest GPU ever".
     
  13. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    I'm not sure where Nvidia claimed geometry is more important, but shaders are very important in film rendering (including REYES), so regardless of whether or not they are per-pixel shaders, they will continue to take up a significant amount of render time.
     
  14. Vincent

    Newcomer

    Joined:
    May 28, 2007
    Messages:
    235
    Likes Received:
    0
    Location:
    London
    The fastest GPU does not translate into an affordable GPU series in the following months.
     
  15. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    That's true, but IIRC REYES relies on sub-pixel tessellation/micropolygons, which causes inefficiencies in current GPUs, so total ALU flops alone is not a good measure of how well you'll perform on REYES. (e.g. even if 90% of the time is spent in shaders, you could end up with inefficient pipeline usage where big chunks of the chip are idle waiting for work)
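    One way to see the micropolygon problem - a minimal sketch of quad overshading, assuming the usual 2x2-pixel shading granularity (the quad counts here are a simplified model, not measurements):

        # GPUs shade in 2x2 quads: a triangle touching N pixels still pays for
        # all 4 lanes of every quad it touches. Micropolygons are the worst case.
        def quad_shading_efficiency(pixels_covered, quads_touched):
            """Fraction of shader lanes doing useful per-pixel work."""
            return pixels_covered / (quads_touched * 4)

        # A sub-pixel micropolygon still lights up a whole quad:
        print(quad_shading_efficiency(pixels_covered=1, quads_touched=1))         # 0.25
        # A large triangle amortises the quad overhead almost completely:
        print(quad_shading_efficiency(pixels_covered=10000, quads_touched=2600))  # ~0.96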
     
  16. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    Even if it isn't as fast as they would like, bandwidth certainly doesn't seem to be an issue. I'd say the main mistake was their implementation of half-rate DP.
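    For scale, a quick sketch of what half-rate DP means, using GF100's commonly published shader count and hot clock (shipping GeForce parts were reportedly capped further below this):

        # Peak FMA throughput for a 480-core GF100 at a 1401 MHz hot clock.
        cores = 480
        hot_clock_ghz = 1.401
        flops_per_fma = 2  # one fused multiply-add = 2 flops

        sp_gflops = cores * hot_clock_ghz * flops_per_fma  # ~1345 GFLOPS single
        dp_gflops = sp_gflops / 2                          # ~672 GFLOPS at half rate

        print(f"SP ~{sp_gflops:.0f} GFLOPS, half-rate DP ~{dp_gflops:.0f} GFLOPS")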
     
  17. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    If I were NV's CEO, I'd take the guy who decided 3-monitor gaming would be better if it required two cards and shoot him in the head...
    along with the PR department.
     
  18. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I don't see how this has anything to do with my post, which was about the tech, not about raw performance.
     
  19. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    That's possible, but it does depend upon how much die space that particular feature took up.
     
  20. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    That particular feature was obviously tacked on later, after ATI released their own product. If they actually intend to support it, they're obviously going to release cards down the road that support at least three displays on a single card.
     