NVIDIA Fermi: Architecture discussion

Discussion in 'Architecture and Products' started by Rys, Sep 30, 2009.

  1. Tchock

    Regular

    Joined:
    Mar 4, 2008
    Messages:
    849
    Likes Received:
    2
    Location:
    PVG
    That seems more like GF108, to be honest. Definitely not GTX460 (unless the 336-core rumors are totally invalid, which still makes little sense given where nVidia is positioning this).


    (And it seems right for ATI to focus SI on its lowest end, since GF108 seems to be packing quite a big punch, a first so far for mid/lower-end mobile chips; Redwood and Pine feel rather lethargic for the power they're using.)
     
  2. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    7 SMs in the lowest-end part would be quite a jump from its immediate predecessor. GF106 is more likely than GF108, imo.
     
  3. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Official scores will show that the GTX460 = GTX465 in general; heck, the 460 will score better in some benches (texturing, I'm looking at you!).

    I.O.W. you'd be crazy to buy a GTX465.
     
  4. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,487
    Likes Received:
    648
    If that turned out to be true with those specs (224), it would be an epic win for nVIDIA :p
     
  5. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Now, if only the card were a lot cheaper than an HD5850, it would be "epic", but alas.
     
  6. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,487
    Likes Received:
    648
    I meant an epic win because of a 224-shader chip beating a 352-shader one :wink:
    It would mean nVIDIA had transformed a terribly inefficient architecture into an efficient one. Or at least maintained GT200's level of efficiency, which, thinking about it a bit more, was not brilliantly efficient either :???:
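
    Rough numbers on what that would take (a back-of-the-envelope sketch; the 224-core configuration and its 1.35 GHz hot clock are hypothetical, the GTX465 figures are its published specs):

        // Peak single-precision throughput, assuming FMA/MAD = 2 FLOPs per clock per core.
        #include <cstdio>

        int main() {
            double gtx465    = 352 * 2 * 1.215; // ~855 GFLOPS (352 cores @ 1215 MHz hot clock)
            double gf104_224 = 224 * 2 * 1.350; // ~605 GFLOPS (hypothetical 224-core part)
            std::printf("GTX465: %.0f GFLOPS, 224-core part: %.0f GFLOPS, ratio %.2f\n",
                        gtx465, gf104_224, gf104_224 / gtx465);
            // Matching the GTX465 on ~70% of the theoretical FLOPs would indeed
            // need a clear jump in per-FLOP efficiency.
            return 0;
        }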
     
  7. TKK

    TKK
    Newcomer

    Joined:
    Jan 12, 2010
    Messages:
    148
    Likes Received:
    0
    Is it guaranteed that GPU-Z can really detect the correct number of cores per SM? Maybe it can only really detect the number of active SMs and the 32 cores-per-SM was a guess by the developer of GPU-Z when that version of the tool was developed.
    But whatever; what matters in the end is performance, price, noise, and power consumption. I'm really looking forward to all the reviews.
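
    For what it's worth, the CUDA runtime reports the SM count and compute capability directly, but not the cores per SM; that part has to come from a lookup table, which is exactly the kind of baked-in guess I'm suspecting. A minimal sketch (the cores-per-SM table is just the commonly published mapping, nothing GPU-Z specific):

        // What the driver actually reports: SM count and compute capability.
        // Cores-per-SM is not reported and must come from a hard-coded table.
        #include <cstdio>
        #include <cuda_runtime.h>

        static int coresPerSM(int major, int minor) {
            if (major == 2 && minor == 0) return 32; // GF100-class SM
            if (major == 2 && minor == 1) return 48; // GF104/GF106/GF108-class SM
            return -1;                               // unknown to this table
        }

        int main() {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);
            std::printf("%s: %d SMs, CC %d.%d, %d cores/SM (table lookup)\n",
                        prop.name, prop.multiProcessorCount, prop.major, prop.minor,
                        coresPerSM(prop.major, prop.minor));
            return 0;
        }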
     
  8. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,865
    Likes Received:
    192
    Location:
    Seattle, WA
    Not efficient in what sense? The GF100, like the GT200 before it, is vastly more efficient than ATI's hardware in terms of making use of its compute power. It is less efficient in terms of performance per watt, granted, but when we're talking about a change in the number of shaders versus performance, we're talking about the former type of efficiency, where nVidia currently rules supreme.
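
    To put rough numbers on that distinction (a sketch; the peaks are the published theoretical rates, and "broadly comparable gaming performance" is the assumption doing the work):

        // Theoretical peak single-precision throughput (FMA/MAD = 2 FLOPs per clock per ALU).
        #include <cstdio>

        int main() {
            double gtx480 = 480  * 2 * 1.401; // ~1345 GFLOPS (480 cores @ 1401 MHz hot clock)
            double hd5870 = 1600 * 2 * 0.850; // ~2720 GFLOPS (1600 ALUs @ 850 MHz)
            std::printf("GTX480: %.0f GFLOPS, HD5870: %.0f GFLOPS, ratio %.2f\n",
                        gtx480, hd5870, hd5870 / gtx480);
            // With broadly comparable gaming performance, the GTX480 delivers roughly
            // twice as much work per theoretical FLOP (utilization efficiency), while
            // drawing more power (worse performance per watt).
            return 0;
        }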
     
  9. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,487
    Likes Received:
    648
    Yes I was talking about energy efficiency.
     
  10. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,293
    Likes Received:
    239
    Depends on the point of view. Theoretical GFLOPS compared to gaming performance puts nVidia's GPUs in a better light, but theoretical Z-rate and theoretical geometry throughput compared to gaming performance lead to the opposite conclusion.
     
  11. TKK

    TKK
    Newcomer

    Joined:
    Jan 12, 2010
    Messages:
    148
    Likes Received:
    0
    If adding more compute resources costs fewer transistors and less power than adding logic to make more efficient use of the existing resources, then I think the former is the more efficient approach.

    I mean, what is it worth that Nvidia's shader cores are 4 times more efficient if they consume 6 times as much power and die area?
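
    Taking those hypothetical ratios at face value (a sketch using only the 4x/6x figures above, nothing measured):

        // Hypothetical trade-off: 4x per-core efficiency at 6x area/power per core.
        #include <cstdio>

        int main() {
            double efficiency_gain = 4.0; // delivered work per core, relative
            double cost_factor     = 6.0; // die area / power per core, relative
            // Delivered work per mm^2 (or per watt) relative to the simpler cores:
            std::printf("Throughput per unit area/power: %.2fx\n", efficiency_gain / cost_factor);
            // ~0.67x: if those ratios held, the "dumber but wider" approach wins.
            return 0;
        }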
     
  12. Tchock

    Regular

    Joined:
    Mar 4, 2008
    Messages:
    849
    Likes Received:
    2
    Location:
    PVG
    GPU-Z doesn't detect anything AFAIK. It's a database of preconfigured, NDA-covered data.

    What's ridiculous is that Coolaler has a card already and refuses to take any pics of the control panel to show stats while flaunting 3D06/3DV results and card pics. :???:
     
  13. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Since world+dog already has GTX460 cards in-house, I'd expect way more than these small leaks right now. One good thing for GF104 is that its development took quite long, so you'll see custom cards from day one; that's good for pricing.
     
  14. Novum

    Regular

    Joined:
    Jun 28, 2006
    Messages:
    335
    Likes Received:
    8
    Location:
    Germany
    So does anyone know for certain how GF104 supplies its three vec16 ALUs with only two warp schedulers?

    Or are there really three schedulers and the fancy diagrams are wrong?
     
  15. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,318
    Likes Received:
    21
    Location:
    msk.ru/spb.ru
    Both schedulers of GF104 are able to issue two instructions per clock to the SIMDs/LSUs/SFUs.
    Look at this thread for some discussion on this topic. Read Anand's GTX460 article as well.
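
    In other words, GF104 leans on instruction-level parallelism within a warp: independent operations that the second dispatch port can co-issue. A toy CUDA kernel sketch (illustrative only; whether the hardware actually dual-issues depends on the instruction mix the compiler emits):

        // Two independent FMA chains per thread: candidates for GF104's superscalar
        // dual-issue, since neither chain depends on the other's result.
        __global__ void ilp2(float* out, const float* in, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) {
                float a = in[i];
                float b = in[i] * 0.5f;
                #pragma unroll
                for (int k = 0; k < 16; ++k) {
                    a = a * 1.0001f + 0.5f;  // chain A
                    b = b * 0.9999f + 0.25f; // chain B, independent of A
                }
                out[i] = a + b;
            }
        }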
     
  16. iwod

    Newcomer

    Joined:
    Jun 3, 2004
    Messages:
    179
    Likes Received:
    1
    New rumors suggest that NV is working on a GTX 490 with dual GF104 inside, which would outperform the GF100-based GTX480. That is an interesting move, as it is basically doing the same as ATI: sticking two chips built for the performance/mainstream segment into one card for the enthusiast market.

    Of course GF100 would continue to live on as the best single-chip performer once its yield and power are tuned, as well as serving the workstation and GPGPU segments.

    The current GF104 is A01 silicon, which means it is very early production. I suppose that with further revisions we could get one more SM unlocked, and higher frequencies, to combat the half-generation ATI chips, Southern Islands.
     
  17. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,318
    Likes Received:
    21
    Location:
    msk.ru/spb.ru
    It's not exactly the same, because they'll still have GF100 (probably in its B revision by then).
     
  18. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Is this thing still coming?
     
  19. NathansFortune

    Regular

    Joined:
    Mar 3, 2009
    Messages:
    559
    Likes Received:
    0
    It wouldn't make sense for Nvidia to stick with A2/A3 and get a neutered chip when a silicon respin would fix a lot of their problems and increase yields...
     
  20. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    15,926
    Likes Received:
    4,879
    Unless they're ditching GF100 entirely, as ATI did with R520/R600, and instead there's a new, modified chip coming out in the fall. Not saying there is, but it's entirely within the realm of possibility.

    Regards,
    SB
     