NVIDIA GT200 Rumours & Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 10, 2008.

Thread Status:
Not open for further replies.
  1. Twinkie

    Regular

    Joined:
    Oct 22, 2006
    Messages:
    386
    Likes Received:
    5
    Hopefully this thing can eat Crysis for lunch. :twisted:
     
  2. DSC

    DSC
    Banned

    Joined:
    Jul 12, 2003
    Messages:
    689
    Likes Received:
    3
    http://www.tgdaily.com/content/view/37611/140/

     
  3. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
It looks like the points were assigned by FAH, rather than by raw FLOP numbers.

    edit: Ignore, this post is now redundant. :oops:
     
  4. ZerazaX

    Regular

    Joined:
    Oct 29, 2007
    Messages:
    280
    Likes Received:
    0
So where are you pulling these numbers from? And how are you doing the math on the GFlops? If they're going 3 flops/shader, then 240 shaders * 1300 MHz clock * 3 flops/shader would be ~936 GFlops, right in the ~900-950 range CJ suggested.
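That arithmetic can be sanity-checked in a couple of lines. A minimal sketch, assuming the rumored figures (240 SPs, 1300 MHz shader clock; neither is confirmed, and flops/SP depends on whether the extra MUL counts):

```python
# Back-of-the-envelope GFLOPS check for the rumored GT200 figures.
# NOTE: 240 SPs and a 1300 MHz shader clock are rumored values, not
# confirmed specs.

def gflops(shader_count, shader_clock_mhz, flops_per_sp):
    """Peak shader throughput in GFLOPS."""
    return shader_count * shader_clock_mhz * flops_per_sp / 1000

for flops_per_sp in (2, 3):  # 2 = MADD only, 3 = MADD + MUL
    print(f"{flops_per_sp} flops/SP -> "
          f"{gflops(240, 1300, flops_per_sp):.0f} GFLOPS")
# 2 flops/SP -> 624 GFLOPS
# 3 flops/SP -> 936 GFLOPS
```

Only the 3 flops/SP case lands in the rumored ~900-950 GFlops window at these clocks.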
     
  5. juan789123498

    Newcomer

    Joined:
    Sep 8, 2007
    Messages:
    9
    Likes Received:
    0
I just came from editor's day and broke my NDA; that's where I got those clocks...... :lol:

    Now seriously, this is just what I believe is most reasonable, based on the information we have (240 SP, 930 GFlops).
    There aren't a lot of possibilities for clock speeds; in fact it now only depends on the architecture -> flops/SP.
    I expect clocks at least equal to G92 products, which rules out 3 flops/shader.
    What makes me believe I am right is that Nvidia usually clocks the core at a rounded number (a multiple of 25), and 2 flops/SP combined with the G92 core/shader clock ratio gives exactly 775. A coincidence?
    With 3 flops/SP it gives 522, not a nice number.


    I think these rumours were based on a misinterpretation of information.
    Read the final slides please, if you don't have them:
    http://forum.donanimhaber.com/m_23372076/tm.htm
    I think the +50% improvement is not per clock per shader; it's total vs a 9800 GX2.

    And why would Nvidia make more efficient shaders but clock them a lot lower? As I showed, the same GFlops are attainable with the 'old' G80 shaders at G92 clock speeds.

    Either way, it will rock...
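The clock reasoning above can be worked backwards numerically. A sketch, assuming the rumored 930 GFlops / 240 SP figures and a G92-style shader:core clock ratio of 2.5 (e.g. 1625/650 MHz) — note the 3 flops/SP case comes out near 517 here rather than the 522 quoted, since the exact ratio assumed shifts it slightly:

```python
# Back-calculate implied clocks from the rumored 930 GFLOPS / 240 SP
# figures. All inputs here are rumored or assumed, not confirmed specs.
TARGET_GFLOPS = 930
SHADER_COUNT = 240
G92_SHADER_CORE_RATIO = 2.5  # e.g. 1625 MHz shader / 650 MHz core

def implied_clocks(flops_per_sp):
    """Return (shader MHz, core MHz) implied by the GFLOPS target."""
    shader_mhz = TARGET_GFLOPS * 1000 / (SHADER_COUNT * flops_per_sp)
    return shader_mhz, shader_mhz / G92_SHADER_CORE_RATIO

for f in (2, 3):
    shader, core = implied_clocks(f)
    print(f"{f} flops/SP: shader ~{shader:.0f} MHz, core ~{core:.0f} MHz")
# 2 flops/SP: shader ~1938 MHz, core ~775 MHz
# 3 flops/SP: shader ~1292 MHz, core ~517 MHz
```

The 2 flops/SP case lands on a round 775 MHz core, which is the coincidence the post points at.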
     
  6. ZerazaX

    Regular

    Joined:
    Oct 29, 2007
    Messages:
    280
    Likes Received:
    0
    Well it's hard to say if they would indeed clock it higher than G92, precisely because G92 is a different architecture. The reason G92 was clocked higher than G80 was that the die shrink allowed it to reach those clocks while maintaining a manageable TDP and all that.

    GT200 is a bigger beast and is relatively different architecture-wise, and as G80 showed, initial clocks were pretty low (relative to what G92 did). In fact, the A2 revision of G80 (the initial GTSes and all GTXes) didn't clock much higher than ~620 core, whereas A3 revision G80s (the later GTS 640/320 and the Ultras) could reach 690 core with decent airflow. With the extra capacitors, heat, power usage, etc., Nvidia might go conservative with clocks on these early revisions.
     
  7. ZerazaX

    Regular

    Joined:
    Oct 29, 2007
    Messages:
    280
    Likes Received:
    0
    Oh, and you are absolutely right. The rumored +50% per-clock improvement is indeed a misinterpretation:

    The direct quote is: "2nd gen. unified shader architecture delivers 50% more performance over 1st generation through 240 SPs."

    All it says is that there is 50% more performance over the 1st generation shader architecture. That doesn't immediately mean 50% more performance shader-for-shader at the same clocks. In fact, it could just as well mean 50% more performance over the previous generation, period. We don't know for a fact whether it is indeed 50% more per shader. I'm sure they made optimizations to the shader architecture and it may run more efficiently, but that doesn't mean those 240 shaders are in fact 50% faster per shader than the previous gen.
     
  8. leoneazzurro

    Regular

    Joined:
    Nov 3, 2005
    Messages:
    518
    Likes Received:
    25
    Location:
    Rome, Italy
And it's not certain whether the +50% improvement is over the GX2; it could also be a best-case figure only...
     
  9. ChrisRay

ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
Jeez, it hasn't even been a day and the Folding at Home shots are already all over the web.
     
  10. apoppin

    Regular

    Joined:
    Feb 12, 2006
    Messages:
    255
    Likes Received:
    0
    Location:
    Hi Desert SoCal
    Well, your forum is even more popular now than you ever were before =P
    --what is wrong with that? You are getting the respect and attention you deserve, imo! You might also get a few new members, but that is not my fault!

    And one more thing I will say: it really appears that GT280 is going to be a monster! I still say 'new architecture', but it isn't long now and we will all know for sure. Nvidia clearly intends these leaks; that says confidence to me, and I am dying to get my hands on CUDA and a pair of the new GTXes [when I can afford them]; damn, and I need a new display. My vacation will include all 3 days of NVISION in San Jose this year, and I am dying to see it; I am also looking for a small 25x16 display for myself.

    It appears that double G80 GTX performance might line up with the +50%-improvement-over-the-GX2 claims; usually that is a best case.
     
  11. ChrisRay

ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    What are you talking about?
     
  12. Kowan

    Newcomer

    Joined:
    Sep 6, 2007
    Messages:
    136
    Likes Received:
    0
    Location:
    California
    My guess... your sig lists Nvidia's SLI forum.
     
  13. ChrisRay

ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    Yes but what does SLIZONE have to do with editors day or folding at home? heh
     
    #1433 ChrisRay, May 24, 2008
    Last edited by a moderator: May 24, 2008
  14. CJ

    CJ
    Regular

    Joined:
    Apr 28, 2004
    Messages:
    816
    Likes Received:
    40
    Location:
    MSI Europe HQ
    Smart boy. ;)
     
  15. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
    Such low clocks and a TDP of ~250W seems a bit strange. :???:

    Or do the efficiency improvements increase consumption that much?
    I would rather have bet that this near-1-TFLOP figure refers to MADD FLOPs, which would match the performance rumors.
     
  16. Domell

    Newcomer

    Joined:
    Oct 17, 2004
    Messages:
    247
    Likes Received:
    0
Yeah, so how could it achieve such a great result in 3DMark Vantage (7k in Extreme mode!) and run Crysis playably at 1920x1200 with AA/AF enabled? With only a 1.3 GHz shader clock? There MUST be some major improvement in shader efficiency.
     
  17. CJ

    CJ
    Regular

    Joined:
    Apr 28, 2004
    Messages:
    816
    Likes Received:
    40
    Location:
    MSI Europe HQ
    If my sources are correct, it looks like Arun was spot on about that 50% improved efficiency.

    Looks like NV found what they were missing. ;)
     
  18. Vincent

    Newcomer

    Joined:
    May 28, 2007
    Messages:
    235
    Likes Received:
    0
    Location:
    London


Could you tell us the potential performance increase between a chip with the missing MUL disabled and one with the extra MUL enabled?


G70 to NV40 again? :shock:
     
  19. NVNDA

    Newcomer

    Joined:
    Apr 14, 2004
    Messages:
    68
    Likes Received:
    3
    uh oh :D
     
  20. Berek

    Regular

    Joined:
    Oct 17, 2004
    Messages:
    274
    Likes Received:
    4
    Location:
    Austin, TX
    At this stage I'm more wondering what the price will be on the GTX 280 and 260, as we are getting a good grasp on the performance potential. We also already have estimates on the ATI 48xx prices. My guesstimate is somewhere in the low-500 USD range for the 280, although it could be higher given the size of the chip and its complexity.

    I can't imagine it going over 600 though if it's going to be competitive or realistically priced. The 9800GX2 was a bit overpriced for what it gave. 600 bucks would break that price ceiling, imo (unless this card is so fast it's literally twice as fast as a 300 buck card :p).
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.