Nvidia GT300 core: Speculation

Discussion in 'Architecture and Products' started by Shtal, Jul 20, 2008.

Thread Status:
Not open for further replies.
  1. dnavas

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    375
    Likes Received:
    7
    Hmm, are you arguing for moving some of the "SF" instructions into the SPs? I would think log/rcp would be really useful there (hence my previous link). SF could be relegated to sin/cos approximations, or those blue dots might be something else entirely.

    ...caches?

    Just trying to keep up.
     
  2. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    No, I am not arguing for that.
     
  3. w0mbat

    Newcomer

    Joined:
    Nov 18, 2006
    Messages:
    234
    Likes Received:
    5
    That would be a huge negative surprise. I see GF100 ahead of, or at minimum on par with, the HD 5870 X2.
     
  4. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    Seems unlikely to me. Such a chip would need to be well over twice as fast as GT200. On paper, sure, but in real world apps, I don't think so.
     
  5. w0mbat

    Newcomer

    Joined:
    Nov 18, 2006
    Messages:
    234
    Likes Received:
    5
    I think 2.5*GT200 was the main goal.
     
  6. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    KonKorT has his doubts too ;)
    http://www.hardware-infos.com/news.php?news=3222

     
  7. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Well, that would be pretty impressive if they pull that off.
     
    #2627 Razor1, Sep 29, 2009
    Last edited by a moderator: Sep 29, 2009
  8. flynn

    Regular

    Joined:
    Jan 8, 2009
    Messages:
    400
    Likes Received:
    0
    How big is the market for that? I know about DirectCompute, but I'm wondering how much Microsoft is willing to invest in it to get software vendors to adopt it. I'd rather they had adopted OpenCL instead of pushing their own proprietary, incompatible version, though.
     
  9. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    I've heard that number as well, but honestly no one outside the design team really knows if that was the internal goal. Also, 2.5x the specs doesn't yield a 2.5x performance gain, as RV770 -> RV870 has shown us.
     
  10. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Not the same thing as doubling units; GF100 is different.

    Is it just me, or does anyone else see the forums running very slowly when posting?
     
  11. Arty

    Arty KEPLER
    Veteran

    Joined:
    Jun 16, 2005
    Messages:
    1,906
    Likes Received:
    55
    Well, anything short of a Cypress X2 would be disappointing, according to the same folks who are 'disappointed' by Cypress. :D

    Personally anything over 40% (average) faster than Cypress makes me feel funny inside.
     
  12. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Also, what does it do for me ... I don't need office to run any faster and I don't do video encoding ... so unless it can make porn look a whole lot better the only thing it can do for me is run games.
     
  13. jaredpace

    Newcomer

    Joined:
    Sep 28, 2009
    Messages:
    157
    Likes Received:
    0
    Anyone have a guess at the GF100 die area? If transistor density increases by 1.75x (as for ati going 55nm -> 40nm), then nv's 3.0bln 40nm tran = 575mm^2 and 3.2bln = 615mm^2.
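The die-area arithmetic in that guess can be sketched as follows; the GT200b baseline figures (~1.4bn transistors in ~470 mm^2 at 55nm) are an assumption used for illustration:

```python
# Back-of-envelope GF100 die-area estimate.
# Assumed baseline: GT200b at 55nm, ~1.4bn transistors in ~470 mm^2.
gt200b_mtrans = 1400.0                      # millions of transistors
gt200b_area = 470.0                         # mm^2
density_55nm = gt200b_mtrans / gt200b_area  # ~2.98 Mtrans/mm^2
density_40nm = density_55nm * 1.75          # assume an ATI-like 1.75x shrink

for mtrans in (3000.0, 3200.0):
    area = mtrans / density_40nm
    print(f"{mtrans / 1000:.1f}bn transistors -> ~{area:.0f} mm^2")
```

This lands at roughly the ~575 mm^2 and ~615 mm^2 figures quoted in the post.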
     
  14. dnavas

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    375
    Likes Received:
    7
    So by "less regular computations" you were referring not specifically to the instruction set (which your reference to MADD seemed to indicate), but rather to the diversity of problems that need to be adequately tackled (MADD being pretty specifically targeted, and hence a FLOP rating that relies on it being increasingly less useful)?

    I am curious where you think the instruction architecture is going, though. Dedicated simple ALUs with VLIW front-ends, or fewer, more complicated ALUs? Vector/SIMD vs. MIMD? Fixed/managed on-chip memory, or relatively flexible, coherent caches?
     
  15. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    They are probably both already VLIW+SIMD at this point (well NVIDIA is more LIW+SIMD but same difference). Whatever else happens VLIW is there to stay for a while yet IMO.
     
  16. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    That's absolutely true; however, the deeper the architectural changes, the better the chances of achieving higher efficiency.
     
  17. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,249
    Likes Received:
    3,419
    I understand. But it looks to me like that "pure gaming" you're speaking about is moving towards parallel processing fast.

    Why would a chip with +50% complexity be "minimum on par" with a dual HD 5870?
    If anything it should be approximately +50% over Cypress, and Hemlock will probably be around +70% over Cypress. Of course it could end up at +20%, as GT200 did against RV770, or at +70%, which would put it against Hemlock. But I don't see any reason to expect Hemlock's performance level as a minimum.
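The +20% / +50% / +70% range in that post amounts to scaling the extra chip complexity by an efficiency factor. A minimal sketch of that arithmetic; the efficiency factors here are illustrative guesses, not measurements:

```python
# Hypothetical performance gain vs Cypress for a chip with +50% complexity,
# under different (made-up) scaling-efficiency assumptions.
complexity_ratio = 1.5  # GF100 vs Cypress, per the post
for efficiency in (0.4, 1.0, 1.4):
    gain = (complexity_ratio - 1.0) * efficiency
    print(f"efficiency {efficiency:.1f} -> ~+{gain:.0%} vs Cypress")
```

An efficiency of 0.4 reproduces the GT200-vs-RV770-like +20% pessimistic case, 1.0 the linear +50% case, and 1.4 the +70% Hemlock-class optimistic case.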
     
  18. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    -20% on average in typical benchmarks against the competition's best X2 card would be a clear victory, as it would dominate the X2 in worst-case scenarios. Even a draw with the X2 would be domination.
     
  19. KonKort

    Newcomer

    Joined:
    Dec 29, 2008
    Messages:
    89
    Likes Received:
    0
    Location:
    Germany, Ennepetal
    Yes, I have my doubts, but as I said: the performance difference between the GeForce 380 and the Radeon HD 5870 X2 will be much smaller than that between the GeForce GTX 285 and the Radeon HD 4870 X2, and I won't rule out the GeForce 380 being even faster.
    Let's look at the worst case: the HD 5870 X2 is 20% faster than the GeForce 380. Then you have to ask at what price, because the GeForce 380 is a single GPU (no multi-GPU profiles, no micro-stuttering, etc.) that will not consume more power than the GTX 280. As for the HD 5870 X2, I hope it will stay under 275 watts.
     
  20. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,247
    Likes Received:
    4,465
    Location:
    Finland
    Worst case? For all we know it could be another NV30 case; the real worst case is much, much worse than that.
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.